MySQL
SteepNET, 2020-11-18 15:45:18

MySQL on SSD optimization?

Good afternoon!
On the previous hosting we were on NVMe disks, the Bitrix24 box edition was deployed on BitrixEnv, and everything was tolerable and acceptable.
We moved to a new host with SSD disks, 6 cores and 12 GB of RAM, and away we went: requests take a long time to execute, 504/502 errors everywhere, everything is very slow.
The database is 47 GB.
As you may know, Bitrix has a script called bvat that automatically sets the MySQL parameters based on the server configuration, and these are the settings it produced:

[mysqld]
    query_cache_type = 1
    query_cache_size = 128M
    query_cache_limit = 16M
    innodb_buffer_pool_size = 6144M
    max_connections = 85
    table_open_cache = 14336
    thread_cache_size = 128
    max_heap_table_size = 128M
    tmp_table_size = 128M
    key_buffer_size = 96M
    join_buffer_size = 18M
    sort_buffer_size = 18M
    bulk_insert_buffer_size = 2M
    myisam_sort_buffer_size = 18M
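
(To confirm these values are actually live on the running server, and not just in the config bvat rewrote, one can ask MySQL directly; a quick check, assuming the mysql client can connect from the shell:)

    # compare the live values against what my.cnf says
    mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
              ('innodb_buffer_pool_size', 'query_cache_size', 'max_connections');"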


I ran MySQLTuner, which gave these recommendations:

-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    Control error line(s) into /var/log/mysql/error.log file
    Restrict Host for [email protected]% to [email protected]
    MySQL was started within the last 24 hours - recommendations may be inaccurate
    Increasing the query_cache size over 128M may reduce performance
    Adjust your join queries to always utilize indexes
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64: https://bit.ly/1mi7c4C
    Read this before increasing for MariaDB https://mariadb.com/kb/en/library/optimizing-table_open_cache/
    This is MyISAM only table_cache scalability problem, InnoDB not affected.
    See more details here: https://bugs.mysql.com/bug.php?id=49177
    This bug already fixed in MySQL 5.7.9 and newer MySQL versions.
    Beware that open_files_limit (5000) variable
    should be greater than table_open_cache (2452)
    Thread pool size for InnoDB usage (6)
    Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
    query_cache_size (=0)
    query_cache_type (=0)
    query_cache_size (> 128M) [see warning above]
    join_buffer_size (> 18.0M, or always use indexes with JOINs)
    table_open_cache (> 2452)
    thread_pool_size between 16 and 36 for InnoDB usage
    innodb_buffer_pool_size (>= 39.3G) if possible.
    innodb_log_file_size should be (=768M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
    innodb_buffer_pool_instances(=6)


Applying these recommendations only makes the situation worse.
The disk subsystem has a strong influence here, and for some reason the SSD works slowly. All of this runs on a VDS, not a dedicated server, so we can't build a RAID, and the hoster can't offer NVMe either, but it does give uptime guarantees. So here we are.
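
One way to confirm the disk really is the bottleneck under live load is to watch the extended device stats for a few seconds (assuming the sysstat package is installed):

    # %util near 100 and await in the tens of ms point at a saturated or slow disk
    iostat -x 1 5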

Can anyone advise what else is worth trying?

WRITE speed
sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.79635 s, 384 MB/s

READ speed from cache
dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.362921 s, 3.0 GB/s

Actual READ speed (caches dropped)
/sbin/sysctl -w vm.drop_caches=3
vm.drop_caches = 3
dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 26.643 s, 40.3 MB/s


hdparm -Tt /dev/vda2
/dev/vda2:
 Timing cached reads:   13680 MB in  1.98 seconds = 6894.48 MB/sec
 Timing buffered disk reads:  38 MB in  3.56 seconds =  10.68 MB/sec
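
Note that dd and hdparm both measure sequential reads, while InnoDB mostly does small random reads, so for the database the picture is probably even worse than 40 MB/s. A random-read test could look roughly like this with fio, if it is installed (the file name and sizes here are arbitrary):

    # 4K random reads with direct I/O, so the page cache doesn't flatter the result
    fio --name=randread --filename=tempfile --rw=randread --bs=4k --size=1G \
        --direct=1 --ioengine=libaio --iodepth=16 --runtime=30 --time_based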


2 answers
Dimonchik, 2020-11-18
@dimonchik2013

1073741824 bytes (1.1 GB) copied, 26.643 s, 40.3 MB/s

This line says it all: the hardware is not what you expect, or not what the marketing claims.

65536, 2020-11-19
@65536

query_cache_type = 1
Is there MyISAM in there or what? If so, and the tables are large, they need to be converted to InnoDB. And this cache should be cut out: it causes more slowdowns than benefit, especially when a lot of writing is going on. Then, depending on the amount of RAM, pick the values of the InnoDB parameters; this article saved me: https://ruhighload.com/%D0%9E%D0%BF%D1%82%D0%B8%D0...
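
A quick way to check whether MyISAM tables are still hanging around, and to convert one, assuming the mysql client can log in from the shell (the table name in the ALTER is made up, substitute your own):

    # list non-system tables still on the MyISAM engine
    mysql -e "SELECT table_schema, table_name FROM information_schema.tables
              WHERE engine = 'MyISAM'
              AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');"

    # convert one table (this rewrites and locks it, so better off-peak)
    mysql -e "ALTER TABLE mydb.b_some_table ENGINE = InnoDB;"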
For 8 GB of RAM it looks like this:
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M
innodb_buffer_pool_instances = 4
innodb_flush_log_at_trx_commit I set to 0: it's the fastest option, but supposedly less reliable. I don't know where that reliability line runs or how loaded the database has to be to cross it, but there have been no problems with the data in more than a year.
https://ruhighload.com/innodb_flush_log_at_trx_com...
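This variable is dynamic, by the way, so it can be tried at runtime before writing it into my.cnf (the value 2 below is just an example). Per the MySQL docs: 1 writes and syncs the log at every commit (full durability, slowest); 2 writes at every commit but syncs about once a second; 0 writes and syncs about once a second (fastest, up to a second of commits can be lost on a crash).

    # needs SUPER privilege; takes effect immediately for new transactions
    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"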
I think everything here is very individual and depends on the number of columns and their types, but it's still better to get off MyISAM: it starts to lag badly on tables under any stream of writes of 100-odd records. Before migrating, it's worth making sure nothing uses features that haven't yet appeared in InnoDB; since 5.7 there seems to be nothing like that left, though before it there were differences in working with variables.
MySQLTuner's advice, by the way, is not the best, but it's close.
