How do I figure out why MySQL is being slow?
Server: 2 x Core 2 Quad, 8 GB RAM, RAID 5 (3 HDDs), MySQL 5.1.26-rc / Red Hat 4.1.2-14.
When it was put together (two years ago) we were young and foolish, but on the whole the server fits its workload.
So, we have a fairly high load on MySQL: 601.90 queries per second, of which updates/inserts are 2%, ~70% are stmt prepare/execute/close, and plain SELECTs account for the remaining 34.84%.
About a week ago the database learned how to die: piles of processes would build up, with queries running for half an hour.
Oddity #1: exactly an hour later everything fixed itself.
So the raking through of the server's state began.
As one step of that program, code was added to the engine that dumps the execution time of database operations back into this same database.
This code fires for queries that take longer than 0.1 s: the slow log doesn't see them yet, but that is already a slowdown...
And this is where the strangeness began: the most ordinary query, which runs in 0.0001 s when executed by hand, gets recorded in the database as having taken 0.5 or even TWO seconds...
Oddity #2: the slowdowns come in small batches of 5-10 slow queries, roughly once every 11 seconds.
And at that moment usually only a few tables are affected (that is, I see a bunch of essentially identical queries in the log at the same time).
Since 99% of the slow queries hit InnoDB tables, some dances were performed: file_per_table was enabled and the tables were moved out of the shared tablespace (11 GB) into their own small files (total size after the move is 4 GB; there was fragmentation in there).
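Roughly, that dance amounts to something like this (a sketch only; the table name is illustrative):

[mysqld]
innodb_file_per_table = 1

# restart mysqld, then either reload from a dump or rebuild each table in place:
ALTER TABLE some_table ENGINE=InnoDB;  -- rebuilds the table into its own .ibd file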
Server load average is 0.9.
Disk utilization is 15-20%.
The config is here.
Free memory: there is some.
Ideas about where the slowdowns come from and what to do: none.
As an option: Percona or MariaDB (5.1.6?).
Bonus pack: when MySQL hangs, connections to it don't drop and the queries never finish.
Nothing helps except kill -9...
> The RAID is handled by a decent piece of hardware
> LSI
That's already funny. There's no battery and no cache memory on it, is there?
>RAID5
>InnoDB
>disk utilization - 15-20%
>innodb_flush_log_at_trx_commit = 2
>the most common query… TWO seconds…
Writes to RAID5 are already expensive because of the extra logical operations required, and in the LSI implementation things usually turn out even worse. As already mentioned here, RAID1, RAID10 and their variations are preferable for databases.
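A rough back-of-the-envelope illustration (ideal case, ignoring controller cache): a small random write on RAID5 costs 4 disk operations (read old data, read old parity, write new data, write new parity), while on RAID1/RAID10 the same write costs 2 (one per mirror), so each random write is roughly twice as expensive.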
What remains is to reduce the intensity of write operations. First of all try (the settings are pulled together in a my.cnf sketch after this list):
innodb_flush_log_at_trx_commit=0
innodb_support_xa=0
tmpdir = /tmp/ - move it onto tmpfs
Second, the more dangerous parameters:
innodb_doublewrite=0
delay_key_write=ALL
innodb_flush_method=nosync - I don't know if anyone actually uses this at all; the value isn't even documented, but you can find it used here and there. If the performance requirements are high and the server doesn't reboot unexpectedly (why would it in a good data center?), then you can use it.
Read up on each parameter, because such a change is a trade-off between speed and reliability of data storage.
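Pulled together as a my.cnf fragment (just a sketch of the settings above, not a recommendation to flip them all at once):

[mysqld]
# flush the InnoDB log to disk roughly once a second instead of at every commit
innodb_flush_log_at_trx_commit = 0
# skip XA two-phase commit coordination between the binlog and InnoDB
innodb_support_xa = 0
# point temporary tables at a tmpfs mount
tmpdir = /tmp/
# the more dangerous ones:
innodb_doublewrite = 0
delay_key_write = ALL
# effectively undocumented, skips syncing altogether - use at your own risk
# innodb_flush_method = nosync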
bin-log - do you know exactly why you need it?
query_cache_size = 0 - is it really useless in your case? Set it to at least 16 MB.
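For example, in my.cnf (16M is just a conservative starting point):

query_cache_type = 1
query_cache_size = 16M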
sort_buffer_size = 256M - be careful with this: it is allocated in full for each connection thread, regardless of actual need. When the server gets flooded with queries it's easy to run out of memory, and mysqld will be the first thing killed.
What state is mysql in when it hangs?
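Next time it hangs, before reaching for kill -9, it's worth at least grabbing (standard commands, nothing exotic):

SHOW FULL PROCESSLIST;                 -- what each thread is doing and in what state
SHOW ENGINE INNODB STATUS\G            -- lock waits, pending I/O, log flush activity
SHOW GLOBAL STATUS LIKE 'Threads%';    -- connected / running thread counts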
Have you actually looked at the hardware? The first suspect is the RAID: firstly, it's RAID5 with all the consequences, and secondly, two years is pretty much retirement age for today's hard drives.
It seems to me the problem is RAID 5 - it's not very well suited for databases. I would add one more drive and make two RAID 0 arrays. Disk performance would noticeably improve.
sort_buffer_size = 256M
This parameter is allocated for EACH CLIENT. If you have heavy queries or heavy stored procedures, raise it inside the session instead (see the example below). Although, from my observations, there's no point going above 1M. 256 MB * 100 clients = 25 GB of RAM :)
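Something like this right before the heavy query (32M is just an example value):

SET SESSION sort_buffer_size = 32 * 1024 * 1024;   -- raised only for this connection
-- ... run the heavy query / stored procedure here ...
SET SESSION sort_buffer_size = DEFAULT;             -- back to the global value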
www.mysqlperformanceblog.com/ - all kinds of config tuning is covered there in detail.
And I would try version 5.5 for a day - InnoDB is noticeably fresher there. And build it with icc while you're at it.
The topic of moving tables into RAM has already been discussed here - use the search.
1. For starters, I advise you to run mysqltuner.pl and see what it says
2. Enable the slow query log in MySQL and look there (a config sketch follows this list).
3. Don't forget that a query that takes a minute can run in 0.0001 seconds right after that very query: the cache simply kicks in. I'm saying this because if you tuned the config on the principle of "add a zero - it runs faster", you probably don't realize that running the same query twice in a row gives completely different numbers; the second execution will be almost instantaneous.
4. The database is around 11 GB and 2% of the load is updates, but do you drop the indexes before the update? If the table is large, such an update can easily bring it to its knees.
5. The RAID is hardly to blame here; after all, 600 queries per second, two thirds of which are housekeeping, is not that big a load.
Well, once the first five points turn up bottlenecks, check the queries with EXPLAIN and optimize them (a sketch is below).
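A rough sketch for points 2-3 and the EXPLAIN check (the file path, table and column names are purely illustrative; the exact option name depends on the 5.1 minor version, and fractional long_query_time works from 5.1.21+ when logging to a file):

# my.cnf
log-slow-queries = /var/log/mysql/slow.log
long_query_time = 0.1

-- time a query without the query cache skewing the second run
SELECT SQL_NO_CACHE * FROM some_table WHERE some_column = 123;

-- then look at its plan
EXPLAIN SELECT * FROM some_table WHERE some_column = 123;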