MySQL
Stepan, 2012-10-14 03:58:35

Strange glitches after switching to InnoDB

I switched the storage engine of one of the tables from MyISAM to InnoDB.
For the first three hours the loaded project ran almost without problems (although SELECTs took too long, even by primary key).
Now I notice the list of pending queries growing at an unrealistic pace. A row from SHOW PROCESSLIST:

32899786 | user | 127.0.0.1:48798 | table | query | 1296 | Sending data | select id from site_users

How can a query like this take 1296 seconds to complete?
What am I doing wrong?

The reason for switching to InnoDB is simple: the table gets too many updates.
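
To see what a query stuck in "Sending data" is actually doing, the usual first step is the one below. A minimal diagnostic sketch; everything here is standard MySQL 5.x, nothing specific to this setup is assumed:

SHOW FULL PROCESSLIST;                          -- untruncated query text and state
SHOW ENGINE INNODB STATUS\G                     -- active transactions, lock waits, semaphores
SELECT * FROM information_schema.INNODB_TRX\G   -- running InnoDB transactions (5.5+)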


4 answers
Puma Thailand, 2012-10-14
@opium

select id from site_users
If you have 10 billion ids, why would that be fast?
Besides, I'm not sure MySQL can read the data straight from the index, and I'm not at all sure that reading from the index is faster than reading from the table.
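
That doubt can be checked directly. A hedged sketch (the created_at column and idx_created index are illustrative assumptions, not from the question); MySQL can serve a query purely from an index, which EXPLAIN reports as "Using index" in the Extra column:

EXPLAIN SELECT id FROM site_users;
-- In InnoDB the primary key IS the table (the clustered index), so a "PK scan"
-- still walks full rows. Every secondary index also stores the PK, so a narrow
-- secondary index covers `id` with far less I/O:
ALTER TABLE site_users ADD INDEX idx_created (created_at);
EXPLAIN SELECT id FROM site_users;  -- the key column should switch to idx_created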

rakot, 2012-10-14
@rakot

The reason most likely lies in deadlocks: some UPDATE queries have locked a couple of rows in site_users, and since you are pulling all of them, your query waits for those rows to be unlocked. Judging by Google the problem is known; search for "mysql deadlock sending data".
I can recommend switching to Percona Server; they seem to pay more attention to this problem.
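
On MySQL 5.5+, those lock waits can be inspected directly. A sketch using only the standard information_schema tables:

SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id;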

Lev Lybin, 2012-10-14
@lybin

Have you tuned InnoDB in your MySQL config? With the default settings it performs terribly under high load.
There are plenty of manuals, including in Russian:
goo.gl/VrA1R
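
For reference, the knobs most of those manuals start with. A minimal sketch; the values are illustrative assumptions, not recommendations for this particular server:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- Typical my.cnf lines (the first two need a restart; on 5.5 changing the log
-- file size also means removing the old ib_logfile* after a clean shutdown):
--   innodb_buffer_pool_size        = 4G    -- ~70-80% of RAM on a dedicated DB box
--   innodb_log_file_size           = 256M  -- larger redo log, fewer checkpoint stalls
--   innodb_flush_log_at_trx_commit = 2     -- flush the log once a second, not per commit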

fred, 2012-10-14
@fred

This counter shows how much _real_ (wall-clock) time the query has been running, including time spent waiting on locks, not the net execution time. If you lock the table and then run an UPDATE of a single row from another connection, that UPDATE can run for a hundred years by this counter.
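
A minimal two-session reproduction of this (site_users is from the question; id = 1 is arbitrary):

-- Session 1: take a row lock and do not commit.
BEGIN;
SELECT * FROM site_users WHERE id = 1 FOR UPDATE;
-- Session 2: this UPDATE now blocks, and its Time column in SHOW PROCESSLIST
-- keeps growing even though the statement is doing no work at all:
UPDATE site_users SET id = id WHERE id = 1;
-- Session 1: COMMIT; releases the lock and session 2 finishes instantly.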
