MySQL

AntonioK, 2012-09-06 11:15:30

MySQL: Deadlock found when trying to get lock; try restarting transaction

Table:

CREATE TABLE `counter_countries_rotates` (
  `country_id` int(11) unsigned NOT NULL,
  `date` date NOT NULL,
  `count` int(11) unsigned NOT NULL DEFAULT '0',
  UNIQUE KEY `UK_country_date` (`country_id`,`date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8


The table stores counters of how many impressions there are per day, by country. The counters are constantly updated with a query like:

INSERT INTO
`counter_countries_rotates`
SET
`country_id` = '20',
`count` = 1,
`date` = '2012-09-06'
ON DUPLICATE KEY UPDATE `count` = `count` + 1


There are no deletions from this table.

Most of the time everything works fine, but when the number of updates to the same row per second exceeds a certain threshold, queries start failing with the error:

Deadlock found when trying to get lock; try restarting transaction

Maybe it's worth using a PRIMARY index somehow?

Server version: 5.1.65-log FreeBSD port: mysql-server-5.1.65


6 answers
AntonioK, 2012-10-03
@AntonioK

The problem was partly solved by handling deadlocks in the application that works with the database (it retries the last query and continues the transaction), and to the greatest extent by replacing UNIQUE KEY `UK_country_date` (`country_id`,`date`)
with PRIMARY KEY (`country_id`,`date`).
Thanks everyone!
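
A minimal sketch of that schema change, done as a single ALTER so the table is never left without the index (this assumes nothing else depends on the name `UK_country_date`):

-- Replace the secondary unique index with an explicit primary key.
ALTER TABLE `counter_countries_rotates`
  DROP INDEX `UK_country_date`,
  ADD PRIMARY KEY (`country_id`, `date`);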

Melkij, 2012-09-06
@melkij

I could be wrong, but I think pre-populating the table with count = 0 rows and then doing a plain UPDATE will help. Optionally, clean out the previous day's zero rows.
PS: if country_id refers to countries, why a 32-bit int? Are you from another planet?
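
A rough sketch of that approach, assuming a `countries` table with an `id` column (both names are assumptions, not from the question):

-- Run once at the start of each day: pre-create a zero row per country.
INSERT INTO counter_countries_rotates (country_id, `date`, `count`)
SELECT id, CURDATE(), 0 FROM countries;

-- During the day: a plain UPDATE, no duplicate-key handling needed.
UPDATE counter_countries_rotates
SET `count` = `count` + 1
WHERE country_id = 20 AND `date` = CURDATE();

-- Optional cleanup: drop yesterday's rows that stayed at zero.
DELETE FROM counter_countries_rotates
WHERE `count` = 0 AND `date` < CURDATE();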

fred, 2012-09-06
@fred

As an option: instead of an UPDATE, insert a new row with some field = 1 and compute the SUM() over that field whenever you need the total. At night, or whenever the load is low, collapse each day's rows into a single row whose field is set to that SUM().
I don't know how efficient this solution is, but I have one service with similar functionality that works this way; the deadlocks are gone.
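
A hedged sketch of that append-only scheme; the table and index names here are invented for illustration:

CREATE TABLE counter_events (
  country_id INT UNSIGNED NOT NULL,
  `date` DATE NOT NULL,
  `count` INT UNSIGNED NOT NULL DEFAULT 1,
  KEY idx_country_date (country_id, `date`)
) ENGINE=InnoDB;

-- Every impression is a plain INSERT: no single row is contended.
INSERT INTO counter_events (country_id, `date`) VALUES (20, CURDATE());

-- The current total is available at any moment.
SELECT SUM(`count`) FROM counter_events
WHERE country_id = 20 AND `date` = CURDATE();

-- Nightly: collapse past days' rows into one summary row each.
-- Restricting to dates before today keeps new inserts out of the collapse,
-- assuming no events arrive for past dates.
CREATE TEMPORARY TABLE tmp_totals AS
  SELECT country_id, `date`, SUM(`count`) AS total
  FROM counter_events
  WHERE `date` < CURDATE()
  GROUP BY country_id, `date`;
DELETE FROM counter_events WHERE `date` < CURDATE();
INSERT INTO counter_events (country_id, `date`, `count`)
  SELECT country_id, `date`, total FROM tmp_totals;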

vsespb, 2012-09-06
@vsespb

Calling GET_LOCK() before and RELEASE_LOCK() after the operation (in every client) will help, but it will be slower.
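
A minimal sketch of that; the lock name and the 10-second timeout are arbitrary choices, and the application should check that GET_LOCK() returned 1 before proceeding:

-- Serialize all counter updates behind one advisory lock.
SELECT GET_LOCK('counter_countries_rotates', 10);

INSERT INTO counter_countries_rotates
SET country_id = 20, `count` = 1, `date` = '2012-09-06'
ON DUPLICATE KEY UPDATE `count` = `count` + 1;

SELECT RELEASE_LOCK('counter_countries_rotates');

With only one lock holder at a time there is nothing left to deadlock against, at the cost of serializing all writers.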

mayorovp, 2012-09-09
@mayorovp

The error message already tells you everything:
Deadlock found when trying to get lock; try restarting transaction
When this error occurs, you simply retry the operation; that's all there is to it.
PS: instead of that odd INSERT, I would write an ordinary UPDATE and, if zero rows were affected, do an INSERT. In theory this should let the server take the right lock immediately. But that's not guaranteed; you need to try both options here.
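
A sketch of that UPDATE-first pattern. Note there is still a race between the two statements, so the INSERT can fail with a duplicate-key error that the application must catch (for instance by retrying the UPDATE):

-- Try the common case first: the row already exists.
UPDATE counter_countries_rotates
SET `count` = `count` + 1
WHERE country_id = 20 AND `date` = '2012-09-06';

-- Only if the driver reports 0 affected rows:
INSERT INTO counter_countries_rotates (country_id, `date`, `count`)
VALUES (20, '2012-09-06', 1);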

Alexey Akulovich, 2012-09-06
@AterCattus

If there are many simultaneous inserts, it may be worth considering a single-writer design: push the counter events into a log or queue and let one process apply them to the table.
Another option, if the events come from many machines: aggregate them for N seconds/minutes and then flush the accumulated totals with a much smaller number of simultaneous connections, as sketched below. But judging by your comment that "the total must be known at any time", this doesn't really suit you.
[offtopic]And as a completely different alternative, put the counters in some NoSQL store.[/offtopic]
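
A sketch of what the batched flush could look like; the per-batch totals (137 and 42 here) are made-up numbers the aggregator would have summed in memory over its window:

-- One multi-row statement per batch window instead of one per impression.
INSERT INTO counter_countries_rotates (country_id, `date`, `count`)
VALUES
  (20, '2012-09-06', 137),
  (21, '2012-09-06', 42)
ON DUPLICATE KEY UPDATE `count` = `count` + VALUES(`count`);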
