MySQL: Deadlock found when trying to get lock; try restarting transaction
Table:
CREATE TABLE `counter_countries_rotates` (
  `country_id` int(11) unsigned NOT NULL,
  `date` date NOT NULL,
  `count` int(11) unsigned NOT NULL DEFAULT '0',
  UNIQUE KEY `UK_country_date` (`country_id`,`date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
INSERT INTO `counter_countries_rotates`
SET
  `country_id` = '20',
  `count` = 1,
  `date` = '2012-09-06'
ON DUPLICATE KEY UPDATE `count` = `count` + 1
The problem was partly solved by handling deadlocks in the application that works with the database (it retries the failed statement and continues the transaction), and to the greatest extent by replacing the UNIQUE KEY `UK_country_date` (`country_id`,`date`)
with a PRIMARY KEY (`country_id`,`date`).
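For reference, a minimal sketch of that schema change (one possible way to do it; the exact statement used isn't shown in the thread):

ALTER TABLE `counter_countries_rotates`
  DROP INDEX `UK_country_date`,
  ADD PRIMARY KEY (`country_id`, `date`);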
Thanks everyone!
I could be wrong, but I think pre-populating the table with count = 0 rows and then doing a plain UPDATE would help. Optionally, purge the previous day's zero-count rows.
PS: if country_id really identifies countries, why a 32-bit int? Are you from another planet?
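A minimal sketch of that idea, assuming a `countries` table exists to draw the ids from (hypothetical name):

-- pre-populate tomorrow's rows so the hot path is a plain UPDATE
INSERT IGNORE INTO counter_countries_rotates (country_id, `date`, `count`)
SELECT country_id, CURDATE() + INTERVAL 1 DAY, 0 FROM countries;

-- hot path: no insert, just an update
UPDATE counter_countries_rotates
   SET `count` = `count` + 1
 WHERE country_id = 20 AND `date` = CURDATE();

-- optional cleanup: purge untouched rows from past days
DELETE FROM counter_countries_rotates
 WHERE `date` < CURDATE() AND `count` = 0;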
As an option: instead of an UPDATE, insert a new row with a counter field set to 1 and compute the total as a SUM() over that field; at night, or when the load is low, collapse all of the day's rows into a single one whose counter field is set to that SUM().
I don't know how efficient this is in general, but I have one service with similar functionality that works this way, and the deadlocks are gone.
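A sketch of that scheme, assuming the unique key is dropped so several rows per (country_id, date) can coexist:

-- hot path: append-only, no contention on an existing row
INSERT INTO counter_countries_rotates
   SET country_id = 20, `date` = CURDATE(), `count` = 1;

-- the total at any moment
SELECT SUM(`count`) FROM counter_countries_rotates
 WHERE country_id = 20 AND `date` = CURDATE();

-- nightly collapse of yesterday's rows into one
START TRANSACTION;
SELECT SUM(`count`) INTO @total FROM counter_countries_rotates
 WHERE country_id = 20 AND `date` = CURDATE() - INTERVAL 1 DAY FOR UPDATE;
DELETE FROM counter_countries_rotates
 WHERE country_id = 20 AND `date` = CURDATE() - INTERVAL 1 DAY;
INSERT INTO counter_countries_rotates
   SET country_id = 20, `date` = CURDATE() - INTERVAL 1 DAY, `count` = @total;
COMMIT;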
Calling GET_LOCK() before the operation and RELEASE_LOCK() after it (in every client) will also help, but it will be slower.
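Roughly like this; the lock name is arbitrary (a hypothetical one here), and every client must use the same name:

SELECT GET_LOCK('counter_rotate', 10);  -- wait up to 10 seconds for the named lock
INSERT INTO counter_countries_rotates
   SET country_id = 20, `count` = 1, `date` = '2012-09-06'
   ON DUPLICATE KEY UPDATE `count` = `count` + 1;
SELECT RELEASE_LOCK('counter_rotate');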
Everyone has already told you:
Deadlock found when trying to get lock; try restarting transaction
When this error occurs, you simply retry the operation, that's all.
PS: instead of this strange INSERT I would write a plain UPDATE and, if it updates zero rows, follow it with an INSERT (see the sketch below). In theory this should let the server take the desired lock immediately. But that's not guaranteed; you need to try both options here.
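Sketched out, the UPDATE-first variant looks like this (the race between the two statements is still possible, so the retry advice above still applies):

UPDATE counter_countries_rotates
   SET `count` = `count` + 1
 WHERE country_id = 20 AND `date` = '2012-09-06';
-- if the affected-rows count was 0, the row does not exist yet:
INSERT INTO counter_countries_rotates
   SET country_id = 20, `date` = '2012-09-06', `count` = 1;
-- a concurrent client can still insert between the two statements,
-- so catch the duplicate-key error and retry the UPDATE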
If there are many simultaneous inserts, it may be worth considering a single-writer design: dump the counter events into a log table and let one writer rake through them (a sketch follows below).
Another option, if the events come from many machines: aggregate them for N seconds/minutes and then flush the accumulated counts with far fewer simultaneous connections. But judging by your comment that "the total must be known at any moment", this probably doesn't suit you.
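A minimal sketch of the single-writer/batched variant, with a hypothetical counter_log staging table:

-- clients only append events, never contend on a shared row
CREATE TABLE counter_log (
  country_id int unsigned NOT NULL,
  `date` date NOT NULL
) ENGINE=InnoDB;

INSERT INTO counter_log SET country_id = 20, `date` = CURDATE();

-- the single writer periodically folds the log into the counters
INSERT INTO counter_countries_rotates (country_id, `date`, `count`)
SELECT country_id, `date`, COUNT(*) FROM counter_log
 GROUP BY country_id, `date`
ON DUPLICATE KEY UPDATE `count` = `count` + VALUES(`count`);
TRUNCATE TABLE counter_log;  -- in practice, delete only the rows just processed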
[offtopic]Well, as a completely alternative route, move the counting to any NoSQL store.[/offtopic]