PHP
ilya_compman, 2010-12-04 23:51:40

MySQL MyISAM Issues - Duplicate Records and Large Table Crashes

Quite suddenly, strange things started happening on a production project:

  • the two largest tables keep crashing - one is about a gigabyte with roughly 70 million records, the other about 500 megabytes with 700,000 records; there are roughly 100-1000 inserts per second into the first and 2-5 into the second, and data is actively selected from the second (see the check/repair sketch after this description)
  • periodically, for no clear reason, the database starts returning a "too many connections" error; the scripts are optimized, one script means one connection instance (the database class is a singleton)
  • today, for no apparent reason, data started being duplicated: a single query was executed anywhere from 2 to 13 times - and not just one query, but several consecutive ones
I checked the scripts and everything is in order; nothing has changed for a long time, and it's a project with average traffic. There were no traffic spikes today.
Dedicated server, standard settings, OS is CentOS, MySQL version 5.0.77.
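(For context on the "tables keep crashing" symptom: a MyISAM table that crashes is usually marked as corrupted and refuses queries until it is repaired. A minimal check/repair sketch, with a hypothetical table name big_log; REPAIR TABLE on a ~1 GB table can take minutes and locks the table while it runs:)

  -- Ask MySQL whether it considers the MyISAM table corrupted
  CHECK TABLE big_log;

  -- If the check reports the table as crashed, rebuild its data and index files
  REPAIR TABLE big_log;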

What could be the reason? I've never experienced anything like this, so I can't figure out what's going on.


5 answers
pentarh, 2010-12-05
@pentarh

Too many connections - let me explain this. With MyISAM you cannot do what you described. Active SELECTs lock the entire table, and INSERTs wait for the lock to be released. So if one of your SELECTs slows down by even a second, then by your own numbers there are already up to 1000 INSERTs waiting in the queue. That is where your "too many connections" comes from.
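A quick way to confirm this theory is to look at MySQL's built-in lock and connection counters; these are standard status variables and settings available in 5.0, nothing non-standard is assumed here:

  -- How often statements had to wait for a table lock (MyISAM locks whole tables)
  SHOW GLOBAL STATUS LIKE 'Table_locks%';

  -- Current number of connections versus the configured ceiling
  SHOW GLOBAL STATUS LIKE 'Threads_connected';
  SHOW VARIABLES LIKE 'max_connections';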
Switch to InnoDB: it is slower, but it locks at the row level and recovers well from crashes.
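Converting an existing table is a single statement; the table name below is hypothetical, and on a ~1 GB, 70-million-row table the rebuild blocks writes and can take a long while, so it is best done in a maintenance window:

  -- Rebuild the table with the InnoDB engine (row-level locking, crash recovery)
  ALTER TABLE big_log ENGINE = InnoDB;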
And it is better to do one extended INSERT with 1000 rows than 1000 single-row INSERTs. Think about it.
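For illustration, a multi-row ("extended") INSERT batches many rows into one statement and one lock acquisition; the table and column names here are made up:

  -- One statement inserting three rows instead of three separate INSERTs;
  -- the same idea scales to batches of hundreds or thousands of rows.
  INSERT INTO hits (user_id, url, created_at) VALUES
    (101, '/index',   NOW()),
    (102, '/catalog', NOW()),
    (103, '/search',  NOW());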

@resurection, 2010-12-05

So FaceBook is not a serious project, in your opinion?

Zorkus, 2010-12-05
@Zorkus

A note for those above who are fiercely shitting bricks: 70 million records in the largest table is not that much. Suffice it to say, for example, that if one table holds a lot of records (70 million), partitions can be used.
7 billion records - and tens of terabytes of total database size - that is a lot.
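For reference, a range-partitioned table might look like the sketch below (table and column names are invented). Note that table partitioning only appeared in MySQL 5.1, so it is not available on the asker's 5.0.77 without an upgrade:

  -- Hypothetical log table split into ranges by id; queries whose WHERE clause
  -- constrains id let MySQL skip the partitions that cannot match.
  CREATE TABLE big_log_partitioned (
    id         BIGINT UNSIGNED NOT NULL,
    user_id    INT UNSIGNED    NOT NULL,
    created_at DATETIME        NOT NULL,
    PRIMARY KEY (id)
  ) ENGINE = InnoDB
  PARTITION BY RANGE (id) (
    PARTITION p0   VALUES LESS THAN (25000000),
    PARTITION p1   VALUES LESS THAN (50000000),
    PARTITION p2   VALUES LESS THAN (75000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );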

Iskander Giniyatullin, 2010-12-05
@rednaxi

With a large number of inserts, your choice is InnoDB.

Wott, 2010-12-05
@Wott

You need to look at locks in MySQL and at iostat, and in general at anything that may be related to the disk.
In general, it is rather strange to see MyISAM on tables that are written to this often.
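On the MySQL side, a couple of built-in statements show what is locked right now; iostat on the OS side then shows whether the disk is the bottleneck. No non-standard tools are assumed here:

  -- Every client connection and what it is doing; many threads sitting in the
  -- "Locked" state on the same table point at MyISAM table-lock contention.
  SHOW FULL PROCESSLIST;

  -- Tables currently open; a non-zero In_use column means locks or lock
  -- requests are held on that table right now.
  SHOW OPEN TABLES;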
