MySQL
Iskander Giniyatullin, 2012-01-13 10:36:35

How to add indexes to a not-so-small InnoDB table?

I've run into a problem.
There is a MySQL server with an InnoDB table that stores statistics: it is about 1.4 GB, holds 19 million records, and roughly a hundred new records are added every second.
The problem is that the table lacks the necessary indexes, so queries against it take a very long time.
The second problem is that I would also like to add a composite unique key to it, so that new records are not simply appended to the end of the table but aggregated via INSERT ... ON DUPLICATE KEY UPDATE count = count + 1.
If I simply try to add the indexes with ALTER TABLE, I'm afraid the server will hang while it runs. I also don't want to lose the data that is constantly being added.
So far I have come up with this option: create a new table with the required structure, named stats2, then rename stats to stats3 and stats2 to stats, so that all new data goes into the new table while the old data stays in the old one. A script then gradually transfers all the data from the old table into the new one; roughly, the plan looks like the sketch below.
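Something like this (metric_id, day and count stand in for my real columns, and id is the old table's auto-increment key used for chunking):

-- new table with the same structure plus the composite unique key
CREATE TABLE stats2 LIKE stats;
ALTER TABLE stats2 ADD UNIQUE KEY uniq_metric_day (metric_id, day);

-- atomic swap: new writes go to the fresh table, old data stays in stats3
RENAME TABLE stats TO stats3, stats2 TO stats;

-- the application then inserts like this, aggregating duplicates
INSERT INTO stats (metric_id, day, count)
VALUES (42, '2012-01-13', 1)
ON DUPLICATE KEY UPDATE count = count + 1;

-- the migration script moves the old rows over in chunks
-- (assumes each old row represents a single event)
INSERT INTO stats (metric_id, day, count)
  SELECT t.metric_id, t.day, t.cnt
  FROM (SELECT metric_id, day, COUNT(*) AS cnt
        FROM stats3
        WHERE id BETWEEN 1 AND 100000
        GROUP BY metric_id, day) AS t
ON DUPLICATE KEY UPDATE count = count + t.cnt;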
Obviously, statistics reporting will not work for a while until all the old data has been transferred, but no current data is lost. Are there any other options?

6 answers
Roman Gogolev, 2012-01-13
@romka777

Create a new table with the desired indexes and write to both tables.
Backfill the missing records into the second table.
Then retire the first table.
You could also play around with replication, and generally refine this algorithm; a rough sketch of the backfill is below.
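Something along these lines, assuming the new table stats_new already receives all live writes and the old table has an auto-increment id to chunk on (column names are only illustrative):

-- copy the old rows over in manageable chunks;
-- rows already written by the double-write are left untouched
INSERT IGNORE INTO stats_new (metric_id, day, count)
  SELECT metric_id, day, count
  FROM stats
  WHERE id BETWEEN 1 AND 100000;
-- repeat with the next id range until the whole table has been copied,
-- then point reads at stats_new and retire the old table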

L0NGMAN, 2012-01-13
@L0NGMAN

I think it's better to do all this on another server (not production).

egorinsk, 2012-01-13
@egorinsk

On this topic (how to change the structure of large MySQL tables on a live server) there was an article on Habr last year, apparently a translation of a post from Facebook; you can search for it. It involves some clever manipulations with triggers, as well as copying and renaming tables, but overall the proposed method is similar to the one you described.
> If I simply try to add the indexes with ALTER TABLE, I'm afraid the server will hang while it runs.
That is exactly what will happen. It is a very slow operation; on huge tables it can take hours or even days.

niko83, 2012-01-13
@niko83

To start with, try adding these indexes to a table of similar size on a local machine (the statements to time are sketched at the end of this answer).
See whether what you are afraid of ("I'm afraid the server will hang while it runs") actually happens.
It may well turn out that the delay is only 10 minutes: you are only adding indexes, not changing the table structure.
Write about the results here, we are all interested.
It might be possible to apply the indexes in a single pass: first disable the keys with ALTER TABLE ... DISABLE KEYS, add the indexes to the table definition, and then rebuild them with ALTER TABLE ... ENABLE KEYS. But this is purely a guess that it might work this way; I have not tested it in practice.
If you experiment with this, please post the results.
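For reference, the direct statements to time on the test copy would be roughly the following (index and column names are placeholders):

-- run on a copy of the table first and measure how long it takes
ALTER TABLE stats
  ADD INDEX idx_day (day),
  ADD UNIQUE KEY uniq_metric_day (metric_id, day);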

charon, 2012-01-16
@charon

I advise you to think about splitting the statistics into several parts / tables, if possible.
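For example, splitting by month might look something like this (table and column names are only illustrative):

-- one table per month; the application picks the table by date
CREATE TABLE stats_2012_01 LIKE stats;
CREATE TABLE stats_2012_02 LIKE stats;

-- reports spanning several months then combine the relevant tables
SELECT metric_id, SUM(count) AS total
FROM (
  SELECT metric_id, count FROM stats_2012_01
  UNION ALL
  SELECT metric_id, count FROM stats_2012_02
) AS s
GROUP BY metric_id;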

kenga, 2012-01-16
@kenga

There is a solution from Facebook for this kind of thing. If you do not have foreign keys on this table, Online Schema Change should help you.
In a nutshell, it is a small script that creates a new table with the indexes and new fields you need, gradually copies the data over from the old table, and uses triggers so that changes made to the old table during the copy are not lost.
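The general pattern, greatly simplified and with placeholder names, is roughly this:

-- new table with the desired indexes
CREATE TABLE stats_new LIKE stats;
ALTER TABLE stats_new ADD UNIQUE KEY uniq_metric_day (metric_id, day);

-- a trigger on the old table keeps the new one in sync during the copy
CREATE TRIGGER stats_osc_ins AFTER INSERT ON stats
FOR EACH ROW
  INSERT INTO stats_new (metric_id, day, count)
  VALUES (NEW.metric_id, NEW.day, NEW.count)
  ON DUPLICATE KEY UPDATE count = count + NEW.count;

-- copy the existing rows over in chunks
INSERT IGNORE INTO stats_new
  SELECT * FROM stats WHERE id BETWEEN 1 AND 100000;

-- once the copy has caught up:
-- DROP TRIGGER stats_osc_ins;
-- RENAME TABLE stats TO stats_old, stats_new TO stats;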
