MySQL
Andrey Surzhikov, 2017-09-08 22:57:02

How to speed up insertion if there are many indexes?

MySQL, InnoDB table.
It has 37 columns and 23 million records (13.2 GiB in total).
Because fast search across ALL columns of the table is required (that is the whole point of the project), one of the programmer's mortal sins had to be committed: indexes were created on almost every field. In total there are 18 regular indexes and 10 fulltext ones :(
Read speed is now satisfactory. But once a week the data has to be updated (about 10,000 records), and that takes a painfully long time.
The write queries themselves are fast, but updating the indexes takes very long.
The INSERT/UPDATE query is sent quickly, but after a while a SQLSTATE[HY000] [2002] Connection refused error occurs and the records are not saved at all.
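For illustration only: the weekly update described above is presumably a bulk upsert of about 10,000 rows. A minimal sketch of what such a batch might look like (the table name docs and its columns are placeholders, not taken from the question):

-- Hypothetical weekly batch: many rows per statement, one transaction,
-- so there is no commit (and log flush) per individual row. Every written
-- row still has to be reflected in all 18 regular and 10 fulltext indexes,
-- which is where the time goes.
START TRANSACTION;

INSERT INTO docs (id, title, body, price, updated_at)
VALUES
    (101, 'Row one',   'text ...', 10.50, NOW()),
    (102, 'Row two',   'text ...', 12.00, NOW()),
    (103, 'Row three', 'text ...',  9.99, NOW())
    -- ... a few hundred rows per statement
ON DUPLICATE KEY UPDATE
    title      = VALUES(title),
    body       = VALUES(body),
    price      = VALUES(price),
    updated_at = VALUES(updated_at);

COMMIT;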

2 answers
Philipp, 2017-09-08
@zoonman

The solution to this problem is to move away from MySQL and use column-oriented database engines designed specifically for this kind of task.
Look at Vertica, Cassandra, HBase.

Fortop, 2017-09-08
@Fortop

First, decide whether you really need all of those fields.
What is the selectivity of the indexes on them (see the sketch below)?
Is aggregation being done, or only selects with conditions ...
If the main task is full-text search over documents with accompanying attributes, then push it all into Sphinx.
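As a rough way to look at the selectivity mentioned above, MySQL keeps per-index cardinality estimates; a sketch of how to inspect them (mydb and docs are placeholder names for the actual schema and table):

-- Approximate number of distinct values per indexed column.
-- Compare CARDINALITY against the total row count (about 23 million here):
-- indexes whose cardinality is a tiny fraction of that filter out little
-- and mostly just add write cost.
SELECT INDEX_NAME, COLUMN_NAME, CARDINALITY
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME   = 'docs'
ORDER BY CARDINALITY;

-- The same estimates are also visible via:
SHOW INDEX FROM mydb.docs;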
