MySQL
My Way, 2018-09-02 12:42:21

How to optimize a MySQL database?

The problem is that the table has grown to 1.5 million records, and queries against it now load the CPU, and MySQL itself, at 100%.
Before every single insert into this table (ALWAYS) I run a query to check whether the record already exists, comparing by one or two columns; in particular, I search the service_id column for a 20-digit ID. Queries currently peak at roughly 10 per second, and there will be more.
How can this be optimized? I could add indexes, but how long will they hold up? If this table gains 1 million records per month, what should be used at such volumes?
UPD:
there is a unique index that guarantees a record is unique when it is added:
messages_account_id_service_id_channel_unique - 3 columns: account_id, service_id, channel
The fields are roughly id, text, ..., account_id, service_id, channel - about 20 fields in total.
I select by channel and service_id.
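Given the unique index described above, the separate existence check can be dropped entirely: the insert itself can detect the duplicate. A minimal sketch, assuming the table is named messages; the literal values are placeholders, not from the post:

```sql
-- The existing unique index already enforces uniqueness:
-- messages_account_id_service_id_channel_unique (account_id, service_id, channel)

-- Instead of SELECT-then-INSERT (two round trips plus a race window),
-- let MySQL absorb the duplicate in a single statement:
INSERT IGNORE INTO messages (account_id, service_id, channel, text)
VALUES (42, '12345678901234567890', 'sms', 'hello');

-- Or, if the existing row should be refreshed on conflict:
INSERT INTO messages (account_id, service_id, channel, text)
VALUES (42, '12345678901234567890', 'sms', 'hello')
ON DUPLICATE KEY UPDATE text = VALUES(text);
```

Either form removes the pre-insert SELECT altogether, cutting the query load on the table roughly in half.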


1 answer
Alexey Cheremisin, 2018-09-02
@leahch

1) First check that the query actually uses the right indexes: examine the query's EXPLAIN output.
2) If the necessary indexes are missing, build them.
3) Try to normalize the schema. For example, if the query always touches the same three columns plus a bunch of other parameters, move those three columns into a separate table along with an ID. Insert with two queries inside a transaction: first the parameters into one table, then the three columns plus the resulting ID into the other.
4) If there is still too much data, split the table into shards by some criterion, for example by hash.
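Steps 1 and 2 above might look like this in practice; the table and column names follow the question, while the index name and literal values are assumptions:

```sql
-- Step 1: see which index (if any) the duplicate-check query uses.
-- In the output, key = NULL or a rows estimate near the table size
-- means a full scan.
EXPLAIN SELECT id FROM messages
WHERE service_id = '12345678901234567890' AND channel = 'sms';

-- Step 2: if no suitable index exists, build a composite one that
-- matches the WHERE clause. By the leftmost-prefix rule, a query on
-- service_id alone can also use this index; one on channel alone cannot.
CREATE INDEX idx_messages_service_id_channel
    ON messages (service_id, channel);
```

At ~1 million new rows per month, a well-chosen B-tree index stays effective for a long time, since lookup cost grows only logarithmically with table size; indexing, not sharding, is the first thing to try.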
