MySQL
waterlilies, 2016-01-18 12:04:03

How to reduce the load on the database with a large number of requests?

A quick note: there is a script that spawns a certain number of child processes (also scripts) for background work with the database. Write operations outnumber reads. At the moment MySQL is fully loading the CPU and queries are piling up in a queue. Besides optimizing the queries and reducing their number, what else would be appropriate? The scripts run in the background regardless of whether a user requests a web page or not.

5 answer(s)
Andrey Surzhikov, 2016-01-18
@Surzhikov

You can replace my.cnf with large.cnf if RAM allows.
In 90% of projects this step solves the problem for a long time.
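As a rough illustration (these are not the contents of large.cnf; the values are assumptions that have to be sized to your RAM and write load), the my.cnf settings that usually matter on a write-heavy MySQL server look like this:

    [mysqld]
    # let InnoDB cache most of the hot data and indexes in RAM
    innodb_buffer_pool_size = 2G
    # bigger redo logs mean fewer forced flushes under heavy writes
    innodb_log_file_size = 256M
    # flush the log to disk once per second instead of on every commit
    # (risks losing up to ~1 s of transactions on a crash, but writes get much cheaper)
    innodb_flush_log_at_trx_commit = 2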

Anton Natarov, 2016-01-18
@HanDroid

Are you using transactions in your queries? If you keep logs or revisions in the database, throw them the hell out. As for scripts: JS itself is a client-side language, it runs on the client in the browser, not on the server. If the whole problem is the scripts, then you can run them on demand instead of keeping them running all the time.
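To illustrate the transaction point (a sketch; the jobs and job_log tables are made up): with autocommit, every statement commits and flushes on its own, while grouping a batch of background writes costs one flush per COMMIT:

    START TRANSACTION;
    -- several related writes, flushed to disk once at COMMIT
    UPDATE jobs SET status = 'done' WHERE id = 42;
    INSERT INTO job_log (job_id, message) VALUES (42, 'finished');
    COMMIT;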

Ivan, 2016-01-18
@LiguidCool

In addition to what has been written above: clustering.

Roman Mirilaczvili, 2016-01-18
@2ord

It is not a fact that the problem is the DBMS; after all, the DBMS's job is to serve the application's requests.
In general, you need to monitor the processes inside the DBMS in more detail to say exactly where the bottleneck is. You can use the built-in tools as well as the Percona Toolkit.
I also advise checking the script for unoptimized INSERT queries to the database.
If a large number of records is being inserted, it can be optimized into a single INSERT wrapped in a transaction.
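A sketch of that idea (the events table and its columns are hypothetical): instead of sending N single-row INSERTs, send one multi-row INSERT inside one transaction:

    START TRANSACTION;
    -- one statement: one parse, one round trip, one batch of index updates
    INSERT INTO events (user_id, action, created_at) VALUES
        (1, 'click', NOW()),
        (2, 'view',  NOW()),
        (3, 'click', NOW());
    COMMIT;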

LAV45, 2016-01-29
@LAV45

1) Move to PostgreSQL.
2) Use batch inserts (see the sketch below); I write 100,000 records at a time and it doesn't stutter.
3) Drop the B-Tree index: it re-indexes the tree on every insert. PostgreSQL handles this a bit better, since reindexing there is deferred.
With MySQL the slowdown starts at around 2,000,000 records in a table; with PostgreSQL, after 5,000,000.
4) If you switch to PostgreSQL, you can bring up a second database in master-slave mode: write to the master, read from the slave, and replicate asynchronously. The slave will lag a little behind the master (about 2 seconds, depending on the load), but reads from the slave will just fly.
#PostgreSQLRussia meetup at Avito.ru
https://www.youtube.com/watch?v=2LDAcGZRAEM
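A sketch of a batch load in PostgreSQL (the events table and the file path are hypothetical): COPY loads many rows in one statement, which is usually much faster than row-by-row INSERTs:

    -- bulk-load rows from a CSV file in a single statement
    -- (the path is read on the server; from psql, \copy works with a client-side file)
    COPY events (user_id, action, created_at)
        FROM '/tmp/events.csv'
        WITH (FORMAT csv);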
