MySQL
Igor Samokhin, 2013-11-11 11:01:53

What are some ways to optimize frequently used MySQL select queries?

Hello,

For example, there is a high-traffic site where an indicator of new events is updated in real time for each user.

Please tell me, what are the ways to optimize such queries, sent by each user once a minute? I had a project like this and never got around to solving the problem: the queries ran slowly, since many users were hitting the same table at once.

Thank you


4 answer(s)
FanatPHP, 2013-11-11
@FanatPHP

A spherical question in a vacuum, with no way to get clarification, since "the project is over."
"Slow queries": how slow, exactly?
"A site with traffic": how much traffic?
"Because of search queries": then maybe that search should be moved out to a dedicated engine like Sphinx?
In other words, there is no way to tell what the problem was: a misconfigured MySQL server, badly designed tables, or badly written queries.
But the question, as always, is phrased in the most general form: "where is that magic nail which you hit once, and everything instantly flies?"
Well, OK. In its most general form, optimization of queries (whether frequent or infrequent) is just query optimization.
An optimized query runs in, say, 0.001 seconds, i.e. the database can handle about 1,000 queries per second; with each user polling once a minute, that is roughly 60,000 concurrently active users.
Take EXPLAIN and look at the output. If it says the query is fine and examines exactly as many rows as needed (5-10), yet the query is still slow (how slow, specifically, in seconds?), then look at SHOW ENGINE [engine] STATUS. There, again, you have to judge on the spot and decide what the server is short of.
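To illustrate the EXPLAIN step without a running MySQL server, here is a sketch using SQLite's EXPLAIN QUERY PLAN (a stand-in for MySQL's EXPLAIN; the table and index names are hypothetical). The point is the same in both engines: a query that filters on unindexed columns scans the whole table, and adding an index on the WHERE columns turns the scan into an index lookup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, seen INTEGER)")
conn.executemany(
    "INSERT INTO events (user_id, seen) VALUES (?, ?)",
    [(i % 100, i % 2) for i in range(1000)],
)

# The kind of per-user poll query discussed in the question.
query = "SELECT COUNT(*) FROM events WHERE user_id = ? AND seen = 0"

# Without an index on the WHERE columns, the planner must scan the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_before)  # the plan text mentions a table scan

# A composite index on (user_id, seen) turns the scan into an index search.
conn.execute("CREATE INDEX idx_user_seen ON events (user_id, seen)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_after)  # the plan text now mentions idx_user_seen
```

With MySQL the workflow is the same idea: run `EXPLAIN SELECT ...`, check the `type`, `key`, and `rows` columns, and add or adjust indexes until the query examines only the rows it actually needs.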
After we have made sure that all SQL optimization measures have been taken, then only then do we do caching, replacing polls with pushes, etc.
That is, we act like programmers: we deal with a specific problem and look for a solution specific to it.
We don't slap on a random patch that treats the symptom while leaving the disease to develop further.
Caching, like any other data denormalization, is always fraught with problems and inconveniences. And it should be the last resort when everything else is done.

Sergey Protko, 2013-11-11
@Fesor

Let's go in order.
You have a busy site where users should receive real-time notifications about something, say, about unread messages. To do this, as I understand it, you poll the server every minute, and that produces heaps of identical queries. In theory, if MySQL is configured properly, its query cache will serve most of these identical queries itself. Are you sure your performance problem really lies here?
On the other hand, the most suitable solution for this problem is WebSockets or long polling. Implement a simple push server, for example on node.js + socket.io, and put a message bus such as RabbitMQ next to it. In the case of messages it works like this: when one user sends a message to another, a small event is published to the queue; the push server consumes it and, if it has an open connection to the target user, sends him a notification. With this approach the data arrives with minimal delay, and the load on the server drops because the number of requests drops (the push server just holds connections and actively works with only a few of them at a time). Finally, you get scalability thanks to RabbitMQ: the main application only publishes small messages to the queue, while parallelizing and processing them happens outside it. The systems become less coupled, and there are plenty of ready-made implementations.
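The bus-plus-push-server flow described above can be sketched in a few lines. This is a minimal in-process simulation, not the real stack: a `queue.Queue` stands in for RabbitMQ, and a plain dict of lists stands in for the socket.io connections; all names here are hypothetical.

```python
import queue

# Stand-in for the RabbitMQ message bus.
bus = queue.Queue()

def publish_notification(user_id, text):
    """Main application: publish a small event to the bus and move on."""
    bus.put({"user": user_id, "text": text})

class PushServer:
    """Push-server side: holds open connections and delivers a queued
    message only if the target user is currently connected."""
    def __init__(self):
        self.connections = {}  # user_id -> list standing in for a socket

    def connect(self, user_id):
        self.connections[user_id] = []

    def drain(self):
        # Consume everything on the bus; drop messages for offline users.
        while not bus.empty():
            msg = bus.get()
            sock = self.connections.get(msg["user"])
            if sock is not None:
                sock.append(msg["text"])

server = PushServer()
server.connect(1)                       # user 1 is online, user 2 is not
publish_notification(1, "new message")
publish_notification(2, "new message")
server.drain()
print(server.connections[1])            # user 1 received the push
```

In the real setup the `drain` loop would be a RabbitMQ consumer callback and `connections` would be socket.io sessions, but the division of labor is the same: the application only publishes; the push server decides who is online and who gets the notification.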

uzzz, 2013-11-11
@uzzz

If the indicator of new events is common to all users (for example, some global feed on the site), then it would be reasonable to cache the result in memory using memcached or redis.
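The caching idea above can be sketched with a simple TTL wrapper. This is a stand-in for a memcached/redis GET-then-SET-with-expiry pattern (the class and function names are hypothetical): no matter how many users poll, the expensive query runs at most once per TTL window.

```python
import time

class TTLCache:
    """Minimal stand-in for a memcached/redis value with an expiry."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.value = None
        self.expires_at = 0.0
        self.misses = 0  # counts how often the real query actually ran

    def get(self, compute):
        now = time.monotonic()
        if now >= self.expires_at:
            self.value = compute()       # cache miss: hit the database
            self.expires_at = now + self.ttl
            self.misses += 1
        return self.value

def count_new_events():
    # Stand-in for the real SELECT COUNT(*) query against MySQL.
    return 7

cache = TTLCache(ttl=60)
results = [cache.get(count_new_events) for _ in range(1000)]
print(cache.misses)  # a thousand polls, but only one real query
```

With memcached or redis the logic is the same, except the value lives in shared memory, so all application workers and servers reuse one cached result instead of each keeping its own.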

rozhik, 2013-11-11
@rozhik

If query caching is not possible, there are several options (roughly in ascending order of effectiveness):
1. Using EXPLAIN, check that all queries and indexes are optimal; if not, optimize them.
2. Use sharding (split the table into several).
3. Add caching.
4. Use a pub/sub push service.
5. Use middleware (in Node, Perl, Java, etc.): wrap all operations on the table in RPC. The requirement for the middleware is internal caching of all data, sending only INSERT/UPDATE to MySQL.
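Point 2 above, sharding, can be sketched as follows. This is a hypothetical modulo-on-user-id scheme (table names and the shard count are made up for illustration): each user's rows always land in the same, smaller table, so every per-user poll query touches only a fraction of the data.

```python
def shard_for(user_id, n_shards=4):
    """Pick a shard by simple modulo over the user id (hypothetical scheme)."""
    return user_id % n_shards

def table_for(user_id, n_shards=4):
    """Map a user to the concrete per-shard table name."""
    return f"events_{shard_for(user_id, n_shards)}"

def build_query(user_id):
    # The application rewrites the table name before sending the query
    # to MySQL; the WHERE clause itself is unchanged.
    return (f"SELECT COUNT(*) FROM {table_for(user_id)} "
            f"WHERE user_id = %s AND seen = 0")

print(build_query(42))
```

The trade-off is that queries spanning all users now have to fan out across every shard, which is why sharding usually comes after indexing and caching, not before.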
