MySQL
winfle, 2015-09-04 00:14:58

How can I reduce the load and memory consumption on the statistics web server (MySQL)?

Given:
There is a statistics server running on PHP + MySQL.
HTTP server: Apache.
That is, an HTTP request comes in from a certain page (let's put it that way), all the necessary information (IP, SEO hits, lifetime, etc.) is extracted from it, and then written to the database.
The database runs on an Amazon RDS instance. New records are indexed on several fields, so naturally memory usage keeps growing. But once the table reached about 3.2 billion records, memory ran out, and so did the virtual memory. As a result, the statistics server crashed (releasing the memory that held the indexes) and then started "loading" all those indexes back into memory, since it could not find records without them.
I realized that I need to change the architecture and scale the whole thing out. Please advise how to approach this in my case (distribution and/or optimization).
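For context, a minimal sketch of what such a table might look like (the question gives no schema, so every name and type below is an assumption). Each secondary index is a separate B-tree that InnoDB tries to keep in the buffer pool, which is why memory grows along with the row count:

```sql
-- Hypothetical statistics table; all names and types are assumptions.
-- Every secondary index below is an extra B-tree competing for
-- InnoDB buffer pool memory as the table grows.
CREATE TABLE stats (
    id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ip       VARBINARY(16)   NOT NULL,  -- fits both IPv4 and IPv6
    page_id  INT UNSIGNED    NOT NULL,
    hit_ts   DATETIME        NOT NULL,
    lifetime INT UNSIGNED    NULL,      -- the "lifetime" metric from the question
    KEY idx_ip      (ip),
    KEY idx_page_ts (page_id, hit_ts),
    KEY idx_ts      (hit_ts)
) ENGINE=InnoDB;
```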


3 answers
sim3x, 2015-09-04
@sim3x

If downtime doesn't bother you (or the customer), then just add more memory to the instance - a quick fix that holds until the next time it runs out.
If you want to do it properly, then you should plan the next stage of the project and do sharding - though I'm not sure how well that works in MySQL.
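MySQL has no built-in cross-server sharding; it is normally done in the application layer. On a single instance, though, table partitioning gives a related win: queries and index maintenance only touch the relevant partitions. A minimal sketch, reusing the hypothetical `stats` schema from the question (note that MySQL requires the partitioning column to appear in every unique key, hence the composite primary key):

```sql
-- Range-partitioned variant of the hypothetical stats table.
CREATE TABLE stats_partitioned (
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    ip      VARBINARY(16)   NOT NULL,
    page_id INT UNSIGNED    NOT NULL,
    hit_ts  DATETIME        NOT NULL,
    PRIMARY KEY (id, hit_ts),          -- partition column must be in the PK
    KEY idx_page_ts (page_id, hit_ts)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(hit_ts)) (
    PARTITION p2015_08 VALUES LESS THAN (TO_DAYS('2015-09-01')),
    PARTITION p2015_09 VALUES LESS THAN (TO_DAYS('2015-10-01')),
    PARTITION p_max    VALUES LESS THAN MAXVALUE
);

-- Dropping a whole month of old data becomes a near-instant
-- metadata operation instead of a huge DELETE:
ALTER TABLE stats_partitioned DROP PARTITION p2015_08;
```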

Space, 2015-09-04
@ruslite

In general, you move the database out onto several machines, splitting it up first. That is, you split it into branches (properly speaking, replication), and moreover scatter the fields across these databases; then you won't have to shovel through all 3 billion records with all their fields at once.
But here you will need to change the query logic so it fetches the desired field from the right place.
You can read here
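The field-splitting described above is usually called vertical partitioning. A minimal sketch of the idea, with assumed column names: hot fields stay in a narrow table, rarely used ones move to a companion table (which can live on another server), and queries join only when the wide fields are actually needed:

```sql
-- Hot, frequently queried fields stay in a narrow table.
CREATE TABLE stats_core (
    id     BIGINT UNSIGNED NOT NULL PRIMARY KEY,
    ip     VARBINARY(16)   NOT NULL,
    hit_ts DATETIME        NOT NULL,
    KEY idx_ts (hit_ts)
) ENGINE=InnoDB;

-- Rarely queried fields go to a companion table; with application-side
-- routing (or the FEDERATED engine) it can sit on a separate machine.
CREATE TABLE stats_extra (
    id         BIGINT UNSIGNED NOT NULL PRIMARY KEY,
    user_agent VARCHAR(255),
    referer    VARCHAR(1024),
    lifetime   INT UNSIGNED
) ENGINE=InnoDB;

-- The query logic changes: join only when the wide fields are needed.
SELECT c.ip, c.hit_ts, e.referer
FROM stats_core  AS c
JOIN stats_extra AS e ON e.id = c.id
WHERE c.hit_ts >= '2015-09-01';
```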

Max, 2015-09-04
@MaxDukov

Well, as an emergency measure, just dumbly add more memory.
Next, audit the indexes - check whether all of them are really needed. Although 3.2 billion rows is no joke in any case.
From personal experience, it's worth looking at the query patterns - something tells me that not all 3.2 billion rows are needed all the time, and a part of them (usually the larger part) can surely be moved off to an "archive"; see the sketch after this answer.
Well, and if nothing helps at all, think in the direction of Elasticsearch.
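A minimal sketch of the archiving idea from this answer, assuming the hypothetical `stats` table above and a 90-day cut-off. Moving rows in bounded batches keeps lock times short; Percona's pt-archiver automates the same pattern:

```sql
-- Archive table with the same structure (CREATE TABLE ... LIKE also
-- copies the indexes; they can be dropped afterwards, since the
-- archive is queried rarely).
CREATE TABLE stats_archive LIKE stats;

-- Freeze the cut-off so the INSERT and DELETE see the same row set.
SET @cutoff = NOW() - INTERVAL 90 DAY;

-- Move the oldest batch; ORDER BY id keeps the INSERT and DELETE
-- aligned on the same rows. Repeat until zero rows are affected.
INSERT INTO stats_archive
SELECT * FROM stats
WHERE hit_ts < @cutoff
ORDER BY id
LIMIT 10000;

DELETE FROM stats
WHERE hit_ts < @cutoff
ORDER BY id
LIMIT 10000;
```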
