Database
Maxim Kudryavtsev, 2015-11-10 00:52:20

How to deal with a huge database?

Good day, colleagues.
A project came to me today: a "very small" database of 9000+ tables weighing just under 300 GB.
Inside, you can see the layering of at least three generations of developers:

  1. It apparently started out correctly: foreign keys and indexes are present, there is no redundancy, and so on; essentially third normal form. All table and field names are in correct English.
  2. Then, apparently, support passed to another team that either could not or did not want to deal with what was there, and all sorts of "interesting solutions" appeared: tables not connected to anything, living a life of their own; fields holding JSON strings; some kind of cache (very similar to the contents of the *.tmp files from WordPress's SuperCache) coming from somewhere... By this point, table and field names are in transliterated Russian.
  3. After that, apparently, 1C programmers took over, because field and table names started appearing in Russian, and the database acquired some "normal form" of its own, in which garbled characters (mojibake) sit in the fields and are then decoded by a script into readable Russian text...

How many programs/scripts/clients use this database is a hard question to answer... At least 6, but the only reliable way to find out is to shut the server down at 11 a.m. and count the angry calls over the next 30 minutes... I hope everyone understands that this is not an option.
To say that the whole thing is now terribly slow is to say nothing. It runs on a Dell PowerEdge; I won't name the model, but the motherboard has two sockets with 2.4 GHz Intel Xeon CPUs and 48 GB of RAM. In general, I think the resources are sufficient.
The task set before me is to make it work faster. The head-on solution, buying another server and setting up replication between the two, is something the company cannot afford. So this monster needs to be optimized somehow...
Since I am seeing this for the first time, I cannot even imagine from which side to approach analyzing this work of previous generations.
So I am asking you: please tell me where to start, what phrases to google, and in what direction to dig.
P.S.: 500 tables have prefixes, apparently from the first generation of developers. I have already moved that part to a separate database.
P.S. 2: There are no views, triggers, or stored procedures in the database.
P.S. 3: The DBMS is MySQL, if it matters. There is no hard dependency on it; it can be changed if necessary.


1 answer
Saboteur, 2015-11-10
@saboteur_kiev

300 GB is not very small, but not at all a large database either. You need to gather read/write statistics and look at the heaviest queries (MySQL has built-in functionality for this; see habrahabr.ru/post/31072/).
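A minimal sketch of getting those query statistics, assuming MySQL 5.1+ where the slow query log can be toggled at runtime; the 1-second threshold is an assumption, tune it to your workload:

```sql
-- Enable the slow query log without a server restart.
SET GLOBAL slow_query_log = 'ON';
-- Log any query running longer than 1 second (assumed threshold).
SET GLOBAL long_query_time = 1;
-- Also catch full scans that finish quickly but hammer the disk.
SET GLOBAL log_queries_not_using_indexes = 'ON';
-- Find out where the log file is being written.
SHOW VARIABLES LIKE 'slow_query_log_file';
```

The resulting log can then be summarized with mysqldumpslow (ships with MySQL) or Percona's pt-query-digest to rank queries by total time consumed.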
Find out what exactly you are short of: CPU, RAM, disk writes, or disk reads?
Take it from there. Maybe a couple of SSDs in a RAID will be enough?
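One quick way to judge whether RAM is the bottleneck before buying hardware is to compare InnoDB's logical reads against its physical reads; this is a diagnostic sketch, assuming the tables are InnoDB:

```sql
-- How much memory the buffer pool currently gets.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- Pages that had to be fetched from disk (cache misses).
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
-- Total logical read requests (hits + misses).
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
```

If the ratio of `Innodb_buffer_pool_reads` to `Innodb_buffer_pool_read_requests` is more than roughly 1%, the working set does not fit in memory; with 48 GB of RAM on the box, raising `innodb_buffer_pool_size` is often the cheapest first win.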
