MySQL
johnny_s, 2012-03-15 11:48:16

Storing large volumes of TEXT fields in MySQL

There is a MySQL server with a table that stores article contents.

Structure: id (int), article_id (int), content (text)
About 1.5 million records. The table is about 5 GB; the total amount of data in MySQL is 9 GB.

Recently the server has become slower. Does it make sense to move this data to MongoDB in order to reduce the amount of data in MySQL and improve its performance?
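For reference, a minimal DDL sketch of the table as described above (the table name and constraints are assumptions; only the columns and types come from the question):

    CREATE TABLE article_content (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        article_id INT NOT NULL,
        content TEXT
    );
    -- roughly 1.5 million rows, about 5 GB of data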


5 answers
Melkij, 2012-03-15
@melkij

9 GB is a small amount even for MySQL. Look for the problem elsewhere. For example, while there was little data, a missing index on some fairly frequent query may have gone unnoticed.
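A hedged illustration of that point, using the table from the question (names are assumptions): EXPLAIN shows whether a frequent query is doing a full scan, and an index fixes it.

    -- "type: ALL" with ~1.5M examined rows means the query scans the whole table
    EXPLAIN SELECT content FROM article_content WHERE article_id = 42;

    -- Adding the missing index (example only; derive it from your real slow queries)
    ALTER TABLE article_content ADD INDEX idx_article_id (article_id);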

ShouldNotSeeMe, 2012-03-15
@ShouldNotSeeMe

The initial data is not entirely clear. Besides the queries themselves, it also matters whether PRIMARY and other indexes are actually in place.
If the CPU is lightly loaded, you can reduce the disk space used by packing the text on the fly with COMPRESS / UNCOMPRESS.
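A rough sketch of how that could look, assuming the TEXT column is replaced by a BLOB column (here called content_compressed, an invented name), since COMPRESS() returns a binary string:

    -- Write: compress the text on the way in
    INSERT INTO article_content (article_id, content_compressed)
    VALUES (42, COMPRESS('...full article text...'));

    -- Read: decompress on the way out
    SELECT article_id, UNCOMPRESS(content_compressed) AS content
    FROM article_content
    WHERE article_id = 42;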

Anatoly, 2012-03-15
@taliban

It sometimes makes sense to move the text field into a separate table, if the rest of the data in this table is queried without that field.
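A possible layout for this, with all names invented for the example: the wide TEXT column lives in its own table, so queries that only need ids and metadata never touch the article bodies.

    CREATE TABLE article_meta (
        id INT NOT NULL PRIMARY KEY,
        article_id INT NOT NULL,
        KEY idx_article_id (article_id)
    );

    CREATE TABLE article_body (
        id INT NOT NULL PRIMARY KEY,  -- same id as article_meta.id
        content TEXT
    );

    -- The body is joined in only when it is actually needed
    SELECT m.article_id, b.content
    FROM article_meta m
    JOIN article_body b ON b.id = m.id
    WHERE m.article_id = 42;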

Vitaly Zheltyakov, 2012-03-15
@VitaZheltyakov

Storing article text in the database is a bad idea. It is more correct to store a link, or to access the file directly by its key number. With this approach you reduce the load on the database and improve server responsiveness, because caching can be organized at the web-server and file-system levels. For search, use Sphinx.
Feel free to downvote...
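A rough sketch of this "store a reference, not the text" idea (the table and column names are assumptions): the database keeps only a pointer, and the application serves the file from disk, where OS and web-server caching apply.

    CREATE TABLE article_files (
        article_id INT NOT NULL PRIMARY KEY,
        file_path VARCHAR(255) NOT NULL  -- e.g. '/data/articles/42.txt'
    );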

boodda, 2012-03-16
@boodda

So maybe the database has grown to the point where the indexes no longer fit into key_buffer, and MySQL has simply stopped using them?
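A quick, hedged way to check this guess (relevant for MyISAM tables; the size below is only an example, not a recommendation):

    SHOW VARIABLES LIKE 'key_buffer_size';
    SHOW GLOBAL STATUS LIKE 'Key_read%';
    -- Key_reads growing fast relative to Key_read_requests means index blocks
    -- keep missing the cache; the buffer can then be enlarged
    SET GLOBAL key_buffer_size = 512 * 1024 * 1024;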
