What are good practices for dealing with a large number of records in a table (1M+)?
I am working on a project in which data accumulates quite intensively in one of the tables. At the moment it has 60+ columns and more than 500k rows. About 3-4k new rows are added per day, and the insert rate will only increase. The interface for working with this table has started to slow down, so I am now working on optimizing it.
So far I have identified the following approaches:
1. Indexes, added selectively for the fields that are searched on most often (see the first sketch after this list).
2. Combining several same-type columns into a single column stored as JSON (see the second sketch after this list).
3. Partitioning. The project does not work with all the records at once; most of the old records are kept mainly for history, although they may be needed at any time. Based on this, for now I plan to create a new "hot" table with an identical structure. The "hot" table will hold fresh data, say for the last month, while the "cold" source table will keep everything for all time, including the fresh records. The hot table will serve day-to-day operations, the cold one will serve on-demand searches. Later the cold table can be split further and sharded (see the third sketch after this list).
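Roughly what I have in mind for point 1 (a minimal sketch, assuming PostgreSQL with psycopg2; the table name `records` and the columns `status` and `created_at` are made-up examples, not the real schema):

```python
# Sketch only: assumes PostgreSQL + psycopg2; "records", "status" and
# "created_at" are placeholder names for illustration.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder connection string
cur = conn.cursor()

# Composite index matching the most frequent filter pattern,
# e.g. WHERE status = %s ORDER BY created_at DESC
cur.execute("""
    CREATE INDEX IF NOT EXISTS idx_records_status_created_at
        ON records (status, created_at DESC);
""")

conn.commit()
cur.close()
conn.close()
```

After creating such an index I would check with EXPLAIN (ANALYZE) that the slow queries actually use it.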
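For point 2, something like this (again assuming PostgreSQL; `attr_1`..`attr_3` and the `extra` jsonb column are hypothetical names):

```python
# Sketch only: fold several rarely-queried, same-type columns into one jsonb
# column; attr_1..attr_3 and "extra" are placeholder names.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder connection string
cur = conn.cursor()

# 1. Add the jsonb column.
cur.execute("ALTER TABLE records ADD COLUMN IF NOT EXISTS extra jsonb;")

# 2. Fold the old columns into it.
cur.execute("""
    UPDATE records
       SET extra = jsonb_build_object(
               'attr_1', attr_1,
               'attr_2', attr_2,
               'attr_3', attr_3);
""")

# 3. Drop the old columns once the migrated data is verified.
cur.execute("""
    ALTER TABLE records
        DROP COLUMN attr_1,
        DROP COLUMN attr_2,
        DROP COLUMN attr_3;
""")

conn.commit()
cur.close()
conn.close()
```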
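And for point 3, as an alternative to maintaining a separate "hot" copy by hand, a sketch of native range partitioning by date (assuming PostgreSQL 10+; the table name, columns and dates are placeholders):

```python
# Sketch only: declarative range partitioning by month in PostgreSQL 10+;
# "records_part" and its columns stand in for the real 60+ column table.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder connection string
cur = conn.cursor()

# Parent table partitioned by creation date.
cur.execute("""
    CREATE TABLE IF NOT EXISTS records_part (
        id          bigserial,
        created_at  timestamptz NOT NULL,
        payload     jsonb
        -- ... the remaining columns ...
    ) PARTITION BY RANGE (created_at);
""")

# One partition per month: the newest partition stays small and "hot",
# older ones hold the history but remain queryable at any time.
cur.execute("""
    CREATE TABLE IF NOT EXISTS records_2024_01
        PARTITION OF records_part
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
""")

conn.commit()
cur.close()
conn.close()
```

With this layout, queries over recent data touch only the newest partition, and old partitions can be detached or archived without copying rows between two tables.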
Please tell me what else can be done?