How to optimize the database?
Hello. There is a database with a large number of records (hundreds of millions). The structure is something like this:

`id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`a` TINYINT UNSIGNED NOT NULL,
`b` TINYINT UNSIGNED NOT NULL,
`c` TINYINT UNSIGNED NOT NULL,
`d` TINYINT UNSIGNED NOT NULL,
`e` TINYINT UNSIGNED NOT NULL,
`f` TINYINT UNSIGNED NOT NULL,
PRIMARY KEY (`id`)

Currently MySQL (MyISAM) is used. The database needs to execute SELECTs quickly with an arbitrary set of conditions in the WHERE clause. The problem is that such a query either forces a full scan of all records, which is very slow, or requires a huge number of indexes, which take a long time to build and slow down the insertion of new records. How can this problem be solved?
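A concrete instance of the problematic query shape might look like this (the table name `stats` is a placeholder, not from the question):

```sql
-- An arbitrary combination of WHERE conditions; unless an index
-- happens to cover this particular combination of columns, MyISAM
-- falls back to a full table scan.
SELECT *
FROM `stats`
WHERE `a` = 1 AND `d` > 100 AND `f` IN (3, 7);
```

With six filterable columns, covering every possible combination with composite indexes would require dozens of indexes, which is exactly the trade-off the question describes.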
It depends on the workload: which fields are queried often, which rarely, and what values they take.
In general, the approach is to split the data into parts, in one of two ways:
1) Horizontally, into several tables by rows. For example, two tables: one holding rows with `a` from 0 to 127, the other from 128 to 255.
2) Vertically, into several tables by columns. For example, `a`,`c`,`e` and `b`,`d`,`f`, or perhaps even `a`,`c`,`d`,`f`, depending on which fields are searched most often or together.
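The horizontal split can also be done with MySQL's native partitioning (available since 5.1) instead of maintaining separate tables by hand. A sketch, with hypothetical table name `stats_part` and only two of the columns shown; note that MySQL requires the partitioning column to be part of every unique key, hence the extended primary key:

```sql
-- Horizontal split by value ranges of `a`. Queries filtering on `a`
-- are pruned to a single partition, shrinking the scanned data.
CREATE TABLE `stats_part` (
  `id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `a`  TINYINT UNSIGNED NOT NULL,
  `b`  TINYINT UNSIGNED NOT NULL,
  PRIMARY KEY (`id`, `a`)
)
PARTITION BY RANGE (`a`) (
  PARTITION p_low  VALUES LESS THAN (128),
  PARTITION p_high VALUES LESS THAN MAXVALUE
);
```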
Essentially these act like indexes, but by shrinking the amount of raw data each lookup has to scan, performance can improve.
It may also make sense to pack several of the TINYINT fields, say `a` through `d`, into a single column of type INT and put an index on it; it can turn out better than a composite index on (`a`, `b`, `c`, `d`).
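One way to do the packing is with a generated column (MySQL 5.7+; on older versions the column would have to be populated by the application or a trigger). The table name `stats` and index name are assumptions:

```sql
-- Pack a..d (one byte each) into a single INT and index it.
ALTER TABLE `stats`
  ADD COLUMN `abcd` INT UNSIGNED
    AS ((`a` << 24) | (`b` << 16) | (`c` << 8) | `d`) STORED,
  ADD INDEX `idx_abcd` (`abcd`);

-- An exact match on all four fields then becomes a single index lookup:
SELECT * FROM `stats`
WHERE `abcd` = (10 << 24) | (20 << 16) | (30 << 8) | 40;
```

The trade-off is that the packed index only helps queries that constrain a leading, contiguous prefix of the packed fields, much like a composite index, but with a smaller key.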
And finally, if you can spare the memory, even with 100 million records you can try putting the tables into RAM :) It depends on how wide the rows are.
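A minimal sketch of the in-memory variant, assuming a source table named `stats`: keep a query-only copy in the MEMORY storage engine. At six TINYINTs plus an INT per row, 100 million rows is on the order of a gigabyte of raw data, plus engine overhead.

```sql
-- Query-only copy held entirely in RAM. MEMORY tables do not
-- survive a server restart, so the copy must be rebuilt on startup.
CREATE TABLE `stats_mem` ENGINE=MEMORY
  SELECT * FROM `stats`;
```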
Store the data in a vertical table (EAV, entity-attribute-value). Then only one index is needed. Switch to InnoDB and make that index the primary key to save space.
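A sketch of the EAV layout this answer suggests; the table and column names are placeholders. InnoDB clusters rows by primary key, so a lookup by (attribute, value) is a range scan on the clustered index and no secondary index is needed:

```sql
CREATE TABLE `stats_eav` (
  `attr`  CHAR(1)          NOT NULL,  -- 'a' .. 'f'
  `value` TINYINT UNSIGNED NOT NULL,
  `id`    INTEGER UNSIGNED NOT NULL,  -- entity id from the original table
  PRIMARY KEY (`attr`, `value`, `id`)
) ENGINE=InnoDB;

-- "WHERE a = 1 AND b = 2" becomes an intersection of two index ranges:
SELECT x.`id`
FROM `stats_eav` x
JOIN `stats_eav` y USING (`id`)
WHERE x.`attr` = 'a' AND x.`value` = 1
  AND y.`attr` = 'b' AND y.`value` = 2;
```

The cost is that each additional WHERE condition adds another self-join, so this favors queries with few conditions over queries with many.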