MySQL
Jamaludin Osmanov, 2018-04-11 20:14:39

How to optimize a database or queries for a large amount of data across multiple fields?

Hello! I've run into the following problem. I have a database that stores three interconnected entities: deals, contacts, and companies. Contacts can have multiple deals, deals can have multiple contacts, and so on.
To link these entities I use a separate table (links). In addition, each entity has its own fields and a unique id.
Deals have a source, a tag, creation/modification dates, and so on. These fields are not unique and their values may repeat. The situation is the same for the fields of contacts and companies.
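A minimal sketch of the layout described above (all table and column names here are illustrative assumptions, not the actual DDL):

```sql
-- Illustrative schema sketch; names and types are assumptions.
CREATE TABLE deals (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    source     VARCHAR(64),
    tag        VARCHAR(64),
    created_at INT UNSIGNED,  -- UNIX timestamp, as described in the question
    updated_at INT UNSIGNED
);

CREATE TABLE contacts (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    city VARCHAR(64),
    age  TINYINT UNSIGNED
);

-- One generic table links any two entities to each other.
CREATE TABLE links (
    from_type ENUM('deal','contact','company') NOT NULL,
    from_id   INT UNSIGNED NOT NULL,
    to_type   ENUM('deal','contact','company') NOT NULL,
    to_id     INT UNSIGNED NOT NULL,
    PRIMARY KEY (from_type, from_id, to_type, to_id)
);
```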
The database is used to compile reports on deals; a report includes the linked contacts/companies and, accordingly, the values of their fields.
A report on deals created from 2017-01-01 to 2018-01-01 (dates are stored as timestamps) poses no particular problem. But sometimes I have to build a report on deals whose linked contacts have city = Kazan and age > 25. That is where the query execution time becomes a problem.
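In SQL terms, the slow report presumably boils down to something like the following (a sketch using the illustrative names above):

```sql
-- Deals created in 2017 whose linked contacts are from Kazan and older than 25.
SELECT DISTINCT d.*
FROM deals d
JOIN links l
  ON l.from_type = 'deal'
 AND l.from_id   = d.id
 AND l.to_type   = 'contact'
JOIN contacts c
  ON c.id = l.to_id
WHERE d.created_at BETWEEN UNIX_TIMESTAMP('2017-01-01')
                       AND UNIX_TIMESTAMP('2018-01-01')
  AND c.city = 'Kazan'
  AND c.age  > 25;
```

Without suitable indexes, every such filter forces large scans across the three joined tables.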
Each entity has at least 10 fields that reports can filter on.
I was tempted to split each table into several tables of 100,000 records each (contacts_1, contacts_2, ...), but I realized that would only make the situation worse.
I also found table partitioning on the net, but concluded it doesn't suit me either. Please advise how I can optimize the database architecture or the queries.


1 answer
Artem Spiridonov, 2018-04-11
@customtema

Contacts can have multiple deals, deals can have multiple contacts, and so on.

Make a clear hierarchy.
To speed up the selection you need an index (it can even be a hand-rolled one). For that to work, the data must be sufficiently denormalized.
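One way to read that advice, as a minimal sketch built on the illustrative schema from the question: copy the frequently filtered contact attributes into a flat reporting table alongside the deal date, and put a composite index over them.

```sql
-- Denormalized reporting table: one row per deal-contact pair,
-- with the filterable contact attributes copied in.
CREATE TABLE deal_contact_report (
    deal_id         INT UNSIGNED NOT NULL,
    contact_id      INT UNSIGNED NOT NULL,
    deal_created_at INT UNSIGNED NOT NULL,
    contact_city    VARCHAR(64),
    contact_age     TINYINT UNSIGNED,
    PRIMARY KEY (deal_id, contact_id),
    KEY idx_city_age_date (contact_city, contact_age, deal_created_at)
);

-- The report then hits a single table and a single composite index:
SELECT DISTINCT deal_id
FROM deal_contact_report
WHERE contact_city = 'Kazan'
  AND contact_age > 25
  AND deal_created_at BETWEEN UNIX_TIMESTAMP('2017-01-01')
                          AND UNIX_TIMESTAMP('2018-01-01');
```

The price of this denormalization is that the flat table has to be kept in sync whenever deals, contacts, or links change (via triggers or application code).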
