MySQL
Max, 2015-11-24 12:34:18

How should the database be organized for a future highload project?

Hi all. Please advise, anyone who has dealt with highload projects. Take as an example a typical CRM system in which companies create deals. As a rule, each company gets its own subdomain within which it operates. The question: how do you properly organize the database for such a setup? I see 2 options:
1. Each company gets its own database. What this gives:
+ Companies are guaranteed not to see each other's data, even if there is a bug in the access-rights check in the code
+ In a few years each database will hold far less data than a single shared database for everyone, so queries will be faster and performance higher
- As the project evolves, schema changes will have to be applied to absolutely every database, and with 10k or more of them this is very problematic
2. All companies work in one shared database. The pluses and minuses are the same as in the first option, only reversed.
If I go with the second option and in a few years the table holds more than 20 million rows, what then? How is this handled in practice? Do people create a duplicate table and write new data there? And what happens when old transactions are needed?
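For the second option, the usual multi-tenant pattern is a `company_id` column on every tenant-owned table plus a composite index, with every query scoped to one tenant. A minimal sketch using sqlite3 in place of MySQL; the schema and table names are hypothetical:

```python
import sqlite3

# Single shared database: every tenant-owned table carries a company_id
# column, and every query filters on it (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, subdomain TEXT UNIQUE);
    CREATE TABLE deals (
        id INTEGER PRIMARY KEY,
        company_id INTEGER NOT NULL REFERENCES companies(id),
        amount REAL
    );
    -- The composite index keeps per-tenant lookups fast even when the
    -- table grows to tens of millions of rows.
    CREATE INDEX idx_deals_company ON deals(company_id, id);
""")
conn.execute("INSERT INTO companies VALUES (1, 'acme'), (2, 'globex')")
conn.executemany("INSERT INTO deals (company_id, amount) VALUES (?, ?)",
                 [(1, 100.0), (1, 250.0), (2, 75.0)])

def deals_for(company_id):
    # Always scope by tenant; this is exactly the access-rights check the
    # asker worries about getting wrong in application code.
    return conn.execute(
        "SELECT id, amount FROM deals WHERE company_id = ?", (company_id,)
    ).fetchall()

print(deals_for(1))  # two rows, both belonging to company 1
```

The point of the pattern is that tenant isolation lives in one place (the mandatory `company_id` filter) instead of being re-implemented per query.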


5 answers
OnYourLips, 2015-11-24
@matios

tl;dr: one database.
Splitting into separate databases, with sharding at the application level, is only needed at truly extreme scale.
And even then you would shard by something other than company.

As the project evolves, schema changes will have to be applied to absolutely every database, and with 10k or more of them this is very problematic
No more problematic than one big database. But having that many databases is a problem in itself.
If I go with the second option and in a few years the table holds more than 20 million rows, what then?
That is a very small number of records. Nothing to worry about at that scale.

Alexey Lebedev, 2015-11-24
@swanrnd

The first option, of course. The databases do not overlap; just write scripts to roll out schema changes. It will be much cheaper than sharding.
In general, what does highload mean to you? 20 million rows is nothing. I remember load-testing on 1 billion rows.
You need the following data:
1) row size
2) whether reads or writes dominate
3) acceptable response time
4) number of concurrent requests
5) types of queries
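The "write scripts to make changes" advice from this answer can be sketched as a tiny migration runner that applies the same DDL to every per-company database and collects failures instead of aborting. The file paths and DDL below are hypothetical:

```python
import sqlite3

# One database per company (option 1): every schema change must be
# applied to all of them. Hypothetical migration statement.
MIGRATION = "ALTER TABLE deals ADD COLUMN closed_at TEXT"

def migrate_all(db_paths):
    """Apply MIGRATION to every database; return a list of failures."""
    failed = []
    for path in db_paths:
        conn = sqlite3.connect(path)
        try:
            conn.execute(MIGRATION)
            conn.commit()
        except sqlite3.OperationalError as exc:
            # Record and continue, so one broken tenant database does not
            # halt the rollout across the remaining thousands.
            failed.append((path, str(exc)))
        finally:
            conn.close()
    return failed
```

A real rollout over 10k databases would also want batching, logging, and a schema-version table per database, but the shape is the same: loop, apply, record failures.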

Alexey, 2015-11-24
@justru

One database + a good programmer.
It depends less on the number of records and more on the skill of the developer. In our two-year-old project the largest table holds a little over 20 million records, and it works very fast, with at least 40k rows added / deleted / updated per second.

sim3x, 2015-11-24
@sim3x

- As the project evolves, schema changes will have to be applied to absolutely every database, and with 10k or more of them this is very problematic
No, that is what migrations and automated testing are for.
Use a combined option: small projects share one database, while large projects that pay more money go on separate nodes.
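The combined option above can be sketched as a connection router: a lookup table maps a few large, paying tenants to dedicated nodes, and everyone else falls back to the shared database. All names and DSNs below are made up for illustration:

```python
# Hypothetical routing table: which subdomain lives on which node.
SHARED_DSN = "mysql://db-shared/crm"

# Large, paying tenants that earned a dedicated node.
DEDICATED = {
    "bigcorp": "mysql://db-node-7/bigcorp",
    "megacrm": "mysql://db-node-9/megacrm",
}

def dsn_for(subdomain: str) -> str:
    # Small tenants share one database; big ones get their own node.
    return DEDICATED.get(subdomain, SHARED_DSN)
```

Because the routing decision is isolated in one function, a tenant can be promoted from the shared database to its own node by copying its rows and adding one entry to the map.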

Robot, 2015-11-24
@iam_not_a_robot

You need to brush up on proper database design. Even within a single database, companies will not see each other's data.
Delete old data from the database, or move it to an archive table or an archive database.
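The archive-table idea from this answer can be sketched as a single transaction that copies old rows into an identically structured archive table and then deletes them from the hot table, so old transactions remain queryable. Again sqlite3 stands in for MySQL and the schema is hypothetical:

```python
import sqlite3

# Hot table and archive table with identical structure (hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE deals (id INTEGER PRIMARY KEY, company_id INTEGER,
                        created TEXT);
    CREATE TABLE deals_archive (id INTEGER PRIMARY KEY, company_id INTEGER,
                                created TEXT);
""")
conn.executemany("INSERT INTO deals VALUES (?, ?, ?)",
                 [(1, 1, "2013-05-01"), (2, 1, "2015-10-01"),
                  (3, 2, "2012-01-15")])

def archive_before(cutoff):
    # One transaction: either both statements apply or neither, so a
    # crash cannot lose rows between the copy and the delete.
    with conn:
        conn.execute("INSERT INTO deals_archive "
                     "SELECT * FROM deals WHERE created < ?", (cutoff,))
        conn.execute("DELETE FROM deals WHERE created < ?", (cutoff,))

archive_before("2015-01-01")
```

Old transactions are then served from `deals_archive` (or a UNION of both tables) while the hot table stays small and fast.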
