Database design
denis54, 2018-04-11 08:03:38

What database to use for 93 million rows (products)?

There are 93 million rows (products) with 32 columns.
Which database should I use?
What else can be used in the architecture to read, write and rewrite data quickly?
Any advice is welcome.
Thanks in advance...


7 answers
awesomer, 2018-04-11
@awesomer

93 million rows is, by itself, a trivial load for a modern DBMS on modern hardware.
Which DBMS to choose depends on what exactly you are going to do with this data - and that is not mentioned in the question.
For example, if your goal is fast product search and your 30 columns are filter attributes, then a full-text search engine is a perfect fit (don't let the name confuse you - such engines also handle faceted search very well).
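To illustrate the faceted-search case (the answer does not name a specific engine, so Elasticsearch, the 8.x Python client and all field names here are assumptions), a minimal sketch might look like this:

```python
# A hypothetical faceted product search: full-text match on the name plus
# facet counts over two filter columns ("brand" and "color" are made-up names).
from elasticsearch import Elasticsearch  # elasticsearch-py 8.x

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="products",
    query={"match": {"name": "wireless headphones"}},
    aggs={
        "by_brand": {"terms": {"field": "brand", "size": 20}},
        "by_color": {"terms": {"field": "color", "size": 20}},
    },
    size=10,
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"]["name"])
for bucket in resp["aggregations"]["by_brand"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```

The aggregations return the counts for every filter value in the same round trip as the search itself, which is exactly what a faceted catalog page needs.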
If the task is different, then another DBMS can be an ideal choice.
Need details.
I suspect you saw those 90 million rows, decided that some special solution must be needed, and didn't even give the details - but in fact there is nothing special about 90 million rows. The details of the task are what matter.
As for fast rewriting - did you mean rewriting all 90 million rows entirely, not partially? That really would be a problem; few DBMSs can sustain changes at that rate.
And for the third time: the fastest access to data is when the data sits in RAM. One of the most mature tools that combines in-memory storage with DBMS functionality is Tarantool. There is nothing faster than an in-memory DB like Tarantool.
But you will need the appropriate amount of RAM.
If there is not enough RAM, look at Aerospike. It is an "almost in-memory DB": the data volume can be huge with only modest demands on RAM, because RAM is needed only to hold the indexes, not the data itself.
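Purely as an illustration of the in-memory route (the space name, its fields and the use of the tarantool Python connector are all assumptions, and the space must already exist on the server):

```python
# A hypothetical hot path for product data kept entirely in RAM with Tarantool.
import tarantool

conn = tarantool.connect("localhost", 3301)

# Write (or overwrite) a product tuple: (id, name, price_in_cents).
conn.replace("products", (42, "wireless headphones", 599000))

# Read it back by primary key - served from memory, no disk involved.
result = conn.select("products", 42)
print(result[0])
```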
In short, I'm tired of playing fortune-teller.
You haven't stated the problem, so it is impossible to give you anything specific.

res2001, 2018-04-11
@res2001

Among the free options - PostgreSQL: optimize the indexes, the DBMS storage settings and the disk subsystem, and add memory to the server if necessary.
In general, the question is too abstract.
If the existing setup does not satisfy you, you need to find out what exactly is causing it - perhaps one specific operation (or several) is bogging down the server; find those operations and deal with them.
If you simply swap the DBMS and leave the application as it is, you will most likely hit the same problems on the new DBMS - perhaps not immediately, but after a while.
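As an illustration of the "find what exactly is slow" step (PostgreSQL via psycopg2; the table and column names are assumptions):

```python
# Inspect the real plan and timing of a suspect query, then add an index
# if the plan shows a sequential scan over the whole 93M-row table.
import psycopg2

conn = psycopg2.connect("dbname=shop user=shop")
cur = conn.cursor()

# EXPLAIN ANALYZE actually executes the query and reports where time is spent.
cur.execute("EXPLAIN ANALYZE SELECT * FROM products WHERE category_id = %s", (17,))
for (line,) in cur.fetchall():
    print(line)

# Hypothetical fix: an index on the filtered column.
cur.execute("CREATE INDEX IF NOT EXISTS idx_products_category ON products (category_id)")
conn.commit()
```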

Andrey Skorzhinsky, 2018-04-11
@AndyKorg

The requirements are far too vague. 93 million rows in one table? Columns 20 bytes wide? A single table in the database?
In general, hire an architect, so that later you don't suffer from poor performance, sudden locks and the other delights of architectural mistakes.

vanyamba-electronics, 2018-04-11
@vanyamba-electronics

In my opinion, it is fairly obvious that no matter which database you take, it makes no sense to write all these products into one table - every operation on such a table will take a long time.
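One common way to avoid a single monolithic table is partitioning. A minimal sketch, assuming PostgreSQL and made-up column names (the answer itself does not prescribe a DBMS or a splitting scheme):

```python
# Split the products across hash partitions so each lookup touches only a
# fraction of the 93M rows (PostgreSQL 11+ declarative partitioning).
import psycopg2

conn = psycopg2.connect("dbname=shop user=shop")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE products (
        id    bigint  NOT NULL,
        name  text,
        price numeric
    ) PARTITION BY HASH (id)
""")
for i in range(8):
    cur.execute(
        f"CREATE TABLE products_p{i} PARTITION OF products "
        f"FOR VALUES WITH (MODULUS 8, REMAINDER {i})"
    )
conn.commit()
```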

Yerlan Ibraev, 2018-04-11
@mad_nazgul

Yes, whatever you like.
You can even do without a DB at all.
For example, something like Hadoop or Kafka.
<:o)

xmoonlight, 2018-04-11
@xmoonlight

A separate table for each product type, normalized to 3NF. Any DB will do.
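For illustration, a minimal sketch of that layout - common attributes in one table, type-specific attributes in their own tables (PostgreSQL, psycopg2 and every name here are assumptions; the answer says any DB will do):

```python
# Hypothetical per-type schema: shared columns once, one extra table per product type.
import psycopg2

conn = psycopg2.connect("dbname=shop user=shop")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE product (                 -- attributes common to every type
        id    bigint  PRIMARY KEY,
        name  text    NOT NULL,
        price numeric NOT NULL
    );
    CREATE TABLE product_phone (            -- attributes of one product type
        product_id  bigint PRIMARY KEY REFERENCES product(id),
        screen_size numeric,
        ram_gb      integer
    );
    CREATE TABLE product_shoe (             -- another product type
        product_id bigint PRIMARY KEY REFERENCES product(id),
        eu_size    integer,
        material   text
    );
""")
conn.commit()
```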

asd111, 2018-04-11
@asd111

If the number of columns is constant and the table is denormalized, use PostgreSQL.
If the number of columns varies and the table is denormalized, use MongoDB.
Instead of Mongo you can use Postgres jsonb, although its query syntax is rather peculiar; Postgres jsonb is about as fast as Mongo.
If the table is normalized, it will be slow on weak hardware.
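To show the difference in query syntax mentioned above, a small sketch with made-up collection/table and field names (pymongo and psycopg2 assumed):

```python
# The same "product whose attribute set varies" lookup in MongoDB and in
# PostgreSQL jsonb; the field names here are hypothetical.
import psycopg2
from pymongo import MongoClient

# MongoDB: schemaless documents, dotted-path query syntax.
mongo = MongoClient("mongodb://localhost:27017")
mongo.shop.products.insert_one({"name": "headphones", "attrs": {"color": "black"}})
found = mongo.shop.products.find_one({"attrs.color": "black"})
print(found)

# PostgreSQL jsonb: the same idea via the @> containment operator.
pg = psycopg2.connect("dbname=shop user=shop")
cur = pg.cursor()
cur.execute(
    "SELECT name FROM products WHERE attrs @> %s::jsonb",
    ('{"color": "black"}',),
)
print(cur.fetchall())
```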
