atis, 2016-06-16 11:05:59
Database

Which database should I choose for a project with more than 3 million inserts/updates per hour?

Hello.
There is a project in which more than 3 million rows are regularly updated or added per hour. The core of the problem is that the data needs to be updated as quickly as possible.
MySQL/PostgreSQL seem to work fine, but once 5-10 indexes are added, insert speed drops drastically. At first it is about 10k rows per minute when inserting in batches (10k per batch), then it drops to zero... It is not clear how to update the data at all, since all rows are unique.
MongoDB is bolted on now. Insert speed is ~1k per second, and updates are fine too.
What bothers me about Mongo is that it is schemaless.
What do the experts advise?


4 answer(s)
Artemy, 2016-06-23
@atis2345

At such volumes this is not so much a database question as a question of processing architecture, decomposed across storage systems.

chupasaurus, 2016-06-16
@chupasaurus

Cassandra. A single node on a 7200 RPM HDD (a dedicated server at Hetzner) easily handled 300 records/s with an average latency of 3 ms (record sizes starting at 1 MB :) ).

terrier, 2016-06-16
@terrier

Well, there are a few points here:
- By itself, a rate of 1k inserts per second is not prohibitive for any storage engine on normal hardware. Postgres, for example, handles it fine.
- Question: what do you do with this data? Judging by the ten indexes, you are actively reading from the same table at the same time, and that is probably not entirely reasonable.
If the inserts come in bursts, it is easier to first insert the data into a table without indexes (prepared statements, X rows per statement, Y statements per transaction, where X and Y need to be tuned for the hardware), and then build the indexes. If possible, use "COPY ... BINARY" instead of INSERTs.
You can also bulk-load into an UNLOGGED table and then either ALTER it to LOGGED, or quietly copy the data into the normal tables you read from.
>> Then it drops to zero
A checkpoint most likely kicked in; checkpoint behavior can be tuned in the settings.
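
The batching scheme terrier describes (X rows per statement, Y statements per transaction) can be sketched as follows. This sketch only builds the SQL text so it runs without a database; the table and column names (events, id, val) are illustrative, and real code would use a driver's parameter placeholders instead of inlining values:

```python
# Sketch: group rows into multi-row INSERTs (X rows per statement),
# then group statements into transactions (Y statements per transaction).
# Only the SQL text is produced; connection handling is left out.

def build_batches(rows, x_rows=3, y_stmts=2):
    """Return a list of transaction scripts covering all rows."""
    stmts = []
    for i in range(0, len(rows), x_rows):
        chunk = rows[i:i + x_rows]
        values = ", ".join(f"({a}, {b})" for a, b in chunk)
        stmts.append(f"INSERT INTO events (id, val) VALUES {values};")
    txns = []
    for i in range(0, len(stmts), y_stmts):
        body = "\n".join(stmts[i:i + y_stmts])
        txns.append(f"BEGIN;\n{body}\nCOMMIT;")
    return txns

if __name__ == "__main__":
    for txn in build_batches([(i, i * 10) for i in range(7)]):
        print(txn)
```

Tuning x_rows and y_stmts trades per-statement overhead against transaction size; past a point, COPY beats any INSERT batching.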

safenoob, 2016-06-16
@safenoob

I advise writing to one database (or plain text files), and gradually pulling everything from there into another database, the one with the indexes, for reading.
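
The split safenoob suggests can be sketched in a few lines. Here both stores are plain Python objects standing in for the two databases (the names write_log, read_store, and sync are illustrative): writes go to an append-only, index-free side, and a periodic sync drains them into the indexed read side in one batch:

```python
# Sketch: fast index-free write side + periodically synced indexed read side.
write_log = []    # append-only "write" database: no index maintenance on insert
read_store = {}   # "read" database: the dict key acts like an index on id

def write(row_id, payload):
    """Cheap O(1) append; nothing else happens on the write path."""
    write_log.append((row_id, payload))

def sync():
    """Drain accumulated writes into the read store in one batch.
    Later writes for the same id overwrite earlier ones."""
    while write_log:
        row_id, payload = write_log.pop(0)
        read_store[row_id] = payload

if __name__ == "__main__":
    write(1, "a"); write(2, "b"); write(1, "a2")
    sync()
    print(read_store)
```

In practice sync() would run on a timer or after N accumulated rows, and the read database would carry all 5-10 indexes without slowing the ingest path.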
