MongoDB
nioterzor, 2018-04-12 08:33:33

Performance degradation of MongoDB?

There are several tables in MySQL totaling about a billion records.
Because of poor storage efficiency (roughly 200 GB of actual data occupies about 500 GB on disk), it was decided to move them to another database.
We started with MongoDB; at first glance it takes up about half the disk space.
Specificity: many insertions and selections, very few deletions.
I wrote a small benchmark that measures insertion speed. It turned out that for the first tens of thousands of records Mongo handles inserts very quickly (100-200 per second; the hardware is a laptop with an i5 and 6 GB of RAM), but past a certain threshold inserts slow down dramatically (5 records per second, where MySQL on the same hardware gives about a hundred).
Data format: each row consists of several integer fields, three timestamps (created_at, updated_at, deleted_at), and a couple of varchars (in MySQL terms).
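A benchmark like the one described could look roughly like this (a minimal sketch; the checkpointing harness and the row shape are my assumptions, and the `store.append` sink stands in for a real insert call such as pymongo's `collection.insert_one`, so a per-interval slowdown would show up in the returned rates):

```python
import time

def benchmark_inserts(insert_fn, rows, report_every=10000):
    """Call insert_fn on each row, recording (total_rows, rows_per_second)
    at every checkpoint so a slowdown over time becomes visible."""
    checkpoints = []
    last = time.perf_counter()
    for i, row in enumerate(rows, 1):
        insert_fn(row)
        if i % report_every == 0:
            now = time.perf_counter()
            checkpoints.append((i, report_every / (now - last)))
            last = now
    return checkpoints

# Demo with an in-memory sink; against MongoDB you would pass
# db.mytable.insert_one (pymongo) instead of store.append.
store = []
rows = ({"n": i, "created_at": i, "name": "row%d" % i} for i in range(30000))
stats = benchmark_inserts(store.append, rows)
```

A steadily falling rate across checkpoints would confirm the degradation rather than a one-off stall.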
The application using the database is written in Laravel, so the move will be "cheap" (the API for interacting with the database does not change).
I don't have much experience with Mongo. The only thing I tried was turning off the journal, which gave about a 5 percent improvement, no more.
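For reference, turning off the journal (as tried above) is done in mongod.conf; a sketch for the MongoDB versions current at the time (note that running without a journal risks losing recent writes on a crash, and newer MongoDB releases no longer allow disabling it):

```yaml
# mongod.conf fragment: disable write-ahead journaling
storage:
  journal:
    enabled: false
```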
What am I doing wrong? Maybe I just don't know how to cook Mongo?
P.S. Alternatively, suggest a database suited to this kind of data (not in-memory, since at any given moment at least 1% of the data is needed).


2 answers
awesomer, 2018-04-12
@awesomer

Aerospike.
It is in-memory and persistent, but it does not keep the entire database in memory, only the indexes.
