Database
zhna, 2018-09-02 02:40:00

Which database should I choose for Node.js on Debian?

The database will store phone numbers plus a description of roughly 200 bytes per entry, with at least 1,500,000 numbers.
The database will constantly handle both reads and writes. Speed matters more for reads (data comparison) than for writes. What do you advise? Before this I used MySQL.
Rough algorithm: the user submits N numbers, and the database returns the matches. Any number that is not found is written to the database.
N can be any number from 1 upward.
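The described flow (look up a batch of N numbers, return the matches, store the rest) can be sketched as follows. The `Map` here is only a hypothetical stand-in for whichever database gets chosen; the function and field names are illustrative, not from any real API:

```javascript
// Batch lookup-or-insert: return matches, write unseen numbers.
// `store` stands in for the real database (a Map, for illustration).
function processNumbers(store, entries) {
  const matches = {};
  for (const { phone, description } of entries) {
    if (store.has(phone)) {
      matches[phone] = store.get(phone); // existing entry: return it
    } else {
      store.set(phone, description);     // new entry: write it
    }
  }
  return matches;
}

// Usage:
const store = new Map([["+15550001", "known contact"]]);
const result = processNumbers(store, [
  { phone: "+15550001", description: "ignored, already stored" },
  { phone: "+15550002", description: "new contact" },
]);
console.log(result);     // -> { '+15550001': 'known contact' }
console.log(store.size); // -> 2
```

Against a real database the loop body becomes one batched read plus one batched write, rather than per-number round trips.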
This will be a web application on Vue with SSR, backed by Express and GraphQL.
What information should I provide to help choose a database for this task?
If possible, also suggest an ORM for the database you propose.


3 answers
Alexey, 2018-09-03
@zhna

The most uncompromising options for read and write speed are Redis and similar in-memory databases (Aerospike, Tarantool). The data lives in memory, so reads and writes are as fast as it gets. These are key-value NoSQL stores, so your whole task is essentially covered: key = phone number, value = description. If you wish, you can make the schema a bit richer by learning a little about Redis data structures.
The only limitation with Redis is the amount of RAM on the server. For 1.5 million records of ~200 bytes, the raw data alone is about 300 MB; with Redis's per-key overhead, budget 700 MB or more. Memory consumption will grow proportionally with the number of records, but servers with plenty of memory are inexpensive these days. Memory can also be spread across a cluster, which looks interesting but, frankly, is not very convenient at the current stage of development. Reliability can be improved by replicating to another server that continuously writes the data to disk in AOF mode: the data becomes harder to lose, and the master is not slowed down by disk I/O. You can even enable AOF on a single server; on an SSD the extra load is almost imperceptible.
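The replica-with-AOF setup described above boils down to a few lines in the replica's redis.conf (the IP is a placeholder; on Redis versions before 5 the directive is `slaveof` instead of `replicaof`):

```conf
# redis.conf on the replica: follow the in-memory master
replicaof 192.0.2.10 6379

# persist every write to an append-only file on this replica
appendonly yes
appendfsync everysec   # fsync once per second: durability/speed trade-off
```

The master keeps `appendonly no` and serves traffic purely from memory, while the replica absorbs the disk I/O.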
If you have the time and enough development experience, look into analogues where the RAM limit matters less. Aerospike, for example, is optimized to run on SSDs, and Tarantool can reportedly store far more data than fits in RAM without much performance loss.
Amazon, Azure, and Google also offer managed key-value stores in their clouds; check their pricing, one of them may suit you.
In theory you could also use MySQL or PostgreSQL; your structure is simple enough that these DBMSs would cope. But their performance will be an order of magnitude behind Redis.

Kirill Kudryavtsev, 2018-09-02
@Deissh

MongoDB with in-memory caching will be enough: 1.5 million records of 200 bytes each take no more than 0.4 GB.
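A minimal sketch of what the Mongo side could look like, shown as plain query objects so the shapes can be inspected without a live server. The `numbers` collection and the `phone`/`description` field names are assumptions, not from the original post:

```javascript
// Query shapes for the lookup-or-insert flow against a hypothetical
// `numbers` collection holding { phone, description } documents.

// 1) One round trip to find all matches for a batch of phones:
function buildLookupFilter(phones) {
  return { phone: { $in: phones } };
}

// 2) Upserts for the phones that were not found, as a single bulkWrite;
//    $setOnInsert writes the document only when it does not exist yet.
function buildUpserts(missing) {
  return missing.map(({ phone, description }) => ({
    updateOne: {
      filter: { phone },
      update: { $setOnInsert: { phone, description } },
      upsert: true,
    },
  }));
}

// Usage with a real driver would be:
//   db.collection("numbers").find(buildLookupFilter(phones))
//   db.collection("numbers").bulkWrite(buildUpserts(missing))
const filter = buildLookupFilter(["+15550001", "+15550002"]);
const ops = buildUpserts([{ phone: "+15550002", description: "new" }]);
console.log(filter);     // -> { phone: { '$in': [ '+15550001', '+15550002' ] } }
console.log(ops.length); // -> 1
```

A unique index on `phone` would make the upserts safe under concurrent writes.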

Alexey Cheremisin, 2018-09-02
@leahch

You could bolt on Elasticsearch; it fits your task well. MongoDB is also fine and will do the job. I'd go with the first.
