linux
ruboss, 2015-11-06 20:55:43

How do you organize roughly 10,000,000,000 (billions of) inodes in Linux, with fast access to them and fast processing (using the filesystem as a database replacement)?

I need to search and insert billions of records. I tried Elasticsearch - after 50,000,000 records inserts become painfully slow. I also thought about trying Cassandra. But then I wondered why such huge, clumsy machinery is needed for elementary operations. Yes, they are all well thought out and scale nicely, but the problem with all of them is universality, as with any mass-market product. As usual, you end up rolling your own.
There is data of the form:

Hash        Info
aDs3g9      2:1,2,4;11:1   (where 2 and 11 are keys, 1,2,4 and 1 are the fields for those keys, and the rest are separators)
3trhn       2:9,7;3:3,4

At the moment the hash is 1-6 characters long and is derived from a 32-bit int (max int32), which gives about 4,000,000,000 possible values.
Let's try to organize everything as a FS on Linux.
Task 1: quick insert, like upsert in ES
Knowing buzzwords like sharding and partitioning, the following comes to mind:
1) We take the hash aDs3g9 and create nested folders from every 2 characters, dropping the hash file into the last one, so it looks like this: aD/s3/g9/aDs3g9
A request comes in - put "2:1,2" into aDs3g9 - and here a question arises: should all possible folders be created before the inserts, or created on the fly during the inserts? Let's say the folders already exist; then we insert key 2 with data 1,2.
The next request comes in - put "2:4" into aDs3g9 - we see the file already exists in the folders, read it, merge the "4" into the right place and get "2:1,2,4".
Next - put "11:1" into aDs3g9 - and we get "2:1,2,4;11:1".
What we have: by sacrificing time on insert to merge new data into the file without duplicates, we save space and time on selection. If we don't want to sacrifice insert time, the file ends up as "2:1,2;2:4;11:1" instead (see the sketch below).
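To make the insert path concrete, here is a minimal C++17 sketch of the sharded-directory upsert described above. It assumes a fixed 6-character hash and uses the cheaper "append, no de-duplication" variant; the function names and separator handling are illustrative, not a finished design.

```cpp
// Sketch of the sharded-directory upsert: hash aDs3g9 lives at
// aD/s3/g9/aDs3g9. Assumes a fixed 6-character hash and the cheap
// "append, no de-duplication" variant described above.
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Build the path aD/s3/g9/aDs3g9 from the hash.
fs::path shard_path(const std::string& hash) {
    return fs::path(hash.substr(0, 2)) / hash.substr(2, 2) /
           hash.substr(4, 2) / hash;
}

// Append "key:fields" to the hash file, creating folders on demand.
void upsert(const std::string& hash, const std::string& record) {
    fs::path p = shard_path(hash);
    fs::create_directories(p.parent_path());            // aD/s3/g9
    bool not_empty = fs::exists(p) && fs::file_size(p) > 0;
    std::ofstream out(p, std::ios::app);
    if (not_empty) out << ';';                           // record separator
    out << record;
}

int main() {
    upsert("aDs3g9", "2:1,2");
    upsert("aDs3g9", "2:4");
    upsert("aDs3g9", "11:1");   // file now holds "2:1,2;2:4;11:1"
}
```

The merge-on-insert variant from the text would additionally read the file, splice the new fields into the matching key and rewrite it, trading insert time for space.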
Task 2: quick selection of all keys and values, even with duplicate keys
A request comes in - give me everything for aDs3g9, 3trhn (a list of up to 1000 hashes can come in) - and we return "2:1,2,4;11:1 2:9,7;3:3,4".
The question is parallelism - can Linux read files in parallel? We split the 1000 hashes across 10 threads and each thread works until it has fetched all the data from its files.
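A similarly hedged sketch of that selection step: the requested hashes are split across a fixed number of std::thread workers, and each thread fills only its own slots of the result, so no locking is needed. It assumes the same 6-character aD/s3/g9/aDs3g9 layout as the sketch above; the thread count is illustrative.

```cpp
// Sketch of Task 2: fetch many hash files in parallel.
// Assumes the aD/s3/g9/aDs3g9 layout and 6-character hashes;
// thread count and function names are illustrative.
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

namespace fs = std::filesystem;

std::string read_one(const std::string& hash) {
    fs::path p = fs::path(hash.substr(0, 2)) / hash.substr(2, 2) /
                 hash.substr(4, 2) / hash;
    std::ifstream in(p);
    std::stringstream buf;
    buf << in.rdbuf();              // empty string if the file is missing
    return buf.str();
}

// Split the hash list over `workers` threads; each thread writes only
// its own slots of `out`, so no locking is required.
std::vector<std::string> read_many(const std::vector<std::string>& hashes,
                                   unsigned workers = 10) {
    std::vector<std::string> out(hashes.size());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            for (std::size_t i = w; i < hashes.size(); i += workers)
                out[i] = read_one(hashes[i]);
        });
    for (auto& t : pool) t.join();
    return out;
}
```

Linux has no problem with many threads reading different files at once; the practical limit is the disk's random-read throughput rather than the kernel.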
Which filesystem should I choose so it can hold up to 10 billion inodes, and how much memory might that require? Clearly this could be split across servers, each storing its own hash range, but let's assume it all lives on one machine.
Distinct characters in the hash = 26 + 26 + 10 = 62 (26 lowercase, 26 uppercase and 10 digits).
Let me remind you that the hash is derived from a 32-bit int, so there are about 4 billion distinct hashes; since each full hash gets its own leaf folder (aD/s3/g9) plus its own file, the folders give roughly 4 billion inodes and the files another 4 billion.
In total, about 8,000,000,000 inodes at the moment.
What do you think of this approach? And what is the best way to implement it in C/C++ for maximum speed?


2 answer(s)
lega, 2015-11-07
@ruboss

A classic FS is not suitable for this. If your data size per "hash" is small, say up to 100 bytes, then just make one large ~400 GB file and write the data by index - the hash itself is not needed then. On a decent SSD you can write up to 1M records per second from a regular script. In that case up to 75% of the space will sit "idle". If you want to save space, then you need an index, e.g. leveldb or something similar.
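If I read the suggestion right, it amounts to something like the sketch below: one big file of fixed-size slots, addressed directly by the 32-bit id and written with pwrite. The 100-byte slot size and the file name are assumptions taken from the answer, not a tested configuration.

```cpp
// Rough illustration of the answer above: one big file of fixed-size
// slots addressed directly by the 32-bit id (no hash, no index).
// The 100-byte slot size and the file name are assumptions.
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <fcntl.h>
#include <string>
#include <unistd.h>

constexpr std::size_t RECORD_SIZE = 100;   // bytes per slot

// Write `data` (padded/truncated to RECORD_SIZE) into slot `id`.
void put(int fd, std::uint32_t id, const std::string& data) {
    char buf[RECORD_SIZE] = {0};
    std::memcpy(buf, data.data(), std::min(data.size(), RECORD_SIZE));
    pwrite(fd, buf, RECORD_SIZE, static_cast<off_t>(id) * RECORD_SIZE);
}

// Read slot `id` back as a string (empty if never written).
std::string get(int fd, std::uint32_t id) {
    char buf[RECORD_SIZE] = {0};
    pread(fd, buf, RECORD_SIZE, static_cast<off_t>(id) * RECORD_SIZE);
    return std::string(buf, strnlen(buf, RECORD_SIZE));
}

int main() {
    // ~4 billion slots * 100 bytes ≈ 400 GB of address space; on ext4/xfs
    // the file stays sparse, so unwritten slots cost no real blocks.
    int fd = open("records.dat", O_RDWR | O_CREAT, 0644);
    put(fd, 123456u, "2:1,2,4;11:1");
    std::string rec = get(fd, 123456u);
    close(fd);
    return rec == "2:1,2,4;11:1" ? 0 : 1;
}
```

Whether the 1M records/second figure holds depends on the SSD and record size; treat it as the answer's estimate rather than a guarantee.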

Sergey, 2015-11-06
@begemot_sun

You should also take into account that the hard drive stores data in minimum-size blocks, for example 4 KB or 16 KB each (I don't remember exactly).
So you will end up with significant overhead on disk.
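For what it's worth, the actual allocation unit can be queried at runtime; a tiny sketch using statvfs (the "/" mount point is just an example). On most modern Linux filesystems it is 4 KB.

```cpp
// Check the filesystem's allocation unit: every tiny hash file still
// occupies at least one such block, which is where the overhead comes from.
#include <cstdio>
#include <sys/statvfs.h>

int main() {
    struct statvfs st;
    if (statvfs("/", &st) == 0)   // "/" is just an example mount point
        std::printf("allocation block size: %lu bytes\n",
                    static_cast<unsigned long>(st.f_frsize));
    return 0;
}
```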
