What database to use?
Communication with the database will go through the site's API. The database must withstand heavy load: roughly 5-10 billion rows and about 150,000 requests per second.
I was advised to use Google's BigTable, but the only information about it is on Google's own site, so I would like to hear from people who have used it in practice. Another option is Firebase Firestore, but its limit on document size does not suit me.
P.S. The data is plain strings and integers.
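For a sense of what "plain strings and integers" look like in Cloud Bigtable's data model, here is a minimal sketch using the official Python client (google-cloud-bigtable). The project, instance, table, and column-family names are invented for the example, and the table is assumed to already exist; values are stored as raw bytes, with integers packed as 64-bit big-endian.

```python
from google.cloud import bigtable

# Hypothetical project/instance/table names; the table and the
# "stats" column family are assumed to already exist.
client = bigtable.Client(project="my-project")
instance = client.instance("my-instance")
table = instance.table("my-table")

# Write one row: a string column and an integer column.
row = table.direct_row(b"user#12345")
row.set_cell("stats", b"name", b"alice")
row.set_cell("stats", b"score", 42)  # ints are packed as 64-bit big-endian
row.commit()

# Read it back; cell values come back as raw bytes.
data = table.read_row(b"user#12345")
name = data.cells["stats"][b"name"][0].value.decode("utf-8")
score = int.from_bytes(data.cells["stats"][b"score"][0].value, "big", signed=True)
print(name, score)
```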
Which database to use depends on how you are going to use it.
Is the workload predominantly reads/selects, or updates/inserts?
Selection/reading: by one field or by several?
Do the records share common properties (domain, link, image size, etc.) that would let you group them or build trees?
Where (and how) did you arrive at 150,000 requests per second?!
Maybe it is simpler to keep everything in RAM and/or put haproxy in front for horizontal sharding (a routing sketch follows below)?
So far, the task is completely abstract...
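To make the sharding suggestion above concrete: a minimal sketch of hash-based shard routing, assuming the API layer picks one of several database instances per record key. The shard addresses and the key format are invented for the example; haproxy (or any other proxy) would then only balance connections within each shard.

```python
import hashlib

# Hypothetical list of shard DSNs; in practice these would be
# separate database instances behind the site's API.
SHARDS = [
    "mysql://db-shard-0:3306/app",
    "mysql://db-shard-1:3306/app",
    "mysql://db-shard-2:3306/app",
    "mysql://db-shard-3:3306/app",
]

def shard_for(key: str) -> str:
    """Pick a shard deterministically from the record key."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]

print(shard_for("user#12345"))  # always routes this key to the same shard
```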
Based on the information given so far:
A MySQL database on a fast SSD will be enough (keep the per-table limits in mind).
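If the MySQL route is taken, a table with billions of short rows is usually hash-partitioned so that no single index or partition grows unbounded. A minimal sketch using mysql-connector-python, with invented table/column names and placeholder connection settings:

```python
import mysql.connector

# Connection settings are placeholders.
conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="secret", database="app"
)
cur = conn.cursor()

# Plain strings and integers, hash-partitioned by user_id.
# The partitioning column must be part of the primary key.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id       BIGINT UNSIGNED NOT NULL,
        user_id  BIGINT UNSIGNED NOT NULL,
        payload  VARCHAR(255)    NOT NULL,
        PRIMARY KEY (id, user_id)
    )
    PARTITION BY HASH(user_id) PARTITIONS 64
""")
conn.commit()
cur.close()
conn.close()
```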