Where and how to store a table with 1 billion records?
At the moment there is a table in PostgreSQL with 500 million records, and it grows by about 100k records per day. The table has three columns: id (the primary key), factor_id, and parcel_id, the latter two being foreign keys to other tables. The table never needs to be scanned or aggregated; all I need is to insert and read individual records quickly. I was going to partition the table by the factor_id key, but there can be more than 10k distinct values, and I'm afraid that many partitions will just be a mess, especially since a partition whose related factor_id record is deleted would sit there forever empty. Please suggest your option. Thanks in advance.
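One way to sidestep the 10k-partitions worry is PostgreSQL's declarative hash partitioning, which fixes the number of partitions regardless of how many distinct factor_id values exist. A minimal sketch, assuming the table name parcel_factors and a modulus of 64 (both made up; the column names come from the question):

```sql
-- Hash partitioning: rows are spread over a fixed number of partitions
-- (64 here, an arbitrary choice), so deleting a factor never leaves an
-- empty partition behind.
CREATE TABLE parcel_factors (
    id        bigserial,
    factor_id bigint NOT NULL,
    parcel_id bigint NOT NULL,
    PRIMARY KEY (factor_id, id)  -- the partition key must be part of the PK
) PARTITION BY HASH (factor_id);

-- One partition shown; create the remaining 63 the same way,
-- with REMAINDER 1 .. 63.
CREATE TABLE parcel_factors_p0 PARTITION OF parcel_factors
    FOR VALUES WITH (MODULUS 64, REMAINDER 0);
```

With hash partitioning the partition count is a deliberate design choice rather than a function of the data, at the cost of losing the ability to drop all rows for one factor_id by dropping a partition.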
Decide how you will delete this data.
There is very serious doubt that you need a surrogate id here rather than a composite primary key.
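A minimal sketch of that idea, assuming a (factor_id, parcel_id) pair occurs at most once (the question doesn't say) and with factors and parcels as placeholder names for the referenced tables:

```sql
-- Assumes (factor_id, parcel_id) is unique; factors/parcels are placeholder names.
CREATE TABLE parcel_factors (
    factor_id bigint NOT NULL REFERENCES factors (id),
    parcel_id bigint NOT NULL REFERENCES parcels (id),
    PRIMARY KEY (factor_id, parcel_id)  -- composite PK instead of a surrogate id
);
```

A lookup by factor_id alone can still use the primary key index, since factor_id is its leading column, and you save 8 bytes per row plus an entire index.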
I.e., going by the insert volume you quoted (500 million rows at 100k a day is roughly 14 years), you already have about 15 years' worth of data and you're worried about what happens in another 15?
In PostgreSQL's full-fledged transactional MVCC storage, very small rows are comparatively expensive to keep, precisely because the per-row header is large.
24 bytes of header plus 3 × 8 bytes if these fields are bigints, so about 48 bytes per row: roughly 25 GB for your current half-billion rows, give or take, and double that at a billion. In short, a non-issue. Do you actually have a problem, or just itchy hands?
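Rather than estimating, the actual on-disk footprint can be checked directly; parcel_factors below is a placeholder for the real table name:

```sql
-- Table size including indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('parcel_factors'));

-- Heap only, without indexes.
SELECT pg_size_pretty(pg_relation_size('parcel_factors'));
```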
Without sarcasm: look up Oleg Bunin's talk; near the end he describes a similar case. Maybe you will find something useful for yourself.