Large table optimization in PostgreSQL?
Hello.
There is a table in PostgreSQL that receives a constant stream of incoming data.
The table has grown to about 70 GB, and the DBMS now handles queries on it very slowly.
Is it possible to configure PostgreSQL so that a new table is created every month and new data goes there, while on a query the DBMS knows where the old data is and where the new data is, and selects from the right table, or even from both at once?
I read about "partitioning", but as I understand it, that option doesn't suit me: I can't split the table by columns, since there are only two of them, "itemid" and "clock".
Please point me in the right direction.
Partitioning is exactly what you need. You misunderstood it: it is done not by columns, but by column values. That is, if you store the record's creation time in clock, you can split the huge table into partitions where each one holds only the records whose clock values fall within, say, a single day.
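A minimal sketch of what this looks like with declarative partitioning (available in PostgreSQL 10 and later; older versions used table inheritance with CHECK constraints instead). The table and partition names here are illustrative, not taken from the question, and clock is assumed to be a Unix timestamp stored as an integer, which is common for this kind of schema:

```sql
-- Parent table partitioned by range on clock.
-- itemid/clock match the columns named in the question;
-- the bigint types are an assumption.
CREATE TABLE history (
    itemid bigint NOT NULL,
    clock  bigint NOT NULL
) PARTITION BY RANGE (clock);

-- One partition per month (boundaries are Unix timestamps for
-- 2024-01-01, 2024-02-01, 2024-03-01 UTC).
CREATE TABLE history_2024_01 PARTITION OF history
    FOR VALUES FROM (1704067200) TO (1706745600);
CREATE TABLE history_2024_02 PARTITION OF history
    FOR VALUES FROM (1706745600) TO (1709251200);

-- A query with a range condition on clock is routed only to the
-- partitions that can contain matching rows (partition pruning):
-- SELECT * FROM history WHERE clock >= 1706745600 AND clock < 1709251200;
```

Inserts go into the parent table and PostgreSQL places each row into the correct partition automatically; new monthly partitions do have to be created ahead of time, either by hand or by a scheduled job.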
Maybe it makes sense to look at a horizontally scalable DBMS like HBase?
As correctly pointed out above, partitioning is essentially splitting one table into several, for example on a monthly basis.