PostgreSQL
idibis, 2019-04-25 09:12:00

Large table optimization in PostgreSQL?

Hello
There is a table in PostgreSQL that receives a large stream of incoming data.
The table has grown to about 70 GB, and the DBMS now handles queries very slowly.
Is it possible to configure PostgreSQL so that a new table is created each month and new data starts going there, while on queries the DBMS knows where the old data is and where the new data is, and reads from the right table, or even from both at once?
I have read about "partitioning", but as I understand it, that option does not suit me: I cannot split the table by columns, since there are only two of them, "itemid" and "clock".
Please point me in the right direction.


5 answers
Pasechnik Kuzmich, 2019-04-25
@idibis

Partitioning is exactly what you need. You have misunderstood it: the table is split not by columns but by column values. If you store the record's creation time in clock, you can split the huge table into child tables, each holding only the records whose clock values fall within one day (or month).
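A minimal sketch of what this looks like, assuming PostgreSQL 10 or newer (declarative partitioning); on older versions the same thing is built with table inheritance, triggers, and constraint exclusion. The table and partition names here are illustrative, not from the question:

```sql
-- Parent table, range-partitioned on the clock column.
CREATE TABLE history (
    itemid  bigint      NOT NULL,
    clock   timestamptz NOT NULL
) PARTITION BY RANGE (clock);

-- One partition per month. Inserts are routed automatically,
-- and queries filtered on clock skip irrelevant partitions
-- (partition pruning).
CREATE TABLE history_2019_04 PARTITION OF history
    FOR VALUES FROM ('2019-04-01') TO ('2019-05-01');

CREATE TABLE history_2019_05 PARTITION OF history
    FOR VALUES FROM ('2019-05-01') TO ('2019-06-01');

-- This query touches only the history_2019_04 partition:
SELECT * FROM history
WHERE clock >= '2019-04-10' AND clock < '2019-04-20';
```

Old partitions can then be archived or dropped with a cheap `DROP TABLE` instead of a slow bulk `DELETE`.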

Max, 2019-04-25
@MaxDukov

What you describe sounds more like sharding.

TheRonCronix, 2019-04-25
@TheRonCronix

Maybe it makes sense to look at a scalable DBMS like HBase?
As was correctly noted above, partitioning is essentially splitting one table into several, for example on a monthly basis.

idShura, 2019-04-25
@idShura

"The table has grown to about 70 GB, and the DBMS now handles queries very slowly."

To speed up queries, you need to create the right indexes and optimize the queries themselves.
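As a sketch of that advice: with only two columns, a composite index matching the typical access pattern ("values for one item over a time range") usually helps the most. The index and table names below are assumptions for illustration:

```sql
-- Composite index for queries filtering on itemid plus a clock range.
-- CONCURRENTLY avoids locking out writes while the index is built,
-- which matters on a 70 GB table (it cannot run inside a transaction).
CREATE INDEX CONCURRENTLY idx_history_itemid_clock
    ON history (itemid, clock);

-- Check that the planner actually uses it:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM history
WHERE itemid = 12345
  AND clock >= now() - interval '1 day';
```

If the plan still shows a sequential scan over the whole table, the query itself (or stale statistics) is the next thing to look at.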

nrgian, 2019-04-25
@nrgian

70 GB is nothing for a modern DBMS on modern hardware.
A table with just two fields is slow?
That should not happen if you have indexes.
And I hope you haven't thought to turn vacuum off?
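A quick way to check that last point; the table name in the filter is illustrative:

```sql
-- Is autovacuum enabled at all?
SHOW autovacuum;

-- When was the table last vacuumed, and how much dead-row bloat
-- has accumulated since?
SELECT relname, last_vacuum, last_autovacuum,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'history';
```

A large `n_dead_tup` relative to `n_live_tup`, with no recent autovacuum, would explain a two-column table behaving far worse than its size suggests.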
