PostgreSQL - how to archive old records in a large table?
There is a table:
How to split the table so that hot data stays on SSD and cold data goes to HDD: first, use partitioning to split the table in two. https://habrahabr.ru/post/273933/ (as usual, pay attention to the comments, and to pg_partman).
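A minimal sketch of the split, assuming a hypothetical table events(id, created_at, payload) and declarative partitioning (PostgreSQL 11+); the linked article and pg_partman cover the older inheritance-based approach and its automation:

```sql
-- Hypothetical table, partitioned by date; '2024-01-01' is a made-up
-- hot/cold boundary.
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- "Hot" partition for recent data, "cold" partition for the archive.
CREATE TABLE events_hot PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO (MAXVALUE);

CREATE TABLE events_cold PARTITION OF events
    FOR VALUES FROM (MINVALUE) TO ('2024-01-01');
```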
Then, before migrating the data (or right away, when creating the partitions), move the archive partitions to another tablespace on the HDD: www.postgresql.org/docs/current/static/sql-createt... stackoverflow.com/a/11228536
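A sketch of the tablespace step, with made-up paths and names; the directory must already exist, be empty, and be owned by the postgres OS user:

```sql
CREATE TABLESPACE hdd_archive LOCATION '/mnt/hdd/pg_tblspc';

-- Either move the existing cold partition (copies its files and holds an
-- ACCESS EXCLUSIVE lock for the duration)...
ALTER TABLE events_cold SET TABLESPACE hdd_archive;

-- ...or create archive partitions in that tablespace right away:
-- CREATE TABLE events_cold PARTITION OF events
--     FOR VALUES FROM (MINVALUE) TO ('2024-01-01') TABLESPACE hdd_archive;
```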
Then migrate the data into the partitions.
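A sketch of the migration, assuming the old non-partitioned table was first renamed out of the way and the partitioned "events" from the sketches above took its name:

```sql
-- One-time preparation (hypothetical names):
--   ALTER TABLE events RENAME TO events_old;
--   -- then create the partitioned "events" as above

-- Copy in slices (e.g. month by month) to keep transactions and locks short.
INSERT INTO events (id, created_at, payload)
SELECT id, created_at, payload
FROM events_old
WHERE created_at >= '2023-12-01' AND created_at < '2024-01-01';

-- After everything is copied and verified:
-- DROP TABLE events_old;
```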
Actually, that might already be enough: 1-2 million rows a day times 365 days is roughly 365-730 million rows a year, which is not that much. Although the nature of the data is not specified.
Moving tables to another machine transparently for the application: FDW, foreign data wrapper. The more recent the PostgreSQL, the better; this area is being developed very actively, in particular around pushing query work down to the remote side. Whether it already plays well with partitioning, I honestly don't know.
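A sketch with postgres_fdw and made-up connection details, assuming the cold/archive data lives in a separate "archive" database on another machine:

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Hypothetical remote server holding the archive on HDD.
CREATE SERVER archive_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'archive-db.local', port '5432', dbname 'archive');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER archive_srv
    OPTIONS (user 'app', password 'secret');

-- Make the remote archive table visible locally.
CREATE FOREIGN TABLE events_archive (
    id         bigint,
    created_at timestamptz,
    payload    jsonb
)
SERVER archive_srv
OPTIONS (schema_name 'public', table_name 'events');
```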
To transparently send a query to two databases and glue the results together: a simple view with UNION ALL over the local table and the FDW table. But this is an uninteresting option: why drag the cold part of the database into queries for hot data?
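A sketch of such a view, using the hypothetical names from the sketches above:

```sql
CREATE VIEW events_all AS
SELECT id, created_at, payload FROM events            -- hot, local, on SSD
UNION ALL
SELECT id, created_at, payload FROM events_archive;   -- cold, remote, via FDW
-- Note: queries against the view will normally plan both branches,
-- which is exactly the drawback mentioned above.
```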
Additionally, you can look at postgresql-xl and greenplum. A year and a half ago the first was not quite production-ready; how it is now, I don't know. The second is used even in the banking sector, but as far as I remember it is catastrophically unsuitable for OLTP, only for OLAP workloads.