PostgreSQL
Nadir N, 2018-12-04 11:15:50

How to implement migration of large tables without locks?

There is a relatively large table with 5+ million rows.
Task: change the table's structure (adding a column, for example) while avoiding table locks.
The table constantly receives both read and write queries.
Are there any techniques or tools for performing such migrations correctly, with minimal downtime? Of course, no data may be lost.
Temporary tables, scheduled downtime? What are the options?


2 answer(s)
Melkij, 2018-12-04
@nadirku

May I refer you to my longer write-up on SO? https://ru.stackoverflow.com/q/721985/203622
However, I'll add a few words:
It matters which default you use. If the default is NULL, just add the column with a statement_timeout of about 1 second. ALTER TABLE will still take a lock on the table, but adding a column with a NULL default is only a quick update of the system catalog.
With any other default: on PostgreSQL 11+ you can likewise just add it with a timeout; on older versions there are a few adventures.
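As a sketch of the NULL-default case described above (the table name `users` and column name `last_seen` are made-up examples):

```sql
-- Keep the lock attempt short: if the ALTER cannot acquire its lock
-- within 1 second, it aborts instead of queueing behind long-running
-- queries and blocking everything that arrives after it.
SET statement_timeout = '1s';

-- NULL default: only the system catalog is updated, no table rewrite.
ALTER TABLE users ADD COLUMN last_seen timestamptz;

RESET statement_timeout;
```

If the ALTER times out, nothing is left half-done; simply retry it at a quieter moment.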
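For pre-11 versions, one common workaround (a sketch of the general technique, not taken from the answer itself; table, column, and primary-key names are hypothetical) is to add the column without a default and backfill in batches:

```sql
-- 1. Add the column with no default: catalog-only, fast on any version.
ALTER TABLE users ADD COLUMN is_active boolean;

-- 2. Set the default in a separate statement: it applies to new rows
--    only, so no table rewrite happens.
ALTER TABLE users ALTER COLUMN is_active SET DEFAULT true;

-- 3. Backfill existing rows in small batches (assuming a primary key
--    "id") to keep each transaction and its row locks short.
--    Repeat until the UPDATE reports 0 rows affected.
UPDATE users SET is_active = true
WHERE id IN (SELECT id FROM users
             WHERE is_active IS NULL LIMIT 10000);
```

Batching keeps concurrent reads and writes flowing between iterations, at the cost of the migration taking longer overall.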

longclaps, 2018-12-04
@longclaps

Adding a new column is actually quite cheap in Postgres.
