What are the causes of the "Out of shared memory: You might need to increase max_locks_per_transaction" error?
Good day.
Today this error showed up in the logs. Before that, the server had been running without a restart for about three months; the database is not especially large, though some tables hold hundreds of thousands of rows. I increased max_locks_per_transaction, raised shared_buffers and work_mem, and restarted Postgres, and the problem went away. But is it gone for good?
Before:
shared_buffers = 128MB            # min 128kB
work_mem = 4MB                    # min 64kB
temp_buffers = 8MB                # min 800kB
max_locks_per_transaction = 64    # min 10
After:
shared_buffers = 256MB            # min 128kB
work_mem = 16MB                   # min 64kB
temp_buffers = 32MB               # min 800kB
max_locks_per_transaction = 1024  # min 10
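As a side note, the same change can be made without hand-editing postgresql.conf, via ALTER SYSTEM (a sketch; the data directory path is yours, and a restart is still required because max_locks_per_transaction and shared_buffers are only read at server start):

```
-- ALTER SYSTEM writes these settings to postgresql.auto.conf
ALTER SYSTEM SET max_locks_per_transaction = 1024;
ALTER SYSTEM SET shared_buffers = '256MB';
-- then restart the server, e.g.: pg_ctl restart -D /path/to/data
SHOW max_locks_per_transaction;  -- verify the value after the restart
```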
https://www.postgresql.org/docs/current/runtime-co...
This has nothing to do with table sizes. It is about the size of the internal lock table and the number of objects (e.g. tables) touched by transactions. Strictly speaking: the maximum number of object locks held simultaneously by all active transactions.
It also has nothing to do with shared_buffers (the lock table is not carved out of it but allocated on top: shared_buffers is strictly the page buffer, not even counting the data structures that manage that buffer, so the shared memory segment is always larger than shared_buffers), nor with work_mem or anything else, except max_connections + max_prepared_transactions. And it has nothing to do with deadlocks.
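The parenthetical about max_connections + max_prepared_transactions refers to the sizing rule in the PostgreSQL documentation: the shared lock table has room for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks in total, shared by all sessions. You can compute the current capacity directly:

```
-- Approximate number of object-lock slots in the shared lock table
SELECT current_setting('max_locks_per_transaction')::int
     * (current_setting('max_connections')::int
      + current_setting('max_prepared_transactions')::int) AS lock_table_slots;
```

With the defaults (64 locks, 100 connections, 0 prepared transactions) that is 6400 slots; a single transaction may use more than its "share" as long as the total fits.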
The default value is usually more than enough. You may be abusing partitioning.
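The partitioning remark is easy to see in practice: a relation-level lock is taken on the parent and on every scanned partition, so one query over a heavily partitioned table can consume many slots at once. A small illustrative sketch (hypothetical table and partition names):

```
CREATE TABLE events (id int, day date) PARTITION BY RANGE (day);
CREATE TABLE events_p1 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_p2 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

BEGIN;
SELECT count(*) FROM events;  -- locks the parent plus the scanned partitions
-- inspect the relation locks held by this session
SELECT count(*) FROM pg_locks
WHERE locktype = 'relation' AND pid = pg_backend_pid();
COMMIT;
```

Multiply that per-query lock count by hundreds of partitions and dozens of concurrent sessions, and the default 6400 slots can run out without any single table being large.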