PostgreSQL
Semyon Semyonov, 2018-08-11 02:15:48

At what point do you need to start doing something with the database?

I have a small home-grown site where users have already generated enough data that some tables hold 2 million records (about 90 tables in total), and all of this in six months. When I first deployed the project I didn't really expect it to grow this much, so PostgreSQL is still running with its default settings.
There don't seem to be any problems right now, everything runs without interruptions, but this is the first time I've dealt with volumes like this, so I worry that at some point the database will start to misbehave, slow down, or the like.
If you have the experience, what would you advise? I can double the server capacity, that's not a problem. What worries me more is how Postgres itself will behave. Maybe it needs some preventive maintenance? I've read that there are all sorts of vacuums and analyzers, but I never dug into them and don't know where to start. Or maybe there is no problem at all, and adding memory and a bigger disk will be enough.


1 answer
Melkij, 2018-08-11
@man_without_face

2 million records is peanuts.
People usually start doing something when the project no longer fits the description "there are no problems right now."
The main things you can do:
tune the checkpointer settings so that checkpoints run less often: dramatically less random I/O, at the cost of a longer recovery after a crash (see the configuration sketch below)
tune autovacuum: lower both scale_factor settings to around 0.05 so that autovacuum runs more often but does less work each time, and reduce autovacuum_vacuum_cost_delay depending on your disks
set shared_buffers to roughly 25% of RAM
enable pg_stat_statements and look at the top queries it reports (see the example query below)
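A minimal sketch of how those settings might be applied, assuming a machine with around 16 GB of RAM. The concrete values are illustrative, not recommendations; checkpoint_timeout and max_wal_size are the usual knobs for spacing checkpoints out:

    -- All of these can be set via ALTER SYSTEM; shared_buffers needs a restart,
    -- the rest take effect after a configuration reload.
    ALTER SYSTEM SET shared_buffers = '4GB';                  -- ~25% of RAM (assumed 16 GB)
    ALTER SYSTEM SET checkpoint_timeout = '30min';            -- checkpoints less often
    ALTER SYSTEM SET max_wal_size = '4GB';                    -- avoid early WAL-triggered checkpoints
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;      -- spread checkpoint I/O over the interval
    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;   -- autovacuum runs more often, does less each time
    ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.05;
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '10ms';   -- adjust to what your disks can take
    SELECT pg_reload_conf();                                  -- shared_buffers still requires a restart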
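For the pg_stat_statements suggestion, a rough sketch: the extension has to be preloaded (which requires a restart) and created in the database, after which a simple query shows the heaviest statements. Column names vary between versions; total_time and mean_time below match the releases current at the time of writing, while PostgreSQL 13+ renames them to total_exec_time and mean_exec_time:

    -- In postgresql.conf (or via ALTER SYSTEM), then restart:
    -- shared_preload_libraries = 'pg_stat_statements'

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Top 10 queries by total time spent
    SELECT query,
           calls,
           total_time,    -- total_exec_time on PostgreSQL 13+
           mean_time,     -- mean_exec_time on PostgreSQL 13+
           rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;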
