Why are there so few database connections on 1GB of RAM?
Hello.
Please help me figure out why DigitalOcean quotes these numbers.
If I understand correctly, they say that one connection uses about 10 MB of RAM.
At the same time, further on they write that a node with 1 GB of RAM can hold approximately 25 connections.
At first I thought the rest of the memory was reserved for the node itself, the operating system and so on, but no: even on a node with 4 GB of RAM it is still +25 connections per GB (that is, only 100 connections for 4 GB).
If anyone can help answer these questions, I would be very grateful:
1. Why are there so few connections per GB of RAM? Based on "about 10 MB per connection" there should be at least three times more than 25 (see the rough arithmetic after this list).
2. Do I understand correctly that a connection pool does not help save RAM, that each connection in it still uses the same 10 MB, and that the pool only reduces CPU load by avoiding creating and tearing down a connection every time?
3. Do I understand correctly that these 10 MB of RAM per connection belong to the database process and not to the application code? That is, if the database and the application code are on the same machine and we connect to the database from the code, the database will use 10 MB of RAM per connection, the application will use its own N MB per connection, and a single connection on that machine will actually cost 10 + N MB?
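Just to show where my "at least three times more" comes from, here is the rough arithmetic as a small Python sketch (the 10 MB per connection figure is the one quoted in the DigitalOcean docs, everything else is my assumption):

ram_mb = 1024            # a 1 GB node
per_connection_mb = 10   # the per-connection figure quoted by DigitalOcean
print(ram_mb // per_connection_mb)  # -> 102, about four times the advertised 25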
Thank you very much in advance.
Honestly, the only answer I can give is: I have no idea how to say in the abstract how much memory a client backend process of the database will take.
It varies depending on:
- obviously, the database version, plus compiler and build settings
- session_preload_libraries, work_mem (speaking of work_mem: did you know a single query can use several work_mem allocations?), temp_buffers, plus maintenance_work_mem for some operations (see the sketch after this list)
- the size of the system catalog: both the pinned tables and what gets cached as it is accessed
- previously executed queries: each backend has its own cache of stored procedures
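If you want to see what those knobs are set to on a particular instance, here is a minimal Python sketch (assuming psycopg2; the DSN is a placeholder you would replace with your own):

import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT name, setting, unit FROM pg_settings "
        "WHERE name IN ('work_mem', 'temp_buffers', 'maintenance_work_mem', "
        "'shared_buffers', 'max_connections')"
    )
    for name, setting, unit in cur.fetchall():
        print(name, setting, unit)
conn.close()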
A single backend can use tens of GB of memory, and it can even make sense to configure it that way, for example for building an index.
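For illustration, a minimal sketch (hypothetical table and index names, placeholder DSN) of giving one session a large maintenance_work_mem just for the duration of an index build:

import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction
with conn.cursor() as cur:
    cur.execute("SET maintenance_work_mem = '16GB'")  # affects only this backend
    cur.execute("CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_created "
                "ON orders (created_at)")
conn.close()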
In addition to the private memory of each active connection, the whole max_connections set reserves some space for itself in the shared memory segment, no matter how many connections you actually end up using.
Nobody usually hires a DBA to administer pocket calculators and teapots. Especially when, on the same little box, the application eats an unpredictable amount of memory on top of the database. What will be left of that 1 GB? Apparently there is no room even to raise shared_buffers above the default 128 MB; it may even have to be reduced. So it is hard to say anything from experience about a configuration like this.
Most likely, don't touch max_connections; leave it at the default of 100.
That depends on which pooler you use and on how the application works.
pgbouncer with pool_mode = transaction may well collapse a couple of hundred client connections to the bouncer into a dozen connections to the database. And 10 database backend processes will probably use less memory than 200.
With pool_mode = session, yes: the pool only smooths out the cost of forking a new backend.
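From the application side, here is a minimal sketch of going through the bouncer (assuming pgbouncer listens on its default port 6432 on the same host and pool_mode = transaction with a small default_pool_size is configured on the bouncer side; database name and credentials are placeholders). The application connects to pgbouncer exactly as it would connect to PostgreSQL, and the bouncer maps many such client connections onto a handful of real backends:

import psycopg2

conn = psycopg2.connect(host="127.0.0.1", port=6432,  # pgbouncer, not PostgreSQL
                        dbname="appdb", user="app", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()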