What size should the database dump be compared to the database itself?
I check the database size with pg_size_pretty:
postgres=# SELECT pg_size_pretty( pg_database_size( 'soft' ) );
pg_size_pretty
----------------
2524 MB
(1 row)
pg_dump -x -O -U postgres -h localhost soft > db_soft.sql
du -sh *
and the dump comes out to 346 MB. Why such a difference?
First, indexes. In an SQL dump an index is just a few dozen bytes: the CREATE INDEX statement. In the database it is a full on-disk structure that can take up a lot of space. One index is usually small, but you don't have just one, do you?
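To see how much of that 2.5 GB is index structures rather than row data, a query along these lines (run inside the soft database) lists the tables with the largest index footprint; it only uses the standard statistics view and size functions, so treat it as a rough sketch:

-- tables with the largest index footprint (sketch, standard catalogs only)
SELECT relname,
       pg_size_pretty(pg_relation_size(relid))  AS table_size,
       pg_size_pretty(pg_indexes_size(relid))   AS indexes_size
FROM pg_statio_user_tables
ORDER BY pg_indexes_size(relid) DESC
LIMIT 10;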
Second, as already mentioned, the data files can get bloated by DELETE and UPDATE traffic if autovacuum is poorly configured. This applies both to the tables themselves and to their indexes.
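A quick way to check whether bloat is actually part of the problem is to look at the dead-tuple counters that autovacuum is supposed to clean up; this sketch uses the standard pg_stat_user_tables view:

-- dead tuples waiting to be reclaimed (a rough bloat indicator)
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;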
Third: take, for example, a link table made of three ints. In pg_dump the data is written out with COPY: one header line, then all the rows, with values separated by tabs. That is roughly 3 bytes of overhead per data row plus the text representation of the values. In the actual table every row carries a 23-byte header, plus data alignment, which brings the minimum to about 36 bytes per row (the description of the table structure is, again, stored separately). So the text representation turns out to be more compact over almost the entire range of int values for such a table.
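For illustration, that arithmetic can be checked on a throwaway table (the name links_demo is made up here); the exact on-disk figure varies with version, alignment and page overhead, but the gap against the COPY text is easy to see:

-- hypothetical three-int table, one million rows
CREATE TABLE links_demo (a int, b int, c int);
INSERT INTO links_demo SELECT i, i, i FROM generate_series(1, 1000000) AS i;
SELECT pg_size_pretty(pg_relation_size('links_demo'));
-- roughly 40 MB on disk, while the same rows as COPY text
-- (three numbers plus tabs and a newline per row) come to about 20 MB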
So yes, a text dump can weigh much less than the binary data on disk, simply because all the service structures are left out.
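And if the goal is an even smaller dump rather than an explanation, the plain-text format compresses very well, and the custom format (-Fc) is compressed by default; these are standard pg_dump options, reusing the connection flags from the question:

pg_dump -U postgres -h localhost -Fc soft > db_soft.dump
pg_dump -x -O -U postgres -h localhost soft | gzip > db_soft.sql.gz
du -sh db_soft.dump db_soft.sql.gz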
I think this happens because of how PostgreSQL works with tuples: when data is updated or deleted, the new data is written to the database as a new tuple, and the old one stays behind until it is vacuumed. When the database is exported to SQL, those dead tuples do not end up in the file, so the dump "loses weight".
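If that is the cause, reclaiming the dead tuples should bring the on-disk size closer to the dump size. A sketch, keeping in mind that VACUUM FULL rewrites the tables and takes exclusive locks, so it belongs in a maintenance window:

-- run inside the soft database
VACUUM FULL ANALYZE;
SELECT pg_size_pretty(pg_database_size('soft'));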