pg_basebackup: does the copy speed depend on the kind of data stored in the database, and is there any way to speed the copy up on the Postgres side?
There is a DB on 1, created artificially: a bunch of tables with text fields holding hashes of random data, each table about 10 GB.
Copying the entire cluster to a neighboring machine takes about 14 hours, yet neither the network nor the disks appear to be saturated.
Is there any way to increase the speed (ideally, to cut the time to about 5 hours)? Perhaps some form of multithreading?
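As far as I know, pg_basebackup copies the cluster over a single connection, so there is no built-in multithreading knob for the base copy itself; the main options that affect transfer time are the output format and compression. A minimal sketch (host, user, and paths here are made up):

```shell
# -h/-U : connection to the source cluster (hypothetical names).
# -Ft   : tar output format.
# -z    : gzip-compress the tar; this trades CPU for network bandwidth,
#         so it only helps when the network is the bottleneck.
# -Xs   : stream the WAL generated during the backup over a second connection.
# -P    : report progress.
pg_basebackup -h db1 -U replicator -D /backups/base -Ft -z -Xs -P
```

Since neither the network nor the disks are saturated in your case, compression is unlikely to help by itself; measuring first (as suggested below in the thread) tells you which knob matters.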
Or can the cluster be copied directly from the filesystem, without pg_basebackup? (I tried the tar format in pg_basebackup, but it did not change the speed.)
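Copying straight from the filesystem is possible through PostgreSQL's documented low-level backup API, which lets you parallelize the copy yourself, for example with several rsync processes over different subdirectories. A rough sketch with made-up paths; note the functions are named pg_start_backup/pg_stop_backup before PostgreSQL 15 and pg_backup_start/pg_backup_stop from 15 on:

```shell
# Put the cluster into backup mode (second argument = take a fast checkpoint).
psql -U postgres -c "SELECT pg_start_backup('daily', true);"

# Copy the data directory; this step can be split into several parallel
# rsync invocations, one per large subdirectory, to use more bandwidth.
rsync -a /var/lib/postgresql/data/ backuphost:/backups/data/

# Leave backup mode; the WAL segments it reports must be archived as well,
# or the copy will not be consistent.
psql -U postgres -c "SELECT pg_stop_backup();"
```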
I know that, ideally, pg_basebackup does not affect cluster performance, and you can take a full copy once a week while keeping WAL archives in between; but I am still interested in the option of daily full backups, if that is feasible at all.
Look towards using Barman; it fits your situation well.
It supports two backup modes, and combinations of the two: streaming via pg_basebackup (backup_method = postgres) and file-level copy with rsync over SSH (backup_method = rsync).
docs.pgbarman.org/release/2.7/#two-typical-scenari...
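The rsync mode is what can make daily full backups cheap: with reuse_backup = link, Barman hard-links files that are unchanged since the previous backup, so a "full" backup only transfers and stores what actually changed. A sketch of a server definition (server name and paths are made up; the option names are from the Barman documentation):

```ini
; /etc/barman.d/pg.conf (hypothetical server name "pg")
[pg]
description = "main cluster"
ssh_command = ssh postgres@pg
conninfo = host=pg user=barman dbname=postgres
backup_method = rsync
; hard-link files unchanged since the last backup: a daily "full"
; backup then costs roughly an incremental in time and space
reuse_backup = link
retention_policy = RECOVERY WINDOW OF 7 DAYS
```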
Run a backup and measure the load (disk queue, network, CPU), and do the same on the machine you are copying to.
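To see which resource actually saturates while the backup runs, something like the following on both machines (the sysstat tools are assumed to be installed):

```shell
# Disk: %util near 100% or a growing queue (avgqu-sz) means disk is the limit.
iostat -x 2

# Network: compare rxkB/s / txkB/s per interface against the link's capacity.
sar -n DEV 2

# CPU: pg_basebackup and gzip are single-threaded, so look for one core
# pinned at 100% rather than at the overall average.
pidstat -u 2
```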