linux
zxmd, 2014-07-18 13:37:38

What are the normal options for synchronizing large amounts of data?

At the moment there are two servers. One of them has a directory containing a huge pile of files, 440 GB in total, with an average file size of 140 KB. New files are constantly being created in that directory.
I started copying with
rsync -aPhW --protocol=28 -e ssh [email protected]:/folder/ /folder/
At first the data moved quite quickly: about 100 GB in the first 10 minutes (over a 1 Gbit network). I left it running overnight, expecting it to be finished by morning. No such luck. The process is still going, and the load average on both servers is noticeably high.
What am I doing wrong? Is there anything faster than rsync?


2 answers
Igor, 2014-07-18
@merryjane

You could try transferring over the rsync protocol instead of ssh. That way no encryption is needed and everything should run faster. The catch is that on the serving side you will need to run rsync as a daemon.
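A minimal sketch of daemon-mode rsync, assuming the daemon runs on the remote server 10.10.10.10 from the question (the module name `folder` and the uid/gid settings are illustrative assumptions, not from the original answer):

```shell
# On 10.10.10.10 (the server exporting /folder): create /etc/rsyncd.conf.
# [folder] is a hypothetical module name mapped to the /folder path.
cat > /etc/rsyncd.conf <<'EOF'
[folder]
    path = /folder
    uid = root
    gid = root
EOF

# Start the daemon; it listens on TCP port 873 by default.
rsync --daemon

# On the receiving machine: the rsync:// URL (or a double colon,
# 10.10.10.10::folder/) selects the plain rsync protocol, so the
# transfer skips the ssh encryption overhead entirely.
rsync -aPhW rsync://10.10.10.10/folder/ /folder/
```
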

SilentFl, 2014-07-18
@SilentFl

You can look at the experience of others who have dealt with the same problem.
