What are the normal options for synchronizing large amounts of data?
At the moment there are 2 servers; one of them has a directory containing a huge number of files, about 440 GB in total. The average file size is 140 KB, and new files are constantly being created in that directory.
I started the copy with:
rsync -aPhW --protocol=28 -e ssh [email protected]:/folder/ /folder/
At first the data was flowing quite quickly: in about 10 minutes it copied roughly 100 GB (1 Gbit network). I left it running overnight, expecting it to be finished by morning. No such luck: the process is still going, and the load average on both servers is quite noticeable.
What am I doing wrong? Is there anything faster than rsync?
You can try transferring not over ssh but over the native rsync protocol. That way no encryption is involved, so everything should run faster. The catch is that you will need to run rsync as a daemon on one of the two machines.
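A minimal sketch of what that setup could look like, matching the pull direction of the original command. The module name `folder` and the paths are assumptions for illustration; adjust them to the real layout:

```
# /etc/rsyncd.conf on the source server (the one holding the data)
[folder]
    path = /folder
    read only = true

# start the daemon on the source server (listens on TCP 873 by default)
rsync --daemon

# on the destination server: double colon means rsync protocol, no ssh
rsync -aPhW server::folder/ /folder/
```

Note the `server::folder/` syntax (or equivalently `rsync://server/folder/`) instead of `user@server:/folder/` — that is what selects the unencrypted daemon protocol rather than an ssh transport.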