How to transfer a large amount of data from server to server?
We are migrating our servers to the cloud, and the question of transferring the data has come up. There are about 2 TB in total, with an average file size of around a megabyte, so roughly 2 million files.
Are there options other than rsync? If not, are there any peculiarities of how it behaves with such volumes that I should know about?
Would it make sense to compress what exists now and transfer the archives (ftp / scp), and then synchronize whatever appeared during the transfer with what was unpacked? (A production system is being migrated, so files will keep slowly being added.)
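A rough sketch of what this two-phase approach could look like (assuming ssh, tar and rsync are available on both sides; user, remote_host and /data are placeholder names):

# Phase 1: bulk-copy the current state as a compressed tar stream
tar czf - /data | ssh user@remote_host 'tar xzf - -C /'

# Phase 2: once the bulk copy is done (and again right before the switchover),
# sync over whatever was added or changed in the meantime
rsync -az /data/ user@remote_host:/data/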
tar zcf - tobearchived | ssh user@remote_server_ip 'tar zxf -'
This packs the data into a gzip-compressed tar stream, pipes the compressed stream over ssh, and on the other side tar receives it and unpacks.
You can add any archiver you like to the chain.
To speed up the transfer, drop the heavy ciphers in the SSH settings and leave something light like RC4; SSH-level compression will most likely only slow things down.
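For example, the same pipeline with an explicitly chosen cheap cipher and ssh compression turned off (the cipher name and host are placeholders; check what your sshd actually offers, since arcfour/RC4 has been removed from recent OpenSSH releases):

# Pick a light cipher and disable ssh-level compression, since gzip in tar already compresses
tar zcf - tobearchived | ssh -c aes128-ctr -o Compression=no user@remote_server_ip 'tar zxf -'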
I do server dumps this way periodically :)
Rsync. Read its man page carefully and practice on something non-critical first. It also supports compression during the transfer.
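A minimal rsync call for this kind of job might look like this (the flags are just a common starting point, host and paths are placeholders):

# -a archive mode (recursion, permissions, times, symlinks)
# -z compress file data during the transfer
# --partial keep partially transferred files so an interrupted run can resume
# --progress show per-file transfer progress
rsync -az --partial --progress /data/ user@remote_host:/data/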
Syncthing is a tool specialized for exactly such cases.
But the speed, although decent, is not guaranteed.
By default, Syncthing tries not to saturate the channel.
But yes, small files are always better packed into archives first.
A torrent (in principle it can work without a tracker). Or, more simply, Syncthing.
You can also try packing everything into a compressed archive first; the transfer itself might then be faster. But who knows how long building the archive will take, so the difference may simply not be worth it.
As always, the easiest thing is to just try it and time the different approaches.
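If in doubt, a plain time in front of each variant on a small test subset already gives the answer (paths and host are placeholders):

# Compare a straight rsync run against the tar-over-ssh stream on a test subset
time rsync -az /data/subset/ user@remote_host:/tmp/subset/
time sh -c 'tar czf - /data/subset | ssh user@remote_host "tar xzf - -C /tmp"'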