How to transfer a large amount of data between servers?
Two Linux Debian servers with a direct 10Gbit optical link between them. I am trying to copy over NFS and CIFS, but I cannot push the transfer speed above 1 Gbit/s. At the same time, iperf shows the raw link speed is 9.57 Gbit/s.
Any ideas how I can move several terabytes of data without sitting frozen for a week waiting on it?
The CPU is not loaded. SAS disks in RAID 10.
I think the bottleneck is NFS or CIFS.
If there are a lot of small files, use rsync, or a pipeline like
tar c | ssh | tar x
over SSH with the cheapest cipher you can get away with (stock OpenSSH no longer lets you disable encryption entirely). If the files are large, plain scp will do.
Neither NFS nor CIFS is a suitable tool for your task. A sketch of the pipeline follows below.
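A minimal sketch of that tar-over-ssh pipeline, assuming the data lives in /data and the target host is host2 (both are placeholders; /data must already exist on host2). A fast cipher such as aes128-ctr is hardware-accelerated on most modern CPUs, which keeps the encryption overhead down:

# stream /data to host2, preserving permissions and ownership
tar -C /data -cf - . | ssh -c aes128-ctr host2 'tar -C /data -xf -'

# rsync equivalent for many small files: resumable, preserves metadata
rsync -a --partial --info=progress2 /data/ host2:/data/

The tar pipeline is faster for the initial bulk copy because it streams without per-file round trips; rsync earns its keep on re-runs, since it skips files that already arrived.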
https://github.com/hjmangalam/parsyncfp
It runs several rsync processes in parallel and can actually fill a 10Gbit link.
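If you would rather not install parsyncfp, a rough hand-rolled equivalent splits the top-level directories across parallel rsync processes. A sketch, assuming the tree is /data, the target is host2 (both placeholders), and the data is spread reasonably evenly across those directories:

# run up to 8 rsync processes in parallel, one per top-level directory
find /data -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -P8 -I{} rsync -a {} host2:/data/

parsyncfp does essentially this, but chunks the file list by size (via fpart) so the workers stay balanced even when one directory is much larger than the rest.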
One thing to check: is the network stack on both servers actually prepared to work at such speeds? By default, Linux distributions ship with conservative settings that yield around 500-600 Mbit/s per stream. To send and receive above a gigabit you need to tune the hosts a little (MTU, TCP slow start, socket buffers, etc.); a sketch of typical settings follows below.
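A minimal sketch of that tuning, assuming the 10GbE interface is named eth0 on both hosts (the interface name and buffer sizes are placeholders; size the buffers to your bandwidth-delay product):

# jumbo frames; the NICs at both ends of the direct link must support MTU 9000
ip link set dev eth0 mtu 9000
# raise the socket buffer ceilings so a single TCP stream can fill the 10G pipe
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
# keep a long-lived connection from falling back into slow start when idle
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

Persist the sysctl values in /etc/sysctl.conf (or a file under /etc/sysctl.d/) so they survive a reboot, then rerun iperf to confirm a single stream still fills the link.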