linux
Sergey, 2021-05-07 15:39:28

How to transfer a large amount of data between servers?

Two Debian Linux servers with a direct 10 Gbit optical link between them. I'm trying to copy over NFS and CIFS, but I can't push the transfer rate above 1 Gbit/s, while iperf shows a real link speed of 9.57 Gbit/s.
Any ideas how to move several terabytes of data without waiting around for a week?
The CPU isn't loaded; the disks are SAS in RAID 10.
I think the bottleneck is NFS or CIFS.
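
For reference, the link was checked with something like this (iperf3 here; the address is just an example):

    # on the receiving server
    iperf3 -s

    # on the sending server, several parallel streams
    iperf3 -c 10.0.0.2 -P 4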


5 answers
ky0, 2021-05-07
@bk0011m

If there are a lot of small files — rsync, or tar c | ssh | tar x through an SSH tunnel without encryption. If the files are large, plain scp will do.
Neither NFS nor CIFS is a suitable tool for your task.
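
Something along these lines (hostnames and paths are placeholders; stock OpenSSH has no "none" cipher, so a fast AEAD cipher is the usual stand-in):

    # many small files: pack on the fly, unpack on the receiver
    tar cf - /data | ssh -c aes128-gcm@openssh.com user@10.0.0.2 'tar xf - -C /'

    # or rsync, which can resume an interrupted transfer
    rsync -a --partial --info=progress2 /data/ user@10.0.0.2:/data/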

Andrey Barbolin, 2021-05-07
@dronmaxman

https://www.digitalocean.com/community/tutorials/h...

ComodoHacker, 2021-05-07
@ComodoHacker

NFS and CIFS both carry pretty big protocol overhead. Try FTP.
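
A rough sketch, assuming an FTP server (e.g. vsftpd) is already exporting /data on the source host; the credentials, address and paths are placeholders:

    # mirror the remote tree with several parallel connections
    lftp -u user,pass ftp://10.0.0.1 -e 'mirror --parallel=4 /data /data; quit'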

Igor Gorgul, 2021-05-14
@xXxSPYxXx

https://github.com/hjmangalam/parsyncfp
Use it to saturate the 10 Gbit link.
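
A sketch of a typical invocation (flag names as in the project README — verify them against your version; the host and paths are placeholders):

    # run 8 rsync instances in parallel over ~10G chunks of the file list
    parsyncfp --NP=8 --chunksize=10G --startdir=/data bigdir user@10.0.0.2:/data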

Vladimir Pilipchuk, 2021-05-16
@SLIDERWEB

To clarify: is the network stack on both servers actually tuned for such speeds? By default, Linux distributions ship with conservative settings that top out at roughly 500-600 Mbit/s. To send and receive above a gigabit, the hosts need some tuning (MTU, TCP slow start, buffer sizes, etc.).
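
For example (the interface name and buffer values are illustrative; both ends of the link must use the same MTU):

    # jumbo frames on the 10G interface
    ip link set dev eth1 mtu 9000

    # allow larger TCP windows so a single stream can fill a 10G pipe
    sysctl -w net.core.rmem_max=134217728
    sysctl -w net.core.wmem_max=134217728
    sysctl -w net.ipv4.tcp_rmem='4096 87380 134217728'
    sysctl -w net.ipv4.tcp_wmem='4096 65536 134217728'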
