linux
HighMan, 2020-12-07 14:15:33

How can I quickly upload a large amount of data to a server (Debian)?

I have been using scp all along and it more or less suited me, but soon I will have to transfer about 1.5 TB of data (roughly 15 files, but of monstrous size).
scp is too slow! Even sending a 50 GB file is a nightmare: the CPU is melting from the encryption.
As a solution, people suggest using weaker encryption algorithms such as arcfour, but I don't understand how to make ssh use that algorithm. The newer ssh has neither arcfour nor blowfish; attempts to list them in ssh_config lead to nothing, ssh just starts complaining.
In principle, I would be quite happy transferring with no encryption at all; it is not easy to get at the data there anyway.
I decided to install vsftpd on the server. I don't know why, but FTP has already worn me out. I set it up following https://howitmake.ru/blog/debian/63.html without a second thought. It worked. Eureka!...
But no. A 20 GB file took over an hour to copy!
I copy using mc, and it takes even longer than scp.
Meanwhile, the channel speeds are more than enough: 1 Gbit/s on the remote server, 500 Mbit/s on the home computer. And still everything is terribly slow.
Maybe it is mc's FTP client that is slow?
Oddly enough, for some unknown reason the fastest way to copy turned out to be sshfs. Why???
What is wrong with the vsftpd setup from the article above? Or is mc to blame?


9 answers
Roman Sokolov, 2020-12-07
@HighMan

Split it with an archiver (tar, zip) into smaller parts and transfer them gradually.
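A minimal sketch of that idea with tar and split (paths, chunk size and file names here are placeholders, not from the answer):

# Pack everything into one stream and cut it into 10 GB pieces
tar -cf - /data/huge_files | split -b 10G - backup.tar.part_

# Transfer the pieces by any means (scp, ftp, rsync), then reassemble on the receiver:
cat backup.tar.part_* | tar -xf -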

Sanes, 2020-12-07
@Sanes

try rsync
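A minimal sketch of such a transfer (user, host and paths are placeholders):

# -a preserves attributes, --partial lets an interrupted transfer resume,
# --progress shows per-file progress; add -z only for well-compressible data
rsync -a --partial --progress /data/huge_files/ user@server:/backup/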

Dmitry, 2020-12-07
@Tabletko

Check the channel between your server and your home computer with iperf. If it shows numbers roughly matching what the provider advertises, look for the bottleneck elsewhere. Perhaps the disks cannot read or write any faster. Perhaps there is some extra load on the receiving side that prevents faster writes. Also make sure the bandwidth is guaranteed on both sides, otherwise all these tests can be divided by 10: right now the line may be idle and giving you the full tariff speed, and ten minutes later it is saturated and you are sharing the uplink with all of the provider's other clients.
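A rough sketch of such a check with iperf3 (the answer just says iperf; the host name is a placeholder):

# On the server:
iperf3 -s

# On the home computer, test both directions (-R reverses the flow):
iperf3 -c server.example.com
iperf3 -c server.example.com -R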

robin777, 2020-12-07
@robin777

nc, in short.

Drno, 2020-12-07
@Drno

FileZilla on the home computer, then transfer over SFTP...
Or rsync.
And what they wrote about the channel is correct. The fact that your provider advertises 500 Mbit/s does not at all mean that this speed is available all the way to your server... for example, Dom.ru, Rostelecom and many others cut the speed to Europe down to 10-50 Mbit/s...

ge, 2020-12-07
@gedev

At some point I saved a few notes for myself on transferring large files. I have not yet had to use them in practice, but perhaps these snippets will help you. Everything is under the cut; it was taken from the comments on a video. The idea of compressing files on the fly looks very good.

If the amount of data is huge and consists of a huge number of files, and there is no space (or time) to create an archive, but it has to be transferred right now and very quickly (to get a higher transfer rate), I recommend the following commands.
On the receiving side, change into the directory where the received data should be placed and run:
nc -l 12345 | tar xvf -
On the sending side, likewise change into the directory that contains the directory being transferred and run:
tar -cf - ./our_directory/ | nc target_host 12345
A variant that also shows the amount of traffic transferred:
tar -cf - ./our_directory/ | pv | nc target_host 12345
Where:
12345 is the port number over which the data will be exchanged;
target_host is the IP address or hostname of the computer the data is sent to.
---
some more docs:
moo.nac.uci.edu/~hjm/HOWTO_move_data.html#tarnetcat
---
nc has one significant drawback. The port used for the transfer has to be opened in the server's firewall, and you must not forget to close it after the transfer. And to push data from a local PC to the server, you have to forward the port to the local machine on your router (unless end-to-end routing is already set up via VPN or directly). The ssh port, as a rule, is already open for connections from outside, so no extra steps are needed there.
Next, data that compresses well (text files, VM images, database dumps) is usually moved in compressed form. You can compress on the fly, either with the tools built into scp or with third-party utilities like pigz, piping the data through ssh. For example:
dd bs=4M if=template.img | pv | pigz -9c | ssh [email protected] "unpigz -c > /vm/template.img"
In the same way you can not only move files but also pour an image onto a remote device:
sudo dd bs=4M if=/dev/ssd1/vm | pv | pigz -9c | ssh [email protected] "unpigz -c | sudo dd bs=4M of=/dev/ssd2/vm"
---
https://www.stableit.ru/2012/03/gzip-vs-pigz.html

chupasaurus, 2020-12-08
@chupasaurus

In the new ssh there is chacha20-poly1305 instead of RC4 (arcfour).
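A small sketch of selecting that cipher explicitly (host and paths are placeholders; in OpenSSH the full cipher name is chacha20-poly1305@openssh.com, so check what your build offers first):

# List the ciphers your OpenSSH supports
ssh -Q cipher

# Force ChaCha20-Poly1305 for the copy
scp -c chacha20-poly1305@openssh.com bigfile.img user@server:/backup/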

Maxim Korneev, 2020-12-08
@MaxLK

flash drive!

zvl, 2020-12-21
@zvl

HTTP on one side, wget/curl on the other.
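A minimal sketch of that approach, assuming the machines trust the network between them (port, host and file name are placeholders):

# On the machine with the data: serve the current directory over plain HTTP
python3 -m http.server 8000

# On the other side: download, resuming on interruption (-c)
wget -c http://server.example.com:8000/bigfile.img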
