Nikolay, 2021-09-08 16:05:37
linux

How to create a multi-volume tar with on-the-fly upload to FTP?

Good afternoon.
There are several websites with folders containing a large number of photos/videos that are updated fairly infrequently.
On one project the folder weighs 250 GB, and only a few files are added per week.
The sites themselves, on the other hand, are updated and improved quite often, so all of this needs to be backed up somehow to a remote FTP.
Pushing terabytes of mostly static data over the wire every night is not an option, and besides, the storage is nothing special and can't keep up with that kind of bandwidth / write speed.
As a way out, we decided to split the backup: the large folders separately, and everything else separately.
Everything else weighs 200-400 megabytes and is backed up every night without any problems.

With the large folders the question isn't fully settled yet.
There were various ideas, like scanning the site for new files and downloading only those, but that brings unnecessary complications.
We decided to settle on a full backup once a week, but there is a catch.
There is no free space on the servers to store the archive locally, so the tar (without compression) has to be streamed straight to the FTP, and here another issue comes up: the file size.
A single 250 GB file may cause problems, so it needs to be split into parts of, say, 2 GB each.
Normally we push backups with curl like this:

tar -c -C ./  site.com/big_folder | curl -T - -u username:password ftp://backup.ru/disk2/user/file.tar

I'm not exactly on first-name terms with the Linux console; I tried to bolt | split -b 2048M onto this, but still can't get it to work.
What command would get this done?
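
The direction I've been poking at is GNU split's --filter option, which as I understand it hands each chunk to a command instead of writing it to a local file. Roughly like this (the credentials and the chunk prefix are just placeholders), but I haven't managed to get it working yet:

tar -c -C ./ site.com/big_folder | \
  split -b 2G --filter='curl -T - -u username:password "ftp://backup.ru/disk2/user/$FILE"' - big_folder.tar.part.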

PS: I thought about mounting the remote FTP as a separate filesystem, but the FTP storage is switched off during the day, and I'm afraid that could cause problems.

2 answers
pfg21, 2021-09-08
@hoindex

split won't stream its output onwards; it writes its chunks to local files. So the solution is to mount the FTP as a file system and write the chopped-up files there.
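
A rough sketch of that approach, assuming curlftpfs is available (any FUSE FTP client would do); the mount point, credentials and chunk prefix are placeholders:

mkdir -p /mnt/ftpbackup
curlftpfs ftp://username:password@backup.ru/disk2/user /mnt/ftpbackup
# split reads the tar stream from the pipe and writes 2G chunks straight onto the mount
tar -c -C ./ site.com/big_folder | split -b 2G - /mnt/ftpbackup/big_folder.tar.part.
fusermount -u /mnt/ftpbackup
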
---------------
Put syncthing on the server and on the storage.
Enable versioning on the storage-side syncthing.
Since it runs as a daemon, any change to the files will be pushed to the storage right away (and "backed up" there by the versioning).
By the way, files sync just as well in the opposite direction, which is handy if you also put syncthing on a workstation.

Drno, 2021-09-08
@Drno

rclone
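
A minimal sketch of what that could look like here, assuming an FTP remote named "backup" has been set up beforehand (the remote name and paths are placeholders). rclone only transfers new or changed files, so the big folder would not be re-uploaded in full every week:

# one-off: define an FTP remote called "backup" (interactive)
rclone config
# then sync the folder; only new/changed files go over the wire
rclone sync ./site.com/big_folder backup:disk2/user/big_folder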
