System administration
wcyb, 2019-12-05 21:15:57

How to organize storage of a large number of small files in several containers with a single mount point?

There are approximately 1.5 terabytes of audio data (audio libraries, music, etc.), both lossy and lossless; at the moment more than 200 thousand files. Some of the data sits idle and some is updated frequently (existing files change and new ones are added). The data is backed up to the cloud (mirrored between different cloud drives). Until now it has been stored and backed up in the open, i.e. as regular files, so that it can be browsed and used (both in the cloud and locally) and edited.
As the amount of data grows, this way of storing and backing up no longer works: downloading over the network takes a very long time (more than a week for the entire audio library), and with significant changes (say from 50 GB, though it can be more) a partial download/synchronization also takes a long time. In practice, on the available 100 Mbps link the average download speed is ~13 Mbps (that is already with md5 file verification disabled).
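For a rough sense of scale (assuming Mbps above means megabits per second), the time to move the whole library works out to:

    # rough transfer-time estimate from the figures above
    total_bits = 1.5e12 * 8           # ~1.5 TB expressed in bits
    seconds = total_bits / 13e6       # at the observed ~13 Mbps effective speed
    print(round(seconds / 86400, 1))  # ~10.7 days, i.e. "more than a week"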
I want to switch to storing the data in dynamic containers (both locally and in the cloud) and to work with it only by mounting a container locally. But that brings up another problem: storing and transferring one huge container, which can easily get corrupted if the hard drive starts having problems, and which is generally inconvenient as well. So VHD, VeraCrypt and similar solutions are not suitable.
Now I am (so far unsuccessfully) looking for a container that can be split into parts of a given size and mounted as a single disk, and that ideally would still mount without errors when parts are missing: say one container file (one part of the container) is broken, but the container still mounts and works correctly, just without the data from the broken file. Does such a container even exist? Both Windows (version 10) and Linux are in use.
In principle I am ready to create and fill separate containers by hand; the main thing is that they mount as a single disk. There seems to be an option of storing the data across several VHDs in soft RAID 0, but even if that works, everything falls apart if even one container fails.
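The same fragility applies to simply concatenating image parts, for example with device-mapper on Linux. A minimal sketch (paths and names are placeholders; assumes raw image parts that together hold one filesystem, run as root) of presenting the parts as one block device, which still cannot survive a missing part:

    import subprocess
    from pathlib import Path

    def run(cmd, **kw):
        return subprocess.run(cmd, check=True, capture_output=True, text=True, **kw)

    # hypothetical location of the split raw image parts
    parts = sorted(Path("/data/containers").glob("part*.img"))
    table, offset = [], 0
    for part in parts:
        loop = run(["losetup", "--find", "--show", str(part)]).stdout.strip()
        sectors = int(run(["blockdev", "--getsz", loop]).stdout)
        table.append(f"{offset} {sectors} linear {loop} 0")  # device-mapper linear target
        offset += sectors

    # device-mapper reads the concatenation table from stdin;
    # losing any one part makes the whole combined device unusable
    run(["dmsetup", "create", "audio_combined"], input="\n".join(table) + "\n")
    run(["mount", "/dev/mapper/audio_combined", "/mnt/audio"])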


4 answers
Artem @Jump, 2019-12-06
@wcyb

No way.
Firstly, containers with such characteristics do not exist.
Secondly, containers will not help here.

"In fact, on the available 100 Mbps link the average download speed is ~13 Mbps (that is already with md5 file verification disabled)."
Containers won't help with that.
Files without containers are easier and faster to work with.
"...and that's it, backup and synchronization can already take about half a day or more."
What exactly do you back up?
I think all the problems come from the backup going directly to the cloud.

wcyb, 2019-12-06

An almost completely satisfactory option has emerged so far: turn each leaf directory of projects and grouped collections into a dynamic VHD and mount it as a folder, recreating the skeleton of directories up to the target ones. The scheme looks roughly like this (a rough sketch of the mounting step is at the end of this answer):

main
-- projects
---- project1.vhd folder
---- project2.vhd folder
---- ...
-- sub1
---- sub1_1
------ library1.vhd folder
------ library2.vhd folder
------ ...
---- collection1.vhd folder
---- ...
-- music
---- discography1.vhd folder
---- ... 
-- ...

It is a bit more manual work, but each leaf VHD is self-sufficient and, if necessary, can be used both on its own and transparently within the grouped directory structure.
The only issue is that while creating/editing, the temporary files of one project can reach 300 GB or more, so there is a chance of losing the whole project with one broken file. In other words, the problem with small files is solved, but a problem with large files has appeared; in theory it can be solved by hand, at the cost of breaking the directory structure...
In any case, with this scheme more will have to be uploaded/downloaded than before, which is of course a minus, but in practice it will be many times faster when working within a single project. Editing 2-3 files in a relatively large container is, of course, still a losing scenario.
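A rough sketch of the mounting step (paths and names are placeholders; assumes Windows 10 with the Storage PowerShell module, a VHD containing a single data partition, an existing empty NTFS folder as the mount point, and an elevated session):

    import subprocess

    def mount_vhd_to_folder(vhd_path: str, folder: str) -> None:
        # Attach the VHD without a drive letter, then expose its partition
        # under the given (existing, empty, NTFS) folder as a mount point.
        ps = (
            f"Mount-DiskImage -ImagePath '{vhd_path}' -NoDriveLetter | Out-Null; "
            f"$n = (Get-DiskImage -ImagePath '{vhd_path}').Number; "
            f"Get-Partition -DiskNumber $n | "
            f"Add-PartitionAccessPath -AccessPath '{folder}'"
        )
        subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)

    mount_vhd_to_folder(r"D:\vhds\project1.vhd", r"C:\main\projects\project1")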

athacker, 2019-12-07

Is the option of storing all the files in the cloud and mounting them locally via WebDAV not suitable for you? Windows can mount WebDAV storage as a local drive.
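For example (the URL and user are placeholders, and the cloud provider has to expose WebDAV), mapping it to a drive letter through the built-in WebClient redirector:

    import subprocess

    # map a WebDAV share to drive X: via the Windows WebClient service
    subprocess.run(
        ["net", "use", "X:", "https://cloud.example.com/webdav",
         "/persistent:yes", "/user:username"],
        check=True,
    )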

loderunner84, 2019-12-12

Take a closer look at Synology.
