How to implement distributed reliable storage of large files?
I'm looking for an open-source solution for distributed, redundant storage of large files (~10 GB each).
I need file upload, download on demand, search, and automatic mirroring to backup servers for reliability.
Is it worth bothering with Hadoop for such a task, or are there simpler solutions?
Thanks in advance.
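To make the "automatic mirroring with reliability" requirement concrete: whatever system you pick, the core job is copying each object to several independent targets and verifying integrity. A minimal sketch of that idea in Python, using local directories as stand-ins for backup servers (the function names and layout are illustrative, not any real system's API):

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks so ~10 GB objects never sit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def mirror(src: Path, replica_dirs: list[Path]) -> str:
    """Copy src into every replica directory and verify each copy's checksum.

    Returns the source checksum; raises if any replica is corrupt.
    """
    digest = sha256_of(src)
    for target_dir in replica_dirs:
        target_dir.mkdir(parents=True, exist_ok=True)
        dst = target_dir / src.name
        shutil.copy2(src, dst)
        if sha256_of(dst) != digest:
            raise IOError(f"corrupt replica: {dst}")
    return digest
```

Systems like MooseFS or Swift do essentially this for you, continuously and across machines, plus re-replication when a server dies.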
LizardFS, MooseFS.
CephFS and GlusterFS are also options, but both have drawn negative reviews.
You could also consider OpenStack Swift, but it is quite capricious and must be set up carefully.
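For intuition on how systems in this family (Swift in particular) decide which servers hold each object: they hash object names onto a ring of virtual nodes, so placement is deterministic and rebalancing on server changes is cheap. A toy consistent-hashing sketch, assuming 3 replicas; the class and parameters are illustrative, not Swift's actual ring implementation:

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: maps an object name to n replica servers."""

    def __init__(self, servers: list[str], vnodes: int = 64):
        # Each server appears `vnodes` times on the ring to smooth the load.
        self.ring = sorted(
            (self._h(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def replicas(self, name: str, n: int = 3) -> list[str]:
        """First n distinct servers clockwise from the object's hash."""
        n = min(n, len({s for _, s in self.ring}))  # can't exceed server count
        idx = bisect.bisect(self.keys, self._h(name))
        chosen: list[str] = []
        while len(chosen) < n:
            server = self.ring[idx % len(self.ring)][1]
            if server not in chosen:
                chosen.append(server)
            idx += 1
        return chosen
```

The same object name always maps to the same server set, which is what lets clients find data without a central lookup on every request.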
Why build your own? Have you considered ready-made hosted solutions?
For example, some object-storage services have no limit on uploaded file size and replicate each object across three data centers at once.