Ingtar, 2015-01-13 20:57:08

Choosing a DFS: which one should I start looking at?

Good day!
We are planning to deploy a DFS for the company's needs, and I'd like advice on which direction to study (there are many DFSes, each with its own pros and cons, and testing them all would take far too long).
Requirements for the FS:
- Distribution across a small number of nodes (we are not building a 30-server giant: 2 nodes, 4 at most, with up to 40 TB of disk)
- HA would be nice (I've read that the MooseFS master has to be switched over by hand. Not critical, but still a minus)
- Guaranteed writes to the nodes
- No split-brain :)
For now I'm looking at MooseFS and Ceph, but I'd like to ask experienced comrades who have already solved similar problems for advice. At the moment we have NFS + rsync, which is not very convenient. But it works :)

4 answers
Armenian Radio, 2015-01-13
@gbg

We use Heartbeat + Pacemaker + OCFS2. Being able to take a reflink snapshot is very nice.
A reflink remembers the file's contents at that moment, and only the changed blocks are stored, via copy-on-write. So a backup of 20+ virtual machines takes a couple of seconds.
Four nodes run on top of a FibreChannel SAN (HP EVA6000).
As for split-brain: depending on how you configure quorum, the cluster will decide to reboot (fence) a node when a component fails.
All the components ship with openSUSE 13.2 and are configured by the wizard out of the box, without digging through configs.
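
For what it's worth, a reflink clone can also be scripted. Below is a minimal Python sketch; the paths are invented, and it assumes a kernel new enough to expose the generic FICLONE ioctl (Linux 4.5+), whereas older OCFS2 setups like the one above would use the reflink(1) tool from ocfs2-tools instead:

import fcntl

# FICLONE ioctl number from <linux/fs.h>: clone (reflink) one file's
# extents into another instead of copying the data.
FICLONE = 0x40049409

def reflink_snapshot(src_path, dst_path):
    # Both files must live on the same reflink-capable filesystem
    # (OCFS2, Btrfs, XFS). The clone shares blocks with the source;
    # changed blocks are copied on write.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# Hypothetical paths, for illustration only:
reflink_snapshot("/cluster/vms/disk0.img", "/cluster/backups/disk0.img")

Either way, the clone takes roughly constant time regardless of file size, which is why snapshotting 20+ VM images finishes in seconds.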

Supme, 2015-01-13
@Supme

Ceph. It is supported by virtualization systems, which may come in handy in the future. Fujitsu has now begun shipping its storage systems based on it. There is no master.
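
To illustrate the "no master" point: Ceph clients talk to a quorum of monitors rather than a single metadata master. A small sketch using the python-rados bindings (it assumes a standard /etc/ceph/ceph.conf and a client keyring are present on the machine):

import json
import rados  # python-rados, ships with Ceph

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Ask the monitor quorum (not a single master) for its status.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "quorum_status", "format": "json"}), b"")
    if ret == 0:
        print("monitors in quorum:", json.loads(outbuf)["quorum_names"])
finally:
    cluster.shutdown()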

Alexey Cheremisin, 2015-01-13
@leahch

Ceph + RBD + NFS + Pacemaker. Four nodes run Ceph (118 TB), two run the RBD + NFS server with Pacemaker, and the KVM virtual machines use librbd directly. The clients are 15 servers on NFS with about a hundred users. Everything runs over InfiniBand QDR. We're satisfied; we are not planning CephFS yet...
And yes, do not serve RBD + NFS from the OSD nodes: it hangs! Virtual machines with librbd and KVM do live normally on OSD nodes, but we moved them away just in case, and memory there is already at the limit (1 GB of RAM per terabyte of storage is recommended, so our 118 TB calls for roughly 118 GB across the OSD nodes).
Yes, before that we tried GlusterFS, and it is very strange... Even earlier we had GPFS, purchased from IBM, but the cost of upgrading to the new version exceeded all reasonable limits...
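
For reference, the librbd path described above can also be driven from Python via the python-rbd bindings. Here is a sketch of creating an image that a KVM guest could then attach; the pool and image names are invented for the example:

import rados
import rbd  # python-rbd bindings

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # default pool; adjust to taste
    try:
        # Create a 40 GiB image; a KVM guest can attach it via librbd
        # (e.g. qemu ... -drive file=rbd:rbd/vm-disk0).
        rbd.RBD().create(ioctx, "vm-disk0", 40 * 1024 ** 3)
        with rbd.Image(ioctx, "vm-disk0") as image:
            print("created image, size:", image.size(), "bytes")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()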

abir_valg, 2016-04-11
@abir_valg

Here they compare GlusterFS vs HDFS. According to the test results, GlusterFS came out ahead.
What about Ceph's instability?
My question on the topic is here.
