Dual-primary DRBD: why ocfs2/gfs/other clustered FS?
Why is a cluster filesystem such as OCFS2 required for a DRBD setup to work correctly when both nodes run in Primary mode?
With ext3, changes do get replicated, but they only become visible after the partition is remounted, as if the file table were not being transferred from the first node to the second.
For example, we delete the file 1.txt on node 1; on node 2 it still shows up in the ls output, but trying to open it fails with an error. After remounting the partition on node 2 the file disappears. Why does this happen, given that DRBD works with blocks and knows nothing about the filesystem deployed on top of it?
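For reference, a minimal sketch of what the dual-primary resource config looks like (resource name, hostnames, backing devices, and addresses are placeholders, and exact syntax varies between DRBD 8.3 and 8.4):

    resource r0 {
        protocol C;                      # synchronous replication, required for dual-primary
        net {
            allow-two-primaries;         # let both nodes be Primary at the same time
        }
        startup {
            become-primary-on both;
        }
        on node1 {                       # placeholder hostname
            device    /dev/drbd0;
            disk      /dev/sdb1;         # placeholder backing device
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on node2 {                       # placeholder hostname
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }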
"Why does this happen, given that DRBD works with blocks and knows nothing about the filesystem deployed on top of it?"
Because you need a distributed lock manager (DLM) to coordinate who is reading or changing which files. Cluster filesystems use this mechanism, so they don't have the problem you ran into with ext3... although even OCFS/GFS have their pitfalls ;)
P.S. I'm not a fan of either DRBD or cluster filesystems :)
P.P.S. In general, many distributed systems use this kind of mechanism in one form or another: the NameNode in Hadoop, the GTM in Postgres-XC, the metadata server in Ceph, and so on.
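Roughly, the practical difference looks like this (device and mount point are placeholders; the o2cb cluster service must already be configured and running on both nodes):

    # create an OCFS2 filesystem with one node slot per node, so the DLM can coordinate both
    mkfs.ocfs2 -N 2 -L shared /dev/drbd0
    # mount it on BOTH nodes; locking and cache coherency go through the DLM
    mount -t ocfs2 /dev/drbd0 /mnt/shared

    # with ext3 there is no DLM, so each node's kernel assumes it owns the filesystem exclusively:
    # mkfs.ext3 /dev/drbd0 && mount /dev/drbd0 /mnt/shared   # looks fine, breaks as described above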
@jcmvbkbc @lesovsky So it turns out the OS keeps that "metadata" (who is reading/changing what) outside the partition itself? Otherwise my brain refuses to grasp how ls on node 2 can see a deleted file if the partition is being synchronized and is a complete copy of node 1's partition.
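A rough way to check where that listing is coming from (mount point is a placeholder, and exact behaviour may vary): node 2 is serving its own in-memory dentry/inode cache rather than re-reading the DRBD device, so dropping the caches can change what ls shows even without a remount:

    # on node 2, after 1.txt was deleted on node 1
    ls /mnt/data                       # 1.txt still listed -- served from the dentry cache
    cat /mnt/data/1.txt                # fails: the on-disk data is already gone
    sync
    echo 2 > /proc/sys/vm/drop_caches  # drop cached dentries and inodes
    ls /mnt/data                       # re-read from disk; 1.txt may now disappear without a remount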