Tell me about Linux HA + shared storage?
Good afternoon,
I have been tasked with building a failover cluster that runs virtual machines in high-availability mode. The cluster must use shared file storage as its backend.
For hardware I will have:
- first node: 2× Xeon 5607, 24 GB RAM, 4× 2 TB (planning RAID 10);
- second node: 2× Xeon 5404, 24 GB RAM, 4× 2 TB (planning RAID 10);
- storage: Xeon E3-1275, 8 GB RAM, 16× 2 TB (planning RAID 10).
I found a howto for building a failover cluster with the KVM hypervisor (alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial), but that example uses local disks synchronized with DRBD, which does not suit me because I will use a separate server for storage.
Does anyone have experience successfully building clusters with shared storage? I am interested in exporting md RAID devices to the two nodes for shared use.
Use iSCSI/FC/SRP to access the storage (the first is probably preferable). If there is only one storage server (which, by the way, already makes the configuration non-fault-tolerant), then you do not assemble a RAID on the nodes; instead, on the virtualization hosts you set up cLVM on top of the RAID array exported over the network and allocate LVs to the virtual machines.
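A minimal sketch of that layout, assuming the storage server exports its RAID 10 as a single iSCSI LUN (the target IQN, IP address, and device names below are made up for illustration; a running cluster manager with clvmd is also assumed):

```shell
# On each virtualization host: discover and log in to the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2012-01.local.storage:lun0 -p 192.168.0.10 --login

# Put clustered LVM on top of the imported block device
# (requires clvmd and a cluster manager such as cman/pacemaker)
pvcreate /dev/sdb
vgcreate -c y vg_vms /dev/sdb        # -c y marks the VG as clustered
lvcreate -L 40G -n vm01-disk vg_vms  # one LV per virtual machine disk
```

With the VG marked clustered, LV metadata changes are coordinated across both nodes, so each VM gets a raw LV with no file-system layer in between.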
You can skip the suggestions to use clustered file systems (GFS or OCFS) right away; NFS is debatable, but in this configuration it has no advantages over raw block devices, rather the opposite: less manageability, more layers of abstraction, and more overhead.
And a couple of final tips:
- exporting the individual disks via iSCSI and assembling them into an mdraid on the virtualization hosts is a BAD idea in this case. mdraid does not properly support shared storage and will more or less function only while the metadata does not change (there is no support for concurrent metadata updates); otherwise (for example, when resynchronizing after replacing a failed disk) you would have to stop the array on all hosts but one to avoid corrupting it;
- a single shelf in this setup is a SPOF. If you add a second one, either run DRBD between the shelves (as Selectel does), or export the disks from both storage servers and assemble network RAIDs on the virtualization hosts with mdraid (Scalaxy does something similar, except they use dedicated SAN proxy nodes to assemble the RAIDs).
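For the two-shelf variant, a DRBD resource between the storage servers could look roughly like this (DRBD 8.x syntax; the hostnames, IPs, and port are hypothetical, and /dev/md0 stands for the local RAID 10 on each shelf):

```shell
# The same resource file goes on both shelves
cat > /etc/drbd.d/storage.res <<'EOF'
resource storage {
  protocol C;            # synchronous replication between the shelves
  device    /dev/drbd0;
  disk      /dev/md0;    # the local RAID 10 array on each shelf
  meta-disk internal;
  on shelf1 { address 10.0.0.1:7788; }
  on shelf2 { address 10.0.0.2:7788; }
}
EOF

drbdadm create-md storage   # run on both shelves
drbdadm up storage          # run on both shelves
drbdadm -- --overwrite-data-of-peer primary storage  # once, on one shelf only
```

The resulting /dev/drbd0 is then what gets exported to the virtualization hosts, so losing one shelf no longer takes the storage down.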
I have the same thing in my plans, but the money for the hardware has not been allocated yet :(
I was going to build this with Proxmox, skip DRBD entirely, and store the VM images on the storage server attached as a NAS over plain NFS. IMHO, with this option it will be easier to add a 3rd node later.
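A rough sketch of that NFS variant (the export path, subnet, and storage name are assumptions, not from the thread):

```shell
# On the storage server: export the RAID 10 volume over NFS
echo '/srv/vmstore 192.168.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On a Proxmox node: register the share as storage for VM images
pvesm add nfs vmstore --server 192.168.0.10 --export /srv/vmstore \
    --content images
```

Because the storage is registered cluster-wide in Proxmox, adding a 3rd node really does amount to joining it to the cluster; the NFS storage becomes visible to it automatically.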
Proxmox is free and easy to set up: pve.proxmox.com/wiki/High_Availability_Cluster
Or plain KVM with some cluster manager like Eucalyptus, OpenNebula, or OpenNode.