Clustering
Artem, 2016-03-01 15:58:43

How do I plan failover storage for a Hyper-V cluster?

Good afternoon, colleagues.
At the moment we have several standalone Hyper-V hosts on Windows Server 2012 R2 with a high density of virtual machines.
The task is to ensure uninterrupted service:
- eliminate downtime when installing updates
- eliminate or reduce the time it takes to bring machines back from backup when one of the hosts fails (right now we use the Instant Recovery feature in Veeam B&R)
How I see it:
As for the virtualization hosts:
- build a Hyper-V cluster from the existing hosts (live migration lets us get rid of downtime during updates of the hosts themselves)
- migrate the virtual machines' storage onto the cluster's shared storage (fortunately, Hyper-V can do this on the fly, so we avoid losing data when a host fails) - see the rough sketch after this list
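Roughly, the host-side part of this plan could look something like the following (just a sketch; the host names HV01/HV02, the cluster name and IP, and the paths are made up):

# Validate the configuration and build the cluster from the existing hosts
Test-Cluster -Node HV01, HV02
New-Cluster -Name HVCL01 -Node HV01, HV02 -StaticAddress 10.0.0.10

# Once shared storage exists, move a VM's disks onto it on the fly
Move-VMStorage -VMName "vm-app01" -DestinationStoragePath "C:\ClusterStorage\Volume1\vm-app01"

# And live-migrate the running VM itself between nodes without downtime
Move-ClusterVirtualMachineRole -Name "vm-app01" -Node HV02 -MigrationType Live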
As for the shared storage itself, that is exactly my question.
What is the best way to implement it, given the following requirements:
1. Fault tolerance is required
2. Scalability is required
3. There is no money for branded hardware yet
4. I would prefer not to use FC
5. 10GbE can be used
I would like recommendations on technologies and hardware, keeping in mind that the infrastructure is being built properly and for the long term, and in a couple of years I don't want to be fighting workarounds and shortcomings baked in at this stage.
I have no experience in building storage systems, that's why I'm asking for help.


5 answers
Evgeny Ferapontov, 2016-03-03
@gangz

In Q3 2016, WS2016 will be released with its Storage Spaces Direct feature, an analogue of VMware VSAN. Requirements: at least 4 hosts, a 10 Gb network, and the Datacenter edition.
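If you do wait for WS2016, enabling S2D on an already formed cluster looks roughly like this, judging by the previews (a sketch only; the volume name and size are made up):

# On a formed 4-node WS2016 cluster with local disks in every node
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 2TB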
Even now, on WS2012R2, you can build a SOFS (Scale-Out File Server), as was already mentioned.
JBODs can be cascaded. Two JBODs and a mirrored pool will survive the complete failure of one shelf. You serve the storage to the Hyper-V hosts over SMB 3.0, and if the budget also allows RDMA network cards, you won't have any problems at all.
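On 2012 R2 that could look roughly like this (only a sketch: pool, role and share names are invented, the two SAS JBODs are assumed to be visible to both file server nodes, and the virtual disk is assumed to be formatted and added as a CSV before the share is created):

# Enclosure-aware mirrored pool across the two JBODs
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "Clustered*").FriendlyName -PhysicalDisks $disks -EnclosureAwareDefault $true
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" -ResiliencySettingName Mirror -UseMaximumSize

# Publish the storage to the Hyper-V hosts over SMB 3.0 via a Scale-Out File Server role
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" -FullAccess 'DOMAIN\HV01$', 'DOMAIN\HV02$'
Set-SmbPathAcl -ShareName "VMs"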
If you're building anything on Storage Spaces for the long haul, take a close look at the Storage Spaces team's TechNet blog, especially their recommendations on the number of SSDs per shelf.
I also strongly recommend reading:
- the "guide" to building a SOFS: www.aidanfinn.com/?p=13176
- planning SDS on WS2016: www.aidanfinn.com/?p=18608
In general, the whole blog is worth reading: the author is an MVP and writes mainly about Hyper-V and storage for it.
Also, don't completely write off branded storage systems from the likes of HP or Dell. They cost much more for the same performance, but when problems arise, it's the vendor's support that takes the heat instead of you.
P.S. If you do decide to go the Storage Spaces route, I recommend reading www.supermicro.com/products/nfo/CiB.cfm
and also googling Cluster-in-a-Box solutions from other vendors. This is the easiest way to deploy a SOFS: both nodes and a disk shelf in one package with the interconnects already designed. Turn it on, set it up, use it. If needed, you can expand by cascading JBODs from the same Supermicro, which already come at an insane density: up to 90 3.5" drives in 4U.

Dmitry, 2016-03-01
@Tabletko

For a cluster to live comfortably, it needs reliable storage.
I see two options: either buy a branded dual-controller hardware storage array, or build a converged solution on Starwind or a similar software product (HPE, for example, has something similar called VSA).
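Whichever of the two you pick, from the Hyper-V host's side connecting to such an iSCSI target looks roughly the same (a sketch; the portal address is a placeholder and MPIO is assumed to be installed and configured):

# Register the target portal and connect with multipathing, persisting across reboots
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true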

Cool Admin, 2016-03-01
@ifaustrue

IMHO, it's better for each host to have its own DAS storage (you can even do without an external array and simply assemble a local Storage Pool with SSDs) and to replicate the VMs with Hyper-V's built-in tools (Hyper-V Replica). It's much more reliable and cheaper.
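A minimal sketch of that replication setup with the built-in Hyper-V Replica (host and VM names are placeholders; Kerberos over HTTP port 80 is assumed):

# On the replica (receiving) host: allow incoming replication
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"

# On the primary host: enable replication for a VM and kick off the initial copy
Enable-VMReplication -VMName "vm-app01" -ReplicaServerName "HV02" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "vm-app01"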

tedkuban, 2016-03-01
@tedkuban

A clustered Storage Pool on ordinary (SATA) SSDs can't be assembled on 2012 R2: only SAS-attached disks are supported there. I recently solved a similar problem. A 10 Gb network should generally be enough, although, of course, you need to do a more detailed calculation. If you really need automatic updating of the hosts, it's hard to get by without a cluster. There are several solutions for building an iSCSI HA storage on fairly ordinary hardware, including Starwind, Open-E and Nexenta; you can also assemble two Linux storage nodes yourself and join them into an HA pair. There is another option: take two servers with SAS HBAs and connect them to a single disk shelf. In general, the topic is too broad for one post; I'm happy to advise in a PM.
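For reference, once there is a cluster, automatic patching of the hosts is usually handled by Cluster-Aware Updating; a minimal sketch (the cluster name is a placeholder):

# One cluster-aware updating pass: nodes are drained, patched and resumed one at a time
Invoke-CauRun -ClusterName HVCL01 -CauPluginName Microsoft.WindowsUpdatePlugin -MaxFailedNodes 1 -RequireAllNodesOnline -Force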

outlaw_cp, 2016-04-07
@outlaw_cp

Storage Spaces Direct looks like a great product from what we've seen in the Technical Preview, but it has very serious requirements, both in the amount of hardware in the cluster and financially. Honestly, software solutions would be appropriate here. As already mentioned, among the budget ones that means Starwind or HP VSA. Starwind's support even speaks Russian :)
