Hyper-V
Evgeny Ferapontov, 2015-01-12 14:54:21

Hyper-V, iSCSI SAN on gigabit: single adapter or converged networks?

Information:
There are two standalone virtualization hosts (Hyper-V) built on this platform: SYS-6017R-TDF. There is also a supposedly fault-tolerant file server (hardware RAID10, with a two-node Windows Server file cluster running in virtual machines on top of it) built on this platform: SYS-5017C-MTF. The rack switch is a Cisco SG300-10.
All three hosts have two gigabit network adapters combined into a NIC Team (Windows Server 2012 R2) with weight-based QoS. Each traffic type (replication, management, network access for virtual machines) gets its own virtual adapter with a corresponding QoS weight.
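For reference, a converged setup like the one described is typically put together along these lines in PowerShell on WS2012R2 (a minimal sketch only; the adapter names, switch name, and specific weights are assumptions, not taken from this question):

    # Team the two physical gigabit NICs (switch-independent teaming)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # External vSwitch on top of the team; bandwidth is reserved by weight
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # One host vNIC per traffic class, each with its own QoS weight (weights assumed)
    Add-VMNetworkAdapter -ManagementOS -Name "Management"  -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Replication" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Management"  -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Replication" -MinimumBandwidthWeight 20
    # Traffic of the VMs themselves falls under the switch default weight
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 20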
The virtualization hosts run: 2 AD DCs, 2 RDS session hosts, and 2 RemoteApp session hosts, one of each type per virtualization host. This whole zoo loads the network at about 100 Mbps at peak; the average load is around 30 Mbps. During virtual machine replication the network is, of course, fully saturated, but that does not happen often during business hours.
Now there is a need to stand up fault-tolerant SQL, so the file server is hastily being fitted with a pair of SSDs and an iSCSI target and is being promoted to a "storage system": after installing the SSDs it has sufficient functionality (fault-tolerant FS with iSCSI) and performance (1000+ IOPS) for the job.
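For context, publishing an iSCSI LUN from that box with the built-in Windows Server iSCSI Target cmdlets would look roughly like this (a sketch under assumptions: the target name, VHDX path, size, initiator IQNs, and portal address below are made up for illustration):

    # On the file server: add the iSCSI Target Server role service
    Install-WindowsFeature FS-iSCSITarget-Server

    # Create a target restricted to the two virtualization hosts (hypothetical IQNs)
    New-IscsiServerTarget -TargetName "SqlStorage" -InitiatorIds `
        "IQN:iqn.1991-05.com.microsoft:hv1.example.local",
        "IQN:iqn.1991-05.com.microsoft:hv2.example.local"

    # Back the target with a VHDX on the SSD volume and map it to the target
    New-IscsiVirtualDisk -Path "S:\iSCSI\SqlStorage.vhdx" -SizeBytes 200GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "SqlStorage" -Path "S:\iSCSI\SqlStorage.vhdx"

    # On each virtualization host: start the initiator and connect persistently
    Start-Service msiscsi
    New-IscsiTargetPortal -TargetPortalAddress 10.0.40.10
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true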
Due to limited financial and hardware resources, neither 10 GbE networks nor RDMA-capable network adapters are on the table: it has to be done on the existing hardware or not at all. There are two spare Intel PRO/1000 CT network adapters in reserve. There are several options for organizing the storage network:

  1. Carve another network for iSCSI out of the already established converged network, set its minimum bandwidth weight to 50% (which should be roughly a gigabit), and run with it (as sketched after this list).
  2. Same as the previous option, but install the "spare" network adapters in the virtualization hosts, i.e. the NIC Team on each virtualization host gets three gigabit adapters instead of two. The file server still has only two adapters; so be it.
  3. (I'm not sure why I even came up with this.) Install the "spare" network adapters in the virtualization hosts but do not include them in the NIC Team. That gives a converged network on the file server side and a dedicated gigabit link on the virtualization host side.
  4. By hook or by crook, talk management into a dual-port network card, put it into the file server, put the "spare" network adapters into the virtualization hosts, and connect everything with crossover cables. That way there will definitely be a full gigabit for storage.
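A rough sketch of what option 1 boils down to on each virtualization host (the vNIC name, weight split, VLAN ID, and addresses are assumptions):

    # Option 1: one more host vNIC on the existing converged switch, just for iSCSI
    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 50

    # Keep storage traffic in its own VLAN and subnet (both assumed)
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI" -Access -VlanId 40
    New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI)" -IPAddress 10.0.40.11 -PrefixLength 24

    # Options 2-4 instead dedicate physical ports: the spare Intel PRO/1000 CT
    # adapters either join the team (option 2) or carry iSCSI directly (options 3-4).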
Questions:
First: is the game worth the candle? Instead of the old concerns (no fault tolerance at the disk-subsystem level, since there is no RAID, and no fault tolerance at the virtualization-host level, since there is no cluster), we get new ones about performance: will latency grow, and will read/write speeds be sufficient? At the moment one of the virtualization hosts talks to its disks at about 10 MB/s; daily average and peak figures will only be available in the morning.
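As a side note on measuring this: the current disk throughput and a latency baseline on a host can be pulled from performance counters, e.g. (a minimal sketch):

    # Sample aggregate disk throughput and latency every 5 seconds, 12 samples
    Get-Counter -Counter `
        '\PhysicalDisk(_Total)\Disk Bytes/sec',
        '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
        '\PhysicalDisk(_Total)\Avg. Disk sec/Write' `
        -SampleInterval 5 -MaxSamples 12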
Second: if it is worth it, which way of organizing the storage network should I choose? And will QoS also need to be configured on the switch?
I have never dealt with SANs, or iSCSI in particular, before; I have neither experience nor solid knowledge, so I am hoping for your advice.

1 answer
shmalien, 2015-04-21

Three months have passed since this was posted; I want to ask, did you find an answer to your question? Just wondering.
