Maxim Grishin, 2021-03-01 15:48:28
VMware

How can I let a VM use the full local storage bandwidth?

Given: a VMware ESXi 6.7 host with a RAID-5 array of eight 6 TB disks, hosting a backup VM to which this datastore is almost entirely given over. The VM runs Windows Server 2012 R2 with Veeam Backup 10.0. The problem: when backing up a VM from a neighboring host, the sustained write speed to the disk, measured inside the VM, is 31-38 MB/s (depending on the phase of the moon and unknown parameters; the first couple of minutes can be higher), while Veeam's own read speed is sometimes 60+ MB/s, sometimes over a hundred when the disk being backed up is partially filled with zeros. This would be fine if the array weren't capable of much faster operation.

I suspect that somewhere around version 6.5, along with the policies that keep a single VM from saturating network storage with requests, the ESXi developers applied the same (or similar) limits to local storage (DAS). At least, when a neighboring VM began receiving a backup from another source in parallel with the backup to this one, onto the same physical storage, the total write speed measured in esxtop reached 76 MB/s, i.e. 38 per VM. The host itself has the same problem: its own exchange speed with the datastore also tops out at 38 MB/s, measured by running a disk inflate on the datastore while watching esxtop. So the overall picture looks like this: any write source gets 38 MB/s of guaranteed bandwidth plus some credit for bursting above it, after which some throttling kicks in between the RAID and the ESXi kernel, judging by the esxtop counters (DAVG ~20 ms/request, everything else 0-2 ms), preventing it from exceeding the observed limit.
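One way to sanity-check the ceiling independently of Veeam is to generate a plain sequential write from a shell and watch DAVG/KAVG in esxtop at the same time. A minimal sketch, assuming a GNU or BusyBox `dd` and a scratch file on the affected datastore (the default path is a placeholder; on ESXi point `TESTFILE` at `/vmfs/volumes/<datastore>/` — and note the BusyBox `dd` shipped with ESXi may not support `conv=fdatasync`, in which case drop that flag and use a larger count):

```shell
# Sequential-write sanity check: writes 64 MiB and prints an average rate.
# TESTFILE is a placeholder path -- point it at the affected datastore.
TESTFILE="${TESTFILE:-/tmp/ddtest.bin}"
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

If this also tops out near 38 MB/s while DAVG sits around 20 ms, the ceiling is below the VM layer (driver or controller), not a per-VM scheduling policy.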
Other symptoms: while the backup is running, Windows perfmon shows the same 38 MB/s and a queue length of 1 on the datastore disk, and esxtop shows the device queue depth dropping shortly after the backup starts, from the initial 64 down to 4. This is not corrected by parameters like `esxcli storage core device set -d eui.d260037300d00000 -s 0 -q 0` (which among other things disables throttling on the device) or `esxcli storage core device set -d eui.d260037300d00000 -O 16 -m 16 -f` (which sets a minimum queue depth): the values simply don't take effect (`-m` is supposed to bound the queue from below, yet during the backup the queue still falls under 16). Controller driver: aacraid 6.0.6.2.1.59002; hardware: Adaptec ASR8805. Where should I dig? I've already been through the VMware docs on storage optimization and found nothing else.
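Before digging further, it may also help to confirm what the kernel actually reports as the device limits, and whether the driver itself caps the queue depth. A sketch of read-only checks on the ESXi shell, reusing the device ID from above (these run only on an ESXi host, so treat them as a reference rather than a portable script; which fields matter is my assumption):

```shell
# Show the device's configured limits, including "Device Max Queue Depth"
# and "No of outstanding IOs with competing worlds":
esxcli storage core device list -d eui.d260037300d00000

# Dump the aacraid module parameters -- a driver-level queue-depth cap here
# could explain why the per-device esxcli settings don't take effect:
esxcli system module parameters list -m aacraid

# Live view: run esxtop, press 'u' for the device screen,
# and watch DQLEN, ACTV, QUED, and DAVG while the backup runs.
```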


1 answer(s)
rPman, 2021-03-01
@rPman

What virtual machine type, and what guest OS? If it's Windows, for example, look at the drivers: by default the VM may get an emulated network controller that loads the CPU and works inefficiently, whereas if you install VMware Tools and select the adapter type in the virtual machine settings, it works much better:
https://kb.vmware.com/s/article/1001805 (in short: pick the paravirtual one, vmxnet3)
The same goes for the disk controller (PVSCSI).
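For reference, the relevant `.vmx` keys for the paravirtual devices look like this (a sketch; the exact device numbers depend on the VM, and the PVSCSI driver from VMware Tools must already be installed in the guest, or Windows will fail to boot from that disk):

```
ethernet0.virtualDev = "vmxnet3"
scsi0.virtualDev = "pvscsi"
```

Normally you would change these through the vSphere client (VM settings → adapter/controller type) rather than editing the file by hand.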
