ZFS
Programmerus, 2020-04-07 18:26:23

What is the reason for such degradation of IO performance between proxmox with ZFS and WS19 VM?

For a week now I have been trying to find the reason for the terrible drop in IO performance between a host with proxmox on ZFS and Windows Server 2019 virtual machines.

Given:

  • proxmox hypervisor, single node, no cluster, pve-6.1-8, FS = ZFS
  • Several WS19 virtual machines with minimal load
  • ZFS sync=disabled, volblocksize for VM disks = 4k
  • VMs have the latest VirtIO drivers (0.1.173)
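One common suspect with zvol-backed Windows guests is a mismatch between the guest's I/O size and the zvol's volblocksize: a guest write smaller than volblocksize forces ZFS into a read-modify-write of the whole block. A minimal back-of-the-envelope sketch (all numbers are illustrative, not measurements from this setup):

```shell
# Rough read-modify-write amplification estimate for a zvol.
# Illustrative only: assumes a sub-block write rewrites one full block.
amplification() {
  guest_io=$1   # guest write size in bytes
  volblock=$2   # zvol volblocksize in bytes
  if [ "$guest_io" -lt "$volblock" ]; then
    # a sub-block write forces ZFS to read and rewrite the whole block
    echo $(( volblock / guest_io ))
  else
    echo 1
  fi
}

amplification 4096 4096    # 4k write on 4k volblocksize -> 1 (aligned)
amplification 4096 16384   # 4k write on 16k volblocksize -> 4
```

With volblocksize=4k and NTFS's default 4k clusters the writes are aligned, so this particular effect should already be ruled out here; the sketch only shows why volblocksize is worth checking at all.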

Tested with the following fio command (both on the hypervisor and in the VM):

fio --filename=test --sync=1 --rw=$TYPE --bs=$BLOCKSIZE --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=1G --runtime=30
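The `$TYPE` and `$BLOCKSIZE` values are not stated in the question; a typical matrix for this kind of template might look like the loop below (the specific patterns and sizes are assumptions):

```shell
# Build the fio command line for one (pattern, block size) combination,
# matching the template from the question.
gen_fio_cmd() {
  echo "fio --filename=test --sync=1 --rw=$1 --bs=$2 --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=1G --runtime=30"
}

# Assumed test matrix: sequential and random, small and large blocks.
for TYPE in read write randread randwrite; do
  for BLOCKSIZE in 4k 64k 1M; do
    gen_fio_cmd "$TYPE" "$BLOCKSIZE"
  done
done
```

Note that with `sync=disabled` on the dataset, fio's `--sync=1` no longer forces data to stable storage on the host, so host and guest runs are not measuring quite the same thing.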

Results (graphs; images not preserved):

  • Overview graph
  • Read graph
  • Write graph

What I have already tried: different volblocksize values on ZFS, different ZFS sync settings (settled on sync=disabled, since the server is in a data center). Switched between virtio-blk and VirtIO SCSI single (not much difference), and enabled writeback cache (it got worse).
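For reference, the knobs mentioned above map roughly to the commands below. All names here (pool `rpool`, zvol `rpool/data/vm-100-disk-0`, VM ID 100) are hypothetical placeholders, not taken from the question; this is a diagnostic sketch, not a recommendation:

```shell
# Hypothetical names: pool "rpool", zvol "rpool/data/vm-100-disk-0", VM 100.

# sync can be changed in place on an existing zvol:
zfs set sync=disabled rpool/data/vm-100-disk-0

# volblocksize is fixed at zvol creation time and read-only afterwards:
zfs get volblocksize rpool/data/vm-100-disk-0

# ashift is a pool-level property; a value not matching the drives'
# physical sector size is another classic source of write amplification:
zpool get ashift rpool

# switch the virtual disk's cache mode back to none
# (the question reports writeback made things worse):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none
```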

Give me ideas plz ;)

3 answers
Gem, 2020-04-08
@Gem

In Proxmox 5 I also tried zvols with Linux guests; the result was the same.
I got through to the current ZFS developers: there were patches and the problem was confirmed, but their fixes did not help.
I'm sticking with LVM.

Puma Thailand, 2020-04-08
@opium

If you need performance, you will have to abandon ZFS. I struggled with it for a long time but could not get good performance under heavy disk load on ZFS.
