NAS
Alexander Drozdov, 2014-10-13 14:00:30

fio: why does testing over the network not give adequate results?

Greetings, everyone.
I'm trying to find out what a Synology 1813+ NAS with 8 WD4000F9YZ 4 TB drives is capable of, using the fio utility and following an article.
I'm testing from an Ubuntu 12.04 Server virtual machine on the same network, installed on a WD 1 TB Green drive.
On the NAS I created 3 test volumes formatted as ext4 and made shared folders with the same names:
1 - hdd (a single disk)
2 - raid5 (3 disks assembled into RAID 5)
3 - raid6 (4 disks assembled into RAID 6)
I mounted the folders (hdd, raid5, raid6):
sudo mount -t cifs //ip/hdd /mnt/hdd -o username=***,password=***
I tried writing files there; everything works.
df -h shows the disk sda1 and the mounted shares as //ip/hdd.
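Spelled out for all three shares (only /mnt/hdd appears in the configs below; the raid5 and raid6 mount points are named by analogy):
# create mount points and mount each share over CIFS
sudo mkdir -p /mnt/hdd /mnt/raid5 /mnt/raid6
sudo mount -t cifs //ip/hdd /mnt/hdd -o username=***,password=***
sudo mount -t cifs //ip/raid5 /mnt/raid5 -o username=***,password=***
sudo mount -t cifs //ip/raid6 /mnt/raid6 -o username=***,password=***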
I installed fio:
sudo apt-get install fio
created a file in my home folder:
sudo touch read.ini
and decided to start by testing the local disk, so I put this into the file:

config
[readtest]
blocksize=4k
filename=/dev/sda1
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32
[writetest]
blocksize=4k
filename=/dev/sda1
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32

I ran sudo fio read.ini and got:
result
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
writetest: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio 1.59
Starting 2 processes
^Cbs: 2 (f=2): [rw] [1.7% done] [450K/233K /s] [110 /57 iops] [eta 16m:27s]
fio: terminating on signal 2
Jobs: 2 (f=2): [rw] [1.8% done] [393K/184K /s] [96 /45 iops] [eta 16m:42s]
readtest: (groupid=0, jobs=1): err= 0: pid=16725
read : io=7456.0KB, bw=445836 B/s, iops=108 , runt= 17125msec
slat (usec): min=3 , max=12869 , avg=40.51, stdev=577.54
clat (msec): min=28 , max=874 , avg=293.53, stdev=96.39
lat (msec): min=28 , max=875 , avg=293.57, stdev=96.40
bw (KB/s) : min= 346, max= 608, per=99.76%, avg=433.97, stdev=50.87
cpu : usr=0.23%, sys=0.09%, ctx=1870, majf=0, minf=53
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=98.3%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued r/w/d: total=1864/0/0, short=0/0/0
lat (msec): 50=0.21%, 100=0.32%, 250=35.30%, 500=60.73%, 750=3.00%
lat (msec): 1000=0.43%
writetest: (groupid=0, jobs=1): err= 0: pid=16726
write: io=4276.0KB, bw=254319 B/s, iops=62 , runt= 17217msec
slat (usec): min=2 , max=1801 , avg=10.49, stdev=54.92
clat (msec): min=10 , max=2642 , avg=514.77, stdev=365.65
lat (msec): min=10 , max=2642 , avg=514.78, stdev=365.65
bw (KB/s) : min= 0, max= 520, per=49.14%, avg=121.86, stdev=138.33
cpu : usr=0.02%, sys=0.16%, ctx=1066, majf=0, minf=22
IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued r/w/d: total=0/1069/0, short=0/0/0
lat (msec): 20=2.99%, 100=0.28%, 250=21.14%, 500=36.30%, 750=18.90%
lat (msec): 1000=10.10%, 2000=9.64%, >=2000=0.65%
Run status group 0 (all jobs):
READ: io=7456KB, aggrb=435KB/s, minb=445KB/s, maxb=445KB/s, mint=17125msec, maxt=17125msec
WRITE: io=4276KB, aggrb=248KB/s, minb=254KB/s, maxb=254KB/s, mint=17217msec, maxt=17217msec
Disk stats (read/write):
sda: ios=1943/1173, merge=222/76, ticks=544196/701172, in_queue=1256584, util=100.00%

My rough conclusion: about 108 IOPS for reads on the local disk and 62 IOPS for writes. That looks plausible.
Logically, I then changed the filename line in the config to filename=//ip/hdd/ and got:
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
writetest: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio 1.59
Starting 2 processes
readtest: you need to specify size=
fio: pid=0, err=22/file:filesetup.c:704, func=total_file_size, error=Invalid argument
writetest: you need to specify size=
fio: pid=0, err=22/file:filesetup.c:704, func=total_file_size, error=Invalid argument
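In hindsight, fio complains because //ip/hdd/ is a directory rather than a regular file. A minimal sketch of what apparently should have worked: point filename= at a file on the mounted path and give an explicit size= (the name testfile is just a placeholder, and direct=1 may not be supported on every CIFS mount):
[readtest]
blocksize=4k
# a regular file on the CIFS mount; fio creates it up to size=
filename=/mnt/hdd/testfile
size=1g
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32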

I took another how-to, which recommends this:
config
[readtest]
rw=randread
size=128m
directory=/mnt/hdd/
ioengine=libaio
[writetest]
rw=randwrite
size=128m
directory=/mnt/hdd/
ioengine=libaio
I got:
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1
writetest: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio 1.59
Starting 2 processes
Jobs: 1 (f=1): [r_] [100.0% done] [14561K/0K /s] [3555 /0 iops] [eta 00m:00s]
readtest: (groupid=0, jobs=1): err= 0: pid=20860
read : io=131072KB, bw=9919.1KB/s, iops=2479 , runt= 13213msec
slat (usec): min=213 , max=136037 , avg=398.81, stdev=3808.18
clat (usec): min=0 , max=29 , avg= 1.25, stdev= 0.94
lat (usec): min=215 , max=136042 , avg=400.77, stdev=3808.26
bw (KB/s) : min= 31, max=14816, per=102.82%, avg=10198.48, stdev=6175.17
cpu : usr=1.42%, sys=4.09%, ctx=32802, majf=0, minf=20
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=32768/0/0, short=0/0/0
lat (usec): 2=80.30%, 4=15.84%, 10=3.78%, 20=0.07%, 50=0.02%
writetest: (groupid=0, jobs=1): err= 0: pid=20861
write: io=131072KB, bw=627139KB/s, iops=156784 , runt= 209msec
slat (usec): min=2 , max=178 , avg= 4.84, stdev=10.13
clat (usec): min=0 , max=91 , avg= 0.38, stdev= 2.26
lat (usec): min=2 , max=179 , avg= 5.37, stdev=10.56
cpu : usr=21.15%, sys=57.69%, ctx=703, majf=0, minf=19
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=0/32768/0, short=0/0/0
lat (usec): 2=99.83%, 4=0.01%, 10=0.01%, 20=0.05%, 50=0.02%
lat (usec): 100=0.09%
Run status group 0 (all jobs):
READ: io=131072KB, aggrb=9919KB/s, minb=10158KB/s, maxb=10158KB/s, mint=13213msec, maxt=13213msec
WRITE: io=131072KB, aggrb=627138KB/s, minb=642190KB/s, maxb=642190KB/s, mint=209msec, maxt=209msec

So apparently I have 2479 IOPS for reads and 156784 IOPS for writes, which is complete nonsense.
Experienced colleagues suggest this is the cache at work and propose testing with different file sizes.
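If it really is the client page cache, then presumably the second config needs direct I/O and a test file noticeably larger than the VM's RAM; a sketch of what that might look like (size=4g and end_fsync=1 are my guesses at cache-busting settings):
[writetest]
blocksize=4k
directory=/mnt/hdd/
# file well above the VM's RAM so little can be served from cache
size=4g
rw=randwrite
# bypass the client page cache where the mount allows it
direct=1
# include the final flush to the NAS in the measured result
end_fsync=1
ioengine=libaio
iodepth=32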
Here's what came of it. Or maybe I'm digging in the wrong direction entirely?
I would appreciate any advice, thanks.


2 answers
tgz (@tgz), 2014-10-13

Test the disk itself, without mounting it or creating a filesystem on it.
The config is something like this:
[global]
blocksize=4k
filename=/dev/sdX
direct=1
buffered=0
ioengine=libaio
iodepth=32
[readtest]
rw=randread
[writetest]
rw=randwrite
The test needs to run much longer, several hours, so that the storage caches fill up and the load starts hitting the disks themselves.
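For a multi-hour timed run, something like this could presumably be added to the [global] section (runtime is in seconds; time_based keeps each job running even after it has covered the device once):
# run each job for 4 hours of wall-clock time
runtime=14400
time_based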

Alexander Drozdov (@drozdovich), 2014-10-13

@tgz
Thank you.
But how do I test the disk itself, or rather the RAID array, when it sits inside the NAS (a Synology 1813+)?
Testing a local disk itself by this method poses no problems.
Testing did take a very long time; the 8 GB run kept going all night.
