System administration
DmitryKoterov, 2013-12-13 12:53:00

ServerLoft: Why does /dev/sdb get 5 times slower on an SSD machine, even after changing the drive?

The story is absolutely mystical: after imaging the OS onto the server (HP ProLiant DL320E, 2× Samsung SSD 840 PRO Series 256G DXM05B0Q, Ubuntu Server 12.04), /dev/sdb turns out to be 5 times slower than /dev/sda. If you replace sdb with a fresh drive and reboot, it becomes fast, but after the server is reinstalled from scratch the problem comes back (this procedure was done twice, i.e., the disk was replaced twice). The server has RAID-1, but it was disabled at the time of the test, and in any case the server was booted into recovery mode (over the network, from an image that is loaded into a RAM disk).
Moreover, if you swap sda and sdb at the moment when sdb is slow, sda becomes the slow one! I.e., something somewhere evidently remembers which disk "should be" slow.
The motherboard was also replaced (twice). What could it be?

# hdparm -t /dev/sd?
/dev/sda:
Timing buffered disk reads: 1576 MB in 3.00 seconds = 524.82 MB/sec
/dev/sdb:
Timing buffered disk reads: 322 MB in 3.01 seconds = 107.02 MB/sec
# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
unused devices:
# cat /dev/sda | pv > /dev/null
59GB 0:00:05 [531MB/s]
# cat /dev/sdb | pv > /dev/null
68MB 0:00:05 [137MB/s]
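For reference, the same kind of sequential-throughput check can be reproduced with plain dd. A minimal sketch against an ordinary temporary file (for a raw device you would point if= at /dev/sdb and run as root):

```shell
# Write a test file with a forced flush, then read it back, and let dd
# report the throughput of each step.
# conv=fdatasync makes dd wait for the data to reach the disk before
# reporting, so the write figure is not just the page cache.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync
# Note: this read-back mostly hits the page cache; for a raw read figure,
# drop caches first (echo 3 > /proc/sys/vm/drop_caches, as root).
dd if="$f" of=/dev/null bs=1M
rm -f "$f"
```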
The whole history:
- sdb was slow
- you replaced sdb with a new SSD
- it stayed slow
- you replaced all the hardware around the disks
- it became fast, but after the Restore procedure it became slow again
- you replaced all the hardware around the disks again
- sdb stayed slow
- you swapped sda and sdb
- sda became slow! So the slowness followed a particular SSD
- you replaced sda with a new SSD and removed the 2nd NIC
- sda became fast and sdb stayed fast! I saw it!!!
- you ran the "Restore" procedure
- sdb became slow, sda stayed fast


1 answer
DmitryKoterov, 2013-12-17
@DmitryKoterov

Unexpectedly, the solution turned out to be:
hdparm -W 1 /dev/sdb
This turns on the SSD's internal write cache. The RAID-1 resync speed increased dramatically with this option (from 20 MB/s to 350 MB/s), and "iostat 1" also shows the increased throughput, so it's not a fluke.
I've also experimented with disabling/enabling write caching (-W0/-W1) on a machine at Hetzner with Intel SSDs. Turning off write caching (-W0) reproduces the same overall slowdown and slow RAID-1 resync; turning it back on restores everything.
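For anyone hitting the same symptom, a minimal sketch of checking the write-cache state on every sdX drive (assuming hdparm is installed; a real answer requires root, otherwise the query just fails):

```shell
# Show the current write-cache state of every /dev/sdX drive.
# Expected output per drive (as root): "write-caching =  0 (off)" or "1 (on)".
for dev in /dev/sd?; do
    hdparm -W "$dev" 2>/dev/null || echo "cannot query $dev (need root/hdparm)"
done
# To enable the cache (the fix above):  hdparm -W1 /dev/sdb
# The setting may not survive a power cycle on some drives/firmware,
# so re-apply it at boot (e.g. from rc.local or a udev rule).
```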
