linux
Yoh, 2018-02-28 13:46:29

After updating lvm, disk reads and writes increased when using caching. What could it be?

Hello.
There are 4 servers running CentOS 7; each server has 2 HDDs and 2 SSDs. Two software RAID-1 arrays were created (one on the HDDs, the other on the SSDs).
On top of the HDD array, LVM-based storage was created, with caching on the SSD array in writeback mode.
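(Roughly, such a cached LV is assembled like this; the VG/LV names, device names and sizes below are illustrative placeholders, not the exact values:)

    # /dev/md0 is the HDD RAID-1 holding the VG, /dev/md1 is the SSD RAID-1
    pvcreate /dev/md1
    vgextend vg0 /dev/md1
    # cache data and metadata LVs on the SSD array
    lvcreate -L 100G -n cache0 vg0 /dev/md1
    lvcreate -L 1G -n cache0meta vg0 /dev/md1
    # build the cache pool and attach it to the data LV in writeback mode
    lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
    lvconvert --type cache --cachemode writeback --cachepool vg0/cache0 vg0/data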
After updating the lvm packages to version 2.02.171-8 (the latest version available in the official repository), reads from the SSDs increased 2-3 times, and writes to the HDDs grew in proportion to the SSD reads.
The storage is used for QEMU-KVM virtual machines, and their load has not changed. The whole system was updated together with the lvm packages (i.e. the kernel and qemu were updated as well).
As an experiment, on one of the servers with a low load I switched the caching mode from writeback to writethrough; reads from the SSD and writes to the HDD dropped immediately.
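(With placeholder names, such a switch is done roughly like this:)

    lvchange --cachemode writethrough vg0/data
    # check the result (the exact field name may differ between lvm2 versions)
    lvs -o lv_name,cache_mode vg0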
Graphs of disk reads and writes are available at https://yadi.sk/d/0a7fULcv3SrmrM. Server 1 shows a spike in reads and writes after the update and reboot; server 2 shows a sharp drop in reads and writes after switching the cache to writethrough mode.
I tried to search for similar problems on the net but could not find anything. I also looked through https://bugzilla.redhat.com/buglist.cgi?quicksearc... and https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=... and did not find anything similar.
What could it be? An error in the statistics, or is the system really writing more than it should for some reason? Where should I dig?
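For reference, lvm's own cache counters can be compared with the raw kernel counters roughly like this (vg0/data, md0 and md1 stand in for the real names):

    # lvm's per-LV cache counters
    lvs -o lv_name,cache_dirty_blocks,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses vg0
    # raw dm-cache status line (hits, misses, promotions/demotions, dirty blocks)
    dmsetup status vg0-data
    # kernel sector counters for the arrays, independent of the monitoring tool
    grep -E ' md0 | md1 ' /proc/diskstats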
Thanks in advance for the replies.

2 answer(s)
Alexey Cheremisin, 2018-03-01
@leahch

It looks like the caching algorithm has changed. But I have a counter-question: why not use ceph?! Your configuration suits it very well, and KVM/QEMU works with ceph directly. You would get almost instant migration of virtual machines, very flexible work with distributed storage, snapshots, backups, recovery, cloning and migration; you could forget about lvm and raid and get either a fast SSD cache or a fast all-SSD pool. On top of that, the storage can grow almost without limit, and the failure of any single server will not affect data availability for the virtual machines.
On the minus side: each terabyte of disk needs about a gigabyte of RAM, and a 10-gigabit network is needed between the servers.
Setup takes no more than 30 minutes plus an hour of reading the documentation. The disks do not need to be put into RAID! On each server it is enough to allocate 8-100 GB for the root and boot partitions; everything else can simply be given to ceph.
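As a rough illustration (pool and image names here are just examples), a pool for VM disks and an RBD-backed disk for a libvirt guest look something like this:

    # create a pool for VM images and a 50 GB image in it
    ceph osd pool create vms 128
    rbd create vms/vm1-disk --size 50G
    # move an existing qcow2 image into ceph
    qemu-img convert -O raw /var/lib/libvirt/images/vm1.qcow2 rbd:vms/vm1-disk

    <!-- libvirt disk definition pointing at the RBD image;
         monitor hosts and cephx auth are added according to the cluster setup -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='vms/vm1-disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>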
