If two out of four disks fail in a software raid10, does the array fall apart in Linux?
I'm testing disk-failure behaviour of raid10 (mdadm) under Linux. There are four disks in the raid. When I disconnect two disks from different mirror pairs, the system does not boot (the pairs are definitely different). That shouldn't happen, should it?
pezzak@ubuntu:~$ sudo file -s /dev/sda
/dev/sda: x86 boot sector
pezzak@ubuntu:~$ sudo file -s /dev/sdb
/dev/sdb: x86 boot sector
pezzak@ubuntu:~$ sudo file -s /dev/sdc
/dev/sdc: x86 boot sector
pezzak@ubuntu:~$ sudo file -s /dev/sdd
/dev/sdd: x86 boot sector
Linux ubuntu 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@ubuntu:/home/pezzak# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Feb 14 18:46:47 2016
Raid Level : raid10
Array Size : 15717376 (14.99 GiB 16.09 GB)
Used Dev Size : 7858688 (7.49 GiB 8.05 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Feb 14 21:24:31 2016
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : raidtest:0
UUID : 922b875a:c2759fc2:576f394d:7fe333f6
Events : 330
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
5 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
4 8 49 3 active sync /dev/sdd1
In general, yes: when two disks fail, raid10 can lose some data with some probability. If I remember correctly, with two disks missing the array only keeps working in read-only "protection" mode.
You can play with the layout, but IMHO you can't make a raid10 survive the failure of two arbitrary disks. If you assemble it as a classic layout (1+0 or 0+1), it will survive the loss of two disks, provided you are lucky about which two.
In mdadm, raid10 is assembled neither as a stripe over mirrors nor as a mirror over stripes, but as a single array; the data is spread and duplicated according to the scheme specified when the array was created with the --layout option, see man mdadm.
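For illustration, this is roughly how the layout is chosen at creation time (a sketch, not the asker's actual commands; device names are hypothetical, and this needs root and mdadm installed):

```shell
# Create a 4-device raid10 with the default near=2 layout
# (every chunk is stored on 2 adjacent devices).
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Other layouts trade sequential read speed against rebuild cost:
#   --layout=f2   "far" copies
#   --layout=o2   "offset" copies
mdadm --detail /dev/md0 | grep Layout
```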
I don't remember whether it booted from a degraded array in my experiments, but data was not lost and the array kept working after the loss of certain specific disks, yes.
But near=2 guarantees integrity only against a single disk failure. With a second failure it comes down to luck.
It will fall apart not only in Linux, it can fall apart anywhere: with four disks and only two copies of each block, that is elementary-school arithmetic.
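To make that arithmetic concrete: with four disks in two near=2 mirror pairs (call them a+b and c+d, an assumed pairing, not taken from the asker's array), there are 6 ways to pick two failed disks, and exactly 2 of them wipe out a complete pair. A minimal enumeration:

```shell
# Four disks in two mirror pairs: (a,b) and (c,d).
# Enumerate all 6 two-disk failure combinations and count
# those that take out both copies of the same data.
combos="ab ac ad bc bd cd"
lost=0
total=0
for c in $combos; do
  total=$((total + 1))
  case $c in
    ab|cd) lost=$((lost + 1)) ;;  # a whole mirror pair is gone
  esac
done
echo "fatal combinations: $lost of $total"   # prints: fatal combinations: 2 of 6
```

So two random failures kill a 4-disk near=2 array with probability 1/3; the asker's test just happened to hit the surviving 2/3 case, which makes the non-booting system the real anomaly.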
It all depends on how the mirrors are nested.
With the 0+1 scheme (a mirror of two stripes), the loss of even one disk is already critical. If you have 1+0 (a stripe of mirrors), there are combinations where two disks fail, one from each raid1 array, and the array survives.
And what if the raid10 consists of two disks (I took it instead of raid1 because in Linux a software raid10 reads about twice as fast as plain raid1) and one of them fails: the system will keep running, at least read-only, won't it?