Debian
loskiq, 2022-01-15 17:47:46

System won't boot from mdadm after disk failure?

Hello!

I assembled a RAID 1 array from two partitions:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --examine --scan > /etc/mdadm/mdadm.conf
update-initramfs -u
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt/
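After the steps above, the health of the mirror can be verified before relying on it (a quick check, assuming /dev/md1 as created above; both commands need root):

```shell
# Kernel-level view of all md arrays, member devices, and any resync progress
cat /proc/mdstat

# Detailed per-array report: state, active/failed/spare device counts
mdadm --detail /dev/md1
```

A healthy two-disk mirror shows "State : clean" and both members listed as "active sync".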

I also added an entry to /etc/fstab so it mounts automatically:
UUID=bd11a49b-25c1-4e7b-84ef-9eb5df436e99 /mnt ext4 defaults 0 2
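The filesystem UUID used in that fstab entry can be read off the device with standard util-linux tooling (assuming /dev/md1 as above):

```shell
# Print the filesystem UUID and type of the array device
blkid /dev/md1
```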

Now, when I simulate a disk failure (power off the system, physically disconnect one of the disks) and power it back on, the system drops into recovery mode, apparently because it cannot find the UUID from /etc/fstab, and then asks for the root password to fix the problem. As soon as I remove the entry from /etc/fstab and reboot, the system starts normally. But even then I have to stop and reassemble the array manually with one disk:
mdadm --stop /dev/md1
mdadm --assemble --scan

Note that if a disk (for example, /dev/sdc1) fails while the system is running, everything is fine: I can simply plug in a new disk and run:
mdadm /dev/md1 -a /dev/sdd1
mdadm /dev/md1 -f /dev/sdc1
mdadm /dev/md1 -r /dev/sdc1
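After the new member is added, the rebuild can be watched until it completes (assuming /dev/md1 as above):

```shell
# Refresh the md status every two seconds; the resync line shows percent complete
watch -n2 cat /proc/mdstat
```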

Question: how can I make it so that after a disk failure and a subsequent reboot, the system boots normally and the array is rebuilt and mounted where it should be without manual intervention?


1 answer
loskiq, 2022-01-17
@loskiq

The problem was in /etc/fstab. All that was needed was to add the nofail option to the mount entry for the volume. The system then boots without problems, and the array can be rebuilt from within the running system.
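For reference, with the same UUID as in the question, the fstab entry would look like this (nofail tells the boot process not to treat a missing volume as fatal):

UUID=bd11a49b-25c1-4e7b-84ef-9eb5df436e99 /mnt ext4 defaults,nofail 0 2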
