RAID
multifinger, 2011-11-11 12:45:10

How to hot swap disks in RAID1 using mdadm in Ubuntu Server 10.04?

Given:

  • server hot swappable hdd
  • 2 hdd
  • ubuntu server 10.04; during installation, RAID1 was created for the root and swap partitions following a guide from the official Ubuntu site. I can't find the original article, but liski.vsi.ru/ubuntu/index.php?page=33 describes a similar installation procedure


Questions:
  • How to make sure that after one of the disks fails, the system remains operational, both while running and across a reboot?
  • How to replace a failed disk without stopping the system?
  • How to add a "third" spare disk that will take over, automatically or manually, after one of the disks fails?
  • How to set up email notification when a disk fails and needs replacement?


Here are the thoughts I've managed to come up with myself:

1. To verify, we pull out a disk and see how the system behaves... everything is OK; after a reboot, everything is still OK. Then the pulled-out disk needs to be brought back into the array somehow so that it works and resynchronizes.
(It didn't work out for me: the system does not see the reconnected disk. The article advises: “Connect a new disk. Using the fdisk utility, create the corresponding partitions on it: sdb1, sdb2 and sdb3. Set their type to fd with fdisk's t command.” That seems simple enough, but I would like an example, and I also want to automate partitioning the new disk so it doesn't have to be done by hand on every replacement... can this be scripted? See the sketch right after this.)
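A minimal sketch of such a script, assuming the surviving disk is /dev/sda, the replacement is /dev/sdb, and the arrays are /dev/md0 (root) and /dev/md1 (swap); all four names are assumptions, so adjust them to your layout before running anything:

#!/bin/sh
# Sketch: prepare a replacement disk and return it to the arrays.
# OLD is the surviving array member, NEW the freshly inserted disk.
OLD=/dev/sda
NEW=/dev/sdb
# Clone the partition table (including the fd "Linux raid autodetect"
# partition types) from the good disk onto the new one.
sfdisk -d "$OLD" | sfdisk "$NEW"
# Add the new partitions back into their arrays; mdadm starts
# resynchronization automatically.
mdadm /dev/md0 --add "${NEW}1"
mdadm /dev/md1 --add "${NEW}2"

This avoids repeating the manual fdisk steps on every replacement.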

1.1. So that the system can boot from either disk, the GRUB bootloader must be installed on both of them: the Ubuntu installer (from version 9.04, it seems) does this itself on special request, but a new disk has to get the bootloader manually or, again, via an automation script; see below.
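A sketch (Ubuntu 10.04 ships GRUB 2; the device names are assumptions):

%# grub-install /dev/sda
%# grub-install /dev/sdb

After this, either disk has a bootable MBR, so the machine can start even when the other disk is gone.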

2. Actually, how to replace a disk: connect it, partition it, add it to the array with sudo mdadm --add /dev/mdN /dev/sdbM, and install the bootloader on the new disk.

3. A spare disk sits unused, but it is ready to start working at any moment... though it will probably still need some time to synchronize first.
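To check that the replacement was accepted and to watch the resynchronization (standard md tools; only the array name is setup-specific):

%# cat /proc/mdstat
%# mdadm --detail /dev/md0

/proc/mdstat shows rebuild progress as a percentage; mdadm --detail lists each member's state (active, spare, faulty).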


2 answers
Gregory, 2011-11-11
@multifinger

There is an article on xgu about this. A disk in an array can be conditionally marked as failed using the --fail (-f) switch:

%# mdadm /dev/md0 --fail /dev/hde1
%# mdadm /dev/md0 -f /dev/hde1

A failed disk can be removed using the --remove (-r) switch:

%# mdadm /dev/md0 --remove /dev/hde1
%# mdadm /dev/md0 -r /dev/hde1

A new disk can be added to the array using the --add (-a) or --re-add switches:

%# mdadm /dev/md0 --add /dev/hde1
%# mdadm /dev/md0 -a /dev/hde1

And, if necessary, update mdadm.conf. The mdadm daemon itself can send mail about the status of the RAID. For the system to be able to boot from a software array, you must specify --metadata=0.90 when creating it. I make a 500 MB /boot (md0), put the rest in md1, then LVM on top. Accordingly, the disks must be partitioned identically with fdisk. I can't say anything about a "spare disk", but as far as I know mdadm can build a RAID1 on 3 disks. I do not vouch for the result)))
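On the mail notifications: the monitor takes the destination address from mdadm.conf. A minimal sketch (the address is an assumption):

MAILADDR admin@example.com

goes into /etc/mdadm/mdadm.conf; on Ubuntu the mdadm package normally runs the monitor via its init script. A test alert can be sent with:

%# mdadm --monitor --scan --test --oneshot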

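On the spare: adding a partition to a RAID1 whose members are all active turns it into a hot spare, and mdadm rebuilds onto it automatically when a member fails. A sketch, assuming the third disk is /dev/sdc and partitioned like the other two:

%# mdadm /dev/md0 --add /dev/sdc1
%# mdadm --detail /dev/md0

In the --detail output the new device should appear with the state "spare".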
shadowalone, 2011-11-11
@shadowalone

To copy the partition table to a new disk without fussing over fdisk, there is the sfdisk utility.
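A sketch (device names are assumptions: /dev/sda is the good disk, /dev/sdb the new one):

%# sfdisk -d /dev/sda > parts.out
%# sfdisk /dev/sdb < parts.out

sfdisk -d dumps the layout in a format sfdisk can read back in, so the dump also doubles as a backup of the partition table.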
