Mdadm: array configured as md0 turns into md127 after reboot and doesn't work?
Description of the problem
I created a JBOD (linear) RAID array following this guide:
www.linuxhomenetworking.com/wiki/index.php/Quick_H...
The /dev/md0 array is created, mounted to a folder, content is uploaded to it and shared via SMB. Great!
After a reboot the array disappears; in its place a supposedly healthy /dev/md127 array shows up, so fstab does not mount it.
It cannot be mounted manually either: mount complains about a broken filesystem, and the array has a different UUID.
The cure is simple:
mdadm -S /dev/md127
mdadm -As
mount -a
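(Here -S stops the stray array, -As re-assembles everything listed in mdadm.conf in scan mode, and mount -a then mounts it per fstab; the fix, however, lasts only until the next reboot.) Below, for comparison, the /proc/mdstat and ARRAY scan lines for the two states: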
md127 : active linear sdb[0] sdc[1] sdd[2]
      3907036680 blocks super 1.2 0k rounding
ARRAY /dev/md/echo:0 metadata=1.2 name=echo:0 UUID=8dc82c4a:f6b1fecf:f5468f88:c237aedf
md0 : active linear sdb1[0] sdd1[2] sdc1[1]
      3907028931 blocks super 1.2 0k rounding
ARRAY /dev/md/0 metadata=1.2 name=echo:0 UUID=6c857d7b:4cddca25:d567dc18:774dfca3
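Note the difference between the two states: md127 gets assembled from the whole disks (sdb, sdc, sdd) under UUID 8dc82c4a:..., while the intended md0 is built from the partitions (sdb1, sdc1, sdd1) under UUID 6c857d7b:..., the one referenced in the mdadm.conf below. That would be consistent with leftover superblocks on the raw disks being picked up at boot.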
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR [email protected]

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 name=echo:0 UUID=6c857d7b:4cddca25:d567dc18:774dfca3
What I have tried

I googled the following solutions:

1) Stripped the optional parameters from the ARRAY directive in mdadm.conf.
2) Reassembled the array with --update=homehost.
3) Recreated the array from scratch.

None of this helped.

Moreover, even deleting the config changes nothing: after a reboot the /dev/md127 device is created anyway.

Between editing the config and rebooting I always ran "update-initramfs -u".
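One way to check whether the rebuilt initramfs actually picked up the edited config (a hedged sketch, assuming Debian/Ubuntu initramfs-tools; the image name may differ):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm
# etc/mdadm/mdadm.conf should show up in the listing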
HELP!
Try creating the RAID with:
mdadm -v --create /dev/md0 --auto=yes --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
then
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
reboot
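After the reboot it is worth verifying the result (standard checks; /dev/md0 assumes the array came up under its intended name):

cat /proc/mdstat
# the array should be listed as md0, not md127
mdadm --detail /dev/md0
# Name and UUID should match the ARRAY line in mdadm.conf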
VICTORY!
In mdadm.conf, find the HOMEHOST directive and change <system> to your computer's hostname, for example vasia.
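For example, a minimal sketch (vasia stands in for your real hostname):

# /etc/mdadm/mdadm.conf
HOMEHOST vasia

Then run update-initramfs -u and reboot.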
It took six hours of desperation to figure this out. The Unix way is so Unix. >_<
I described exactly why this happens here (in English).
The same problem is observed in Ubuntu Server Edition 11.04: on reboot the arrays are renamed from md0 and md1 to md126 and md127, and the system does not boot.
Good night, everyone. I have the same problem :(. I have tried much of what is written on the internet; it doesn't help (or rather, it helps, but only until the next reboot). The box runs Ubuntu Server 11.04. The RAID was not configured during installation; it was set up manually later via mdadm.
On the box where everything is fine, no HOMEHOST is set and everything works.
It runs:
Linux dionis 2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=34566eca:bdb8575c:592076d5:9d523fca name=dionis:0
ARRAY /dev/md/1 metadata=1.2 UUID=01085237:585b5a78:1e8356cd:5f99e5c2 name=dionis:1
ARRAY /dev/md/2 metadata=1.2 UUID=18ef7141:19fde7f4:711104fc:00bd9e32 name=dionis:2
ARRAY /dev/md/3 metadata=1.2 UUID=c896c651:0bb74807:126044dc:3142e555 name=dionis:3
ARRAY /dev/md/7 metadata=1.2 UUID=afdc372f:e3771ccb:6829c9f5:2dc2a5bb name=dionis:7
ARRAY /dev/md/4 metadata=1.2 UUID=12cf5cc9:479407ae:d26ab5cb:27fa089b name=dionis:4
ARRAY /dev/md/6 metadata=1.2 UUID=ef588daf:eed64c26:fc21b08f:88be939e name=dionis:6
# This file was auto-generated on Fri, 05 Aug 2011 03:01:19 +0400
# by mkconf $Id$
mdadm -Ds
ARRAY /dev/md/3 metadata=1.2 name=dionis:3 UUID=c896c651:0bb74807:126044dc:3142e555
ARRAY /dev/md/4 metadata=1.2 name=dionis:4 UUID=12cf5cc9:479407ae:d26ab5cb:27fa089b
ARRAY /dev/md/0 metadata=1.2 name=dionis:0 UUID=34566eca:bdb8575c:592076d5:9d523fca
ARRAY /dev/md/7 metadata=1.2 name=dionis:7 UUID=afdc372f:e3771ccb:6829c9f5:2dc2a5bb
ARRAY /dev/md/1 metadata=1.2 name=dionis:1 UUID=01085237:585b5a78:1e8356cd:5f99e5c2
ARRAY /dev/md/2 metadata=1.2 name=dionis:2 UUID=18ef7141:19fde7f4:711104fc:00bd9e32
Where it doesn't work (only one array, because I was experimenting):
Linux thea 2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Sun, 04 Sep 2011 04:21:24 +0400
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 UUID=bebc3a96:6c5c179a:54c08e89:b8c0dc78 name=thea:0
mdadm -Ds
ARRAY /dev/md/thea:0 metadata=1.2 name=thea:0 UUID=bebc3a96:6c5c179a:54c08e89:b8c0dc78
The system itself lives on a hardware RAID built from the system drives.
P.S. Setting HOMEHOST does NOT help me. After a reboot the arrays reappear as /dev/md/thea:N.
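One workaround sometimes suggested for exactly this symptom (a sketch only; /dev/sdb1 and /dev/sdc1 are placeholders for your actual member devices): re-assemble the array while rewriting the name stored in its superblock, then refresh the config and the initramfs:

mdadm --stop /dev/md127
mdadm --assemble /dev/md0 --name=0 --update=name /dev/sdb1 /dev/sdc1
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
# remove any duplicate ARRAY lines by hand afterwards
update-initramfs -u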
Ha, I'm sorry: everything actually works for me with this config
ARRAY /dev/md/0 metadata=1.2 name=hyperion:0 UUID=5645d497:9e0a45f1:e8b80c0a:327aa0d2
I simply had never run
update-initramfs -u
on this box before, while on the second box the installer apparently did run it.
P.S. The HOMEHOST solution seems extremely dumb to me: if you ever change the hostname, you will forget to change it in mdadm.conf as well, and right after the reboot you are in trouble :(
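To see which name/homehost a member superblock actually records (the device is a placeholder), one can run:

mdadm --examine /dev/sdb1 | grep Name
# prints something like:  Name : vasia:0  (local to host vasia)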
Vyacheslav Ulyushov discovered the following:
I had this problem after installing updates.
There are two mdadm packages in the Ubuntu repository (ubuntu4 and ubuntu4.1):
mdadm_3.1.4-1+8efb9d1ubuntu4_amd64.deb
mdadm_3.1.4-1+8efb9d1ubuntu4.1_amd64.deb
I suspect the first one was installed from the installation disk. These packages differ in the file
/usr/share/initramfs-tools/scripts/init-premount/mdadm,
namely at line 14.
In the ubuntu4 version:
if ! mdadm --misc --scan --detail >/dev/null 2>&1; then
in ubuntu4.1:
if ! mdadm --misc --scan --detail --test >/dev/null 2>&1; then
After the update I removed the --test parameter, and the disks were named as usual: /dev/md0, /dev/md1, etc.
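If you do not want to wait for a fixed package, the same edit can be scripted (a sketch; the path is taken from the report above, and a later mdadm package upgrade may bring --test back):

sed -i 's/--detail --test/--detail/' /usr/share/initramfs-tools/scripts/init-premount/mdadm
update-initramfs -u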