Hard disks
darkAlert, 2019-06-29 16:18:49

How to split two disks in RAID1 into two independent ones?

Two HDDs were combined into a RAID1 array under Ubuntu.
Now I want to split them into two independent disks, keeping the data on them (they contain only files, nothing system-related).
I stopped md0:

mdadm --stop /dev/md0
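
(As a sketch: it is worth confirming the array really stayed stopped, since udev can re-assemble it immediately.)

cat /proc/mdstat    # md0 should no longer be listed here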

fdisk shows:
# fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4E0BB34F-91D3-46B2-898F-3E60C180F1D0

Device     Start        End    Sectors  Size Type
/dev/sda1   2048 7814035455 7814033408  3.7T Linux filesystem


Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 8CC7B52A-3F9E-49C8-9A84-C1A15DD86737

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 7814035455 7814033408  3.7T Linux filesystem


Disk /dev/sdc: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x948a3a1f

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdc1        2048 937701375 937699328 447.1G 83 Linux

Next, I tried to clear the superblock on sdb1, but got:
# mdadm --zero-superblock /dev/sdb1
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
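
(That error usually means something still holds the partition open, most often an md device after udev auto-reassembly; a hedged way to check:)

ls /sys/class/block/sdb1/holders/    # non-empty output means an md device still holds sdb1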

I can't mount it as a separate drive either:
# mount /dev/sdb1 /media/HDD2/
mount: /media/HDD2: unknown filesystem type 'linux_raid_member'.
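
(mount refuses because libblkid sees the mdraid signature instead of a filesystem; the signatures on the partition can be listed non-destructively, e.g.:)

blkid -p /dev/sdb1    # low-level probe; should report TYPE="linux_raid_member"
wipefs /dev/sdb1      # lists signatures; erases nothing unless -a is given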

Help me please!
===UPDATE1:
So it looks like I broke sdb1. I tried this:
mdadm --zero-superblock /dev/sdb --force
The superblock is gone now, but trying to mount sdb1 gives:
# mount /dev/sdb1 /media/HDD2/
mount: /media/HDD2: mount(2) system call failed: Structure needs cleaning.
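
("Structure needs cleaning" is how ext4 reports filesystem corruption; a cautious first step is a read-only check, a sketch assuming ext4 as in UPDATE3:)

fsck.ext4 -n /dev/sdb1    # -n: report problems only, change nothing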

Trying to assemble the RAID1 back from the untouched sda1, but getting:
# mdadm --assemble /dev/md0 /dev/sda1
mdadm: /dev/sda1 is busy - skipping

===UPDATE2:
Anyway, I managed to bring the RAID back up from a single disk using --run:
# mdadm --assemble /dev/md0 /dev/sda1 --run
mdadm: /dev/md0 has been started with 1 drive (out of 2).
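
(The resulting state can be confirmed with a sketch like:)

mdadm --detail /dev/md0    # should report: State : clean, degraded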

===UPDATE3:
I had to repair the filesystem on sdb1:
fsck.ext4 /dev/sdb1
As a result, the problem was partially solved:
- sdb1 is freed as a standalone disk (what I wanted)
- But sda1 remains part of a degraded RAID1


3 answers
Melkij, 2019-06-29
@melkij

Linux RAID (mdadm) stores the array metadata at the beginning of the partition, so the filesystem itself starts a bit later, usually 2048 sectors in. You can check the exact value with mdadm -E /dev/<member>, the Data Offset field.
Accordingly, if you no longer want to use Linux RAID, you can either mount the filesystem at that offset (via a loop device with -o offset=...) or move the partition boundary in the partition table (delete the partition and recreate the same one, but with the start sector shifted forward by that many sectors).
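
(A sketch of the loop-device variant; the 2048-sector offset is an assumption, take the real value from the Data Offset field:)

mdadm --examine /dev/sda1 | grep 'Data Offset'    # e.g. "Data Offset : 2048 sectors"
losetup --find --show --read-only --offset $((2048 * 512)) /dev/sda1
mount -o ro /dev/loop0 /mnt    # use whatever loop device losetup printed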

zersh, 2019-06-30
@zersh

Regarding the latest update 3:
You can change the number of devices in the array, for example with the sketch below.
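
(A sketch: shrink the degraded mirror to a clean single-device array; --force is required because mdadm normally refuses a RAID1 with fewer than two devices:)

mdadm --grow /dev/md0 --raid-devices=1 --force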
Or mount it not by the UUID of the RAID but by the UUID of the filesystem on the particular partition. You can view them via:
lsblk -o NAME,UUID
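
(For instance, a hypothetical /etc/fstab line keyed to that filesystem UUID; the UUID value below is made up:)

UUID=0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0  /media/HDD2  ext4  defaults  0  2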

darkAlert, 2019-06-29
@darkAlert

I.e., they will still remain part of the degraded RAID1? And is there some way to make them forget that they were once part of a RAID (while keeping the data)?
