How to convert mdadm RAID1 array to RAID0

Sometimes far-fetched changes need to be made to how storage is laid out and arranged. In this instance the requirement is to convert an existing (but non-critical) RAID1 mdadm array to RAID0 (even less critical).

Start by checking the status of the existing array and note down the necessary identifiers:

[root@srv /]# mdadm --detail /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Tue Aug 21 17:27:41 2012
Raid Level : raid1
Array Size : 524287860 (500.00 GiB 536.87 GB)
Used Dev Size : 524287860 (500.00 GiB 536.87 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Feb 18 14:43:21 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       2       8        1        1      active sync   /dev/sda1

Ensure that any filesystem residing on the array is unmounted (which is why I called it non-critical earlier).
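
For example, assuming the array was mounted at /mnt/data (a hypothetical mount point, adjust to your setup):

[root@srv /]# umount /mnt/data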

Then run the necessary command:

[root@srv /]# mdadm /dev/md1 --grow --level=0
mdadm: level of /dev/md1 changed to raid0

This will cause mdadm to remove one of the members from the array, essentially turning it into a single-disk RAID0 array:

    Number   Major   Minor   RaidDevice State
       2       8        1        0      active sync   /dev/sda1
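
The dropped member (/dev/sdb1 here) still carries its old RAID superblock at this point. If you want to inspect that leftover metadata before re-adding the partition:

[root@srv /]# mdadm --examine /dev/sdb1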

Proceed to add the removed member back:

[root@srv /]# mdadm /dev/md1 --grow --raid-devices=2 --add /dev/sdb1
mdadm: level of /dev/md1 changed to raid4
mdadm: added /dev/sdb1
mdadm: Need to backup 8K of critical section..

The message says that the array is being converted to RAID4; that is only temporary while the array is reshaped, and the level will change back to RAID0 once the process completes.

[root@srv /]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]

md1 : active raid4 sdb1[3] sda1[2]
524287860 blocks super 1.0 level 4, 4k chunk, algorithm 5 [3/2] [U__]
[=>...................] reshape = 0.1% (1046620/524287860) finish=208.2min speed=41864K/sec
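
The reshape can take a while (over three hours in this case), so something like watch saves re-running the command by hand:

[root@srv /]# watch -n 10 cat /proc/mdstat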

In case you get a "mdadm: failed to update superblock during re-add" error at this step, try zero-filling the problematic partition to remove its previous RAID signature:

[root@srv /]# dd if=/dev/zero of=/dev/sdb1 bs=10M status=progress
536745082880 bytes (537 GB, 500 GiB) copied, 3527 s, 152 MB/s
dd: error writing '/dev/sdb1': No space left on device
51201+0 records in
51200+0 records out
536870912000 bytes (537 GB, 500 GiB) copied, 3565.52 s, 151 MB/s
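
Zero-filling the whole partition works but is slow. Assuming the leftover md superblock is the only thing that needs clearing, erasing just the superblock is much quicker:

[root@srv /]# mdadm --zero-superblock /dev/sdb1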

Then retry the add.

When the reshape is done, check the status:

[root@srv ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Feb 18 19:22:47 2023
Raid Level : raid0
Array Size : 1048311808 (999.75 GiB 1073.47 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Sat Feb 18 19:22:47 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Layout : -unknown-
Chunk Size : 512K

Consistency Policy : none

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
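
If the array is referenced in mdadm's configuration file, update the definition there too. The path varies by distribution (/etc/mdadm.conf on Red Hat-style systems, /etc/mdadm/mdadm.conf on Debian-style ones); mdadm can print the current definition for you:

[root@srv ~]# mdadm --detail --scan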
