Managing RAID on Linux - Derek Vadala [56]
mke2fs /dev/md2
mount /dev/md2 /mnt/array
You could also add a spare disk to each of the mirroring arrays to make the solution more robust. And you can combine more than two mirrors into a RAID-0:
A RAID-10: Three 2-disk mirrors are combined into a RAID-0
# Each mirror has its own spare disk
# Mirror #1
raiddev /dev/md0
raid-level 1
chunk-size 64
nr-raid-disks 2
nr-spare-disks 1
device /dev/sdb1
raid-disk 0
device /dev/sdc1
raid-disk 1
device /dev/sdd1
spare-disk 0
# Mirror #2
raiddev /dev/md1
raid-level 1
chunk-size 64
nr-raid-disks 2
nr-spare-disks 1
device /dev/sde1
raid-disk 0
device /dev/sdf1
raid-disk 1
device /dev/sdg1
spare-disk 0
# Mirror #3
raiddev /dev/md2
raid-level 1
chunk-size 64
nr-raid-disks 2
nr-spare-disks 1
device /dev/sdh1
raid-disk 0
device /dev/sdi1
raid-disk 1
device /dev/sdj1
spare-disk 0
# Mirrors /dev/md0, /dev/md1 and /dev/md2 are
# combined into a RAID-0, /dev/md3.
raiddev /dev/md3
raid-level 0
chunk-size 64
persistent-superblock 1
nr-raid-disks 3
device /dev/md0
raid-disk 0
device /dev/md1
raid-disk 1
device /dev/md2
raid-disk 2
Given the preceding file, run mkraid on each component RAID-1 and finally on /dev/md3, the RAID-0. Or, with mdadm:
# mdadm -C -n2 -l1 -x1 /dev/md0 /dev/sd{b,c,d}1
# mdadm -C -n2 -l1 -x1 /dev/md1 /dev/sd{e,f,g}1
# mdadm -C -n2 -l1 -x1 /dev/md2 /dev/sd{h,i,j}1
# mdadm -C -n3 -l0 -c64 /dev/md3 /dev/md{0,1,2}
Clearly, it's a waste of resources to provide a separate spare disk to each component array. Unfortunately, the md driver does not directly support the sharing of spare disks. However, mdadm does let you share spare disks virtually. (See Chapter 4 and Chapter 7.)
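For example, an /etc/mdadm.conf for the RAID-10 above might assign all three mirrors to one spare group (the group name mirrors is arbitrary):

```
DEVICE /dev/sd[b-j]1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1 spare-group=mirrors
ARRAY /dev/md1 devices=/dev/sde1,/dev/sdf1,/dev/sdg1 spare-group=mirrors
ARRAY /dev/md2 devices=/dev/sdh1,/dev/sdi1,/dev/sdj1 spare-group=mirrors
```

When mdadm runs in monitor mode (mdadm --monitor), it can move a spare disk from one array in the group to another array in the same group that has suffered a failure, so a single spare can effectively protect all three mirrors.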
While RAID-10 is both fast and reliable, its storage overhead can make it undesirable: half of the total disk space is consumed by mirroring, and dedicated spares consume even more.
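The overhead is easy to quantify. The short Python sketch below tallies the nine-disk RAID-10 defined above; the 100 GB disk size is a hypothetical figure for illustration:

```python
# Usable capacity of the nine-disk RAID-10 defined above,
# assuming (hypothetically) identical 100 GB disks.
disk_gb = 100
mirrors = 3          # three two-disk RAID-1 arrays
spares = 3           # one dedicated spare per mirror

total = (2 * mirrors + spares) * disk_gb   # raw space purchased
usable = mirrors * disk_gb                 # one disk's worth per mirror

print(total, usable)        # 900 300
print(usable / total)       # only a third of the raw space is usable here
```

Without the spares, the ratio improves to exactly one half, which is the figure usually quoted for RAID-10.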
RAID-50 (striped parity)
Since the disk requirements of RAID-10 are so high, you might find it more economical to combine RAID-5 arrays into a RAID-0, a hybrid configuration called RAID-50. This hybrid array offers good read and write performance and, like RAID-10, can survive multiple disk failures, provided they occur in different component arrays. Because RAID-50 sacrifices only one disk's worth of space for parity in each RAID-5 component array, it is more cost-effective. The following /etc/raidtab file defines two three-disk RAID-5 arrays, /dev/md0 and /dev/md1, which are combined into a RAID-0 at /dev/md2.
# First RAID-5
raiddev /dev/md0
raid-level 5
chunk-size 64
persistent-superblock 1
nr-raid-disks 3
parity-algorithm left-symmetric
device /dev/sdb1
raid-disk 0
device /dev/sdc1
raid-disk 1
device /dev/sdd1
raid-disk 2
# Second RAID-5
raiddev /dev/md1
raid-level 5
chunk-size 64
persistent-superblock 1
nr-raid-disks 3
parity-algorithm left-symmetric
device /dev/sde1
raid-disk 0
device /dev/sdf1
raid-disk 1
device /dev/sdg1
raid-disk 2
# The two RAID-5's are combined into a single
# RAID-0.
raiddev /dev/md2
raid-level 0
chunk-size 64
persistent-superblock 1
nr-raid-disks 2
device /dev/md0
raid-disk 0
device /dev/md1
raid-disk 1
Use the following commands to create the same RAID-50 using mdadm:
# mdadm -C -n3 -l5 -c64 -pls /dev/md0 /dev/sd{b,c,d}1
# mdadm -C -n3 -l5 -c64 -pls /dev/md1 /dev/sd{e,f,g}1
# mdadm -C -n3 -l0 -c64 /dev/md2 /dev/md{0,1}
Since each RAID-5 must undergo its initial synchronization, the CPU will be heavily utilized when you create a RAID-50. If the system is performing other tasks, then you might want to wait until each initial synchronization has completed before creating a filesystem on /dev/md2 and carrying out any other administrative tasks. It might also be worthwhile to initialize each RAID-5 individually, waiting for its initial synchronization to complete before creating the second one.
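To script that wait, you can poll /proc/mdstat until the resync progress lines disappear. The helper below is a sketch; the sample string mimics the kernel's progress format rather than reading a live system:

```python
import re

def resync_active(mdstat_text):
    """Return True if any array in the mdstat text is still
    resyncing or recovering."""
    return bool(re.search(r"(resync|recovery)\s*=", mdstat_text))

# A sample /proc/mdstat fragment while md0 is still building:
sample = """md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      586890304 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [=>...................]  resync = 12.6% (37043392/292945152) finish=127.5min
"""
print(resync_active(sample))   # True

# In practice you would loop, e.g.:
#   while resync_active(open("/proc/mdstat").read()):
#       time.sleep(30)
```

Recent versions of mdadm can do this for you with mdadm --wait (-W), which blocks until any resync or recovery on the named device finishes.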
Finishing Touches
If you use mdadm to create arrays, then you should probably take a minute to create an /etc/mdadm.conf file before you