Managing RAID on Linux - Derek Vadala [51]
* * *
Warning
Be warned that some distributions (Red Hat, for one) halt system initialization if an /etc/fstab entry cannot be properly checked and mounted. So if the kernel doesn't automatically start your array, an entry in /etc/fstab might prevent the system from booting successfully. It's a good idea to place commands that manually start arrays in your initialization scripts, before filesystems are checked and mounted, even if you're already successfully using autodetection. This provides additional protection and, at worst, displays some innocuous warnings on the console.
* * *
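One way to follow that advice is a guarded fragment near the top of an init script. The sketch below is hypothetical, assuming raidtools and an array defined in /etc/raidtab; mdadm users would run mdadm --assemble instead.

```shell
# Hypothetical init-script fragment: start the array manually before
# the init sequence reaches filesystem checks and mounts.
if [ -f /etc/raidtab ]; then
    # raidstart reads the array definition from /etc/raidtab; if the
    # kernel already started the array, this prints harmless warnings.
    raidstart /dev/md0
fi
# fsck and mount run later in the normal init sequence.
```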
Linear mode is also good for reusing old ATA disks that vary in speed and size because the variations between these disks will have minimal impact on the overall performance of the array. The following example shows four ATA drives as members of a linear array:
# A linear array with four ATA member disks
raiddev /dev/md0
raid-level linear
chunk-size 64
persistent-superblock 1
nr-raid-disks 4
device /dev/hda1
raid-disk 0
device /dev/hdb1
raid-disk 1
device /dev/hdc1
raid-disk 2
device /dev/hdd1
raid-disk 3
Use mdadm to create an identical array:
# mdadm -Cv -llinear -n4 /dev/md0 /dev/hd{a,b,c,d}1
RAID-0 (Striping)
You can create a stripe with raidtools by making a few simple changes to the /etc/raidtab file used earlier for the linear mode array:
# A striped array with two member disks
raiddev /dev/md0
raid-level 0
chunk-size 64
persistent-superblock 1
nr-raid-disks 2
device /dev/sdb1
raid-disk 0
device /dev/sdc1
raid-disk 1
Since you've changed the array type to striped (0), the chunk-size now has an impact on array performance. Because the chunk-size defines the amount of data written to a member disk during each write, choosing a chunk-size that approximates the average write size (the average file size) is desirable. Remember that unless you first erase the RAID superblocks from previously used disks, mdadm will prompt you for confirmation before reusing them.
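Leftover superblocks can be erased ahead of time with mdadm's --zero-superblock mode. The snippet below is a sketch that merely echoes the commands as a dry run against the partitions used above; drop the echo to actually wipe them, which destroys any existing array metadata on those disks.

```shell
# Dry run (hypothetical): print the commands that would erase old RAID
# superblocks from the stripe's member partitions.  Remove "echo" to
# really zero them.
for part in /dev/sdb1 /dev/sdc1; do
    echo mdadm --zero-superblock "$part"
done
```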
Run mkraid to create and activate the RAID-0:
# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 17920476kB, raid superblock at 17920384kB
disk 1: /dev/sdc1, 17920476kB, raid superblock at 17920384kB
Alternatively, use mdadm to create a two-disk stripe with a 64 KB chunk-size on /dev/md0, using disk partitions /dev/sdb1 and /dev/sdc1:
# mdadm -Cv -l0 -c64 -n2 /dev/md0 /dev/sd{b,c}1
mdadm: array /dev/md0 started.
/proc/mdstat now reports that the new RAID-0 array has been created and is online:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid0 sdc1[1] sdb1[0]
35840768 blocks 64k chunks
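As a quick sanity check (my arithmetic, not part of the tool's output): a two-disk RAID-0 exposes the sum of the space below each member's RAID superblock, which matches the blocks figure above.

```shell
# Each member offers 17920384 KB below its superblock (see the mkraid
# output earlier); a two-disk stripe exposes the sum, in 1 KB blocks.
echo $((17920384 * 2))   # prints 35840768
```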
You can now use mke2fs to create a filesystem on the array.
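A hypothetical invocation, assuming ext2 with 4 KB blocks on the 64 KB-chunk stripe above, is echoed below as a dry run; drop the echo to run it. The -R stride option tells mke2fs how many filesystem blocks fit in one chunk (64/4 = 16), helping it spread its metadata across the members.

```shell
# With 64 KB chunks and 4 KB blocks, each chunk holds 64/4 = 16
# filesystem blocks; pass that as the stride.  Echoed as a dry run.
echo mke2fs -b 4096 -R stride=$((64 / 4)) /dev/md0
```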
Separating disks in a RAID-0 onto different controllers will help improve your overall array performance. You can arrange device/raid-disk entries in your /etc/raidtab file contrary to the physical layout of disks and controllers. In this example, I have four disks connected to a two-channel SCSI controller. /dev/sda and /dev/sdb are on channel A, and /dev/sdc and /dev/sdd are on channel B. Notice how I alternate the device entries in this example /etc/raidtab file:
# A striped array that alternates member disks between two channels
raiddev /dev/md0
raid-level 0
chunk-size 64
persistent-superblock 1
nr-raid-disks 4
device /dev/sda1
raid-disk 0
device /dev/sdc1
raid-disk 1
device /dev/sdb1
raid-disk 2
device /dev/sdd1
raid-disk 3
When using mdadm, simply alternate devices on the command line to achieve the same effect:
# mdadm -Cv -l0 -n4 -c64 /dev/md0 /dev/sd{a,c,b,d}1
You can follow this methodology for any number of controllers. Remember that Linux will logically arrange disks in detection order, beginning with /dev/sda.
RAID-1 (Mirroring)
Setting up a mirror is slightly different from using linear mode or RAID-0. We already know that mirroring