Managing RAID on Linux - Derek Vadala

unused devices: <none>

As with RAID-1, you don't have to wait until the initial resynchronization is complete before you create a filesystem, but remember that until the process finishes, you have no data redundancy. Notice that this time the resynchronization is slower than with the RAID-1 we created earlier. That's because parity information must be generated for each stripe, and RAID-4 has a write bottleneck caused by its dedicated parity disk. Resynchronization of a RAID-4 also requires much more CPU overhead. Examine the kernel threads raid5d and raid5syncd, which handle the resynchronization, using top or another process-monitoring program. On my test system, these threads consume about 60 percent of the CPU during the resynchronization, compared with about 2 percent for a RAID-1 initial synchronization.
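The CPU cost comes from computing parity for every stripe. As an illustrative sketch (not the kernel's actual code path, which operates on full chunks rather than single bytes), parity is simply the bitwise XOR of the data chunks, and any one lost chunk can be rebuilt by XOR-ing the parity with the survivors:

```shell
# Toy one-byte "chunks" from two data disks (arbitrary example values):
d0=$(( 0x5a )); d1=$(( 0x3c ))

# Parity is the bitwise XOR of the data chunks -- this is the work the
# resync thread performs for every stripe in the array.
parity=$(( d0 ^ d1 ))
printf 'parity:     0x%02x\n' "$parity"     # prints 0x66

# If the disk holding d0 dies, XOR-ing the parity with the surviving
# chunk recovers the lost data.
rebuilt=$(( parity ^ d1 ))
printf 'rebuilt d0: 0x%02x\n' "$rebuilt"    # prints 0x5a
```

The same identity extends to any number of data disks, which is why RAID-4 and RAID-5 can survive exactly one member failure.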

RAID-5 (Distributed Parity)

RAID-5, for the same reasons as RAID-4, requires a minimum of three disks to be more useful than a RAID-0 or RAID-1 array. Configuration is nearly identical to that of the other levels, except for the addition of the parity-algorithm variable, which selects the algorithm used to generate and check the checksum information that provides fault tolerance. A simple /etc/raidtab for RAID-5 is shown here:

# RAID-5 with three member disks
raiddev /dev/md0
    raid-level            5
    chunk-size            64
    persistent-superblock 1
    parity-algorithm      left-symmetric
    nr-raid-disks         3
    device                /dev/sdb1
    raid-disk             0
    device                /dev/sdc1
    raid-disk             1
    device                /dev/sdd1
    raid-disk             2

The left-symmetric algorithm yields the best disk performance for RAID-5, although you can change this value to one of the other algorithms (right-symmetric, left-asymmetric, or right-asymmetric). While left-symmetric is the best choice, it is not the default for raidtools, so be certain to specify it explicitly in /etc/raidtab. If you omit parity-algorithm, the array defaults to left-asymmetric.
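To see what these algorithms control, the following sketch computes which member holds the parity block for each stripe under the "left" layouts, where parity starts on the last disk and rotates toward the first on each successive stripe (an illustration of the placement rule, not kernel code):

```shell
disks=3   # member disks in the array
for stripe in 0 1 2 3 4 5; do
    # "left" layouts: parity starts on the last disk and moves one
    # disk to the left on each successive stripe, wrapping around
    parity_disk=$(( (disks - 1) - stripe % disks ))
    echo "stripe $stripe: parity on disk $parity_disk"
done
```

The symmetric/asymmetric distinction affects where the data blocks of each stripe are placed relative to the parity block; left-symmetric resumes data on the disk immediately after the parity disk, which keeps sequential reads spread evenly across all members.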

Execute mkraid to create this array:

# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 17920476kB, raid superblock at 17920384kB
disk 1: /dev/sdc1, 17920476kB, raid superblock at 17920384kB
disk 2: /dev/sdd1, 17920476kB, raid superblock at 17920384kB

Create the same RAID-5 using mdadm:

# mdadm -Cv -l5 -c64 -n3 -pls /dev/md0 /dev/sd{b,c,d}1
mdadm: array /dev/md0 started.

mdadm defaults to the left-symmetric algorithm, so you can safely omit the -p option from the command line.

After you issue mkraid or mdadm to create the array, /proc/mdstat will report information about the array, which, as in RAID-1 and RAID-4, must also undergo an initial resynchronization:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      35840768 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [========>............]  resync = 40.2% (7219776/17920384) finish=6.0min speed=29329K/sec
unused devices: <none>
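The block count in this output follows directly from the RAID-5 geometry: one disk's worth of space is consumed by parity, so usable capacity is the per-member size times the number of disks minus one. Using the per-member figure from the mkraid output above:

```shell
member_kb=17920384   # usable space per member, from the mkraid output above
disks=3
usable_kb=$(( (disks - 1) * member_kb ))
echo "$usable_kb"    # prints 35840768 -- the block count /proc/mdstat reports
```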

RAID-5 provides a cost-effective balance of performance and redundancy. You can add more disks, using device/raid-disk, or spare disks, using device/spare-disk, to create large, fault-tolerant storage. The following example is for a five-disk RAID-5, with one spare disk. Notice once again how I've ordered the disks so they alternate between I/O channels.

# A 5-disk RAID-5 with one spare disk.
raiddev /dev/md0
    raid-level            5
    chunk-size            64
    persistent-superblock 1
    nr-raid-disks         5
    nr-spare-disks        1
    device                /dev/sdb1
    raid-disk             0
    device                /dev/sdf1
    raid-disk             1
    device                /dev/sdc1
    raid-disk             2
    device                /dev/sdg1
    raid-disk             3
    device                /dev/sdd1
    raid-disk             4
    # The spare disk.
    device                /dev/sdh1
    spare-disk            0

Or, create the same array using mdadm:

# mdadm -C -l5 -c64 -n5 -x1 /dev/md0 /dev/sd{b,f,c,g,d,h}1
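Assuming members of the same size as in the three-disk example, the five-disk array yields four disks' worth of usable space; the spare contributes nothing until it replaces a failed member:

```shell
member_kb=17920384   # assuming the same member size as the earlier example
disks=5              # active members only; the spare is excluded
usable_kb=$(( (disks - 1) * member_kb ))
echo "$usable_kb"    # prints 71681536
```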

Hybrid Arrays

One of the most important features of software RAID is its ability to use existing arrays as member disks. This property allows you to not only create extremely large arrays, but to combine different RAID levels to achieve
