
Managing RAID on Linux - Derek Vadala [53]

Failed disks are marked by (F) in the following listing. The md driver has automatically inserted spare disk /dev/sdd1 and begun recovery.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 sdd1[2] sdc1[1](F) sdb1[0]
      17920384 blocks [2/1] [U_]
      [====>................]  recovery = 20.1% (3606592/17920384)
      finish=7.7min speed=30656K/sec
unused devices: <none>

RAID-1 is certainly not limited to arrays with only two member disks and one spare disk. The following example describes a four-disk mirror with two dedicated spare disks.

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           4
        nr-spare-disks          2
        chunk-size              64
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
        device                  /dev/sdd1
        raid-disk               2
        device                  /dev/sde1
        raid-disk               3
        device                  /dev/sdf1
        spare-disk              0
        device                  /dev/sdg1
        spare-disk              1

In this example, data is mirrored onto each raid-disk, yielding four copies of every block, while the remaining two disks (/dev/sdf1 and /dev/sdg1) are spares that will be inserted automatically as members in the event of a disk failure.

mdadm users can replicate this setup with the following command:

# mdadm -Cv -l1 -n4 -x2 /dev/md0 /dev/sd{b,c,d,e,f,g}1

Failed disks can also be removed from arrays manually, and spare disks inserted into them. See the Managing Disk Failures section in Chapter 7 for more information on how to manage disk failures.
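As a sketch of that manual workflow with mdadm (the device names here are illustrative, not taken from a specific example in the text):

```shell
# Mark a member disk as failed, remove it from the array,
# then add a replacement; mdadm's manage mode handles all three steps.
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdf1
```

Once the replacement is added, the md driver begins resynchronizing onto it, and its progress appears in /proc/mdstat just as in the recovery listing above.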

While this array can withstand multiple disk failures, it has a write overhead equal to its number of member disks: each block of data is written to disk four times, making this arrangement very reliable but extremely slow for write operations. Distributing member disks across multiple controllers or I/O channels helps alleviate the write bottleneck. Read performance, by contrast, is potentially fast, because data can be read in parallel from all four members. A solution like this might be ideal for mission-critical, read-intensive applications that perform few writes. Video-on-demand is a good example of such a situation.

RAID-4 (Dedicated Parity)

Since RAID-4 requires that a single drive be dedicated to storing parity information, a minimum of three drives is needed to make RAID-4 useful. Using fewer than three drives would offer no increase in storage capacity over RAID-1.

A two-drive RAID-4 system would offer no better performance or fault tolerance than RAID-1 or RAID-0. Therefore, in situations where only two drives are available, RAID-0 or RAID-1 should be used. Furthermore, RAID-5 offers much better performance than RAID-4, so almost everyone should choose RAID-5 instead.

The following is a sample RAID-4 configuration using /etc/raidtab:

# RAID-4 with three member disks
raiddev /dev/md0
        raid-level              4
        chunk-size              64
        persistent-superblock   1
        nr-raid-disks           3
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
        device                  /dev/sdd1
        raid-disk               2

Use mkraid to construct this array:

# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 17920476kB, raid superblock at 17920384kB
disk 1: /dev/sdc1, 17920476kB, raid superblock at 17920384kB
disk 2: /dev/sdd1, 17920476kB, raid superblock at 17920384kB

Or mdadm:

# mdadm -Cv -l4 -c64 -n3 /dev/md0 /dev/sd{b,c,d}1
mdadm: array /dev/md0 started.

When this array is initialized, the last member disk listed in /etc/raidtab, or on the command line using mdadm, becomes the parity disk—/dev/sdd1, in this case. RAID-4 also supports spare disks.
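For instance, the same array could be created with one dedicated spare by adding -x1 and listing one extra device (here /dev/sde1 is an assumed spare, not part of the example above):

```shell
# Hypothetical variant: three-member RAID-4 with one spare disk.
# -x1 reserves the last listed device (/dev/sde1) as a hot spare.
mdadm -Cv -l4 -c64 -n3 -x1 /dev/md0 /dev/sd{b,c,d,e}1
```

If a member fails, the md driver automatically inserts the spare and rebuilds onto it, just as in the RAID-1 recovery shown earlier.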

Like other arrays with redundancy, /proc/mdstat will indicate that the initial resynchronization phase is underway. Parity RAID resynchronization ensures that all stripes contain the correct parity block.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      35840768 blocks level 4, 64k chunk, algorithm 0 [3/3] [UUU]
      [========>............]  resync = 40.2% (7206268/17920384)
      finish=8.1min speed=21892K/sec
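The block count reported above can be sanity-checked: RAID-4 dedicates one member's worth of space to parity, so usable capacity is the per-disk size multiplied by one less than the number of members. A quick shell check using the figures from the mkraid output:

```shell
# RAID-4 usable capacity: one member's worth of blocks holds parity,
# so n member disks yield (n - 1) * per-disk blocks of usable space.
disks=3
blocks_per_disk=17920384   # per-member size from the mkraid output
echo $(( (disks - 1) * blocks_per_disk ))   # prints 35840768
```

This matches the 35840768 blocks that /proc/mdstat reports for the three-disk array.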
