Managing RAID on Linux - Derek Vadala [64]
nr-raid-disks integer
The nr-raid-disks directive defines the number of active member disks in the current array. This number does not include any spare disks that might be used in an array that supports failover. (Use the nr-spare-disks parameter to indicate the number of spare disks in the current array.) nr-raid-disks takes an integer value greater than zero and is required once for each array that is defined using the raiddev parameter. Subsequently, a number of device and raid-disk entries equal to the number defined with nr-raid-disks is required to specify the block special file and disk order for each member disk.
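For illustration, these directives might appear as follows in /etc/raidtab. This is a hypothetical fragment; the md device and partition names are placeholders:

```
# A two-disk RAID-0 array. nr-raid-disks is 2, so exactly two
# device/raid-disk pairs follow, giving each member its position.
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              64
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
```

Note that raid-disk numbering starts at zero and determines the order in which member disks are striped.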
nr-spare-disks integer
Spare disks provide a mechanism for hot failover in the event of a drive failure. nr-spare-disks takes an integer value greater than or equal to zero, matching the number of available spares. As with nr-raid-disks, each spare must be specified later using the device and spare-disk parameters. Spare disks are optional for arrays that support failover (mirroring, RAID-4, and RAID-5). RAID-0 and linear mode do not support the use of spare disks, so nr-spare-disks is never used with these RAID levels. Spares need to be defined in /etc/raidtab if you want automatic failover; you can instead replace a failed disk manually using raidhotremove and raidhotadd, but that requires user intervention.
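A mirrored array with one spare might be defined as follows; the device names here are placeholders:

```
# A two-disk RAID-1 array with one spare. nr-spare-disks is 1,
# so one device/spare-disk pair follows the active members.
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          1
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
        device                  /dev/sdd1
        spare-disk              0
```

If either active member fails, the kernel automatically rebuilds onto /dev/sdd1 without user intervention.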
persistent-superblock boolean
The persistent-superblock directive determines whether an array contains a RAID superblock. The RAID superblock is a small metadata area reserved at the end of each member disk. This metadata allows the kernel to identify disk order and membership even when a drive has moved to a different controller, and it is essential for autodetection. persistent-superblock should be enabled for any newly created array.
This parameter should be set to zero only when you need to provide backward compatibility with versions of the md driver that did not support a RAID superblock (version 0.35 and earlier). Set persistent-superblock to zero when a legacy array is being used with the new md driver.
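In practice, the directive takes a simple boolean value within an array definition:

```
# New arrays: enable the superblock so the kernel can
# autodetect member disks at boot.
persistent-superblock   1

# Legacy arrays created with md driver version 0.35 or
# earlier: disable it for backward compatibility.
# persistent-superblock 0
```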
parity-algorithm algorithm_name
The parity-algorithm directive specifies the algorithm used to generate parity blocks. Note that this directive is used only with RAID-5. Parity is used to reconstruct data during a drive failure. There are four choices available, and they determine how parity is distributed throughout the array (Figure 4-1). Left-symmetric, right-symmetric, left-asymmetric, and right-asymmetric are all valid choices, but left-symmetric is recommended because it yields the best overall performance.
Figure 4-1. Each algorithm distributes parity and data blocks differently.
You can specify the parity-algorithm by name in /etc/raidtab or use its numerical equivalent (see Table 4-2). If you fail to specify a parity-algorithm in /etc/raidtab, the md driver defaults to left-asymmetric, which is not an optimal choice, so be certain to select left-symmetric explicitly.
Table 4-2. Parity algorithms

Name                            Numeric value
left-asymmetric (default)       0
right-asymmetric                1
left-symmetric (best choice)    2
right-symmetric                 3
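Either form is accepted in /etc/raidtab; the following two lines are equivalent:

```
parity-algorithm        left-symmetric
# or, using the numeric value from Table 4-2:
parity-algorithm        2
```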
chunk-size size
chunk-size specifies the size of the array stripe in kilobytes. Values may range from 4 to 4096 kilobytes and must be powers of two. A bigger chunk-size will work well for large, sequential operations, but a smaller chunk-size will yield better performance for smaller, random operations. Most users should choose a chunk-size of about 64 KB.
With linear mode, the chunk-size specifies the rounding factor for the array. The rounding factor helps evenly group I/O operations. It's similar to the chunk-size, except it does not spread I/O across multiple disks.
chunk-size has no effect on RAID-1, but to satisfy error checking in raidtools, you must specify a valid chunk-size for any RAID-1 defined in /etc/raidtab.
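Pulling these directives together, a complete RAID-5 definition might look like the following. This is a hypothetical example; the device names are placeholders:

```
# A three-disk RAID-5 array with one spare, a 64 KB chunk-size,
# and the recommended left-symmetric parity algorithm.
raiddev /dev/md2
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          1
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              64
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        spare-disk              0
```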
* * *
Warning
The chunk-size