Managing RAID on Linux - Derek Vadala [61]
RAID-4 and RAID-5 arrays show a combination of the information provided for RAID-0 or RAID-1 arrays:
md1 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]
53761152 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
Here, a RAID-5 array defined at /dev/md1 contains four member disks. The second line provides information about the chunk size and health of the array (four out of four disks are operational). In addition, the output shows the parity algorithm used, which is algorithm 2 in this case, corresponding to the left-symmetric algorithm. The numeric value reported here comes from a case switch found in the kernel RAID-5 code in the file /usr/src/linux/drivers/block/raid5.c. Each of the usable algorithms is defined there by name.
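The mapping between the numeric value and the algorithm name can be captured in a small lookup table. The sketch below (not from the book) mirrors the four ALGORITHM_* constants defined in the kernel RAID-5 code; the function name is illustrative:

```python
# Parity algorithm numbers as defined in the kernel RAID-5 source;
# algorithm 2 (left-symmetric) is the default for new RAID-5 arrays.
RAID5_ALGORITHMS = {
    0: "left-asymmetric",
    1: "right-asymmetric",
    2: "left-symmetric",
    3: "right-symmetric",
}

def algorithm_name(n):
    """Translate the 'algorithm N' value shown in /proc/mdstat."""
    return RAID5_ALGORITHMS.get(n, "unknown")

print(algorithm_name(2))  # left-symmetric
```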
You might be wondering why the second line contains redundant information about which RAID level is in use. Let's look at a RAID-4 example to clarify.
md1 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]
53761152 blocks level 4, 64k chunk, algorithm 0 [4/4] [UUUU]
Notice that raid5 is listed as the array type on the first line, but level 4 appears on the second line. That's because RAID-4 uses the RAID-5 driver. So when working with these RAID levels, be sure to examine the second line of md device output to verify that the proper RAID level is reported. Finally, although parity algorithm 0 (left-asymmetric) is listed, the parity algorithm has no effect on RAID-4; the entry is simply a placeholder and can be safely ignored.
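Because the second line is the authoritative source of the RAID level, a monitoring script should read it rather than the driver name on the first line. A minimal Python sketch (the function name and regular expression are illustrative, not from the book):

```python
import re

def actual_raid_level(md_entry_lines):
    """Return the RAID level from the second line of an md entry.

    The first line may say 'raid5' even for a RAID-4 array, because
    RAID-4 is serviced by the RAID-5 driver; the 'level N' field on
    the second line reports the real level.
    """
    match = re.search(r"level (\d+)", md_entry_lines[1])
    return int(match.group(1)) if match else None

entry = [
    "md1 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]",
    "53761152 blocks level 4, 64k chunk, algorithm 0 [4/4] [UUUU]",
]
print(actual_raid_level(entry))  # 4
```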
Failed disks
When a disk fails, its status is reflected in /proc/mdstat.
md1 : active raid1 sdc1[1] sdb1[0](F)
17920384 blocks [2/1] [_U]
* * *
Warning
The first line lists disks in backward order, from most recently added to first added. In this example, the (F) marker indicates that /dev/sdb1 has failed. Note on the following line that there are two disks in the array, but only one of them is active. The next element shows that the first disk (/dev/sdb1) is inactive and the second (/dev/sdc1) is in use. So a U denotes a working disk, and an _ denotes a failed disk. The output is a bit counterintuitive because the order of disks shown in the first line is the opposite of the order of U or _ elements in the second line. Furthermore, the order in both lines can change as disks are added or removed.
* * *
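Because the (F) marker is appended directly to the member's entry on the first line, failed disks can be extracted mechanically. The following Python sketch (function name and regular expression are illustrative) pulls out any members flagged as failed:

```python
import re

def failed_members(md_status_line):
    """Return the member devices marked (F) on an md status line."""
    return re.findall(r"(\w+)\[\d+\]\(F\)", md_status_line)

line = "md1 : active raid1 sdc1[1] sdb1[0](F)"
print(failed_members(line))  # ['sdb1']
```

Parsing the (F) marker avoids the ordering pitfall described in the warning above, since it does not depend on matching disk positions against the U and _ elements.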
Resynchronization and reconstruction
/proc/mdstat also provides real-time information about array reconstruction and resynchronization. The following mirroring array is nearly halfway done with its initial synchronization.
md1 : active raid1 sdc1[1] sdb1[0]
17920384 blocks [2/2] [UU]
[=========>...........] resync = 46.7% (8383640/17920384) finish=5.4min speed=29003K/sec
In this example, the process is 46.7 percent complete (also indicated by the progress bar). The first number in parentheses shows how many blocks have been synchronized, out of the total number of blocks (the second number). The resynchronization is expected to take another 5.4 minutes, at a rate of roughly 29 MB (29003K) per second.
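These figures are easy to verify by hand. Remaining blocks (1K each) divided by the reported speed in K/sec gives the finish estimate; note that the kernel truncates rather than rounds, which the sketch below reproduces:

```python
done, total = 8383640, 17920384  # blocks synchronized / total blocks
speed = 29003                    # K/sec, as reported by /proc/mdstat

# Truncate (as the kernel does) instead of rounding.
pct = int(done / total * 1000) / 10
eta_min = int((total - done) / speed / 60 * 10) / 10

print(pct, eta_min)  # 46.7 5.4
```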
Recovery looks nearly identical, except that the failed disk and the newly inserted disk are both displayed.
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
17920384 blocks [2/1] [_U]
[=>...................] recovery = 6.1% (1096408/17920384)
finish=8.6min speed=32318K/sec
Note that on the third line, the process is called recovery. Remember from Chapter 3 that recovery occurs when a new disk is inserted into an array after a disk fails. Resynchronization, on the other hand, happens when a new array is created or when disks aren't synchronized.
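A script watching /proc/mdstat can tell the two operations apart simply by looking for the keyword on the progress line. A minimal sketch (the function name is illustrative):

```python
def md_operation(progress_line):
    """Identify which operation md is performing from its
    /proc/mdstat progress line: 'recovery' (rebuilding onto a
    replacement disk) or 'resync' (initial/full synchronization)."""
    for op in ("recovery", "resync"):
        if op in progress_line:
            return op
    return None

print(md_operation("[=>...................] recovery = 6.1%"))  # recovery
```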
Although three disks are listed on the first output line, only two disks appear