Managing RAID on Linux - Derek Vadala [75]
-r, --remove
Removes a member disk from an active array. This option works like the raidtools command raidhotremove.
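A minimal sketch of removing a member disk (the device names /dev/md0 and /dev/sdc1 are illustrative; the command requires root and a real array):

```shell
# Remove a member disk from the active array /dev/md0.
# The disk must already be marked failed, or be a spare;
# a healthy active member cannot be removed directly.
mdadm /dev/md0 --remove /dev/sdc1
```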
-f, --fail, --set-faulty
Marks a member disk in an active array as failed. This option works like the raidtools command raidsetfaulty.
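For example, manually failing a disk, assuming an array /dev/md0 with a member /dev/sdc1 (names illustrative):

```shell
# Mark /dev/sdc1 as failed. If the array has a spare disk,
# the kernel begins reconstruction onto it automatically.
mdadm /dev/md0 --fail /dev/sdc1
```

A failed disk can then be removed with --remove and, after replacement, re-added with --add.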
-R, --run
Starts an inactive array. When mdadm assembles arrays that are missing component disks, it will mark the arrays as inactive, even if they can function with disks missing (for example, a RAID-1, RAID-4, or RAID-5 that is in degraded mode). The --run option will start an inactive array that has already been assembled. --run works as a standalone option, but it can also be combined with --assemble to automatically start a degraded array.
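A sketch of both uses, assuming a degraded RAID-5 whose surviving members are /dev/sdb1, /dev/sdc1, and /dev/sdd1 (device names illustrative):

```shell
# Assemble the array from its surviving members and start it
# immediately, despite the missing disk.
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Or, start an array that was assembled earlier but left inactive.
mdadm --run /dev/md0
```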
-o, --readonly
Marks an array as read-only.
-w, --readwrite
Marks an array as read/write.
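The two options are complements, as in this sketch (array name illustrative):

```shell
# Switch the array to read-only; writes are refused until
# it is switched back to read/write.
mdadm --readonly /dev/md0
mdadm --readwrite /dev/md0
```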
--zero-superblock
Erases the RAID superblock from the specified device.
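For example, clearing the superblock from a former member disk so the kernel no longer recognizes it as part of an array (partition name illustrative):

```shell
# Wipe the md superblock from /dev/sdc1. This is destructive:
# the disk's array-membership information cannot be recovered.
mdadm --zero-superblock /dev/sdc1
```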
Example usage
The query option outputs brief information about an array or member disk. For example:
# mdadm --query /dev/md0
/dev/md0: 34.18GiB raid5 3 devices, 1 spare. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.
When used on member disks, --query will output disk sequence information. The following example uses the short form of the command:
# mdadm -Q /dev/sdc1
/dev/sdc1: is not an md array
/dev/sdc1: device 2 in 3 device active raid5 md0. Use mdadm --examine for more detail.
When an array is also a member disk, as in the case of a hybrid array, --query displays information about both of its roles:
# mdadm -Q /dev/md1
/dev/md1: 17.09GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md1: device 1 in 2 device active raid0 md2. Use mdadm --examine for more detail.
The output of mdadm --detail displays information about an active array. There is some overlap between this information and the data found in /proc/mdstat, but mdadm provides some additional information. In the following example, we have a four-disk RAID-5:
# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.00
Creation Time : Wed Mar 13 06:52:41 2002
Raid Level : raid5
Array Size : 53761152 (51.27 GiB 55.05 GB)
Device Size : 17920384 (17.09 GiB 18.35 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Mar 13 06:52:41 2002
State : dirty, no-errors
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Number   Major   Minor   RaidDisk   State
   0       8      17        0       active sync   /dev/sdb1
   1       8      33        1       active sync   /dev/sdc1
   2       8      49        2       active sync   /dev/sdd1
   3       8      65        3       active sync   /dev/sde1
UUID : 3d793b9a:c1712878:1113a282:465e2c8f
The first section of the listing displays general information about the array, including the version of its md superblock, the creation date and time, the RAID level, the total size, and the total number of disks.
The second section displays information about the current state of the array. Update Time reflects the last time that the array changed status. This includes disk failures, as well as normal operations such as array activation. State reflects the health of the array; in this case, the array is operating within normal parameters, as indicated by no-errors. The dirty state might be a bit confusing, since it implies that there is a problem. Dirty simply means that there are array stripes that haven't yet been committed to disk by the kernel. When an array is stopped, dirty stripes are written and the array becomes clean. Both dirty and clean indicate normal operation.
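You can observe the transition from dirty to clean by stopping an array and then examining a member's superblock, a sketch assuming the array and member names above:

```shell
# Stop the array; any outstanding dirty stripes are written
# out first, leaving the array in a clean state.
mdadm --stop /dev/md0

# The superblock on each member now records the clean state.
mdadm --examine /dev/sdb1
```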
Next, a list of Active Devices, Working Devices, Failed Devices, and Spare Devices is displayed. Active Devices reflects the number of functioning (non-failed) array members, but does not include spare disks. Working Devices is the total number of non-failed disks in the array (Active Devices + Spare Devices). Failed Devices and Spare Devices display the number