Managing RAID on Linux - Derek Vadala

array, based on the number of disks connected to the system.

Vendors have several different names for the autoconfiguration process, such as Auto Configure, Assisted Configuration, or Configuration Wizard, depending on the controller you purchase. I recommend against using these shortcuts, because they obfuscate the configuration process and build arrays using the lowest common denominator for important array properties. These features also don't take future expansion into account. A system administrator might know that while there are only five disks connected to the controller, one more is arriving next week. Because the controller makes autoconfiguration suggestions based only on hardware that is already connected to the system, it might recommend creating a four-disk RAID-5 with one spare disk. However, the system administrator will realize that creating a RAID-5 that uses all five disks is a better option. The disk arriving next week can be introduced later and set up as a spare disk.

Write Cache

Controller cache memory operates in one of two modes. Each mode offers trade-offs between performance and reliability.

Write-back caching

When the controller is configured for write-back caching, it holds data in its memory until the cache is full or the controller is idle, and then commits the data to disk. This mode yields the best performance, but it's not as reliable as write-through caching, because a system failure could result in the loss of data that is still in the controller's memory and has not yet been flushed to disk.

Having a controller battery is very helpful when using the write-back caching method, because a power failure then simply means that unflushed buffers are preserved until the system restarts, at which point they can be committed to disk. Extra controller memory is also important for write-back caching. On heavily used systems with lots of sequential disk I/O, it's a good idea to consider getting a memory upgrade.

Write-through caching

Write-through caching commits data to disk immediately. This method is much slower than write-back caching, but it ensures that all writes are committed to disk and are never lost because a failure occurred while unwritten buffers were waiting in the controller's memory. If you use write-through caching, the amount of memory on your controller is not as important as getting a fast controller and fast disks.
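The trade-off between the two modes can be sketched as a toy model. This is purely illustrative, not any vendor's firmware: the class name `CacheSim` and the `flush_threshold` parameter are invented for the example.

```python
# Toy model of the two controller caching modes described above.
# Illustrative only; names and thresholds are invented for this sketch.

class CacheSim:
    def __init__(self, mode, flush_threshold=4):
        self.mode = mode                  # "write-back" or "write-through"
        self.buffer = []                  # data held in controller memory
        self.disk = []                    # data safely committed to disk
        self.flush_threshold = flush_threshold

    def write(self, block):
        if self.mode == "write-through":
            self.disk.append(block)       # committed immediately; never at risk
        else:
            self.buffer.append(block)     # held in controller memory
            if len(self.buffer) >= self.flush_threshold:
                self.flush()              # flush when memory fills (or on idle)

    def flush(self):
        # Commit everything held in controller memory to disk.
        self.disk.extend(self.buffer)
        self.buffer.clear()

    def power_failure(self):
        # Without a battery, any unflushed write-back buffers are lost.
        lost = len(self.buffer)
        self.buffer.clear()
        return lost
```

In this model, a power failure costs a write-through controller nothing, while a write-back controller without a battery can lose everything written since the last flush; a battery turns that loss into a deferred flush after restart.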

Each array can generally use its own caching method. Thus, it's possible to configure heavily used, less important arrays for write-back caching, and critical, less frequently used system disks for write-through operation.

Don't forget that other system features might help provide the security that write-through caching delivers. Data journaling under XFS or ext3 is one option. Using an uninterruptible power supply (UPS) with automatic system shutdown is always a good idea and might also provide the necessary safeguards required to use write-back caching even in critical situations.

Logical Drives

It should be clear by now that sales sheets and documentation about RAID use many terms interchangeably, although their meaning varies depending on the context. The term logical drive is used to mean an array in some software RAID implementations. In the context of a controller, the term logical drive has another meaning. Some controllers let you split a single large array into multiple smaller logical drives. So, while I might create a single RAID-5 that is several hundred gigabytes in size, I can further segment that array by creating logical drives that contain a subset of the total storage space. This is useful when you are working with very large disks, but want to allot a manageable amount of space for system disks. It's also useful if subpartitioning at the operating system level won't work because you need more partitions than are supported by a single disk. Not all controllers implement logical drives.
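The arithmetic behind carving an array into logical drives is straightforward. The sketch below is a hypothetical helper, not a real controller API: `raid5_capacity` and `carve_logical_drives` are invented names, and sizes are in gigabytes.

```python
# Hypothetical helpers, not a controller API: illustrate how a single large
# RAID-5 array can be carved into smaller logical drives.

def raid5_capacity(num_disks, disk_size_gb):
    # RAID-5 spends one disk's worth of space on parity,
    # so usable capacity is (n - 1) * disk size.
    return (num_disks - 1) * disk_size_gb

def carve_logical_drives(array_gb, sizes_gb):
    # Allot the requested logical drives; any leftover space
    # becomes one final logical drive.
    if sum(sizes_gb) > array_gb:
        raise ValueError("requested logical drives exceed array capacity")
    drives = list(sizes_gb)
    remainder = array_gb - sum(sizes_gb)
    if remainder:
        drives.append(remainder)
    return drives
```

For example, five 18 GB disks in a RAID-5 yield 72 GB of usable space; carving out a 25 GB logical drive for a database leaves a 47 GB drive for everything else.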

Using logical drives also helps maximize storage space. Let's say that you have a system with five 18 GB disks. You know already that you need roughly 25 GB to store a MySQL database. It might

