
Managing RAID on Linux - Derek Vadala [127]

MaxMultSect=16, MultSect=16

On systems where this value is set below the maximum, increasing it doesn't necessarily increase your I/O throughput, so I recommend experimenting with different values. In fact, even if your disk is already set to its maximum, as in my case, throttling it down and running some throughput tests is still a good idea. The following command decreases the multiple sector I/O value to 8:

# hdparm -m8 /dev/hda

/dev/hda:

setting multcount to 8

multcount = 8 (on)

There are quite a few caveats and compatibility issues surrounding the multiple sector I/O value. I recommend reading the hdparm(8) manual page for a complete discussion of these issues.
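The experimentation suggested above can be scripted. The following sketch tries several multiple sector I/O values and measures throughput after each change; it requires root, and /dev/hda is only an example device name — substitute your own disk. Results from hdparm -t vary between runs, so each setting is tested three times.

```shell
#!/bin/sh
# Sketch: measure throughput at several multiple sector I/O values.
# Requires root; /dev/hda is an example device -- substitute your own.
DISK=/dev/hda

for m in 2 4 8 16; do
    hdparm -m"$m" "$DISK" > /dev/null
    echo "multcount=$m:"
    # hdparm -t results fluctuate, so repeat each test a few times.
    for run in 1 2 3; do
        hdparm -t "$DISK" | grep 'Timing'
    done
done
```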

Interrupt unmasking

Normally when disk I/O is performed, the rest of the system must wait until the request is completed. On heavily loaded systems, it might be useful to allow other hardware to perform some tasks while waiting for disk I/O to finish. Interrupt unmasking won't specifically increase disk throughput, but it will increase the overall speed and responsiveness of a Linux system. Use the hdparm -u1 command to enable this functionality.

# hdparm -u1 /dev/hda

/dev/hda:

setting unmaskirq to 1 (on)

unmaskirq = 1 (on)

I must warn you that this feature has been reported to cause hazardous results with some hardware configurations, including filesystem corruption. Again, please consult the hdparm manual page for further details and use this option with caution. Use hdparm -u0 if you need to disable interrupt unmasking.

Filesystem read-ahead

The filesystem read-ahead determines how many sectors are read in advance, in anticipation that contiguous sequential blocks will be required by the current operation. The default value for this setting is 8 sectors (4 KB). Increasing it helps systems that perform a lot of sequential I/O, while a smaller value helps random read performance.

To change the value to 4 sectors per read:

# hdparm -a4 /dev/hda

/dev/hda:

setting fs readahead to 4

readahead = 4 (on)

To increase the value to 16 sectors per read:

# hdparm -a16 /dev/hda
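Since the best read-ahead value depends on your workload, it is worth sweeping a few settings and comparing throughput, much as with the multiple sector I/O value. This sketch assumes root privileges and uses /dev/hda as a placeholder device; note that hdparm -t only measures sequential reads, so judging random-read performance requires one of the benchmarks mentioned below.

```shell
#!/bin/sh
# Sketch: compare sequential throughput at several read-ahead values.
# Requires root; /dev/hda is an example device -- substitute your own.
DISK=/dev/hda

for a in 4 8 16 32; do
    hdparm -a"$a" "$DISK" > /dev/null
    printf 'readahead=%s: ' "$a"
    # Print just the MB/sec figure from the "Timing ..." line.
    hdparm -t "$DISK" | awk '/Timing/ {print $(NF-1), $NF}'
done
```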

Testing your configuration

After you have made some modifications to your disks, you can use the -t option to perform a rudimentary throughput test:

# hdparm -t /dev/hda

/dev/hda:

Timing buffered disk reads: 64 MB in 1.65 seconds = 38.79 MB/sec

I recommend using one of the other benchmark programs, such as bonnie++ or tiobench, in lieu of, or in addition to, hdparm -t.
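Because a single hdparm -t run can be noisy, averaging several runs gives a more trustworthy figure. The following sketch parses the MB/sec value from each "Timing buffered disk reads" line and averages it with bc; /dev/hda is an example device, and the script requires root.

```shell
#!/bin/sh
# Sketch: average several hdparm -t runs, since single results vary.
# Requires root; /dev/hda is an example device -- substitute your own.
DISK=/dev/hda
RUNS=5
total=0

for i in $(seq 1 $RUNS); do
    # Extract the MB/sec figure from a line like:
    #   Timing buffered disk reads: 64 MB in 1.65 seconds = 38.79 MB/sec
    rate=$(hdparm -t "$DISK" | awk '/Timing/ {print $(NF-1)}')
    total=$(echo "$total + $rate" | bc)
done

echo "average: $(echo "scale=2; $total / $RUNS" | bc) MB/sec"
```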

Saving your configuration

Most of the settings that can be altered using hdparm are not persistent through cold system reboots. Therefore, I recommend creating an initialization script that runs each time the system starts. Add one command for each hard disk that includes all the options you wish to modify. For example:

# hdparm -a16 -m16 -u1 -d1 -X69 /dev/hda
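Such an initialization script might look like the following sketch. The script name, device list, and option values are examples only — use the devices present on your system and the values that tested best on your hardware.

```shell
#!/bin/sh
# Sketch of an init script (e.g. /etc/init.d/hdparm) that reapplies
# hdparm settings at boot. Devices and option values are examples.
case "$1" in
  start)
    for disk in /dev/hda /dev/hdc; do
        # Only touch devices that actually exist as block devices.
        [ -b "$disk" ] && hdparm -a16 -m16 -u1 -d1 -X69 "$disk"
    done
    ;;
  *)
    echo "Usage: $0 start"
    ;;
esac
```

Link the script into the appropriate runlevel directory (for example, /etc/rc.d/rc3.d/ on some distributions) so it runs at each boot.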

Tuning Disk Elevators

Linux tries to balance read and write operations on block devices to maximize performance. This helps ensure that heavily utilized systems aren't dominated solely by either read or write operations. The ratio can be tuned on a per-device basis, using the elvtune command.

Use elvtune to get a list of the current settings:

# elvtune /dev/sdb

/dev/sdb elevator ID 246

read_latency: 256

write_latency: 512

max_bomb_segments: 0

The -w and -r flags allow you to alter the read and write latency settings. The best settings really depend on system and device usage, so I recommend experimenting until you get an optimal setting. Try doubling each value, one at a time, and running some performance tests until you find a balance that works for you.

# elvtune -r 512 -w 1024 /dev/sdb

/dev/sdb elevator ID 246

read_latency: 512

write_latency: 1024

max_bomb_segments: 0

Like settings altered with hdparm, elvtune settings should be added to your initialization scripts, so that the changes are reapplied each time the system starts. elvtune operates on low-level block devices, not arrays, so be certain to alter the settings for each array component, not just one disk.
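A simple loop keeps the per-component settings consistent. The device list and latency values below are examples — substitute your array's actual member disks and the values your own testing produced.

```shell
#!/bin/sh
# Sketch: apply the same elevator settings to every array component.
# The device list and values are examples -- substitute your own.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    elvtune -r 512 -w 1024 "$disk"
done
```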
