Squid: The Definitive Guide - Duane Wessels

interesting. The ideal hit ratio for this workload is about 58%. Due to an as-yet unresolved Polygraph bug, however, the hit ratio decreases slightly as the test progresses.

Keep in mind that these results are meant to demonstrate the relative performance of different options, rather than the absolute values. You'll get different numbers if you repeat the tests on different hardware.

Linux

Linux is obviously a popular choice for Squid. It supports a wide variety of filesystems and storage schemes. These results come from Linux kernel Version 2.4.19 (released August 2, 2002) with SGI's XFS patches Version 1.2.0 (released February 11, 2003) and ReiserFS Version 3.6.25.

The kernel's file descriptor limit is set to 8192. I used this command to configure Squid before compiling:

% ./configure --enable-storeio=diskd,ufs,aufs,null,coss --with-aufs-threads=32
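The 8192-descriptor limit mentioned above must be in place before configure runs, because Squid probes the limit at build time. A hypothetical sketch of raising it on a 2.4-era Linux system follows; the paths and commands are standard Linux mechanisms, not taken from the book's test setup beyond the 8192 figure:

```
# System-wide ceiling on open files (requires root):
echo 8192 > /proc/sys/fs/file-max

# Per-process limit for the shell that will run ./configure and squid:
ulimit -HSn 8192

# Verify before compiling, so configure detects the higher limit:
ulimit -n
```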

The Linux results are summarized in Table D-1, and Figure D-1 shows the traces. You can see that coss is the best performer, with aufs coming in second and diskd third. As I'm writing this, coss is an experimental feature and not necessarily suitable for a production system. In the long run, you'll probably be better off with aufs.
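Selecting one of these storage schemes is done with the cache_dir directive in squid.conf. The fragments below are illustrative only; the paths, sizes, and directory counts are assumptions, not the configuration used in these tests:

```
# aufs: 16-GB cache directory with 16 first-level and
# 256 second-level subdirectories
cache_dir aufs /cache0 16384 16 256

# coss: experimental at the time; a single 1-GB storage
# file with 512-byte blocks
cache_dir coss /cache0/coss 1024 block-size=512
```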

Table D-1. Linux benchmarking results

Storage scheme   Filesystem   Mount options     Throughput (xact/sec)   Response time (sec)   Hit ratio (%)
coss             -            -                 326.3                   1.59                  53.9
aufs(1)          ext2fs       noatime           168.5                   1.45                  56.3
diskd(1)         ext2fs       noatime           149.4                   1.53                  56.1
aufs(2)          ext2fs       -                 110.0                   1.46                  55.6
ufs(1)           ext2fs       -                 54.9                    1.52                  55.6
ufs(2)           ext3fs       -                 48.4                    1.49                  56.8
ufs(3)           xfs          -                 40.7                    1.54                  55.3
ufs(4)           reiserfs     notail, noatime   29.7                    1.55                  55.0
ufs(5)           reiserfs     -                 21.4                    1.55                  55.1

Figure D-1. Linux filesystem benchmarking traces

Note that the noatime option gives a significant boost in performance to aufs. The throughput jumps from 110 to 168 transactions per second with the addition of this mount option. Linux also has an async option, but it is enabled by default. I did not run any tests with async disabled.
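The noatime option is normally applied at mount time. An /etc/fstab entry like the following would make it permanent; the device name and mount point here are examples, not the test machine's actual layout:

```
# /etc/fstab entry for an ext2 cache partition (device and
# mount point are hypothetical)
/dev/sdb1   /cache0   ext2   rw,noatime   0   2

# Or apply it to an already-mounted filesystem without rebooting:
# mount -o remount,noatime /cache0
```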

Of the many filesystem choices, ext2fs gives the best performance. ext3fs (ext2fs plus journaling) is only slightly slower, followed by xfs and reiserfs.

FreeBSD

FreeBSD is another popular Squid platform, and my personal favorite. Table D-2 and Figure D-2 summarize the results for FreeBSD. Again, coss exhibits the highest throughput, followed by diskd. The aufs storage scheme doesn't currently run on FreeBSD. These results come from FreeBSD Version 4.8-STABLE (released April 3, 2003). I built a kernel with the following noteworthy options:

options MSGMNB=16384

options MSGMNI=41

options MSGSEG=2049

options MSGSSZ=64

options MSGTQL=512

options SHMSEG=16

options SHMMNI=32

options SHMMAX=2097152

options SHMALL=4096

options MAXFILES=8192

options NMBCLUSTERS=32768

options VFS_AIO
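On later FreeBSD releases, most of these limits can be set as boot-time tunables rather than compiled into the kernel. A sketch of the /boot/loader.conf equivalents is below; the tunable names come from the kern and kern.ipc sysctl trees and should be verified against your release, since not every option maps one-to-one:

```
# /boot/loader.conf equivalents (verify names on your release)
kern.ipc.msgmnb="16384"
kern.ipc.msgmni="41"
kern.ipc.msgseg="2049"
kern.ipc.msgssz="64"
kern.ipc.msgtql="512"
kern.ipc.shmseg="16"
kern.ipc.shmmni="32"
kern.ipc.shmmax="2097152"
kern.ipc.shmall="4096"
kern.maxfiles="8192"
kern.ipc.nmbclusters="32768"
```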

Table D-2. FreeBSD benchmarking results

Storage scheme   Filesystem   Mount options                Throughput (xact/sec)   Response time (sec)   Hit ratio (%)
coss             -            -                            330.7                   1.58                  54.5
diskd(1)         UFS          async, noatime, softupdate   129.0                   1.58                  54.1
diskd(2)         UFS          -                            77.4                    1.47                  56.2
ufs(1)           UFS          async, noatime, softupdate   38.0                    1.49                  56.8
ufs(2)           UFS          noatime                      31.1                    1.54                  55.0
ufs(3)           UFS          async                        30.2                    1.51                  55.9
ufs(4)           UFS          softupdate                   29.9                    1.51                  55.7
ufs(5)           UFS          -                            24.4                    1.50                  56.4

Figure D-2. FreeBSD filesystem benchmarking traces

Enabling the async, noatime, and softupdate options boosts the standard ufs performance from 24 to 38 transactions per second. However, using one of the other storage schemes increases the sustainable throughput even more.
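Applying this combination to a cache partition might look like the following sketch. The device name and mount point are hypothetical; note that on FreeBSD, soft updates are enabled with tunefs on an unmounted filesystem, while async and noatime are ordinary mount options:

```
# Enable soft updates on the (unmounted) cache filesystem:
umount /cache0
tunefs -n enable /dev/da1s1e

# Remount with async and noatime:
mount -o async,noatime /dev/da1s1e /cache0

# Or make it permanent in /etc/fstab:
# /dev/da1s1e   /cache0   ufs   rw,async,noatime   0   2
```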

FreeBSD's diskd performance (129/sec) isn't quite as good as Linux's (149/sec), perhaps because Linux's underlying filesystem (ext2fs) performs better.

Note that the trace for coss is relatively flat. Its performance doesn't change much over time. Furthermore, both FreeBSD and Linux report similar throughput numbers: 326/sec and 331/sec. This leads me to believe that the disk system isn't a bottleneck in these tests. In fact, the test with no disk cache (see Section D.8) achieves essentially the same throughput.
