Squid_ The Definitive Guide - Duane Wessels [213]
If you are concerned that the kernel buffers too much server-side data, you can decrease the TCP receive buffer size with the tcp_recv_bufsize directive. Even better, your operating system probably has a way to set this parameter for the whole system. On NetBSD/FreeBSD/OpenBSD, you can use the sysctl variable named net.inet.tcp.recvspace. For Linux, read about /proc/sys/net/ipv4/tcp_rmem in Documentation/networking/ip-sysctl.txt.
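As a sketch of both approaches, here is what the squid.conf directive and the system-wide settings might look like; the 64-KB buffer size is purely illustrative, and you should tune it to your environment:

```
# In squid.conf -- cap the TCP receive buffer for Squid's sockets:
tcp_recv_bufsize 65536 bytes

# Or set it system-wide instead. On FreeBSD/NetBSD/OpenBSD (as root):
#   sysctl -w net.inet.tcp.recvspace=65536
#
# On Linux, tcp_rmem takes three values (minimum, default, maximum):
#   echo "4096 65536 65536" > /proc/sys/net/ipv4/tcp_rmem
```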
Fixed Subnetting Scheme
The current delay pools implementation assumes that your LAN uses /24 (class C) subnets, and that all users are in the same /16 (class B) subnet. This might not be so bad, depending on how your network is configured. However, it would be nice if the delay pools subnetting scheme were fully customizable.
If your address space is larger than a /24 and smaller than a /16, you can always create a class 3 pool and treat it as a class 2 pool (that is one of the examples given earlier).
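For example, you might disable the per-host level of a class 3 pool so that the per-/24 network buckets do the work. The addresses and rates below are hypothetical; substitute your own:

```
delay_pools 1
delay_class 1 3
acl MyNet src 10.1.0.0/22        # hypothetical /22 address space
delay_access 1 allow MyNet
delay_access 1 deny all
# aggregate / per-/24 network / per-host (disabled with -1/-1)
delay_parameters 1 -1/-1 20000/30000 -1/-1
```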
If you use just one class 2 pool with more than 256 users, some users will share the individual buckets. That might not be so bad, unless you happen to have a bunch of heavy users fighting over one measly bucket.
You might also create multiple class 2 pools and use delay_access rules to divide all users among them. The problem with this approach is that you can't have all users share a single aggregate bucket. Instead, each subgroup has its own aggregate bucket. You can't make a single client go through more than one delay pool.
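A sketch of this multiple-pool arrangement follows; the subnet names and rate values are made up for illustration:

```
delay_pools 2
delay_class 1 2
delay_class 2 2
acl Eng   src 192.168.1.0/24     # hypothetical engineering subnet
acl Sales src 192.168.2.0/24     # hypothetical sales subnet
delay_access 1 allow Eng
delay_access 1 deny all
delay_access 2 allow Sales
delay_access 2 deny all
# Each pool has its own aggregate bucket (here unlimited)
# and its own set of individual buckets.
delay_parameters 1 -1/-1 16000/64000
delay_parameters 2 -1/-1 16000/64000
```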
Monitoring Delay Pools
You can monitor the delay pool levels with the cache manager interface. Request the delay page from the CGI interface or with the squidclient utility:
% squidclient mgr:delay | less
See Section 14.2.1.44 for a description of the output.
Appendix D. Filesystem Performance Benchmarks
You have a myriad of choices to make when installing and configuring Squid, especially when it comes to the way Squid stores files on disk. Back in Chapter 8, I talked about the various filesystems and storage schemes. Here, I'll provide some hard data on their relative performance.
These tests were done with Web Polygraph, a freely available, high-performance tool for benchmarking HTTP intermediaries (http://www.web-polygraph.org/). Over the course of many months, I ran approximately 40 different tests on 5 different operating systems.
The Benchmark Environment
The primary purpose of these benchmarks is to provide a number of measurements that allow you to compare different Squid configurations and features. In order to produce comparable results, I've taken care to minimize any differences between systems being tested.
Hardware for Squid
I used five identical computer systems—one for each of the following operating systems: FreeBSD, Linux, NetBSD, OpenBSD, and Solaris. The boxes are IBM Netfinity servers with one 500-MHz PIII CPU, 1 GB of RAM, an Intel fast-Ethernet NIC, and three 8-GB SCSI disk drives. I realize that these aren't particularly powerful machines by today's standards, but they are good enough for these tests. Anyway, it is more important that they be identical than powerful.
The requirement to use identical hardware means that I can't generate comparable results for other hardware platforms, such as Sun, Digital/Compaq/HP, and others.
Squid Version and Configuration
Except for the coss tests, all results are from Squid Version 2.5.STABLE2. The coss results are from a patched version of 2.5.STABLE3. Those patches have been committed to the source tree for inclusion into 2.5.STABLE4.
Unless otherwise specified,