Squid: The Definitive Guide - Duane Wessels

The delay_access rules determine which requests go through which delay pools. Requests that are allowed go through the delay pools, while those that are denied aren't delayed at all. If you don't have any delay_access rules, Squid doesn't delay any requests.

The syntax for delay_access is similar to the other access rule lists (see Section 6.2), except that you must put a pool number before the allow or deny keyword. For example:

delay_access 1 allow TheseUsers

delay_access 2 allow OtherUsers

Internally, Squid stores a separate access rule list for each delay pool. If a request is allowed by a pool's rules, Squid uses that pool and stops searching. If a request is denied, however, Squid continues examining the rules for remaining pools. In other words, a deny rule causes Squid to stop searching the rules for a single pool but not for all pools.
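This search order can be illustrated with a short configuration sketch (the ACL names and the .iso pattern here are hypothetical, chosen only for illustration):

```
# Hypothetical ACLs for illustration
acl BigFiles urlpath_regex -i \.iso$
acl All src 0/0

# A request for an .iso file is allowed by pool 1's rules, so Squid
# uses pool 1 and stops searching. Any other request is denied by
# pool 1's rules, so Squid moves on and finds a match in pool 2.
delay_access 1 allow BigFiles
delay_access 1 deny All
delay_access 2 allow All
```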

cache_peer no-delay Option

The cache_peer directive has a no-delay option. If set, it makes Squid bypass the delay pools for any requests sent to that neighbor.
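For example, assuming a hypothetical parent cache named parent.example.com listening on the standard HTTP and ICP ports:

```
# Requests forwarded to this neighbor bypass the delay pools
cache_peer parent.example.com parent 3128 3130 no-delay
```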

Examples

Let's start off with a simple example. Suppose that you have a saturated Internet connection, shared by many users. You can use delay pools to limit the amount of bandwidth that Squid consumes on the link, thus leaving the remaining bandwidth for other applications. Use a class 1 delay pool to limit the bandwidth for all users. For example, this limits everyone to 512 Kbit/s and allows a burst of up to 1 MB after Squid has been idle:

delay_pools 1

delay_class 1 1

delay_parameters 1 65536/1048576

acl All src 0/0

delay_access 1 allow All
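The delay_parameters values are in bytes: the restore rate per second, then the maximum bucket size. The numbers above can be sanity-checked with a little arithmetic:

```python
restore = 65536     # bytes restored to the bucket per second
maximum = 1048576   # maximum bucket size in bytes

print(restore * 8 // 1024)        # 512 -> 512 Kbit/s sustained rate
print(maximum // (1024 * 1024))   # 1   -> 1 MB burst after an idle period
print(maximum // restore)         # 16  -> seconds to drain a full bucket at full speed
```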

One of the problems with this simple approach is that some users may receive more than their fair share of the bandwidth. If you want something more balanced, use a class 2 delay pool, which has individual buckets. Recall that a class 2 pool's individual bucket is determined by the fourth octet of the client's IPv4 address. Thus, if your clients span more than a /24 subnet, you might want to use a class 3 pool instead, which gives you 65536 individual buckets. In this example, I won't use the network buckets. The overall bandwidth is still 512 Kbit/s, but each individual is limited to 128 Kbit/s:

delay_pools 1

delay_class 1 3

delay_parameters 1 65536/1048576 -1/-1 16384/262144

acl All src 0/0

delay_access 1 allow All
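To make the bucket selection concrete, here is a simplified Python sketch of how a class 3 pool indexes its buckets. This only illustrates the octet rule described above; it is not Squid's actual code:

```python
def class3_buckets(ip: str) -> tuple[int, int]:
    """Map an IPv4 address to (network, individual) bucket indexes.

    For a class 3 pool, the network bucket is chosen by the third
    octet of the client's address, and the individual bucket by the
    third and fourth octets together -- hence 65536 individual buckets.
    """
    octets = [int(part) for part in ip.split(".")]
    network = octets[2]
    individual = octets[2] * 256 + octets[3]
    return network, individual

print(class3_buckets("192.168.7.42"))   # (7, 1834)
```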

You can also use delay pools to provide different classes of service. For example, you might have important users and unimportant users. In this case, you could use two class 1 delay pools. Give the important users a higher bandwidth limit than everyone else:

delay_pools 2

delay_class 1 1

delay_class 2 1

delay_parameters 1 65536/1048576

delay_parameters 2 10000/50000

acl ImportantUsers src 192.168.8.0/22

acl All src 0/0

delay_access 1 allow ImportantUsers

delay_access 2 allow All

Issues

Squid's delay pools are often useful, but not perfect. You need to be aware of a few drawbacks and limitations before you use them.

Fairness

One of the most important things to realize about the current delay pools implementation is that it does nothing to guarantee fairness among all users of a single bucket. This is especially important for aggregate buckets (where sharing is high), but less so for individual buckets (where sharing is low).

Squid generally services requests in order of increasing file descriptors. Thus, a request whose server-side TCP connection has a lower file descriptor may receive more bandwidth from a shared bucket than it should.

Application Versus Transport Layer

Bandwidth shaping and rate limiting usually operate at the network transport layer. There, the flow of packets can be controlled very precisely. Delay pools, however, are implemented in the application layer. Because Squid doesn't actually send and receive TCP packets (the kernel does), it has less control over the flow of individual packets. Rather than controlling the transmission and receipt of packets on the wire, Squid controls only how many bytes to read from the kernel.
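The application-layer approach amounts to a token bucket that caps how many bytes Squid asks the kernel for on each pass. The following is a minimal sketch of that idea, not Squid's actual implementation:

```python
import time

class DelayBucket:
    """Token bucket limiting application-layer reads (simplified sketch)."""

    def __init__(self, restore: int, maximum: int):
        self.restore = restore       # bytes credited to the bucket per second
        self.maximum = maximum       # bucket capacity in bytes
        self.level = float(maximum)  # start full, as after an idle period
        self.last = time.monotonic()

    def allowance(self) -> int:
        """Refill the bucket for elapsed time, then return how many
        bytes may be read from the kernel right now."""
        now = time.monotonic()
        self.level = min(self.maximum,
                         self.level + (now - self.last) * self.restore)
        self.last = now
        return int(self.level)

    def consume(self, nbytes: int) -> None:
        """Account for bytes actually read."""
        self.level -= nbytes

# Read-loop sketch: never ask the kernel for more than the bucket allows.
bucket = DelayBucket(restore=65536, maximum=1048576)
n = bucket.allowance()   # at most this many bytes on this pass
bucket.consume(n)
```

Note that the kernel, not this loop, decides when packets actually arrive; the bucket only bounds how quickly the application drains the kernel's buffers.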

This means, for example, that incoming response data is queued up in the kernel. The TCP/IP stack can buffer some number
