Squid_ The Definitive Guide - Duane Wessels [54]
* * *
Preventing Cache Hits for Local Sites
If you have a number of origin servers on your network, you may want to configure Squid so that their responses are never cached. Because those servers are nearby, their responses gain little from cache hits. Additionally, not caching them frees up storage space for objects from more distant origin servers.
The first step is to define an ACL for the local servers. You might want to use an address-based ACL, such as dst:
acl LocalServers dst 172.17.1.0/24
If the servers don't live on a single subnet, you might find it easier to create a dstdomain ACL:
acl LocalServers dstdomain .example.com
Next, you simply deny caching of those servers with a no_cache access rule:
no_cache deny LocalServers
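Putting the two steps together, a minimal squid.conf fragment might look like the following. Note that a given ACL name can have only one type, so if you want both an address-based and a domain-based list, they need separate names (the names and values here are just illustrative):

```
# Responses from local origin servers aren't worth caching
acl LocalNets dst 172.17.1.0/24
acl LocalDomains dstdomain .example.com
no_cache deny LocalNets
no_cache deny LocalDomains
```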
* * *
Tip
The no_cache rules don't prevent your clients from sending these requests to Squid. There is nothing you can configure in Squid to stop such requests from coming. Instead, you must configure the user-agents themselves.
* * *
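One common way to configure the user-agents, as the Tip suggests, is a proxy auto-configuration (PAC) file that sends requests for local sites directly to the origin server, so they never reach Squid at all. This is only a sketch; the domain and the proxy hostname and port are assumptions:

```javascript
// PAC sketch: bypass the cache for a local domain.
// "example.com" and "squid.example.com:3128" are assumed names.
function FindProxyForURL(url, host) {
    // Local servers: connect directly, so the request never reaches Squid
    if (host === "example.com" || /\.example\.com$/.test(host))
        return "DIRECT";
    // Everything else: use the cache, falling back to a direct connection
    return "PROXY squid.example.com:3128; DIRECT";
}
```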
If you add a no_cache rule after Squid has been running for a while, the cache may contain some objects that match the new rule. Prior to Squid Version 2.5, these previously cached objects might be returned as cache hits. Now, however, Squid purges any cached response for a request that matches a no_cache rule.
Testing Access Controls
As your access control configuration becomes longer, it also becomes more complicated. I strongly encourage you to test your access controls before turning them loose on a production server. Of course, the first thing you should do is make sure that Squid can correctly parse your configuration file. Use the -k parse feature for this:
% squid -k parse
To further test your access controls, you may need to set up a fake Squid installation. One easy way to do that is to compile another copy of the Squid source code with a different $prefix location. For example:
% tar xzvf squid-2.5.STABLE4.tar.gz
% cd squid-2.5.STABLE4
% ./configure --prefix=/tmp/squid ...
% make && make install
After installing, you need to edit the new squid.conf file and change a few directives. Change http_port if Squid is already running on the default port. For simple testing, create a single, small cache directory like this:
cache_dir ufs /tmp/squid/cache 100 4 4
If you don't want to recompile Squid again, you can also just create a new configuration file. The drawback to this approach is that you'll need to set all the log-file pathnames to the temporary location so that you don't overwrite the real files.
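For example, assuming you copied the real configuration to /tmp/squid/squid-test.conf, you might change directives like these so that everything lands under the temporary directory (directive names as in Squid 2.5; the port matches the squidclient examples below):

```
http_port 4128
cache_dir ufs /tmp/squid/cache 100 4 4
cache_access_log /tmp/squid/access.log
cache_log /tmp/squid/cache.log
cache_store_log /tmp/squid/store.log
pid_filename /tmp/squid/squid.pid
```

Then start the test instance with squid -f /tmp/squid/squid-test.conf so it reads the alternate file instead of the default squid.conf.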
You can easily test some access controls with the squidclient program. For example, if you have a rule that depends on the origin server hostname (dstdomain ACL), or some part of the URL (url_regex or urlpath_regex), simply enter a URI that you would expect to be allowed or denied:
% squidclient -p 4128 http://blocked.host.name/blah/blah
or:
% squidclient -p 4128 http://some.host.name/blocked.ext
Certain aspects of the request are harder to control. If you have src ACLs that block requests from outside your network, you may need to actually test them from an external host. Testing time ACLs may be difficult unless you can change the clock on your system or stay awake long enough.
You can use squidclient's -H option to set arbitrary request headers. For example, use the following if you need to test a browser ACL.
% squidclient -p 4128 http://www.host.name/blah \
-H 'User-Agent: Mozilla/5.0 (compatible; Konqueror/3)\r\n'
For more complicated requests, with many headers, you may want to use the technique described in Section 16.4.
You might also consider developing a routine cron job that checks your ACLs for expected behavior and reports any anomalies. Here is a sample shell script to get you started:
#!/bin/sh
set -e
TESTHOST="www.squid-cache.org"
# make sure Squid is not proxying dangerous ports
#
ST=`squidclient "http://$TESTHOST:25/" | head -1 | awk '{print $2}'`
if test "$ST" != 403 ; then
    echo "Squid did not block HTTP request to port 25"
fi
# make sure Squid