Managing NFS and NIS, 2nd Edition - Mike Eisler [274]
badcalls > 0
RPC calls on soft-mounted filesystems are timing out. If a server has crashed, badcalls can be expected to increase. But if badcalls grows during "normal" operation, then soft-mounted filesystems should use larger timeo or retrans values to prevent RPC failures. Better yet, mount the filesystem without the soft option.
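As a sketch of the remedies above (the server name wahoo and the paths are placeholders; option syntax follows the Solaris mount command):

```shell
# Raise the timeout (timeo is in tenths of a second) and the
# retransmission count on a soft mount to reduce RPC failures.
# The server name "wahoo", the paths, and the values are hypothetical.
mount -F nfs -o soft,timeo=30,retrans=5 wahoo:/export/home /mnt/home

# Better yet, use a hard mount; adding intr lets users interrupt
# operations if the server stays down.
mount -F nfs -o hard,intr wahoo:/export/home /mnt/home
```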
cantconn > 1%
This indicates that the NFS client is having trouble making a TCP connection to the NFS server. Often this is because the NFS server is down or has recently been down. It can also indicate that the connection queue length in the NFS server is too small, or that an attacker is attempting a denial-of-service attack on the server by clogging the connection queue. If you cannot eliminate connection queue length as a problem, then use the -l parameter to nfsd to increase the queue length.
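On Solaris, for example, this means editing the nfsd invocation in the server's startup script; the backlog value below is illustrative, not a recommendation:

```shell
# Start nfsd on all configured transports (-a) with a TCP listen
# backlog of 64 (-l).  The trailing 16 is the number of server
# threads.  Both values here are illustrative assumptions.
/usr/lib/nfs/nfsd -a -l 64 16
```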
NFS errno values
The following system call errno values are the result of various NFS call failures:
EINTR
A system call was interrupted when the intr option was used on a hard-mounted filesystem.
EACCES
A user attempted to access a file without proper credentials. This error is usually caused by mapping root or anonymous users to nobody, a user that has almost no permissions on files in the exported filesystem.
EBUSY
The superuser attempted to unmount a filesystem that was in use on the NFS client.
ENOSPC
The fileserver has run out of room on the disk to which the client is attempting an NFS write operation.
ESTALE
An NFS client has asked the server to reference a file that has either been freed or reused by another client.
EREMOTE
An attempt was made to NFS-mount a filesystem that is itself NFS-mounted on the server. Multihop NFS-mounts are not allowed. This error is reported by mount on the NFS client.
Appendix C. Tunable Parameters
NFS client and server implementations tend to have many tunable parameters. This appendix summarizes some of the more important ones. Except as noted, the parameters are tuned by setting a kernel variable, which requires setting a value in a file like /etc/system on Solaris 8. Note that while many NFS implementations share many of these parameters, the names of the parameters and the methods for setting them will vary between implementations. Table C-1 and Table C-2 summarize client and server tunables.
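On Solaris 8, such an entry takes the general form set module:parameter = value and takes effect at the next reboot; the parameter and value shown here are illustrative:

```shell
# /etc/system fragment (Solaris 8).  General form:
#   set module:parameter = value
# Changes take effect at the next reboot.  The value 16 is an
# illustrative choice, not a recommended default.
set nfs:nfs3_max_threads = 16
```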
Table C-1. Client parameters
Parameter
Description
Caveats
clnt_max_conns
This parameter controls the number of connections the client will create between the client and a given server. In Solaris, the default is one. The rationale is that a single TCP connection ought to be sufficient to use the available bandwidth of the network channel between the client and server. You may find this is not the case for network media faster than traditional 10BaseT (10 Mb per second).
Note that this parameter is not in the Solaris nfs module, but it is in the kernel RPC module rpcmod.
At the time of this writing, the algorithm used to assign traffic to each connection was a simple round-robin approach. You may see diminishing returns if you set this parameter higher than 2. This parameter is highly experimental.
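Since the parameter lives in rpcmod rather than the nfs module, a hypothetical /etc/system entry would look like this:

```shell
# /etc/system: note the rpcmod module prefix, not nfs.
# The value 2 is illustrative; returns diminish above 2 with the
# round-robin connection assignment.
set rpcmod:clnt_max_conns = 2
```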
clnt_idle_timeout
This parameter sets the number of milliseconds the NFS client will let a connection go idle before closing it.
This parameter applies to NFS/TCP connections and is set in the Solaris kernel RPC module called rpcmod.
Normally this parameter should be set at least a minute below the lowest server-side idle timeout among all the servers the client connects to. Otherwise, the client may send requests just as the server is tearing down the connection. This results in an unnecessary sequence of connection teardown followed immediately by connection setup.
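As a hypothetical /etc/system entry (the value is in milliseconds; 300000, or five minutes, is illustrative and should be chosen relative to your servers' idle timeouts):

```shell
# /etc/system: idle timeout for the client's NFS/TCP connections,
# in milliseconds.  300000 ms = 5 minutes (an illustrative value);
# keep it about a minute below the lowest server-side idle timeout.
set rpcmod:clnt_idle_timeout = 300000
```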
nfs_max_threads (NFS Version 2)
nfs3_max_threads (NFS Version 3)
Sets the number of background read-ahead and write-behind threads per NFS-mounted filesystem, for NFS Version 2 and NFS Version 3 respectively.
Read-ahead is a performance win when applications do mostly sequential reads. The NFS