Managing NFS and NIS, 2nd Edition - Mike Eisler [250]
This deadlock problem goes away when your NFS clients use the automounter in place of hard-mounts. Most systems today heavily rely on the automounter to administer NFS mounts. Also note that the bg mount option is for use by the mount command only. It is not needed when the mounts are administered with the automounter.
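To make the distinction concrete, here is a hedged sketch of where bg would and would not appear. The server name boris and the paths are taken from the example below; the mount point /boris is hypothetical:

```
# Solaris /etc/vfstab entry: the bg option retries a failed mount
# in the background rather than blocking the boot sequence
boris:/export/boris  -  /boris  nfs  -  yes  bg,hard

# Equivalent automounter direct-map entry: no bg option is needed,
# because the automounter mounts the filesystem on demand
/boris   boris:/export/boris
```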
Multihomed servers
When a server exports NFS filesystems on more than one network interface, it may expend a measurable number of CPU cycles forwarding packets between interfaces. Consider host boris on four networks:
138.1.148.1 boris-bb4
138.1.147.1 boris-bb3
138.1.146.1 boris-bb2
138.1.145.1 boris-bb1 boris
Hosts on network 138.1.148.0 are able to "see" boris because boris forwards packets between its network interfaces. Hosts on the 138.1.148.0 network may therefore mount filesystems using either name:
boris:/export/boris
boris-bb4:/export/boris
Figure 16-2. A multihomed host
The second form is preferable on network 138.1.148.0 because it does not require boris to forward packets to its other interface's input queue. Likewise, on network 138.1.145.0, the boris:/export/boris form is preferable. Even though the requests are going to the same physical machine, requests that are addressed to the "wrong" server must be forwarded, as shown in Figure 16-2. This adds to the IP protocol processing overhead. If the packet forwarding must be done for every NFS RPC request, then boris uses more CPU cycles to provide NFS service.
Fortunately, the automounter handles this automatically. It is able to determine what addresses are local to its subnetwork and give strong preference to them. If the server reply is not received within a given timeout, the automounter will use an alternate server address, as explained in Section 9.5.1.
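The address-preference logic can be sketched in a few lines. This is not the automounter's actual implementation, only an illustration of the idea, using Python's standard ipaddress module and the addresses from the boris example (the client address and netmask are hypothetical):

```python
# Sketch: order candidate server addresses so that addresses on the
# client's own subnet are tried first, as the automounter prefers.
import ipaddress

def prefer_local(client_ip, netmask, server_addrs):
    """Return server_addrs sorted with local-subnet addresses first."""
    local_net = ipaddress.ip_network(f"{client_ip}/{netmask}", strict=False)
    # sorted() is stable: non-local addresses keep their original order
    return sorted(server_addrs,
                  key=lambda a: ipaddress.ip_address(a) not in local_net)

# A client on the 138.1.148.0 network should try boris-bb4
# (138.1.148.1) before boris's other addresses.
addrs = ["138.1.145.1", "138.1.146.1", "138.1.147.1", "138.1.148.1"]
print(prefer_local("138.1.148.27", "255.255.255.0", addrs))
# → ['138.1.148.1', '138.1.145.1', '138.1.146.1', '138.1.147.1']
```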
* * *
[4] A terminal server has RS-232 ports for terminal connections and runs a simple ROM monitor that connects terminal ports to servers over telnet sessions. Terminal servers vary significantly: some use RS-232 DB-25 connectors, while others have RJ-11 phone jacks with a variable number of ports.
[5] Refer to the Solaris 8 NFS Server Performance and Tuning Guide for Sun Hardware (February 2000).
[6] In Solaris, SunOS 4.x, and SVR4, the buffer cache stores only UFS metadata. This is in contrast to the "traditional" buffer cache used by other Unix systems, where file data is also stored in the buffer cache. The Solaris buffer cache consists only of disk blocks full of inodes, indirect blocks, and cylinder group information.
[7] 2% of total memory can be too much buffer cache for some systems, such as the SPARCcenter 2000 with very large memory configurations. You may need to reduce the size of the buffer cache to avoid starving the kernel of memory resources, since the kernel address space is limited on SuperSPARC-based systems. The newer UltraSPARC-based systems do not suffer from this limitation.
[8] RAID stands for Redundant Array of Inexpensive Disks. Researchers at Berkeley defined different types of RAID configurations, where lots of small disks are used in place of a very large disk. The various configurations provide the means of combining disks to distribute data among many disks (striping), provide higher data availability (mirroring), and provide partial data loss recovery (with parity computation).
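The parity computation mentioned above can be illustrated briefly. This is not a RAID implementation, only a sketch of the XOR-parity idea that lets the contents of one failed disk be rebuilt from the surviving disks plus the parity block (the block contents are made up):

```python
# Sketch: byte-wise XOR parity across equal-sized data blocks.
def parity(blocks):
    """Compute a parity block as the byte-wise XOR of all blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data = [b"disk0dat", b"disk1dat", b"disk2dat"]
p = parity(data)

# If "disk1" fails, XORing the surviving data blocks with the
# parity block reproduces the lost block.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # → True
```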
[9] There are no adverse effects of using the background option, so you can use it for all your NFS-mounted filesystems.