Managing NFS and NIS, 2nd Edition - Mike Eisler
much faster than the network can deliver NFS requests.

NFS server threads don't impose the "usual" context switching load on a system because all of the NFS server code is in the kernel. Instead of using a per-process context descriptor or a user-level process "slot" in the memory management unit, the nfsd threads use the kernel's address space mappings. This eliminates the address translation loading cost of a context switch.

Choosing the number of server threads

The maximum number of server threads can be specified as a parameter to the nfsd daemon:

# /usr/lib/nfs/nfsd -a 16

The -a option tells the daemon to listen on all available transports. In this example, the daemon allows at most 16 NFS requests to be serviced concurrently. The nfsd threads are created on demand, so you are only setting a high water mark, not the actual number of threads; if you configure more threads than are ever needed, the extra threads are simply never created. You can also throttle NFS server usage by lowering the maximum number of nfsd threads, allowing the server to concentrate on other tasks.

It is hard to come up with a magic formula to compute the ideal number of nfsd threads, since hardware and NFS implementations vary considerably between vendors. For example, at the time of this writing, Sun servers are recommended[5] to use the maximum of:

2 nfsd threads for each active client process

16 to 32 nfsd threads for each CPU

16 nfsd threads per 10Mb network or 160 per 100Mb network
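The sizing rule above can be sketched as a small shell calculation that takes the maximum of the three estimates. The client, CPU, and network counts below are hypothetical placeholders, not measured values; substitute figures from your own server:

```shell
#!/bin/sh
# Hypothetical sizing inputs -- replace with your own measurements.
clients=100       # active client processes
cpus=4            # server CPUs
nets_100mb=1      # 100Mb network interfaces

by_clients=$((2 * clients))       # 2 threads per active client process
by_cpus=$((32 * cpus))            # upper end of the 16-32 per CPU range
by_nets=$((160 * nets_100mb))     # 160 threads per 100Mb network

# Take the maximum of the three estimates.
max=$by_clients
[ "$by_cpus" -gt "$max" ] && max=$by_cpus
[ "$by_nets" -gt "$max" ] && max=$by_nets
echo "suggested nfsd thread maximum: $max"
```

With these placeholder inputs the client estimate dominates, so you would start nfsd with -a and the computed maximum.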

Memory usage

NFS uses the server's page cache (in SunOS 4.x, Solaris and System V Release 4) for file blocks read in NFS read requests. Because these systems implement page mapping, the NFS server will use available page frames to cache file pages, and use the buffer cache[6] to store UFS inode and file metadata (direct and indirect blocks).

In Solaris, you can view the buffer cache statistics with sar -b. It shows the number of data transfers per second between the system buffers and disk (bread/s and bwrit/s), the number of accesses to the system buffers (logical reads and writes, reported as lread/s and lwrit/s), the cache hit ratios (%rcache and %wcache), and the number of physical reads and writes using the raw device mechanism (pread/s and pwrit/s):

# sar -b 20 5

SunOS bunker 5.8 Generic sun4u    12/06/2000

10:39:01 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
10:39:22      19     252      93      34     103      67       0       0
10:39:43      21     612      97      46     314      85       0       0
10:40:03      20     430      95      35     219      84       0       0
10:40:24      35     737      95      49     323      85       0       0
10:40:45      21     701      97      60     389      85       0       0
Average       23     546      96      45     270      83       0       0
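The hit ratios reported by sar are the fraction of logical transfers satisfied in the buffer cache without a physical disk transfer, i.e., %rcache = 100 × (1 − bread/lread). Recomputing the read ratio from the Average line above:

```shell
# Recompute %rcache from the Average line of the sar output above.
bread=23    # physical reads per second
lread=546   # logical reads per second
awk -v b="$bread" -v l="$lread" \
    'BEGIN { printf "%%rcache = %.0f\n", 100 * (1 - b / l) }'
```

This reproduces the 96 shown in the %rcache column of the Average line.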

In practice, a cache hit ratio of 100% is hard to achieve because NFS clients exhibit little access locality; consequently, a cache hit ratio of around 90% is considered acceptable.

By default, Solaris grows the dynamically sized buffer cache as needed, until it reaches a high watermark specified by the bufhwm kernel parameter. By default, Solaris limits this value to 2% of physical memory in the system. In most cases, this 2%[7] ceiling is more than enough, since the buffer cache is only used to cache inode and metadata information. You can use the sysdef command to view its value:

# sysdef
...
*
* Tunable Parameters
*
41385984    maximum memory allowed in buffer cache (bufhwm)
...
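The value reported by sysdef is consistent with the 2% default. A quick sketch of the arithmetic, assuming a hypothetical server with 2 GB of physical memory (the machine above evidently has slightly less):

```shell
# Default bufhwm ceiling: 2% of physical memory.
# Hypothetical server with 2 GB (2097152 KB) of RAM.
physmem_kb=$((2 * 1024 * 1024))
bufhwm_kb=$((physmem_kb * 2 / 100))
echo "bufhwm ceiling: ${bufhwm_kb} KB"
```

That works out to roughly 41 MB, close to the 41385984 bytes shown by sysdef on this machine.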

If you need to modify the default value of bufhwm, set its new value in /etc/system, or use adb as described in Chapter 15.
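A sketch of the /etc/system approach; the bufhwm value there is specified in kilobytes, and the figure below is purely illustrative, not a recommendation:

```
* /etc/system -- set the buffer cache high watermark (value in KB)
set bufhwm=8192
```

The new value takes effect at the next reboot.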

The actual file contents are cached in the page cache, and by default the filesystem will cache as many pages as possible. There is no high watermark, so the page cache can grow until it consumes all available memory. This means that any process memory that has not been used recently by local applications may be reclaimed for use by the filesystem page cache, possibly causing local processes to page excessively.

If the server is used for non-NFS purposes, enable priority paging to ensure that it has enough memory to run all of its processes without paging. Priority paging prevents the filesystem from consuming excessive memory by
