This chapter provides information about maintaining ONC3/NFS. It explains how to change the default number of NFS daemons and how to modify automount maps, gives suggestions for using alternative mounting techniques and avoiding mount point conflicts, and describes how to modify and delete CacheFS file systems.
This chapter contains these sections:
Systems set up for NFS normally run four nfsd(1M) daemons. These daemons, called NFS server daemons, accept RPC calls from clients. Four NFS server daemons might be inadequate for the amount of NFS traffic on your server; degraded NFS performance on clients is usually an indication that their server is overloaded.
To change the number of NFS server daemons, create the file /etc/config/nfsd.options on the server (if it doesn't already exist) and specify in it the number of daemons to start at system startup. For example, to have the /etc/init.d/network script start eight nfsd daemons, the /etc/config/nfsd.options file needs to look like this:
# cat /etc/config/nfsd.options
8
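The file can be created in a single step. This sketch writes to a stand-in path under /tmp so it can be tried safely; on a real server the target is /etc/config/nfsd.options, and the change takes effect at the next system startup:

```shell
# Write the desired daemon count to the options file.
# OPTS points at a scratch path for safe experimentation; on the
# server itself you would use /etc/config/nfsd.options instead.
OPTS=/tmp/nfsd.options
echo 8 > $OPTS
cat $OPTS    # the file contains the single line: 8
```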
Modify this number only if a server is overloaded with NFS traffic. In addition to increasing NFS daemons, consider adding another server to your NFS setup. The maximum recommended number of NFS daemons is 24 on a large server. If you increase the number of NFS server daemons, confirm your choice by giving this command:
# /usr/etc/nfsstat -s
Server RPC:
calls      badcalls   nullrecv    badlen   xdrcall   duphits   dupage
21669881   0          118760787   0        0         12246     7.56
If the output shows many null receives, such as in this example, you should consider lowering the number of NFS server daemons. There is no exact formula for choosing the number of NFS daemons, but here are several rules of thumb you can consider:
One nfsd for each CPU plus one to three nfsds as a general resource
One nfsd for each disk controller plus one to three nfsds as a general resource (a logical volume counts as one controller, no matter how many real controllers it is spread over)
One nfsd for each CPU, one nfsd for each controller, and one to three nfsds as a general resource
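The three rules of thumb above reduce to quick arithmetic. This sketch is illustrative only; the CPU and disk controller counts are assumed example values, not probed from the system:

```shell
# Estimate an nfsd count from each rule of thumb.
CPUS=4           # assumed number of CPUs on the server
CONTROLLERS=2    # assumed number of disk controllers
EXTRA=2          # one to three nfsds as a general resource
echo "Rule 1 (per CPU):             $((CPUS + EXTRA))"
echo "Rule 2 (per controller):      $((CONTROLLERS + EXTRA))"
echo "Rule 3 (CPU plus controller): $((CPUS + CONTROLLERS + EXTRA))"
```

With these example values the rules suggest 6, 4, and 8 daemons, all well under the recommended maximum of 24 for a large server.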
In cases where an NFS client requires directories not listed in its /etc/fstab file, you can use manual mounting to temporarily make the NFS resource available. With temporary mounting, you need to supply all the necessary information to the mount(1M) program through the command line. As with any mount, a temporarily mounted directory requires that a mount point be created before mounting can occur.
For example, to mount /usr/demos from the server redwood to a local mount point /n/demos with the read-only, hard, interrupt, and background options, give these commands:
# mkdir -p /n/demos
# mount -o ro,hard,intr,bg redwood:/usr/demos /n/demos
A temporarily mounted directory remains in effect until the system is rebooted or until the superuser manually unmounts it. Use this method for one-time mounts.
You can modify the automounter maps at any time. Some of your modifications take effect the next time the automounter accesses the map, and others take effect when the system is rebooted. Whether or not booting is required depends on the type of map you modify and the kind of modification you introduce.
Rebooting is generally the most effective way to restart the automounter. You can also kill and restart the automounter using an automount(1M) command line. Use this method sparingly, however. (See the automount(1M) manual page.)
The automounter consults the master map only at startup time. A modification to the master map, /etc/auto.master, takes effect only after the system has been rebooted or automount is restarted (see “Modifying Direct Maps”).
Each entry in a direct map is an automount mount point, and the daemon mounts itself at these mount points at startup. Therefore, adding or deleting an entry in a direct map takes effect only after you have gracefully killed and restarted the automount daemon or rebooted. However, except for the name of the mount point, direct map entries can be modified while the automounter is running. The modifications take effect when the entry is next mounted, because the automounter consults the direct maps whenever a mount must be done.
For instance, suppose you modify the file /etc/auto.indirect so that the directory /usr/src is mounted from a different server. If /usr/src is not mounted at the time, the new entry takes effect immediately, when you next try to access it. If it is currently mounted, you can wait until auto-unmounting takes place and then access it. If this is not satisfactory, unmount the directory with the umount(1M) command, notify automount that the mount table has changed with the command /etc/killall -HUP automount, and then access the directory. The mount is then done from the new server. However, if you want to delete the entry, you must gracefully kill and restart the automount daemon. automount must be killed with the SIGTERM signal:
# /etc/killall -TERM automount
You can then manually restart automount or reboot the system.
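A manual restart can be as simple as invoking the daemon again. This transcript is a sketch; it assumes automount reads its maps from the default master file when run with no arguments (see automount(1M) for the options your version supports):

```
# /etc/killall -TERM automount      (gracefully stop the daemon, as above)
# /usr/etc/automount                (restart it with the default maps)
```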
|Note: If gracefully killing and manually restarting automount does not work, rebooting the system should always work.|
You can cause a mount conflict by mounting one directory on top of another. For example, say you have a local home partition mounted on /home, and you want the automounter to mount other home directories there. If the automounter maps specify /home as a mount point, the automounter hides the local home partition whenever it mounts. The solution is to mount the local home partition somewhere else, such as /export/home, with an /etc/fstab entry like this:

/net/home /export/home efs rw,raw=/dev/rhome 0 0

This example assumes that the master file contains a line similar to this:

/home /etc/auto.home

It also assumes an entry in /etc/auto.home like this:

terra terra:/export/home

where terra is the name of the system.
|Note: Before changing parameters for a cache, you must unmount all file systems in the cache directory with the umount command.|
cfsadmin -u -o parameter_list cache_directory
|Note: You can only increase the size of a cache, either by number of blocks or number of inodes. If you want to make a cache smaller, you must remove it and re-create it with new values.|
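Because a cache can only grow, making one smaller means removing it and re-creating it with new values. The following transcript is a sketch of that sequence; the mount point /local/mnt and the maxblocks value are assumed for illustration, and it assumes cfsadmin -c (create) accepts the same -o parameter list as cfsadmin -u:

```
# umount /local/mnt                        (unmount every file system in the cache)
# cfsadmin -d all /local/cache3            (delete the cached file systems and the cache directory)
# cfsadmin -c -o maxblocks=60 /local/cache3   (re-create the cache with smaller values)
```

The cached file systems can then be mounted on the new cache.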
The following commands unmount /local/cache3 and change the threshfiles parameter to 65%:
# umount /local/cache3
# cfsadmin -u -o threshfiles=65 /local/cache3
cfsadmin -l cache_directory
The following command shows information about the cache directory named /usr/cache/lolita:
# cfsadmin -l /usr/cache/lolita
cfsadmin: list cache FS information
  maxblocks     90% (122628 blocks)
  minblocks      0% (0 blocks)
  threshblocks  85% (115815 blocks)
  hiblocks      85% (104234 blocks)
  lowblocks     75% (91971 blocks)
  maxfiles      90% (206480 files)
  minfiles       0% (0 files)
  threshfiles   85% (195009 files)
  hifiles       85% (175508 files)
  lowfiles      75% (154860 files)
  maxfilesize    3MB
lolita:_usr_people_jmy_work:_usr_people_jmy_work
  flags    CFS_DUAL_WRITE CFS_ACCESS_BACKFS
  popsize  65536
  fgsize   256
Current Usage:
  blksused   757
  filesused  124
  flags
If there are multiple mount points for a single cache, cfsadmin returns information similar to the following:
# cfsadmin -l /usr/cache/bonnie
cfsadmin: list cache FS information
  maxblocks     90% (122628 blocks)
  minblocks      0% (0 blocks)
  threshblocks  85% (115815 blocks)
  hiblocks      85% (104234 blocks)
  lowblocks     75% (91971 blocks)
  maxfiles      90% (206480 files)
  minfiles       0% (0 files)
  threshfiles   85% (195009 files)
  hifiles       85% (175508 files)
  lowfiles      75% (154860 files)
  maxfilesize    3MB
bonnie:_jake:_hosts_bonnie_jake
  flags    CFS_DUAL_WRITE CFS_ACCESS_BACKFS
  popsize  65536
  fgsize   256
bonnie:_depot:_hosts_bonnie_depot
  flags    CFS_DUAL_WRITE CFS_ACCESS_BACKFS
  popsize  65536
  fgsize   256
bonnie:_proj_sherwood_isms:_hosts_bonnie_proj_sherwood_isms
  flags    CFS_DUAL_WRITE CFS_ACCESS_BACKFS
  popsize  65536
  fgsize   256
bonnie:_proj_irix5.3_isms:_hosts_bonnie_proj_irix5.3_isms
  flags    CFS_DUAL_WRITE CFS_ACCESS_BACKFS
  popsize  65536
  fgsize   256
Current Usage:
  blksused   759
  filesused  279
  flags
cfsadmin -d cache_id cache_directory
|Note: Before deleting a cached file system, you must unmount all the cached file systems for that cache directory.|
The cache ID is part of the information returned by cfsadmin -l. After deleting one or more of the cached file systems, you must run the fsck_cachefs command to correct the resource counts for the cache.
The following commands unmount a cached file system, delete it from the cache, and run fsck_cachefs:
# umount /usr/work
# cfsadmin -d _dev_dsk_c0t1d0s7 /local/cache1
# fsck_cachefs -t cachefs /local/cache1
You can delete all file systems in a particular cache by using all as an argument to the -d option. The following command deletes all file systems cached under /local/cache1:
# cfsadmin -d all /local/cache1
The all argument to -d also deletes the specified cache directory.