This chapter explains how to set up ONC3/NFS services and verify that they work. It provides procedures for enabling exporting on NFS servers, for setting up mounting and automatic mounting on NFS clients, and for setting up the network lock manager. It also explains how to create a CacheFS file system. Before you begin these procedures, you should be thoroughly familiar with the information provided in Chapter 2, “Planning ONC3/NFS Service”.
Note: To perform the procedures in this chapter, you should have already installed ONC3/NFS software on the server and client systems that will participate in the ONC3/NFS services. The ONC3/NFS Release Notes explain where to find instructions for installing ONC3/NFS software.
Setting up an NFS server requires verifying that the required software is running on the server, editing the server's /etc/exports file, adding the file systems to be exported, exporting the file systems, and verifying that they have been exported. The instructions below explain the setup procedure. Do this procedure as the superuser on the server.
Use versions to verify the correct software has been installed on the server:
# versions | grep nfs
I  nfs                   06/09/2004  Network File System, 6.5.25
I  nfs.books             06/09/2004  IRIS InSight Books, 2.2
I  nfs.books.NIS_AG      06/09/2004  NIS Administration Guide
I  nfs.books.ONC3NFS_AG  06/09/2004  ONC3NFS Administrator's Guide
I  nfs.man               06/09/2004  NFS Documentation
I  nfs.man.nfs           06/09/2004  NFS Support Manual Pages
I  nfs.man.relnotes      06/09/2004  NFS Release Notes
I  nfs.sw                06/09/2004  NFS Software
I  nfs.sw.autofs         06/09/2004  AutoFS Support
I  nfs.sw.cachefs        06/09/2004  CacheFS Support
I  nfs.sw.nfs            06/09/2004  NFS Support
I  nfs.sw.nis            06/09/2004  NIS (formerly Yellow Pages) Support
This example shows NFS as I (installed). A complete listing of current software modules is contained in the ONC3/NFS Release Notes.
Check the NFS configuration flag on the server.
When the /etc/init.d/network script executes at system startup, it starts the NFS server if the chkconfig flags nfs and nfsd are on. To verify that nfs and nfsd are on, enter the chkconfig command and check its output, for example:
# /etc/chkconfig
...
        Flag                 State
        ====                 =====
        ...
        nfs                  on
        nfsd                 on
        ...
This example shows that the nfs and nfsd flags are set to on.
Note: The nfsd chkconfig flag was added in the IRIX 6.5.25 release. Prior to this release, both the NFS server and the NFS client were controlled using the nfs flag.
If your output shows that either nfs or nfsd is off, enter the following commands and reboot your system:
/etc/chkconfig nfs on
/etc/chkconfig nfsd on
Verify that NFS daemons are running.
Several nfsd daemons should be running on the server; by default, four are started, as specified in the /etc/config/nfsd.options file. Verify that the daemons are running using the ps command, as shown below. The output of your entry should look similar to the output in this example:
ps -ef | grep nfsd
root   102     1  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
root   104   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
root   105   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
root   106   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
root  2289  2287  0 14:04:50 ttyq4   0:00 grep nfsd
If no NFS daemons appear in your output, either the daemon's binary is missing or the IRIX kernel does not support NFS serving. To check for the binary, use the ls command, as follows:
ls -l /usr/etc/nfsd
-rwx--x--x    1 root     sys        68292 Jun 14 16:22 /usr/etc/nfsd
And to check that the kernel supports NFS serving, use the exportfs command, as follows:
exportfs -i /
exportfs: export / - Package not installed
If the exportfs command generates a "Package not installed" message, there is no support for the NFS server in the kernel. Make sure the nfs.sw.nfs subsystem is installed, rebuild the kernel with this command, and then reboot the system:
/etc/autoconfig -f
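To change the number of nfsd daemons started at boot, edit the /etc/config/nfsd.options file mentioned earlier. A minimal sketch of its contents, assuming the default of four server daemons (check the nfsd(1M) man page for the exact arguments your release accepts):

# cat /etc/config/nfsd.options
4

After changing the file, restart network services with the /etc/init.d/network script or reboot so the new value takes effect.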
Verify that mount daemons are registered with the portmapper.
Mount daemons must be registered with the server's portmapper so the portmapper can provide port numbers to incoming NFS requests. Verify that the mount daemons are registered with the portmapper by entering this command:
/usr/etc/rpcinfo -p | grep mountd
After your entry, you should see output similar to this:
391004   1   udp   1048  sgi_mountd
391004   3   udp   1048  sgi_mountd
391004   1   tcp   1044  sgi_mountd
391004   3   tcp   1044  sgi_mountd
100005   1   udp   1049  mountd
100005   3   udp   1049  mountd
100005   1   tcp   1045  mountd
100005   3   tcp   1045  mountd
The sgi_mountd in this example is an enhanced mount daemon that reports on SGI-specific export options.
Edit the /etc/exports file.
Edit the /etc/exports file to include the file systems you want to export and their export options (/etc/exports and export options are explained in “Operation of /etc/exports and Other Export Files” in Chapter 2). This example shows one possible entry for the /etc/exports file:
/usr/demos -ro,access=client1:client2:client3 |
In this example, the file system /usr/demos is exported with read-only access to three clients: client1, client2, and client3. Domain information can be included in the client names, for example client1.eng.sgi.com.
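If some clients need write access or root access, each exported file system can carry its own options in the same file. A hypothetical sketch of a second entry (the /home path and client names are placeholders; see the exports(4) man page for the full list of options):

/usr/demos   -ro,access=client1:client2:client3
/home        -root=client1,access=client1:client2

In this sketch, /home is exported read-write (the default) to client1 and client2, and root on client1 keeps superuser privileges on the exported file system.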
Run the exportfs command.
Once the /etc/exports file is complete, you must run the exportfs command to make the file systems accessible to clients. You should run exportfs anytime you change the /etc/exports file. Enter the following command:
/usr/etc/exportfs -av
In this example, the –a option exports all file systems listed in the /etc/exports file, and the –v option causes exportfs to report its progress. Error messages reported by exportfs usually indicate a problem with the /etc/exports file.
Use exportfs to verify your exports.
Type the exportfs command with no parameters to display a list of the exported file system(s) and their export options, as shown in this example:
/usr/etc/exportfs
/usr/demos -ro,access=client1:client2:client3
In this example, /usr/demos is accessible as a read-only file system to systems client1, client2, and client3. This matches what is listed in the /etc/exports file for this server (see instruction 6 of this procedure). If you see a mismatch between the /etc/exports file and the output of the exportfs command, check the /etc/exports file for syntax errors.
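You can also confirm the export list over the network with the showmount command, run either on the server or on a client. A sketch, assuming the server is named redwood; the output should resemble the following (see the showmount(1M) man page):

/usr/etc/showmount -e redwood
export list for redwood:
/usr/demos client1,client2,client3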
The NFS software for this server is now running and its resources are available for mounting by clients. Repeat these instructions to set up additional NFS servers.
To set up an NFS client for conventional mounting, you must:
verify that NFS software is running on the client.
edit the /etc/fstab file to add the names of directories to be mounted.
mount directories in /etc/fstab by giving the mount command or by rebooting your system. These directories remain mounted until you explicitly unmount them.
Note: For instructions on mounting directories not listed in /etc/fstab, see “Temporary NFS Mounting” in Chapter 5.
The procedure below explains how to set up NFS software on a client and mount its NFS resources using the mount command. You must do this procedure as the superuser.
Use versions to verify the correct software has been installed on the client:
versions | grep nfs
I  nfs                   06/09/2004  Network File System, 6.5.25
I  nfs.books             06/09/2004  IRIS InSight Books, 2.2
I  nfs.books.NIS_AG      06/09/2004  NIS Administration Guide
I  nfs.books.ONC3NFS_AG  06/09/2004  ONC3NFS Administrator's Guide
I  nfs.man               06/09/2004  NFS Documentation
I  nfs.man.nfs           06/09/2004  NFS Support Manual Pages
I  nfs.man.relnotes      06/09/2004  NFS Release Notes
I  nfs.sw                06/09/2004  NFS Software
I  nfs.sw.autofs         06/09/2004  AutoFS Support
I  nfs.sw.cachefs        06/09/2004  CacheFS Support
I  nfs.sw.nfs            06/09/2004  NFS Support
I  nfs.sw.nis            06/09/2004  NIS (formerly Yellow Pages) Support
This example shows NFS as I (installed). A complete listing of current software modules is contained in the ONC3/NFS Release Notes.
Use chkconfig to check the client's NFS configuration flag.
To verify that nfs is on, give the chkconfig command and check its output (see “Setting Up the NFS Server” in this chapter for details on chkconfig).
If your output shows that nfs is off, enter the following command and reboot your system:
/etc/chkconfig nfs on
Verify that NFS daemons are running.
Several Buffered I/O daemons (biod) should be running; the number is specified in the /etc/config/biod.options file. If you plan to use file locking over NFS, either the NFS daemon (nfsd) or the Network Lock Manager daemon (rpc.lockd) must also be running. You can verify that the appropriate daemons are running using the ps command, as shown below. The output of your entries should look similar to the output in this example:
ps -ef | egrep 'nfsd|biod|lockd'
root   225     1  0   Jun 15  ?      0:00 /usr/etc/nfsd
root   230     1  0   Jun 15  ?      0:00 /usr/etc/biod 4
root   231     1  0   Jun 15  ?      0:00 /usr/etc/biod 4
root   232     1  0   Jun 15  ?      0:00 /usr/etc/biod 4
root   233     1  0   Jun 15  ?      0:00 /usr/etc/biod 4
If no daemons appear in your output, they were not installed. See step 4 in “Setting Up the NFS Server”, for information on how to check if daemon binaries are present and if there is support for NFS serving in the kernel.
Edit the /etc/fstab file.
Add an entry to the /etc/fstab file for each NFS directory you want mounted when the client is booted. The example below illustrates an /etc/fstab with an NFS entry to mount /usr/demos from the server redwood at mount point /n/demos:
/dev/root           /         xfs  rw,raw=/dev/rroot  0 0
/dev/usr            /usr      xfs  rw,raw=/dev/rusr   0 0
redwood:/usr/demos  /n/demos  nfs  ro,bg              0 0
Note: The background (bg) option in this example allows the client to proceed with the boot sequence without waiting for the mount to complete. If the bg option is not used, the client hangs if the server is unavailable.
Create the mount points for each NFS directory.
After you edit the /etc/fstab file, create a directory to serve as the mount point for each NFS entry in the file. If you specified an existing directory as a mount point for any of your /etc/fstab entries, remember that the contents of the directory are inaccessible while the NFS mount is in effect.
For example, to create the mount point /n/demos for mounting the directory /usr/demos from server redwood, enter the following command:
mkdir -p /n/demos
Mount each NFS resource.
You can use the mount command in several ways to mount the entries in this client's /etc/fstab. See the mount(1M) man page for a description of the options. The examples below show two methods: mounting each entry individually and mounting all fstab entries that specify a particular server. The first example is:
mount /n/demos
In this example, only the mount point is specified. All other information needed to perform the mount, the server name redwood and its resource /usr/demos, is provided by the /etc/fstab file.
The second example is:
mount -h redwood
In this example, all NFS entries in /etc/fstab that specify server redwood are mounted.
Note: If you reboot the client instead of using the mount command, all NFS entries in /etc/fstab are mounted.
The NFS software for this client is now ready to support user requests for NFS directories. Repeat these instructions to set up additional NFS clients.
Since the automatic mounters run only on NFS clients, all setup for the automatic mounters is done on the client system. This section provides two procedures for setting up the automatic mounters: one for setting up a default automount or autofs environment (autofs is recommended) and one for setting up a more complex environment.
Depending on which automatic mounter is started, the command-line options for the appropriate daemon come from the /etc/config/autofsd.options file for autofsd (and the /etc/config/autofs.config file for autofs), or from the /etc/config/automount.options file for automount.
Note: Do not configure both the autofs and automount flags on at the same time; to avoid problems, turn on one or the other only.
By default, the automatic mounter is set up to operate on a special map called –hosts. The –hosts map tells the automatic mounter to read the hosts database from the Unified Name Service database (see the nsswitch.conf(4) man page) and to use the server specified there if the hosts database has a valid entry for that server. When a client accesses a server through the –hosts map, the automatic mounter gets the exports list from the server and mounts all directories exported by that server. automount uses /tmp_mnt/hosts as the mount point, and autofs uses /hosts.
A sample –hosts entry in /etc/config/automount.options is:
-v /hosts -hosts -nosuid,nodev
Use this procedure to set up the default automatic mounter environment on an NFS client. You must do this procedure as the superuser.
Verify that NFS flags are on.
By default, the nfs and autofs (or automount) flags are set to on. To verify that they are on, give the chkconfig command and check its output (see instruction 2 of “Setting Up an NFS Client” in this chapter for sample chkconfig output).
If the command output shows that nfs or autofs (or automount) is set to off, enter one of these sets of commands to reset them, then reboot:
/etc/chkconfig nfs on
/etc/chkconfig autofs on

or

/etc/chkconfig nfs on
/etc/chkconfig automount on
Verify that the default configuration is working:
cd /hosts/servername
In place of servername, substitute the hostname of any system whose name can be resolved by the hostname resolution method you are using (see the resolver(4) man page). If the system specified is running NFS and has file systems that can be accessed by this client, autofs mounts all available file systems to /hosts/servername (automount uses /tmp_mnt/hosts/servername). If the system is not running NFS or has nothing exported that you have access to, you get an error message when you try to access its file systems.
Verify that directories have been mounted, for example:
mount
servername:/ on /hosts/servername type nfs (rw,dev=c0005)           (for autofs)

or

servername:/ on /tmp_mnt/hosts/servername type nfs (rw,dev=c0005)   (for automount)
The automatic mounter has serviced this request. It dynamically mounted /hosts/servername using the default automatic mounter environment.
A customized automatic mounter environment allows you to select the NFS directories that are dynamically mounted on a particular client, and allows you to customize the options in effect for particular mounts. You must complete four general steps to set up a customized automount environment:
Creating the maps.
Starting the automatic mounter program.
Verifying the automatic mounter process.
Testing the automatic mounter.
A customized automatic mounter environment contains a master map and any combination of direct and indirect maps. Although a master map is required, the automatic mounter does not require both direct and indirect maps. You can use either direct or indirect maps exclusively. AutoFS comes with a default /etc/auto_master file that can be modified.
Instructions for creating each type of map are given below. Notice from these instructions that a crosshatch (#) at the beginning of a line indicates a comment line in all types of maps. Include comment lines in your maps to illustrate map formats until you become familiar with each map type.
Create or modify the master map on the client.
The master map points the automatic mounter to other files that have more detailed information needed to complete NFS mounts. To create the master map, become superuser and create a file called /etc/auto.master (for automount) with any text editor. With AutoFS, modify the default /etc/auto_master file.
Keep in mind that mount options specified in indirect maps may override the mount options specified in the parent map.
Specify the mount point, map name, and any options that apply to the direct and indirect maps in your entries, for example:
#Mount Point    Map Name            Map Options
/food/dinner    /etc/auto.food      -ro
/-              /etc/auto.exercise  -ro,soft
/hosts          -hosts              -nosuid,nodev
Create the indirect map.
Create your indirect map and insert the entries it needs. This example is the indirect map /etc/auto.food, listed in /etc/auto.master (or /etc/auto_master) in instruction 1:
#Directory  Options  Location
ravioli              venice:/food/pasta
crepe       -rw      paris:/food/desserts
chowmein             hongkong:/food/noodles
Create the direct map.
Create your direct map and insert the entries it needs. This example is the direct map /etc/auto.exercise, listed in /etc/auto.master (or /etc/auto_master) in instruction 1:
#Directory       Options  Location
/leisure/swim             spitz:/sports/water/swim
/leisure/tennis           becker:/sports/racquet/tennis
/leisure/golf             palmer:/sports/golf
If you make a change to any of the automount or autofs map files and you want automount or autofs to reread the changes, issue the following command:
autofs -v
It usually takes a reboot of the system to clear out problems with hung mount points when using automount or autofs.
You can set up the software on a client so that the automatic mounter starts when the client is booted, and you can also start the automatic mounter from the command line. The procedures in this section explain how to set up the automatic mounter to start during the boot sequence.
If the automatic mounter is configured on at system startup, the /etc/init.d/network script reads the contents of the /etc/config/automount.options file (or /etc/config/autofs.options and /etc/auto_master files for autofs) to determine how to start the automatic mounter program, what to mount, and how to mount it. Depending on the site configuration specified in the options file, the automatic mounter either finds all necessary information in the options file, or it is directed to local or NIS maps (or both) for additional mounting information.
If you plan to use NIS database maps other than the –hosts built-in map, you need to create the NIS maps. See the NIS Administrator Guide for information on building custom NIS maps. Follow this procedure to set the automatic mounter to start automatically at system startup:
Configure the automatic mounter on by using the chkconfig command (if needed) as follows:
/etc/chkconfig automount on

or

/etc/chkconfig autofs on
Modify the /etc/config/automount.options file (or /etc/auto_master file).
Using any standard editor, modify the /etc/config/automount.options (or /etc/auto_master) file to reflect the automatic mounter site environment. (See automount(1M) or autofs(1M) man pages for details on the options file). Based on the previous examples, the /etc/config/automount.options file contains this entry:
-v -m -f /etc/auto.master
The /etc/config/autofs.options file contains this entry:
-v -m 16
The –v option directs error messages to the screen during startup and into the /var/adm/SYSLOG file once the automatic mounter is up and running. The –m option tells automount not to check the NIS database for a master map. Use this option to isolate map problems to the local system by inhibiting automount from reading the NIS database maps, if any exist. The –f option tells automount that the argument that follows it is the full pathname of the master file.
Note: In general, it is recommended that you start the automatic mounter with the verbose option (–v), since this option provides messages that can help with problem solving.
Reboot the system.
Verify that the automatic mounter process is functioning by performing the following two steps.
Validate that the automatic mounter daemon is running by using the ps command, as follows:
ps -ef | grep automount

or

ps -ef | grep autofs
You should see output similar to this for automount:
root   455     1  0   Jan 30  ?      0:02 automount -v -m -f /etc/auto.master
root  4675  4673  0 12:45:05 ttyq5   0:00 grep automount
You should see output similar to this for autofs:
root   555     1  0   Jan 30  ?      0:02 /usr/etc/autofsd -v -m 16
root  4775  4773  0 12:45:05 ttyq5   0:00 grep autofs
Check the /etc/mtab entries.
When the automatic mounter program starts, it creates entries in the client's /etc/mtab for each of the automatic mounter's mount points. Entries in /etc/mtab include the process number and port number assigned to the automatic mounter, the mount point for each direct map entry, and each indirect map. The /etc/mtab entries also include the map name, map type (direct or indirect), and any mount options.
Look at the /etc/mtab file. A typical /etc/mtab table with automount running looks similar to this example (wrapped lines end with the \ character):
/dev/root / xfs rw,raw=/dev/rroot 0 0
/dev/usr /usr xfs rw,raw=/dev/rusr 0 0
/debug /debug dbg rw 0 0
/dev/diskless /diskless xfs rw,raw=/dev/rdiskless 0 0
/dev/d /d xfs rw,raw=/dev/rd 0 0
flight:(pid12155) /src/sgi ignore \
    ro,port=885,map=/etc/auto.source,direct 0 0
flight:(pid12155) /pam/framedocs/nfs ignore \
    ro,port=885,map=/etc/auto.source,direct 0 0
flight:(pid12155) /hosts ignore ro,port=885,\
    map=-hosts,indirect,dev=1203 0 0
A typical /etc/mtab table with autofs running looks similar to this example:
-hosts on /hosts type autofs (ignore,indirect,nosuid,dev=1000010)
-hosts on /hosts2 type autofs \
    (ignore,indirect,nosuid,vers=2,dev=100002)
-hosts on /hosts3 type autofs \
    (ignore,indirect,fstype=cachefs,backfstype=nfs,dev=100003)
/etc/auto_test on /text type autofs \
    (ignore,indirect,ro,nointr,dev=100004)
neteng:/ on /hosts2/neteng type nfs \
    (nosuid,vers=2,dev=180004)
The entries corresponding to automount mount points have the file system type ignore to direct programs to ignore this /etc/mtab entry. For instance, df and mount do not report on file systems with the type ignore. When a directory is NFS mounted by the automount program, the /etc/mtab entry for the directory has nfs as the file system type. df and mount report on file systems with the type nfs.
When the automatic mounter program is set up and running on a client, any regular account can use it to mount remote directories transparently. You can test your automatic mounter setup by changing to a directory specified in your map configuration.
The instructions below explain how to verify that the automatic mounter is working.
As a regular user, enter the cd command to change to an automounted directory.
For example, to test whether the automatic mounter mounts /food/pasta:
cd /food/dinner/ravioli
This command causes the automatic mounter to look in the indirect map /etc/auto.food to execute a mount request to server venice and apply any specified options to the mount. automount then mounts the directory /food/pasta to the default mount point /tmp_mnt/food/dinner/ravioli. The directory /food/dinner/ravioli is a symbolic link to /tmp_mnt/food/dinner/ravioli. autofs mounts the directory /food/pasta to the default mount point /food/dinner/ravioli.
Note: The /food/dinner directory appears empty unless one of its subdirectories has been accessed (and therefore mounted).
Verify that the individual mount has taken place.
Use the pwd command to verify that the mount has taken place, as shown in this example:
pwd
/food/pasta
Verify that both directories have been automatically mounted.
You can also verify automounted directories by checking the output of a mount command:
mount
mount reads the current contents of the /etc/mtab file and includes conventionally mounted and automounted directories in its output.
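For the /food/pasta example above, the automounted entry in the mount output would look something like the following line (the device number is illustrative):

venice:/food/pasta on /tmp_mnt/food/dinner/ravioli type nfs (ro,dev=c0006)

With autofs, the same mount appears under /food/dinner/ravioli rather than under /tmp_mnt.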
The custom configuration of automount is set up and ready to work for users on this client.
The NFS lock manager provides file and record locking between a client and server for NFS-mounted directories. The lock manager is implemented by two daemons, lockd and statd (see the lockd(1M) and statd(1M) man pages). Both are installed as part of NFS software.
The NFS lock manager program must be running on both the NFS client and the NFS server to function properly. Use this procedure to check the lock manager setup:
Use chkconfig on the client to check the lock manager flag.
To verify that the lockd flag is on, enter the chkconfig command and check its output (see instruction 2 of “Setting Up an NFS Client” in this chapter for sample chkconfig output). If your output shows that lockd is off, enter the following command and reboot your system:
/etc/chkconfig lockd on
Verify that rpc.statd and either nlockmgr or nfsd are running.
Enter the following commands and check their output to verify that the lock manager daemons, rpc.statd and either nlockmgr or nfsd, are running:
ps -ef | grep statd
   root   131     1  0   Aug 6  ?      0:51 /usr/etc/rpc.statd
   root  2044   427  2 16:13:24 ttyq1  0:00 grep statd

rpcinfo -p | grep nlockmgr
   100021   1   udp   2049  nlockmgr
   100021   3   udp   2049  nlockmgr
   100021   4   udp   2049  nlockmgr

ps -ef | grep lockd
   root  1064   999  0 21:55:00 ttyd1  0:00 grep lockd
   root  1062     1  0 21:54:56 ?      0:0  /usr/etc/rpc.lockd
If rpc.statd is not running, start it manually by giving the following command:
/usr/etc/rpc.statd
If neither rpc.lockd nor nfsd is running, start rpc.lockd manually by entering the following command:
/usr/etc/rpc.lockd
Repeat instructions 1 and 2, above, on the NFS server, using nfsd instead of rpc.lockd.
When you set up a cache, you can use all or part of an existing file system. You can also set up a new slice to be used by CacheFS. In addition, when you create a cache, you can specify the percentage of resources, such as number of files or blocks, that CacheFS can use in the front file system. The configurable cache parameters are discussed in the section “Cache Resource Parameters in CacheFS” in Chapter 2.
Before starting to set up CacheFS, check that it is configured to start on the client.
Check the CacheFS configuration flag.
When the /etc/init.d/network script executes at system startup, it starts CacheFS running if the chkconfig flag cachefs is on.
To verify that cachefs is on, enter the chkconfig command and check its output, for example:
/etc/chkconfig
        Flag                 State
        ====                 =====
        ...
        cachefs              on
        ...
This example shows that the cachefs flag is set to on.
If your output shows that cachefs is off, enter the following command and reboot your system:
/etc/chkconfig cachefs on
CacheFS uses a local XFS file system for the front file system. You can use an existing XFS file system for the front file system or you can create a new one. Using an existing file system is the quickest way to set up a cache. Dedicating a file system exclusively to CacheFS gives you the greatest control over the file system space available for caching.
Caution: Do not make the front file system read-only and do not set quotas on it. A read-only front file system prevents caching, and file system quotas interfere with control mechanisms built into CacheFS.
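If you decide to dedicate a file system to the cache, you can make and mount a new XFS file system for it. A rough sketch, in which the disk partition and the /local mount point are placeholders for your own configuration (see the mkfs_xfs(1M) man page):

mkfs_xfs /dev/rdsk/dks0d2s7
mkdir /local
mount -t xfs /dev/dsk/dks0d2s7 /local

Add a matching entry to /etc/fstab if you want the front file system mounted at every boot.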
There are two steps to setting up a cached file system:
Create the cache using the cfsadmin command. See “Creating a Cache”. Normally the cache directory is created with default parameters when you use the mount command. If you want to create the cache directory with different parameters, follow the procedures in “Creating a Cache”.
You must mount the file system you want cached using the -t cachefs option to the mount command. See “Mounting a Cached File System”.
The following example is the command to use to create a cache using the cfsadmin command:
cfsadmin -c directory_name
The following example creates a cache whose cache directory is /local/mycache. Make sure the cache directory does not already exist.
cfsadmin -c /local/mycache
This example uses the default cache parameter values. The CacheFS parameters are described in the section “Cache Resource Parameters in CacheFS” in Chapter 2. See the cfsadmin(1M) man page and “Cached File System Administration” in Chapter 2 for more information on cfsadmin options.
The following example shows how to set parameters for a cache.
cfsadmin -c -o parameter_list cache_directory
The parameter_list has the following form:
parameter_name1=value,parameter_name2=value,...
The parameter names are listed in Table 2-2. You must separate multiple arguments to the –o option with commas.
Note: The maximum size of the cache is by default 90% of the front file system resources. Performance deteriorates significantly if an XFS file system exceeds 90% capacity.
The following example creates a cache named /local/cache1 that can use a maximum of 80% of the disk blocks in the front file system and can cache up to a high-water mark of 60% of the front file system blocks before starting to remove files.
cfsadmin -c -o maxblocks=80,hiblocks=60 /local/cache1
The following example creates a cache named /local/cache2 that can use up to 75% of the files available in the front file system:
cfsadmin -c -o maxfiles=75 /local/cache2
The following example creates a cache named /local/cache3 that can use 75% of the blocks in the front file system, that can cache up to a high-water mark of 60% of the front file system files before starting to remove files, and that has 70% of the files in the front file system as an absolute limit.
cfsadmin -c -o maxblocks=75,hifiles=60,maxfiles=70 /local/cache3
There are two ways to mount a file system in a cache:
Using the mount command
Creating an entry for the file system in the /etc/fstab file
The following command mounts a file system in a cache.
mount -t cachefs back_file_system mount_point
The cache directory is automatically created when mounting a cached file system.
For example, the following command makes the file system merlin:/docs available as a cached file system named /docs:
mount -t cachefs merlin:/docs /docs
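To have the cached file system mounted at every boot instead, you can list it in /etc/fstab, as noted above. A sketch of one possible entry, assuming the cache directory /local/cache1 has already been created (see the fstab(4) man page and "Operation of /etc/fstab and Other Mount Files" in Chapter 2 for the exact option syntax):

merlin:/docs  /docs  cachefs  backfstype=nfs,cachedir=/local/cache1  0 0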
Use the backpath argument when the file system you want to cache has already been mounted. The backpath argument specifies the mount point of the mounted file system. When the backpath argument is used, the back file system must be already mounted as read-only. If you want to write to the back file system, you must unmount it before mounting it as a cached file system.
For example, if the file system merlin:/doc is already NFS-mounted on /nfsdocs, you can cache that file system by giving that pathname as the argument to backpath, as shown in the following example:
mount -t cachefs -o backfstype=nfs,cachedir=/local/cache1,backpath=/nfsdocs \
    merlin:/doc /doc
Note: There is no performance gain in caching a local XFS disk file system.
So far, examples have illustrated back file systems that are NFS-mounted, and the device argument to the mount command has taken the form server:file_system. If the back file system is an ISO9660 file system, the device argument is the CD-ROM device in the /CDROM directory. The file system type is iso9660.
The following example illustrates caching an ISO9660 back file system on the device /CDROM as /doc in the cache /local/cache1:
mount -t cachefs -o backfstype=iso9660,cachedir=/local/cache1,\
    ro,backpath=/CDROM /CDROM /doc
Because you cannot write to the CD-ROM, the ro argument is specified to make the cached file system read-only. The arguments to the -o option are explained in “Operation of /etc/fstab and Other Mount Files” in Chapter 2.
You must specify the backpath argument because the CD-ROM is automatically mounted when it is inserted. The mount point is in the /CDROM directory and is determined by the name of the CD-ROM. The special device to mount is the same as the value for the backpath argument.
Note: When a CD-ROM is changed, the CacheFS file system must be unmounted and remounted.
The IRIX 6.5.25 release supports user authentication and optional integrity protection and encryption of NFS traffic using the RPCSEC_GSS authentication mechanism with a Kerberos V5 back end. The procedures here describe how to add an NFS client and an NFS server to an existing Kerberos realm; in this configuration, the NFS server and client act as clients of the Kerberos Domain Controller. SGI does not support the use of an IRIX system as the Domain Controller of a Kerberos realm.
This section describes how to set up a secure RPC configuration using the RPCSEC_GSS authentication mechanism.
In order to use the RPCSEC_GSS authentication mechanism, you must install the following subsystems from your IRIX 6.5.25 distribution:
nfs.sw.rpcsec
This subsystem provides the rpcsec.so.1 user-space DSO, the rpcsec.o kernel module, and the necessary support commands and daemons.
kerberos.sw.client
It provides Kerberos V5 client utilities.
To configure the Kerberos V5 client, edit your /etc/krb5.conf file so that it appears as follows:
[libdefaults]
    default_realm = REALM

[realms]
    REALM = {
        kdc = kdc.location.sgi.com
        admin_server = kdc.location.sgi.com
        default_domain = location.sgi.com
    }

[domain_realm]
    .location.sgi.com = REALM
    location.sgi.com = REALM
Because the RPCSEC_GSS software uses a limited implementation of Kerberos V5, only simple Data Encryption Standard (DES) encryption keys can be used. If your Kerberos Domain Controller supports both DES and Triple DES, do not use Triple DES, because the RPCSEC_GSS software will return a cryptic error.
To restrict encryption algorithms used by Kerberos, edit the libdefaults section of the /etc/krb5.conf file, as follows:
[libdefaults]
    default_realm = REALM
    default_tgs_enctypes = des-cbc-crc
    default_tkt_enctypes = des-cbc-crc
    permitted_enctypes = des-cbc-crc,des-cbc-md5
    ....
To check the attributes of your Kerberos ticket, you can use the klist(1) command, as follows:
/usr/kerberos/bin/klist -e
Ticket cache: FILE:/tmp/krb5cc_16314
Default principal: [email protected]

Valid starting     Expires            Service principal
03/05/04 15:11:58  03/06/04 15:11:58  krbtgt/[email protected]
        renew until 03/05/04 15:11:58, Etype (skey, tkt): DES cbc mode with CRC-32, DES cbc mode with CRC-32
This section describes how to configure an NFS Client to use RPCSEC_GSS authentication.
To request RPCSEC_GSS authentication from an NFS client, you need to specify the security mode when mounting NFS filesystems. You can do this manually by using the mount -o sec=... option, or you can add sec=... to the corresponding line in the autofs master file. Before you do this, however, you need to modify the /etc/nfssec.conf file to enable the Kerberos V5 security modes, which are disabled in the default configuration. For instructions, see the comments in the /etc/nfssec.conf file.
Note: The sec=... mount option is supported only by mount and autofs, not by automount.
For additional information, see the mount(1M), autofs(1M), and nfssec.conf(4) man pages.
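For example, an autofs master map entry that requests Kerberos V5 authentication for an indirect map might look like the following sketch (the /secure mount point and the /etc/auto.secure map name are hypothetical):

#Mount Point    Map Name          Map Options
/secure         /etc/auto.secure  -sec=krb5,nosuid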
Make sure root has a Kerberos ticket and the ticket is current before attempting to mount. As root user, perform the following commands:
/usr/kerberos/bin/klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)

/usr/kerberos/bin/kinit
Password for [email protected]:

/usr/kerberos/bin/klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: [email protected]

Valid starting     Expires            Service principal
03/05/04 15:28:06  03/06/04 15:27:57  krbtgt/[email protected]
Make sure the gssd daemon is running. The gssd daemon is an RPC server that is used to support generation and validation of Generic Security Service (GSS) tokens for the kernel implementation of the RPCSEC_GSS authentication mechanism and to translate Kerberos V5 principal names to UID/GID appropriate for the local server. For more information, see the gssd(1M) man page.
As root user, perform the following commands:
ps -ef | grep gssd
    root   195     1  0   Mar 04  ?     0:00 /usr/etc/gssd
    root  2946  1463  0 15:29:19 ttyd1  0:00 grep gssd

rpcinfo -p | grep gssd
    100234   1   tcp   1024  gssd

rpcinfo -t localhost gssd
program 100234 version 1 ready and waiting
As root user, mount the filesystem, as follows:
mount -o sec=krb5,proto=udp server:/export /mnt
ls /mnt
foo bar baz
Note: Each user who wants to access files on an NFS-mounted filesystem that uses RPCSEC_GSS must have a valid (that is, not expired) Kerberos ticket. Otherwise, NFS returns an EPERM error message.
Before you configure an NFS server to use RPCSEC_GSS, make sure you have configured the Kerberos V5 client as described in “Configuring Kerberos V5 Client” and enabled Kerberos V5 security modes in the /etc/nfssec.conf file as described in “Configuring an NFS Client to Use RPCSEC_GSS Authentication”. Note that this procedure is performed on the Kerberos Domain Controller, not on the NFS server. The Kerberos server software is not supported on IRIX and the IRIX system will not have the kadmin program.
Procedure 4-2. Configuring an NFS Server to Use RPCSEC_GSS
On your Kerberos Domain Controller, use the kadmin(8) command to create a server principal for the NFS service on your server, as follows:
kadmin
Authenticating as principal root/admi[email protected] with password.
Password for root/[email protected]:
kadmin: ank -randkey nfs/server.location.sgi.com
WARNING: no policy specified for nfs/[email protected]; defaulting to no policy
Principal "nfs/[email protected]" created.
kadmin: getprinc nfs/server.location.sgi.com
Principal: nfs/[email protected]
Expiration date: [never]
Last password change: Fri Mar 05 16:52:32 AEDT 2004
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 0 days 00:00:00
Last modified: Fri Mar 05 16:52:32 AEDT 2004 (root/[email protected])
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 1
Key: vno 2, DES cbc mode with CRC-32, no salt
Attributes:
Policy: [none]
Add the new principal to a keytab file, as follows:
kadmin: ktadd -k /etc/krb5/krb5.keytab nfs/server.location.sgi.com
Entry for principal nfs/[email protected] with kvno 3, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/krb5/krb5.keytab.
Copy the keytab file from your Kerberos Domain Controller to the /etc/krb5/krb5.keytab file on your NFS server.
You now need to decide which export entries will be accessible to calls with RPCSEC_GSS authentication. To export an entry with RPCSEC_GSS, add the sec=... option to the corresponding line in the /etc/exports file, as follows:
/export sec=krb5,root=trusted
Once your keytab file is correct and you have updated your /etc/exports file, reboot the server or restart network services by using the /etc/init.d/network script.
Kerberos V5 and RPCSEC_GSS use principal names to pass the identity of the user. For example, instead of passing UID 16314 for user jane, a client passes a string such as "[email protected]" to the server. It is then up to the server to translate that string into a UID/GID appropriate for user jane on that server. On IRIX, this function is performed by the gssd(1M) daemon, which uses its own cache of credentials to associate a Kerberos V5 principal with a local UID. The cache is maintained by the gsscred(1M) command; see the man page for details.
Note that only principals found in the cache can be mapped to a UID/GID; all other principals are mapped to the UID/GID of "nobody".
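As an illustration only, adding a cache entry that maps the Kerberos principal for user jane to her local UID might look like the following sketch. The option letters shown are an assumption based on common gsscred implementations; check the gsscred(1M) man page for the exact syntax on your system:

gsscred -m kerberos_v5 -n jane -u 16314 -a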
It is possible to use Windows Server Active Directory as the Kerberos Domain Controller, but there are a few issues, which are addressed in this section.
Active Directory uses non-DES encrypted tickets by default, to remain compatible with Windows NT 4 password hashing. Some versions of Kerberos support this encryption mechanism, called RC4-HMAC, so it is possible to obtain such a ticket from the KDC; however, the ticket cannot be used to initiate an RPCSEC_GSS session. You can check the kind of tickets you have by using the klist(1) command, as follows:
/usr/kerberos/bin/klist -e -f
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: [email protected]

Valid starting     Expires            Service principal
06/01/04 12:50:34  06/01/04 22:51:33  krbtgt/[email protected]
        renew until 06/02/04 12:50:34, Flags: RI
        Etype (skey, tkt): ArcFour with HMAC/md5, ArcFour with HMAC/md5

/usr/kerberos/bin/klist -e -f
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: [email protected]

Valid starting     Expires            Service principal
06/01/04 12:53:19  06/01/04 22:53:22  krbtgt/[email protected]
        renew until 06/02/04 12:53:19, Flags: RI
        Etype (skey, tkt): DES cbc mode with CRC-32, ArcFour with HMAC/md5
In the first case, the ticket is encrypted using RC4-HMAC, and an attempt to use it for mounting an NFS filesystem results in an EPERM error returned from the mount(2) system call, as follows:
mount -o sec=krb5,proto=udp rogi:/var /mnt
mount: NFS version 3 mount failed, trying NFS version 2.
mount: rogi:/var on /mnt: Permission denied
mount: giving up on: /mnt
To request a ticket from Active Directory that can be used with RPCSEC_GSS, add the following to the [libdefaults] section of your krb5.conf file:
[libdefaults]
    default_realm = REALM
    default_tgs_enctypes = des-cbc-crc
    default_tkt_enctypes = des-cbc-crc
As stated above, Active Directory uses non-DES encryption by default. Therefore, an Active Directory administrator must enable DES tickets for every user that is to be authenticated from a UNIX host.
To enable DES tickets using the Active Directory User Management GUI, go to the user's Properties, select the Account tab, and check the Use DES encryption types for this account box in the Account Options list. You may also want to consider disabling Kerberos pre-authentication, since it is not supported by all implementations of Kerberos.
Active Directory does not support the kadmin(8) protocol, so you cannot use the kadmin utility to generate a keytab for the services that you run on a UNIX server. Instead, you need to use the ktpass utility, which is shipped on the Windows Server CD in the Support/Tools directory.
Start by creating a Windows user for each service you want to run, for example, [email protected], which can be the principal for NFS on a host in your realm. After adding the user and enabling DES tickets for that user (see “Enabling DES Tickets for Users in Active Directory”), use ktpass to extract the keytab information, for example:
ktpass -out nfshost.keytab -princ nfs/host[email protected] -mapuser [email protected] -crypto des-cbc-crc -pass *
Then copy nfshost.keytab to the appropriate keytab for your host.