Chapter 4. Setting Up and Testing ONC3/NFS

This chapter explains how to set up ONC3/NFS services and verify that they work. It provides procedures for enabling exporting on NFS servers, for setting up mounting and automounting on NFS clients, and for setting up the network lock manager. It also explains how to create a CacheFS file system. Before you begin these procedures, you should be thoroughly familiar with the information provided in Chapter 2, “Planning ONC3/NFS Service.”

This chapter contains these sections:

  • “Setting Up the NFS Server”

  • “Setting Up an NFS Client”

  • “Setting Up the Automounter”

  • “Setting Up the Lock Manager”

  • “Setting Up the CacheFS File System”

Note: To do the procedures in this chapter, you should have already installed ONC3/NFS software on the server and client systems that will participate in the ONC3/NFS services. The ONC3/NFS Release Notes explain where to find instructions for installing ONC3/NFS software.


Setting Up the NFS Server

Setting up an NFS server requires verifying that the required software is running on the server, editing the server's /etc/exports file, adding the file systems to be exported, exporting the file systems, and verifying that they have been exported. The instructions below explain the set-up procedure. They assume that NFS software is already installed on the server. Do this procedure as the superuser on the server.

  1. Check the NFS configuration flag on the server.

    When the /etc/init.d/network script executes at system startup, it starts NFS running if the chkconfig(1M) flag nfs is on. To verify that nfs is on, type the chkconfig(1M) command and check its output, for example:

    # /etc/chkconfig
            Flag                 State
            ====                 =====
            ...
            nfs                  on
            ...
    

    This example shows that the nfs flag is set to on.

  2. If your output shows that nfs is off, type this command and reboot your system:

    # /etc/chkconfig nfs on 
    

  3. Verify that NFS daemons are running.

    Four nfsd and four biod daemons should be running (the default number specified in /etc/config/nfsd.options and /etc/config/biod.options). Verify that the appropriate NFS daemons are running using the ps(1) command, shown below. The output of your entries looks similar to the output in these examples:

    # ps -ef | grep nfsd
    root   102      1  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root   104    102  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root   105    102  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root   106    102  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root  2289   2287  0 14:04:50 ttyq4 0:00 grep nfsd
    # ps -ef | grep biod
    root   107      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root   108      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root   109      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root   110      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root  2291   2287  4 14:04:58 ttyq4 0:00 grep biod
    

    If no NFS daemons appear in your output, they were not included in the IRIX kernel during NFS installation. To check the kernel, type this command:

    # strings /unix | grep nfs
    

    If there is no output, rebuild the kernel with this command, then reboot the system:

    # /etc/autoconfig -f
    

  4. Verify that mount daemons are registered with the portmapper.

    Mount daemons must be registered with the server's portmapper so the portmapper can provide port numbers to incoming NFS requests. Verify that the mount daemons are registered with the portmapper by typing this command:

    # /usr/etc/rpcinfo -p | grep mountd
    

    After your entry, you should see output similar to this:

    100005  1   tcp  1230  mountd
    100005  1   udp  1097  mountd
    391004  1   tcp  1231  sgi_mountd
    391004  1   udp  1098  sgi_mountd
    

    The sgi_mountd in this example is an enhanced mount daemon that reports on SGI-specific export options.

  5. Edit the /etc/exports file.

    Edit the /etc/exports file to include the file systems you want to export and their export options (/etc/exports and export options are explained in “/etc/exports and Other Export Files” in Chapter 2). This example shows one possible entry for the /etc/exports file:

    /usr/demos -ro,access=client1:client2:client3
    

    In this example, the file system /usr/demos is exported with read-only access to three clients: client1, client2, and client3. Domain information can be included in the client names, for example, client1.eng.sgi.com.
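
    The sketch below shows a few other forms an /etc/exports entry can take; the directories and client names are hypothetical, and you should confirm the option syntax against the exports(4) manual page:

    # read-only, limited to three clients (as in the example above)
    /usr/demos  -ro,access=client1:client2:client3
    # read-write for two named clients, read-only for all other hosts
    /usr/man    -rw=client4:client5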

  6. Run the exportfs(1M) command.

    Once the /etc/exports file is complete, you must run the exportfs command to make the file systems accessible to clients. You should run exportfs anytime you change the /etc/exports file. Type this command:

    # /usr/etc/exportfs -av
    

    In this example, the –a option exports all file systems listed in the /etc/exports file, and the –v option causes exportfs to report its progress. Error messages reported by exportfs usually indicate a problem with the /etc/exports file.

  7. Use exportfs to verify your exports.

    Type the exportfs command with no parameters to display a list of the exported file system(s) and their export options, as shown in this example:

    # /usr/etc/exportfs
    /usr/demos -ro,access=client1:client2:client3
    

    In this example, /usr/demos is accessible as a read-only file system to systems client1, client2, and client3. This matches what is listed in the /etc/exports file for this server (see instruction 5 of this procedure). If you see a mismatch between the /etc/exports file and the output of the exportfs command, check the /etc/exports file for syntax errors.
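
    As an additional check, you can list the server's exports from any system with the showmount command; substitute your server's name, and expect output resembling this:

    # /usr/etc/showmount -e servername
    export list for servername:
    /usr/demos                client1,client2,client3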

The NFS software for this server is now running and its resources are available for mounting by clients. Repeat these instructions to set up additional NFS servers.

Setting Up an NFS Client

Setting up an NFS client for conventional mounting requires verifying that NFS software is running on the client, editing the /etc/fstab file, adding the names of directories to be mounted, and mounting the directories in /etc/fstab by giving the mount(1M) command or by rebooting your system. These directories remain mounted until you explicitly unmount them.


Note: For instructions on mounting directories not listed in /etc/fstab, see “Temporary NFS Mounting” in Chapter 5.

The procedure below explains how to set up NFS software on a client and mount its NFS resources using the mount command. You must do this procedure as the superuser.

  1. Use chkconfig to check the client's NFS configuration flag.

    To verify that nfs is on, give the chkconfig command and check its output (see “Setting Up the NFS Server” in this chapter for details on chkconfig).

  2. If your output shows that nfs is off, type this command and reboot your system:

    # /etc/chkconfig nfs on  
    

  3. Verify that NFS daemons are running.

    Four nfsd and four biod daemons should be running (the default number specified in /etc/config/nfsd.options and /etc/config/biod.options). Verify that the appropriate NFS daemons are running using the ps(1) command, shown below. The output of your entries looks similar to the output in these examples:

    # ps -ef | grep nfsd
    root   102      1  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root   104    102  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root   105    102  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root   106    102  0  Jan 30 ?      0:00 /usr/etc/nfsd 4
    root  2289   2287  0 14:04:50 ttyq4 0:00 grep nfsd
    # ps -ef | grep biod
    root   107      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root   108      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root   109      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root   110      1  0  Jan 30 ?      0:00 /usr/etc/biod 4
    root  2291   2287  4 14:04:58 ttyq4 0:00 grep biod
    

    If no NFS daemons appear in your output, they were not included in the IRIX kernel during NFS installation. To check the kernel, type this command:

    # strings /unix | grep nfs
    

    If there is no output, rebuild the kernel with this command, then reboot the system:

    # /etc/autoconfig -f
    

  4. Edit the /etc/fstab file.

    Add an entry to the /etc/fstab file for each NFS directory you want mounted when the client is booted. The example below illustrates an /etc/fstab with an NFS entry to mount /usr/demos from the server redwood at mount point /n/demos:

    /dev/root          /            efs rw,raw=/dev/rroot 0 0
    /dev/usr           /usr         efs rw,raw=/dev/rusr 0 0
    redwood:/usr/demos /n/demos     nfs ro,intr,bg 0 0
    


    Note: The background (bg) option in this example allows the client to proceed with the boot sequence without waiting for the mount to complete. If the bg option is not used, the client hangs if the server is unavailable.


  5. Create the mount points for each NFS directory.

    After you edit the /etc/fstab file, create a directory to serve as the mount point for each NFS entry in the file. If you specified an existing directory as a mount point for any of your /etc/fstab entries, remember that the contents of that directory are inaccessible while the NFS mount is in effect.

    For example, to create the mount point /n/demos for mounting the directory /usr/demos from server redwood, give this command:

    # mkdir -p /n/demos
    

  6. Mount each NFS resource.

    You can use the mount command in several ways to mount the entries in this client's /etc/fstab. See the mount(1M) manual page for a description of the options. The examples below show two methods: mounting each entry individually and mounting all fstab entries that specify a particular server. The first example is:

    # mount /n/demos
    

    In this example, only the mount point is specified. All other information needed to perform the mount, the server name redwood and its resource /usr/demos, is provided by the /etc/fstab file.

    The second example is:

    # mount -h redwood  
    

    In this example, all NFS entries in /etc/fstab that specify server redwood are mounted.
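
    A third approach, sketched below, asks mount to process every NFS entry in /etc/fstab at once; check the mount(1M) manual page to confirm that your version accepts the -a and -t options together:

    # mount -a -t nfs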


    Note: If you reboot the client instead of using the mount command, all NFS entries in /etc/fstab will be mounted.


The NFS software for this client is now ready to support user requests for NFS directories. Repeat these instructions to set up additional NFS clients.

Setting Up the Automounter

Since the automounter runs only on NFS clients, all setup for the automounter is done on the client system. This section provides two procedures for setting up the automounter: one for setting up a default automount environment and one for setting up a more complex environment.

Setting Up a Default Automount Environment

If you set up the default automount environment on a client, at system startup automount(1M) reads the /etc/config/automount.options file for mount information. By default, /etc/config/automount.options contains an entry for a special map called –hosts. The –hosts map tells the automounter to read the hosts database (/etc/hosts, NIS, and/or DNS (BIND); see the resolver(4) manual page) and to use the server specified on the command line if the hosts database has a valid entry for that server. With the –hosts map, when a client accesses a server, automount gets the exports list from the server and mounts all directories exported by that server. automount uses /tmp_mnt/hosts as the mount point.

A sample –hosts entry in /etc/config/automount.options is:

-v    /hosts    -hosts    -intr,nosuid,nodev

Use this procedure to set up the default automount environment on an NFS client. You must do this procedure as the superuser.

  1. Verify that NFS flags are on.

    By default, the nfs and automount flags are set to on. To verify that they are on, give the chkconfig command and check its output (see instruction 1 of “Setting Up an NFS Client” in this chapter for sample chkconfig output). If either flag is set to off, use one of these commands to reset it, then reboot:

    # /etc/chkconfig nfs on
    # /etc/chkconfig automount on
    

  2. Verify that the default configuration is working:

    # cd /hosts/servername
    

    In place of servername, substitute the host name of any system whose name can be resolved by the host name resolution method you are using (see the resolver(4) manual page). If the system specified is running NFS and has file systems that can be accessed by this client, automount mounts all available file systems to /tmp_mnt/hosts/servername. If the system is not running NFS or has nothing exported that you have access to, you get an error message when you try to access its file systems.

  3. Verify that directories have been mounted, for example:

    # mount
    servername:/ on /tmp_mnt/hosts/servername type nfs (rw,dev=c0005)
    

    The automounter has serviced this request. It dynamically mounted /hosts/servername using the default automount environment.

Setting Up a Custom Automount Environment

A customized automount environment allows you to select the NFS directories that are dynamically mounted on a particular client, and allows you to customize the options in effect for particular mounts. You must complete four general steps to set up a customized automount environment:

  1. Creating the maps

  2. Starting the automount program

  3. Verifying the automount process

  4. Testing automount

Step 1: Creating the Maps

A customized automount environment contains a master map and any combination of direct and indirect maps. Although a master map is required, automount does not require both direct and indirect maps. You can use either direct or indirect maps exclusively.

Instructions for creating each type of map are given below. Notice from these instructions that a crosshatch (#) at the beginning of a line indicates a comment line in all types of maps. Include comment lines in your maps to illustrate map formats until you become familiar with each map type.

  1. Create the master map on the client.

    The master map points automount to other files that have more detailed information needed to complete NFS mounts. To create the master map, become superuser and create a file called /etc/auto.master with any text editor. Specify the mount point, map name, and any options that apply to the direct and indirect maps in your entries, for example:

    #Mount Point   Map Name            Map Options
    /food/dinner   /etc/auto.food      -ro
    /-             /etc/auto.exercise  -ro,soft,intr
    /hosts         -hosts              -intr,nosuid,nodev
    

  2. Create the indirect map.

    Create your indirect map and insert the entries it needs. This example is the indirect map /etc/auto.food, listed in /etc/auto.master in instruction 1:

    #Directory    Options      Location
    ravioli                    venice:/food/pasta
    crepe         -rw          paris:/food/desserts
    chowmein                   hongkong:/food/noodles 
    

  3. Create the direct map.

    Create your direct map and insert the entries it needs. This example is the direct map /etc/auto.exercise, listed in /etc/auto.master in instruction 1:

    #Directory        Options   Location
    /leisure/swim               spitz:/sports/water/swim
    /leisure/tennis             becker:/sports/racquet/tennis
    /leisure/golf     -hard     palmer:/sports/golf  
    

Step 2: Starting the automount Program

You can set up the software on a client so that automount starts when the client is booted, and you can also start automount from the command line. The procedures in this section explain how to set up the automounter to start during the boot sequence.
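
For example, to start the automounter by hand using the master map created in Step 1 and the options described below, you could give a command along these lines (see automount(1M) for the exact invocation supported on your system):

# /usr/etc/automount -v -m -f /etc/auto.master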

If automount is configured on at system startup, the /etc/init.d/network script reads the contents of the /etc/config/automount.options file to determine how to start the automount program, what to mount, and how to mount it. Depending on the site configuration specified in /etc/config/automount.options, automount either finds all the information it needs in that file, or it is directed to local or NIS maps (or both) for additional mounting information.


Note: If you plan to use NIS database maps other than the –hosts built-in map, you need to create the NIS maps. See the NIS Administration Guide for information on building custom NIS maps.

Follow this procedure to set automount to start automatically at system startup:

  1. Configure automount on with the chkconfig command (if needed):

    # /etc/chkconfig automount on
    

  2. Modify the /etc/config/automount.options file.

    Using any standard editor, modify /etc/config/automount.options to reflect the automount site environment. See automount(1M) for details on the /etc/config/automount.options file. Based on the previous examples, the /etc/config/automount.options file contains this entry:

    -v -m -f /etc/auto.master
    

    The –v option directs error messages to the screen during startup and into the /var/adm/SYSLOG file once automount is up and running. The –m option tells automount not to check the NIS database for a master map. Use this option to isolate map problems to the local system by preventing automount from reading the NIS database maps, if any exist. The –f option tells automount that the argument that follows it is the full path name of the master file.


    Note: In general, it is recommended that you start the automounter with the verbose option (–v), since this option provides messages that can help with problem solving.


  3. Reboot the system.

Step 3: Verifying the automount Process

Verify that the automount process is functioning by performing the following two steps.

  1. Validate that the automount daemon is running with the ps command:

    # ps -ef | grep automount
    

    You should see output similar to this:

     root    455     1  0   Jan 30 ?        0:02 automount -v -m -f /etc/auto.master
     root   4675  4673  0 12:45:05 ttyq5    0:00 grep automount    
    

  2. Check the /etc/mtab entries.

    When the automount program starts, it creates entries in the client's /etc/mtab for each automount mount point. Entries in /etc/mtab include the process number and port number assigned to automount, the mount point for each direct map entry, and each indirect map. The /etc/mtab entries also include the map name, map type (direct or indirect), and any mount options.

    Look at the /etc/mtab file. A typical /etc/mtab table with automount running looks similar to this example (wrapped lines end with the \ character):

    /dev/root / efs rw,raw=/dev/rroot 0 0
    /dev/usr /usr efs rw,raw=/dev/rusr 0 0
    /debug /debug dbg rw 0 0
    /dev/diskless /diskless efs rw,raw=/dev/rdiskless 0 0
    /dev/d /d efs rw,raw=/dev/rd 0 0
    flight:(pid12155) /src/sgi ignore \
        ro,intr,port=885,map=/etc/auto.source,direct 0 0
    flight:(pid12155) /pam/framedocs/nfs ignore \
        ro,intr,port=885,map=/etc/auto.source,direct 0 0
    flight:(pid12155) /hosts ignore ro,intr,port=885,\
        map=-hosts,indirect,dev=1203 0 0
    

    The entries corresponding to automount mount points have the file system type ignore to direct programs to ignore this /etc/mtab entry. For instance, df(1) and mount do not report on file systems with the type ignore. When a directory is NFS mounted by the automount program, the /etc/mtab entry for the directory has nfs as the file system type. df and mount report on file systems with the type nfs.

Step 4: Testing automount

When the automount program is set up and running on a client, any regular account can use it to mount remote directories transparently. You can test your automount set-up by changing to a directory specified in your map configuration.

The instructions below explain how to verify that automount is working.

  1. As a regular user, cd(1) to an automounted directory.

    For example, test whether automount mounts /food/pasta:

    % cd /food/dinner/ravioli
    

    This command causes automount to look in the indirect map /etc/auto.food to execute a mount request to server venice and apply any specified options to the mount. automount then mounts the directory /food/pasta to the default mount point /tmp_mnt/food/dinner/ravioli. The directory /food/dinner/ravioli is a symbolic link to /tmp_mnt/food/dinner/ravioli.
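
    You can see the symbolic link with ls; the link details shown here are illustrative:

    % ls -ld /food/dinner/ravioli
    lrwxrwxrwx  1 root  sys  28 Jan 30 14:10 /food/dinner/ravioli -> /tmp_mnt/food/dinner/ravioli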


    Note: The /food/dinner directory appears empty unless one of its subdirectories has been accessed (and therefore mounted).


  2. Double-check your setup using a different directory.

    To have automount NFS mount /sports/water/swim automatically, give this command:

    % cd /leisure/swim
    

    This command causes automount to look in the direct map /etc/auto.exercise to execute a mount request to server spitz and apply specified options to the mount. It then mounts the directory /sports/water/swim to the default mount point /tmp_mnt/leisure/swim. The directory /leisure/swim is a symbolic link to /tmp_mnt/leisure/swim.

  3. Verify that the individual mount has taken place.

    Use the pwd(1) command to verify that the mount has taken place, as shown in this example:

    % pwd
    /leisure/swim
    

  4. Verify that both directories have been mounted with the automounter.

    You can also verify automounted directories by checking the output of a mount command:

    % mount
    

    mount reads the current contents of the /etc/mtab file and includes conventional and automount mounted directories in its output.
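
    For example, after the two cd commands above, the output of mount includes lines similar to these (device numbers and option lists vary):

    venice:/food/pasta on /tmp_mnt/food/dinner/ravioli type nfs (ro,dev=c0006)
    spitz:/sports/water/swim on /tmp_mnt/leisure/swim type nfs (ro,soft,intr,dev=c0007)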

The custom configuration of automount is set up and ready to work for users on this client.

Setting Up the Lock Manager

The NFS lock manager provides file and record locking between a client and server for NFS-mounted directories. As an NFS utility, the lock manager is in effect when NFS software is installed and operating properly on both the server and client systems. It is implemented by two daemons, lockd(1M) and statd(1M), which must be running on an NFS server and its clients for the lock manager to function.

Use this procedure to check the lock manager setup:

  1. Use chkconfig on the client to check the lock manager flag.

    To verify that the lockd flag is on, give the chkconfig command and check its output (see instruction 1 of “Setting Up an NFS Client” in this chapter for sample chkconfig output). If your output shows that lockd is off, give this command and reboot your system:

    # /etc/chkconfig lockd on 
    

  2. Verify that both lock manager daemons are running.

    Give these ps commands and check their output to verify that the lock manager daemons, rpc.lockd(1M) and rpc.statd(1M), are running:

    # ps -ef | grep statd 
    root   131     1  0   Aug  6 ?        0:51 /usr/etc/rpc.statd
    root  2044   427  2 16:13:24 ttyq1    0:00 grep statd 
    # ps -ef | grep lockd
    root   129     1  0   Aug  6 ?        0:51 /usr/etc/rpc.lockd
    root  2045   427  2 16:13:24 ttyq1    0:00 grep lockd 
    

    If either rpc.lockd or rpc.statd is not running, start them manually by giving these commands in this order:

    # /usr/etc/rpc.statd
    # /usr/etc/rpc.lockd
    

  3. Repeat instructions 1 and 2, above, on the NFS server.
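
You can further confirm that the lock manager services are registered with the portmapper on each system by querying rpcinfo; the program names (nlockmgr and status) and port numbers shown below are illustrative:

# /usr/etc/rpcinfo -p | egrep 'nlockmgr|status'
    100021    1   udp   1035  nlockmgr
    100021    3   udp   1035  nlockmgr
    100024    1   udp   1034  status
    100024    1   tcp   1026  status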

Setting Up the CacheFS File System

When you set up a cache, you can use all or part of an existing file system. You can also set up a new slice to be used by CacheFS. In addition, when you create a cache, you can specify the percentage of resources, such as number of files or blocks, that CacheFS can use in the front file system. The configurable cache parameters are discussed in the section “Cache Resource Parameters” on page 32.

Before starting to set up CacheFS, check that it is configured to start on both the server and client.

  1. Check the CacheFS configuration flag.

    When the /etc/init.d/network script executes at system startup, it starts CacheFS running if the chkconfig(1M) flag cachefs is on. To verify that cachefs is on, type the chkconfig(1M) command and check its output, for example:

    # /etc/chkconfig
            Flag                 State
            ====                 =====
            ...
            cachefs              on
            ...
    

    This example shows that the cachefs flag is set to on.

  2. If your output shows that cachefs is off, type this command and reboot your system:

    # /etc/chkconfig cachefs on 
    

Front File System Requirements

CacheFS typically uses a local EFS file system for the front file system. You can use an existing EFS file system for the front file system, or you can create a new one. Using an existing file system is the quickest way to set up a cache. Dedicating a file system exclusively to CacheFS gives you the greatest control over the file system space available for caching.
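
As a sketch only, the following commands create and mount a dedicated EFS front file system; the disk partition name is hypothetical, and you should consult mkfs_efs(1M) and your own disk layout before running anything similar:

# mkfs_efs /dev/rdsk/dks0d2s7     # hypothetical partition reserved for the cache
# mkdir /local
# mount -t efs /dev/dsk/dks0d2s7 /local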


Caution: Do not make the front file system read-only and do not set quotas on it. A read-only front file system prevents caching, and file system quotas interfere with control mechanisms built into CacheFS.


Setting Up a Cached File System

There are two steps to setting up a cached file system:

  1. You must create the cache with the cfsadmin command. See “Creating a Cache” on page 61.

  2. You must mount the file system you want cached using the -t cachefs option to the mount command. See “Mounting a Cached File System” on page 62.

Creating a Cache

The general form of the command to create a cache is:

cfsadmin -c directory_name

The following example creates a cache, creating the cache directory /local/mycache in the process. Make sure the cache directory does not already exist.

# cfsadmin -c /local/mycache

This example uses the default cache parameter values. The CacheFS parameters are described in the section “Cache Resource Parameters” on page 32. See the cfsadmin(1M) manual page and “cfsadmin Command” on page 31 for more information on cfsadmin options.
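
To confirm the parameter values assigned to a new cache, you can list them with the -l option of cfsadmin; the output below is only illustrative, and the exact parameter names and defaults on your system are given in cfsadmin(1M):

# cfsadmin -l /local/mycache
cfsadmin: list cache FS information
    maxblocks     90%
    minblocks      0%
    threshblocks  85%
    maxfiles      90%
    minfiles       0%
    threshfiles   85%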

Setting Cache Parameters

The following is the general form of the command for setting parameters for a cache:

cfsadmin -c -o parameter_list cache_directory 

The parameter_list has the following form:

parameter_name1=value,parameter_name2=value,...

The parameter names are listed in Table 2-2. You must separate multiple arguments to the –o option with commas.


Note: The maximum size of the cache is by default 90% of the front file system resources. Performance deteriorates significantly if an EFS file system exceeds 90% capacity.

The following example creates a cache named /local/cache1 that can use up to 80% of the disk blocks in the front file system and can grow to use 55% of the front file system blocks without restriction unless 60% (or more) of the front file system blocks are already in use.

# cfsadmin -c -o maxblocks=80,minblocks=55,threshblocks=60 /local/cache1

The following example creates a cache named /local/cache2 that can use up to 75% of the files available in the front file system.

# cfsadmin -c -o maxfiles=75 /local/cache2

The following example creates a cache named /local/cache3 that can use 75% of the blocks in the front file system, that can use 50% of the files in the front file system without restriction unless total file usage already exceeds 60%, and that has 70% of the files in the front file system as an absolute limit.

# cfsadmin -c -o maxblocks=75,minfiles=50,threshfiles=60,maxfiles=70 /local/cache3

Mounting a Cached File System

There are two ways to mount a file system in a cache:

  • Using the mount command

  • Creating an entry for the file system in the /etc/fstab file

Using mount to Mount a Cached File System

The following command mounts a file system in a cache.

mount -t cachefs -o backfstype=type,cachedir=cache_directory \
back_file_system mount_point

The arguments used with the -o option are described in “/etc/fstab File” on page 28. See the mount(1M) manual page for more information about the arguments used when mounting a cached file system.

For example, the following command makes the file system merlin:/docs available as a cached file system named /docs:

# mount -t cachefs -o backfstype=nfs,cachedir=/local/cache1 \
merlin:/docs /docs

Mounting a Cached File System That Is Already Mounted

Use the backpath argument when the file system you want to cache has already been mounted. backpath specifies the mount point of the mounted file system. When the backpath argument is used, the back file system must be read-only. If you want to write to the back file system, you must unmount it before mounting it as a cached file system.

For example, if the file system merlin:/doc is already NFS-mounted on /nfsdocs, you can cache that file system by giving that path name as the argument to backpath, as shown in the following example:

# mount -t cachefs -o backfstype=nfs,cachedir=/local/cache1,backpath=/nfsdocs \
merlin:/doc /doc


Note: There is no performance gain in caching a local EFS disk file system.


Mounting a CD-ROM as a Cached File System

So far, examples have illustrated back file systems that are NFS-mounted, and the device argument to the mount command has taken the form server:file_system. If the back file system is an ISO9660 file system, the device argument is the CD-ROM device in the /CDROM directory. The file system type is iso9660.

The following example illustrates caching an ISO9660 back file system on the device /CDROM as /doc in the cache /local/cache1:

# mount -t cachefs -o backfstype=iso9660,cachedir=/local/cache1,ro,\
backpath=/CDROM /CDROM /doc

Because you cannot write to the CD-ROM, the ro argument is specified to make the cached file system read-only. The arguments to the -o option are explained in “/etc/fstab and Other Mount Files” on page 16.

You must specify the backpath argument because the CD-ROM is automatically mounted when it is inserted. The mount point is in the /CDROM directory and is determined by the name of the CD-ROM. The special device to mount is the same as the value for the backpath argument.


Note: When a CD-ROM is changed, the CacheFS file system must be unmounted and remounted.


Creating an fstab Entry for Cached File Systems

As with other file system types, you can put an entry in the /etc/fstab file for a cached file system to mount the cached file system automatically every time the system boots. The /etc/fstab file has the following fields:

  • device to mount

  • mount point

  • file system type

  • mount options

  • dump frequency

  • fsck pass

Enter the special device name of the back file system as the device to mount. For NFS file systems, the entry takes the form server:path. The mount point is the mount point of the cached file system. The dump frequency and fsck pass fields should always be 0. The following example shows an entry for a cached file system (the lines beginning with hash marks (#) are comments):

#device        mount  FS       mount                               dump      fsck
#to mount      point  type     options                             frequency pass

svr1:/usr/abc  /docs  cachefs  rw,backfstype=nfs,cachedir=/cache1  0         0

Checking a Cached File System

The fsck_cachefs(1M) command checks the integrity of cached file systems. The CacheFS version of fsck automatically corrects problems without requiring user interaction.

To check a cached file system, type:

fsck_cachefs -o noclean cache_directory 

The following example forces a check of the cache directory /local/cache1:

# fsck_cachefs -o noclean /local/cache1

You should not need to run fsck manually for cached file systems; fsck is run automatically when the file system is mounted.

Two options are available for the CacheFS version of fsck: –m and –o noclean. The –m option causes fsck to check the specified file system without making any repairs. The –o noclean option forces a check of the file system. See the fsck_cachefs(1M) manual page for more information.
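
For example, to check the cache directory /local/cache1 without making repairs, you might give this command:

# fsck_cachefs -m /local/cache1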