Chapter 2. Planning ONC3/NFS Service

To plan the ONC3/NFS service for your environment, it is important to understand how ONC3/NFS processes work and how they can be configured. This chapter provides prerequisite information on ONC3/NFS processes and their configuration options. It also explains the conditions under which certain options are recommended.

This chapter contains these sections:

  • “The Export Process”

  • “The /etc/fstab Mount Process”

  • “The Automounter”

  • “The CacheFS File System”

The Export Process

Access to files on an NFS server is provided by means of the exportfs(1M) command. The exportfs command reads the file /etc/exports(4) for a list of file systems and directories to be exported from the server. Normally, exportfs is executed at system startup by the /etc/init.d/network script. It can also be executed by the superuser from a command line while the server is running. Exported file systems must be local to the server. A file system that is NFS-mounted from another server cannot be exported (see “Mount Restrictions” in Chapter 1 regarding multihop).

exportfs Command Options

The exportfs command has several options used to configure its operation. Four of these options are briefly described below. For more complete information on exportfs options, see the exportfs(1M) manual page.

–a 

(all) Export all resources listed in /etc/exports.

–i 

(ignore) Do not use the options set in the /etc/exports file.

–u 

(unexport) Terminate exporting designated resources.

–v 

(verbose) Display any output messages during execution.

Invoking exportfs without options reports the file systems that are currently exported.
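
For example, the following command sequence exports everything listed in /etc/exports, stops exporting one directory, and then reports what is currently exported; the directory name /reports is illustrative:

exportfs -av
exportfs -u /reports
exportfs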

/etc/exports and Other Export Files

Exporting starts when exportfs reads the file /etc/exports(4) for a list of file systems and directories to be exported from the server. As it executes, exportfs writes a list of file systems it successfully exported, and information on how they were exported, in the /etc/xtab(4) file. Anytime the /etc/exports file is changed, exportfs must be executed to update the /etc/xtab file. If an entry is not listed in /etc/xtab, it has not been exported, even if it is listed in /etc/exports.

In addition to the /etc/xtab file, the server maintains a record of the exported resources that are currently mounted and the names of clients that have mounted them. The record is maintained in a file called /etc/rmtab. Each time a client mounts a directory, an entry is added to the server's /etc/rmtab file. The entry is removed when the directory is unmounted. The information contained in the /etc/rmtab file can be viewed using the showmount(1M) command.
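
For example, the following commands display the client mounts recorded in the server's /etc/rmtab file and the server's current export list; the server name redwood is illustrative:

showmount -a redwood
showmount -e redwood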


Note: The information in /etc/rmtab may not be current, since clients can unmount file systems without informing the server.


/etc/exports Options

There are a number of export options for managing the export process. Some commonly used export options are briefly described below. For a complete explanation of options, see the exports(4) manual page.

ro 

(read only) Export this file system with read-only privileges.

rw 

(read, write) Export this file system with read and write privileges. rw is the default.

rw=  

(read mostly) Export this file system read-only to all clients except those listed.


Note: Directories are exported either ro or rw, not both ways. The option specified first is used.


anon= 

(anonymous UID) If a request comes from the user root (UID = 0), use the specified UID as the effective UID instead. By default, the effective UID is nobody (UID = –2). Specifying a UID of –1 disables access by unknown users or by root on a host not specified by the root option. Use the root option to permit accesses by root.

root= 

Give superuser privileges to root users of NFS-mounted directories on systems specified in the root access list. By default, root is set to none.

access= 

Grant mount privileges to a specified list of clients only. Clients can be listed individually or as an NIS netgroup (see netgroup(4)).

nohide 

(IRIX enhancement) By default, the contents of a child file system are hidden when only the parent file system is mounted. Allow access to this file system if its parent file system is mounted.

wsync 

(IRIX enhancement) Perform all write operations to disk before sending an acknowledgment to the client. Overrides delayed writes. (See “Input/Output Management” in Chapter 1 for details.)

When a file system or directory is exported without specifying options, the default options are rw and anon=nobody.

Sample /etc/exports File

A default version of the /etc/exports file is shipped with NFS software and stored in /etc/exports when NFS is installed. You must add your own entries to the default version as part of the NFS setup procedure (given in “Setting Up the NFS Server” in Chapter 4). This sample /etc/exports illustrates entries and how to structure them with various options:

/                      -ro
/reports               -access=finance,rw=susan
/usr                   -nohide
/usr/demos             -ro,access=client1:client2:client3
/usr/catman            -nohide

In this sample /etc/exports, the first entry exports the root directory (/) with read-only privileges. The second entry exports a separate file system, /reports, read-only to the netgroup finance, except for the client susan, which is granted read-write permission. The third entry exports /usr with the nohide option, so clients that mount the root directory (/) can also access the /usr file system without a separate mount.

The fourth entry uses the access list option. It specifies that client1, client2, and client3 are authorized to access /usr/demos with read-only privileges. To avoid possible problems, client1, client2, and client3 should be fully qualified domain names (as returned by hostname(1)).


Note: If you are using an access list to export to a client with multiple network interfaces, the /etc/exports file must contain all names associated with the client's interfaces. For example, a client named octopus with two interfaces needs two entries in the /etc/exports file, typically octopus and gate-octopus.

The fifth entry exports /usr/catman with the nohide option, so clients that mount /usr can also access /usr/catman without a separate mount. It is also an example of an open file system: it is exported to the entire world with read-write access to its contents (the default when neither ro nor rw is specified). Because anon is not specified, requests made by root are performed as the user nobody, so superuser activities on /usr/catman files have no special effect.

Export Restrictions

CacheFS file systems cannot be exported.

Recommendations for Exporting

Consider these suggestions for setting up exports on your NFS service:

  1. Use the ro option unless clients must write to files. This reduces accidental removal or changes to data.

  2. In secure installations, set anon to –1 to prevent root on any client, except those specified in the root option, from accessing the designated directory as root (see the sample entry after this list).

  3. Be cautious with your use of the root option.

  4. If you are using NIS, consider using netgroups for long access lists.

  5. Use nohide to export related but separate file systems to minimize the number of mounts clients must perform.

  6. Use wsync when minimizing risk to data is more important than optimizing performance.
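
For example, an /etc/exports entry that follows several of these recommendations might look like the following sketch, in which the directory /projects and the netgroup engineering are hypothetical:

/projects              -ro,anon=-1,access=engineering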

The /etc/fstab Mount Process

An NFS client mounts directories at startup via /etc/fstab entries, or by executing the mount(1M) command. The mount command can be executed during the client's boot sequence, from a command line entry, or graphically, using the System Manager tool. The mount command supports the NFS3 protocol if that protocol is also running on the server.

Mounts must reference directories that are exported by a network server and mount points that exist on the client. Directories that serve as mount points may or may not be empty. If using the System Manager for NFS mounting, the mount points must be empty. If the directory is not empty, the original contents are hidden and inaccessible while the NFS resources remain mounted.

mount and umount Command Options

The mount and umount(1M) commands have many options for customizing mounting and unmounting that can apply to either EFS or NFS file systems. Several commonly used options are briefly described below in their NFS context, followed by a sample command line (see mount(1M) for full details).

–t type 

(type) Set the type of directories to be mounted or unmounted. type is nfs for NFS mounting, nfs3 for the new NFS3 protocol, and nfs3pref for mounts that attempt the NFS3 protocol but fall back to nfs if the attempt fails. To mount NFS3, the server must support NFS3.

–a 

(all) Attempt to mount all directories listed in /etc/fstab, or unmount all directories listed in /etc/mtab.

–h hostname 

(host) Attempt to mount all directories listed in /etc/fstab that are remote-mounted from the server hostname, or unmount directories listed in /etc/mtab that are remote-mounted from server hostname.

-b list 

(all but) Attempt to mount or unmount all file systems listed in /etc/fstab except those associated with the directories in list. list contains one or more comma-separated directory names.

–o options 

(options) Use the options options, instead of the options in /etc/fstab.
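
For example, the following commands are a sketch of mounting an NFS directory by hand with explicit options and later unmounting it; the server name redwood, the exported directory /usr/demos, and the mount point /n/demos are illustrative:

mount -t nfs -o ro,hard,intr redwood:/usr/demos /n/demos
umount /n/demos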

/etc/fstab and Other Mount Files

Mounting typically occurs when the mount command reads the /etc/fstab file. Each NFS entry in /etc/fstab contains up to six fields. An NFS entry has this format:

file_system directory type options frequency pass

where:

file_system 

is the remote resource to be mounted, given as server:directory.

directory 

is the mount point on the client where the directory is attached.

type 

is the file system type. It can be nfs for NFS resources, nfs3 for the NFS3 protocol, or nfs3pref for mounts that try nfs3 but fall back to nfs if the mount fails.

options 

is the list of mount options (see “/etc/fstab Options” in this chapter).

frequency 

is always set to zero (0) for NFS and CacheFS entries.

pass 

is always set to zero (0) for NFS and CacheFS entries.

The mount command maintains a list of successfully mounted directories in the file /etc/mtab. When mount successfully completes a task, it automatically updates the /etc/mtab file. It removes the /etc/mtab entry when the directory is unmounted. The contents of the /etc/mtab file can be viewed using the mount command without any options. See the mount(1M) manual page for more details.

/etc/fstab Options

There are several options for configuring mounts. When you use these options, it is important to understand that export options (specified on a server) override mount options. NFS /etc/fstab options are briefly described below (see the fstab(4) manual page for complete information):

ro 

Read-only permissions are set for files in this directory.

rw 

Read-write permissions are set for files in this directory (default).

hard 

Specifies how the client should handle access attempts if the server fails. If the NFS server fails while a directory is hard-mounted, the client keeps trying to complete the current NFS operation until the server responds (default).

soft 

Alternative to hard mounting. If the NFS server fails while a directory is soft-mounted, the client attempts a limited number of tries to complete the current NFS operation before returning an error.

intr 

(interrupt) Allows NFS operations to be interrupted by users. The default setting is off.

bg 

(background) Mounting is performed as a background task if the first attempt fails. The default setting is off.

fg 

(foreground) Mounting is performed as a foreground task. The default setting is on.

private 

(IRIX enhancement) Uses local file and record locking instead of a remote lock manager and minimizes delayed write flushing. Diskless clients are the primary users of this option.

rsize 

(read size) Changes the read buffer to the size specified (default is 8K).

wsize 

(write size) Changes the write buffer to the size specified (default is 8K).

timeo 

(NFS timeout) Sets a new timeout limit, in tenths of a second (default is 11, that is, 1.1 seconds).

retrans 

(retransmit) Sets the number of times NFS operations are retried (default is 5).

port 

Specifies an alternative UDP port number for NFS on the server (default port number is 2049).

noauto 

Tells mount –a to ignore this /etc/fstab entry.

grpid 

Allows files created in a file system to have the parent directory's group ID, not the process' group ID.

nosuid 

Turns setuid execution off for nonsuperusers (default is off).

nodev 

Disallows access to character and block special files (default is off).

In addition to these options, /etc/fstab also offers several options dedicated to attribute caching. Using these options, you can direct NFS to cache file attributes, such as size and ownership, to avoid unnecessary network activity. See the fstab(4) manual page for more details.

Sample /etc/fstab File

NFS entries in /etc/fstab are designated by the nfs identifier, while EFS (local file systems) entries are designated by efs. This sample /etc/fstab file includes a typical NFS entry:

/dev/root             /           efs rw,raw=/dev/rroot 0 0
/dev/usr              /usr        efs rw,raw=/dev/rusr 0 0
redwood:/usr/demos    /n/demos    nfs ro,hard,intr,bg 0 0

In this example, the NFS directory /usr/demos on server redwood is mounted at mount point /n/demos on the client system with read-only (ro) permissions (see Figure 1-2). If the server fails after the mount has taken place, the client attempts to complete any current NFS transactions indefinitely (hard) or until it receives an interrupt (intr). Mounting executes as a background task (bg) if the first attempt does not succeed.

Recommendations for /etc/fstab Mounting

Some recommendations for /etc/fstab mounting are:

  1. Use conventional mounting for clients that are inoperable without NFS directories (such as diskless workstations) and for directories that need to be mounted most of the time.

  2. If directories are mounted with the rw (read-write) option or if they contain executable files, they should be mounted with the hard option. Hard mounting offers more certainty that processing will complete if the server temporarily fails.

  3. The intr option is recommended when using a hard mount. It allows the user to break retransmission attempts if the server becomes unavailable for an extended period of time.

  4. The bg option should always be specified to expedite the boot process if a server is unavailable when the client is booting. Without bg, the client can hang during booting until the server comes back up.

  5. If you use nohide when exporting file systems on the server, the client can mount the top-most directory in the exported file system hierarchy. This gives access to all related file systems while reducing individual mount calls and the complexity of the /etc/fstab file.


    Note: A severe performance problem occurs if the nohide option is used when exporting an NFS back file system for a CacheFS mount. The nohide option creates duplicate node IDs with different file handles, causing CacheFS to remove files from the cache sooner than normal. Either avoid using the nohide option for NFS file systems that are used as the back file system or map CacheFS mounts to the back file system on the server one-to-one.


  6. Use private when the NFS directory on the server is not shared between multiple NFS clients.

  7. Do not put NFS mount points in the root (/) directory of a client. Mount points in the root directory can slow the performance of the client and can cause the client to be unusable when the server is unavailable.

The Automounter

The automount utility dynamically mounts NFS directories on a client when a user references the directory. The automount command can be set up to execute when a client is booted, or the superuser can execute it from a command line while the client is running. The automounter supports the NFS3 file system type.

To start the automounter at boot time, the automount flag must be set to on (see the chkconfig(1M) manual page for details). If the flag is on, the automounter is invoked by the /etc/init.d/network script and started with any automount options specified in the /etc/config/automount.options file.
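
For example, the following commands (a sketch; see chkconfig(1M)) turn the flag on and then list all configuration flags so you can verify the setting:

chkconfig automount on
chkconfig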

automount Command Options

The automount command offers many options that allow you to configure its operation (for a complete description, see the automount(1M) manual page). Some commonly used options are described below, followed by a sample command line:

–D 

Assign a value to an environment variable.

–f 

Read the specified local master file before the NIS master map.

–m 

Do not read the NIS master map.

–M 

Use the specified directory as the automount mount point.

–n 

Disable dynamic mounts.

–T 

Trace and display each NFS call.

–tl 

Maintain the mount for a specified duration of client inactivity (default duration is 5 minutes).

–tm 

Wait a specified interval between mount attempts (default interval is 30 seconds).

–tp 

Hold information about server availability in a cache for a specified time (default interval is 5 seconds).

–tw 

Wait a specified interval between attempts to unmount file systems that have exceeded cache time (default interval is 60 seconds).

–v 

Display any output messages during execution.

automount Files and Maps

Just as the conventional mount process reads /etc/fstab and writes to /etc/mtab, automount can be set up to read input files for mounting information. automount also records its mounts in the /etc/mtab file and removes /etc/mtab entries when it unmounts directories.

By default, when automount executes at boot time, it reads the /etc/config/automount.options file for initial operating parameters. The /etc/config/automount.options file can contain the complete information needed by the automounter, or it can direct automount to a set of map files that contain customized automounting instructions. /etc/config/automount.options cannot contain comments.

The default version of /etc/config/automount.options is:

-v /hosts -hosts -intr,nosuid,nodev

This /etc/config/automount.options file directs automount to execute with the verbose (–v) option. It also specifies that automount should use /hosts as its daemon mount point. When a user accesses a file or directory under /hosts, the –hosts argument directs automount to use the pathname component that follows /hosts as the name of the NFS server. All accessible file systems exported by that server are mounted to the default mount point /tmp_mnt/hosts with the intr, nosuid, and nodev options.

For example, suppose the system redwood has the following entry in /etc/exports:

/usr/share/catman	-ro,nohide

If a client system is using the default /etc/config/automount.options file, as above, then executing the following command on the client lists the contents of the directory /usr/share/catman on redwood:

ls -l /hosts/redwood/usr/share/catman/*

automount Mount Points

Mount points for automount serve the same function as mount points in conventional NFS mounting. They are the access point in the client's file system where a remote NFS directory is attached. There are two major differences between automount mount points and conventional NFS mount points.

With automount, mount points are automatically created and removed as needed by the automount program. When the automount program is started, it reads configuration information from /etc/config/automount.options, additional automount maps, or both, and creates all mount points needed to support the specified configuration.

By default, automount mounts everything in the directory /tmp_mnt and creates a link between the mounted directory in /tmp_mnt and the accessed directory. For example, in the default configuration, mounts take place under /tmp_mnt/hosts/hostname. The automounter creates a link from the access point /hosts/hostname to the actual mount point under /tmp_mnt/hosts/hostname. Thus the command ls /hosts/redwood/tmp displays the contents of server redwood's /tmp directory. You can change the default root mount point with the automount –M option.

automount Map Types

The automount feature uses three kinds of maps:

  • master maps

  • direct maps

  • indirect maps

Master Maps

The master map is the first file read by the automount program. There is only one master map on a client. It specifies the types of supported maps, the name of each map to be used, and options that apply to the entire map (if any). By convention, the master map is called /etc/auto.master, but the name can be changed.

For complex automount configurations, a master map can be specified in the /etc/config/automount.options file.

The master map can be a local file or an NIS database file. It contains three fields: mount point, map name and map options. A crosshatch (#) at the beginning of a line indicates a comment line. A sample of master map entries is:

#Mount Point    Map Name               Map Options
/hosts          -hosts                 -intr,nosuid,nodev
/net            /etc/auto.irix.misc    -intr,nosuid
/home           /etc/auto.home         -intr,timeo=10
/-              /etc/auto.direct       -ro,intr
/net            /etc/indirect3         -ro,nfs3

The mount point field serves two purposes. It determines whether a map is a direct or indirect map, and it provides mount point information. The notation /– in the mount point field designates a direct map. It signals automount to use the mount points specified in the direct map for mounting this map. For example, to mount the fourth entry in the sample above, automount gets a mount point specification from the direct map /etc/auto.direct. In the fifth entry, an entire indirect map, which includes all its entries, is declared to use the NFS3 protocol. If NFS3 is not available on the server, the mount fails.

A directory name in the mount point field designates an indirect map. It specifies the mount point automount should use when mounting this map. For example, the second entry in the sample above tells automount to mount the indirect map /etc/auto.irix.misc at mount point /net. A mount point for direct and indirect maps can be several directory levels deep.

The map name field in a master map specifies the full name and location of the map. Notice that –hosts is considered an indirect map whose mount point is /hosts. The –hosts map mounts all the exported file systems from a server. If frequent access to just a single file system is required for a server with many exports, it is more efficient to access that file system with a map entry that mounts just that file system.

The map options field can be used to specify any options that should apply to the entire map. Options set in a master map can be overridden by options set for a particular entry within a map.

Direct Maps

Direct maps allow mounted directories to be distributed throughout a client's local file system. They contain the information automount needs to determine when, what, and how to mount a remote NFS directory. You can have as many direct maps as needed.

A direct map is typically called /etc/auto.mapname, where mapname is some logical name that reflects the map's contents. Direct maps can also be grouped based on logical characteristics. For instance, in the above master map example, the direct map /etc/auto.direct, indicated by the /– mount point, can also include mounting information for software to be mounted as read-only.

All direct maps contain three fields: directory, options, and location. An example of an /etc/auto.direct direct map is:

#Directory          Options    Location
/usr/local/tools    -nodev     ivy:/usr/cooltools
/usr/frame                     redwood:/usr/frame
/usr/games          -nosuid    peach:/usr/games 

In a direct map, users access the NFS directory with the pathname that is identical to the directory field value in the direct map. For example, a user gives the command cd /usr/local/tools to mount /usr/cooltools from server ivy as specified in the direct map /etc/auto.direct. Notice that the directory field in a direct map can include several subdirectory levels.

The options field can be used to set options for an entry in the direct map. Options set within a map for an individual entry override the general option set for the entire map in the master map. The location field contains the NFS server's name and the remote directory to mount.


Note: When direct map mount points are mounted into routinely accessed directories, unexpected mount activity can occur.


Indirect Maps

Indirect maps allow remotely mounted directories to be housed under a specified shared top-level location on the client's file system. They contain the specific information the automount program needs to determine when, what, and how to NFS mount a remote directory. You can have as many indirect maps as needed.

An indirect map is typically called /etc/auto.mapname, where mapname is some logical name that reflects the map's contents. Indirect maps can be grouped according to logical characteristics. For example, in the master map above, the indirect map /etc/auto.home, indicated by the mount point /home, can include mounting information for all home directories on various servers.

Indirect maps contain three fields: directory, options, and location. Entries might look something like this for the /etc/auto.home indirect map:

#Directory    Options     Location
willow        -intr       willow:/usr/people
pine          -nosuid     pine:/usr/people
ivy           -ro,intr    ivy:/usr/people
jinx          -ro,nfs3    jinx:/usr

With an indirect map, user access to an NFS directory is always relative to the mount point specified in the master map entry for the indirect map. That is, the directory is the concatenation of the mount point field in the master map and the directory field in the indirect map. For example, given our sample /etc/auto.master and indirect map /etc/auto.home, a user gives the command cd /home/willow to access the NFS directory willow:/usr/people.

If a user changes the current working directory to the /home directory and tries to list its contents, the directory appears empty unless a subdirectory of /home, such as /home/willow, was previously accessed, thereby mounting /home subdirectories. Access to the mount point of an indirect map shows information only for mounts currently in effect; unlike access to a direct map mount point, it does not itself trigger mounts. Users must access a subdirectory to trigger a mount.

The directory field in an indirect map is limited to one subdirectory level. Additional subdirectory levels for indirect maps must be indicated in the mount point field in the master map or on the command line.

The options field can be used to set options for an entry in the indirect map. For example, the fourth entry attempts to mount /usr using the NFS3 protocol; all other entries in the map are unaffected. Options set within a map for an entry override the general options set for the entire map in the master map. The location field contains the NFS server's name and the remote directory to mount.

Recommendations for Automounting

Some recommendations for automounting are:

  1. Use the automounter when the overhead of a mount operation is not important, when a file system is used more often than the automount time limit (5 minutes by default, specified by the –tl option), or when file systems are used infrequently. Although directories that are used infrequently do not consume local or remote resources, they can slow down applications that report on file systems, such as df(1).

  2. The default configuration in /etc/config/automount.options is usually sufficient because it allows access to all systems. It performs the minimal number of mounts necessary when it is used in conjunction with the nohide export option on the server.

  3. Use indirect maps whenever possible. Direct maps create more /etc/mtab entries, which means more mounts are performed, so system overhead is increased. With indirect maps, mounts occur when a process references a subdirectory of the daemon or map mount point. With direct maps, when a process reads a directory containing one or more direct mount points, all of the file systems are mounted at the mount points. This can result in a flurry of unintended mounting activity when direct mount points are used in well-traveled directories.

  4. Try not to mount direct map mount points into routinely accessed directories. This can cause unexpected mount activity and slow down system performance.

  5. Use a direct rather than an indirect map when directories cannot be grouped, but must be distributed throughout the local file system.

  6. Plan and test maps on a small group of clients before using them for a larger group. Some changes to the automount environment require that systems be rebooted (see Chapter 5, “Maintaining ONC3/NFS” for details on changing the map environment).

The CacheFS File System

CacheFS is optimally used on an NFS client that has sufficient local disk space to reduce network data access time. Once the data has been cached, file read and read-only directory operations are as fast as those on a local disk (EFS file systems). Write performance, however, is closer to an NFS write operation.

The original file system (which is typically NFS) is called the back file system and files in it are back files. The cached file system resides on the local disk and files in it are cached files. The cache directory is a directory on the local disk where the data for the cached file system is stored. The file system in which the cache directory resides is called the front file system and its files are front files.

Planning and setting up a CacheFS configuration is similar to that of an NFS client-server configuration.

Command and File Options for CacheFS

CacheFS-specific options have been added to the conventional mount command and /etc/fstab file and are described in this section. For the complete description of these commands and files, refer to “The /etc/fstab Mount Process”. The cfsadmin(1M) and fsck_cache(1M) commands are new with CacheFS.

mount and umount Commands

When mounting and unmounting a CacheFS file system, the following option is used for CacheFS. For descriptions of the other options, see “mount and umount Command Options”.

–t type 

(type) Set the type of directories to be mounted or unmounted. type is cachefs for all CacheFS mounting.

fsck_cachefs Command

The CacheFS version of fsck(1M) checks the integrity of a cache directory. By default, it corrects any problems it may find. It is automatically invoked when a CacheFS file system is mounted. The syntax for fsck_cachefs is:

fsck_cachefs [ -m | -o noclean] cache_directory

The two command line options are described below, followed by a short example:

-m 

Check, but do not repair the file system.

-o noclean 

Force a check on the cache directory, even if there is no reason to suspect an integrity problem.
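
For example, the following commands (the cache directory /cache is hypothetical) first report on the cache without repairing it, and then force a full check:

fsck_cachefs -m /cache
fsck_cachefs -o noclean /cache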

/etc/fstab File

The /etc/fstab file has several new options that are used with CacheFS for mounting, unmounting, and consistency checking.

Any mount options not recognized by CacheFS are passed to the back file system mount if one is performed.


Note: Any mount points that share the same cache directory must use the same settings for the following options: write-around, non-shared, noconst, and purge.

The options that are new for CacheFS are described below; a sample /etc/fstab entry follows the list:

backfstype=file_system_type 


Specifies the back file system type (for example, nfs). Any file system type may be used except proc, fd, and swap. The backfstype argument must be specified.

backpath=path 

Specifies the path where the back file system is already mounted. If this argument is not specified, CacheFS determines a mount point for the back file system.

cachedir=directory 


Specifies the name of the cache directory. It must be an existing directory, previously created with cfsadmin(1M).

cacheid=ID 

Allows you to assign a string to identify each separate cached file system. If you do not specify a cacheid, CacheFS generates one. You need the cacheid when you delete a cached file system with cfsadmin –d. A cacheid you choose is easier to remember than one automatically generated. The cfsadmin command with the –l option includes the cacheid in its display.

write-around | non-shared 


Determines the write modes for CacheFS. In the default write-around mode, as writes are made to the back file system, the affected file is purged from the cache.

The non-shared mode can be used when only one source is writing to the cached file system. In this mode, all writes are made to both the front and back file systems, and the file remains in the cache.

noconst 

Disables consistency checking between the front and back file systems. Use noconst when the back file system and cache file system are read-only. Otherwise, always allow consistency checking. The default is to enable consistency checking.

If none of the files in the back file system will be modified, you can use the noconst option when mounting the cached file system. Changes to the back file system may not be reflected in the cached file system.

local-access 

Improves performance by having CacheFS check the mode bits. By default, the back file system interprets the mode bits used for access checking to ensure data integrity.

purge 

Remove any cached information for the specified file system.

suid | nosuid 

Allow set-uid (default) or do not allow set-uid.
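
For example, a CacheFS entry in /etc/fstab might look like the following sketch, in which the server redwood, the exported directory /usr/dist, the mount point /n/dist, and the cache directory /cache are all illustrative; the cache directory must already have been created with cfsadmin:

redwood:/usr/dist     /n/dist     cachefs backfstype=nfs,cachedir=/cache,ro 0 0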

Consistency Checking mount Options for fstab

To ensure that the cached directories and files are kept up to date, CacheFS periodically checks consistency of files stored in the cache. To check consistency, CacheFS compares the current modification time to the previous modification time; if the modification times are different, all data and attributes for the directory or file are purged from the cache and new data and attributes are retrieved from the back file system.

When an operation on a directory or file is requested, CacheFS checks to see if it is time to verify consistency. If so, CacheFS obtains the modification time from the back file system and performs the comparison. If the write mode is write-around, CacheFS checks on every operation.

Table 2-1 provides more information on mount consistency checking parameters. An example mount command that uses these arguments follows the table.

Table 2-1. Consistency Checking Arguments for the -o mount Option

Parameter

Description

acdirmin=n

Specifies that cached attributes are held for at least n seconds after a directory update. After n seconds, if the directory modification time on the back file system has changed, all information about the directory is purged and new data is retrieved from the back file system. The default for n is 30 seconds.

acdirmax=n

Specifies that cached attributes are held for no more than n seconds after a directory update. After n seconds, the directory is purged from the cache and new data is retrieved from the back file system. The default for n is 30 seconds.

acregmin=n

Specifies that cached attributes are held for at least n seconds after file modification. After n seconds, if the file modification time on the back file system has changed, all information about the file is purged and new data is retrieved from the back file system. The default for n is 30 seconds.

acregmax=n

Specifies that cached attributes are held for no more than n seconds after a file modification. After n seconds, all file information is purged from the cache. The default for n is 30 seconds.

actimeo=n

Sets acregmin, acregmax, acdirmin, and acdirmax to n.
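
For example, a command-line mount might relax the consistency checking intervals for a back file system that changes infrequently. All names and values in this sketch are illustrative:

mount -t cachefs -o backfstype=nfs,cachedir=/cache,acregmin=60,acregmax=300,ro redwood:/usr/dist /n/dist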


cfsadmin Command

The cfsadmin(1M) command is used to administer the cached file system on the local system. It is used to

  • create a cached file system

  • list the contents and statistics about the cache

  • delete the cached file system

  • modify the resource parameters when the file system is unmounted

The cfsadmin command works on a cache directory, which is the directory where the cache is actually stored. A pathname in the front file system identifies the cache directory.

The syntax for the cfsadmin command is:

cfsadmin -c [ -o cachefs_parameters ] cache_directory
cfsadmin -d [ cache_ID | all ] cache_directory
cfsadmin -l cache_directory
cfsadmin -u [ -o cachefs_parameters ] cache_directory

The options and their parameters are described below; a sample command sequence follows the list:

-c 

Create a cache under the directory specified by cache_directory. This directory must not exist prior to cache creation.

-d 

Delete the file system and remove the resources of the cache_ID that you specify or all file systems in the cache if you specify all.


Note: You must run fsck_cachefs(1M) after deleting a file system to correct the resource counts for the cache.


-l 

List the file systems that are stored in the specified cache directory. A listing provides the cache_ID, and statistics about resource utilization and cache resource parameters.

-u 

Update the resource parameters of the specified cache directory. The parameter values (specified with the -o option) can only be increased; to decrease the values, you must remove the cache, then re-create it. All file systems in the cache must be unmounted when you use this option. Changes take effect the next time you mount the file system in the cache directory.

Using the -u option without the -o option resets all parameters to their default values.

cache_ID 

Specifies an identifying name for the file system that is cached. If you do not specify an ID, CacheFS assigns a unique identifier.

-o options 

Specifies the CacheFS resource parameters. Multiple resource parameters must be separated by commas. The following section describes the cache resource parameters.
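
For example, the following sequence is a sketch of creating a cache limited to part of the front file system, listing its contents, and then deleting all cached file systems. The cache directory /cache must not already exist, and the parameter values shown are illustrative:

cfsadmin -c -o maxblocks=60,maxfiles=60 /cache
cfsadmin -l /cache
cfsadmin -d all /cache
fsck_cachefs /cache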

Cache Resource Parameters

The default values for the cache parameters are for a cache that uses the entire front file system for caching. To limit the cache to only a portion of the front file system, you should change the parameter values.

Table 2-2 shows the parameters for space and file allocation.

Table 2-2. CacheFS Parameters

Parameters for Space Allocation     Parameters for File Allocation
maxblocks                           maxfiles
minblocks                           minfiles
threshblocks                        threshfiles

Table 2-3 shows the default values for the cache parameters. The default values devote the full resources of the front file system to caching.

Table 2-3. Default Values of Cache Parameters

Cache Parameters      Default Value
maxblocks             90%
minblocks             0%
threshblocks          85%
maxfiles              90%
minfiles              0%
threshfiles           85%

The maxblocks parameter sets the maximum number of blocks, expressed as a percentage, that CacheFS is allowed to claim within the front file system. The maxfiles parameter sets the maximum percentage of available inodes (number of files) CacheFS can claim.


Note: The maxblocks and maxfiles parameters do not guarantee the resources will be available for CacheFS—they set maximums. If you allow the front file system to be used for purposes other than CacheFS, there may be fewer blocks or files available to CacheFS than you intend.

The minblocks parameter does not guarantee availability of a minimum level of resources. The minblocks and threshblocks parameters work together. CacheFS can claim more than the percentage of blocks specified by minblocks only if the percentage of available blocks in the front file system is greater than threshblocks. The minfiles and threshfiles parameters work together in the same fashion.

The threshfiles and threshblocks values apply to the entire front file system, not file systems you have cached under the front file system. The threshblocks and threshfiles values are ignored until the minblocks and minfiles values have been reached.


Note: Using the whole front file system solely for caching eliminates the need to change the maxblocks, maxfiles, minblocks, minfiles, threshblocks, or threshfiles parameter.

When the minimum, maximum, and threshold values are identical, CacheFS allows the cache to grow to the maximum size specified—if you have not reduced available resources by using part of the front file system for other storage purposes.

CacheFS Tunable Parameters

The CacheFS tunable parameters are used to fine tune the performance of CacheFS file opens and reads. The CacheFS tunable parameters are contained in the file /var/sysgen/mtune/cachefs. They can be modified with the systune(1M) command.

There are three tunable parameters for CacheFS. Their descriptions are listed in Table 2-4.

Table 2-4. CacheFS Tunable Parameters

Parameter

Description

cachefs_max_lru

Controls the maximum number of files held open for all mounted CacheFS file systems in anticipation of future use. Holding files open reduces the overhead of opening and closing and is most noticeable for intensive open/close operations. Performance improves as the value is increased, but the system becomes vulnerable to system crashes and the time for unmounting a CacheFS file system increases.

cachefs_readahead

Controls the number of readaheads performed on any given read from a file.

cachefs_max_threads

Controls the maximum number of asynchronous I/O daemons allowed to run for each CacheFS file system.

The parameters' maximum, minimum, and default values are listed in Table 2-5.

Table 2-5. CacheFS Tunable Parameter Values

Parameter               Default Value    Minimum Value    Maximum Value
cachefs_max_lru         1000             0                10000
cachefs_readahead       1                0                10
cachefs_max_threads     5                1                10