Chapter 13. ClearCase Performance Tuning

This chapter presents techniques for improving ClearCase performance. There are techniques for addressing performance issues at the host level, at the VOB level, and at the view level.

Improving VOB Host Performance

Your organization's VOBs constitute a central data repository. Good VOB host performance ensures that the centralized resource does not become a bottleneck.

Although a VOB appears to be a version-smart file server, its implementation involves significant database access and computation. VOB usage patterns can greatly influence how many concurrent users will experience good ClearCase performance. For example, many more users can read header files from a VOB directory at a level of good performance than can produce derived objects in a similar directory.

Eliminate Extraneous Processes

The most effective measures for ensuring good performance from VOB hosts are also the easiest to implement (technically, if not organizationally):

  • Keep non-ClearCase processes off the VOB host—Don't have the VOB host also serve as a server host for another application (for example, a DBMS), or at the system-level (for example, as an NIS server).

  • Keep ClearCase client processes off the VOB host—Make sure that no one is performing clearmake builds on any VOB host. Similarly, make sure no one is using other client tools: cleartool, xclearcase, xcleardiff, and so on.

  • Keep view_server processes off the VOB host—This recommendation may be harder to implement; many organizations create shared views on the same hosts as VOBs. If possible, minimize this double-usage of VOB hosts.

    Exception: For reliable non-ClearCase access (avoiding “multihop” network access paths), place the VOB and the view through which it is exported on the same host. For more information, see “Setting Up an Export View for Non-ClearCase Access” and the exports_ccase manual page.

Manipulate Block Buffer Caches

All the UNIX-based operating systems supported by ClearCase have a dynamic block buffer cache feature. As much main memory as possible is used to cache blocks of data files that have been updated by user processes. Periodically, the contents of the block buffer cache are flushed to disk.

This feature speeds up disk I/O significantly; making full use of it is a very important factor in good VOB host performance. An inadequate block buffer cache causes thrashing of VOB database files (the files in the db subdirectories of VOB storage directories). The result is significant performance degradation, evidenced by:

  • extended periods required for scrubber and vob_scrubber execution

  • very slow clearmake builds

  • ClearCase clients getting RPC timeouts

We recommend that the size of a VOB host's block buffer cache average about 200% of the size of the host's largest VOB database file; the minimum acceptable size is about 50%. You cannot directly control the size of the block buffer cache; its size increases automatically when you add more main memory to the host.
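Applying the 200% guideline means finding the largest database file under any VOB's db subdirectory and doubling its size. The sketch below illustrates the arithmetic; the storage path and database file are hypothetical stand-ins created for the example, not real VOB data.

```shell
# Hypothetical stand-in for a real VOB storage area.
vobstore=/tmp/demo_vobstore
mkdir -p "$vobstore/src.vbs/db"
# Create a sample "database" file for illustration (64Kb).
dd if=/dev/zero of="$vobstore/src.vbs/db/vista.db" bs=1024 count=64 2>/dev/null
# Find the largest file under any db subdirectory, then apply the
# 200% recommendation.
largest=$(find "$vobstore" -path '*/db/*' -type f -exec ls -l {} \; |
          awk '$5 > max { max = $5 } END { print max }')
echo "largest VOB database file: $largest bytes"
echo "recommended cache size:    $((largest * 2)) bytes"
```

On a real VOB host, you would point the find command at the actual VOB storage directories instead of the demonstration path.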

If there is a substantial amount of non-ClearCase activity and/or ClearCase client activity on the host, you will need even more main memory to assure good VOB database performance.

Block Buffer Cache Statistics

The standard UNIX System V sar(1M) utility reports block buffer cache activity. For example, this command reports activity over a 5-minute period, with a cumulative sample taken every 60 seconds:

% sar -b 60 5
12:14:22 bread/s lread/s  %rcache bwrit/s lwrit/s  %wcache pread/s pwrit/s
12:15:22       0       1      100       1       1        0       0       0
12:16:23       1       1        0       2       2        0       0       0
12:17:24       0       4      100       4      17       77       0       0
12:18:25       0       6      100       3     145       98       0       0
12:19:25      17      91       81      28     335       92       0       0

12:19:25 bread/s lread/s  %rcache bwrit/s lwrit/s  %wcache pread/s pwrit/s
Average        4      21       83       8     100       92       0       0
   (cache-reads should be in the 90%–95% range)
   (cache-writes should be 75% or above)
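You can screen sar -b output for samples that fall below these thresholds automatically. A sketch using awk; the sample data is embedded here for illustration, but in practice you would pipe live `sar -b 60 5` output through the same filter:

```shell
# Flag sar -b samples whose %rcache falls below 90 or whose
# %wcache falls below 75 (fields 4 and 7 of each sample line).
sar_output='12:17:24 0 4 100 4 17 77 0 0
12:18:25 0 6 100 3 145 98 0 0
12:19:25 17 91 81 28 335 92 0 0'
echo "$sar_output" | awk '
    NF == 9 {
        if ($4 < 90) print $1 ": %rcache low (" $4 ")"
        if ($7 < 75) print $1 ": %wcache low (" $7 ")"
    }'
```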

Some UNIX variants provide special tools for monitoring buffer cache performance. For example, IRIX has osview; HP-UX has glance.

Flushing of the Block Buffer Cache

Interactive performance suffers considerably when the block buffer cache is flushed to disk. Most UNIX variants provide no user-level control over the frequency of flushing; HP-UX does, through the syncer(1M) utility. The larger the block buffer cache, the less frequently it should be flushed.

Improving Client Host Performance

Performance of a ClearCase client host can be adjusted at the client program level, at the view_server, and/or at the MVFS level.

Increasing System Resources

Client workstations supporting a single user should have a minimum of 10–15 MIPS processing power, 16Mb of main memory, and 300Mb of disk storage. An additional 8–16Mb of main memory will further improve performance. Extra memory is especially recommended for users who run memory-intensive applications in the ClearCase environment, make extensive use of graphical user interfaces, or want their client workstations to serve double-duty as hosts for parallel distributed building.

Creating Remote Storage Pools

The ClearCase default is to store all of a VOB's file system data in the default storage pools created by mkvob. These pools are located within the VOB storage directory. If a VOB host becomes I/O-bound, it is probably due to high storage pool traffic, caused by either “too many users” or “too many files”.

You can supplement (or replace) the default pools with remote storage pools, which effectively enable a VOB to outgrow its storage directory's disk partition. Remote pools need not be located on ClearCase hosts; they need only be accessible through NFS.

In some situations, remote storage pools can improve performance, as well:

  • If a particular view is being used heavily (perhaps by a group performing integration work), build performance may improve if the cleartext and derived object storage pools involved in the builds are located on the same host as the view storage directory.

  • Faster access to any storage pool may be achieved if it is located on a server host with a very fast file system.
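Mechanically, a remote pool is a pool directory that resides on another file system, reached through a symbolic link from the VOB storage directory. The sketch below illustrates only that layout, using plain directories and hypothetical paths; it involves no ClearCase commands, and the actual procedure is given in “Creating Additional VOB Storage Pools”.

```shell
# Illustration only: how a remote pool attaches to the VOB storage
# directory. Both paths are hypothetical stand-ins, not real VOB or
# NFS locations.
vob=/tmp/demo_vob.vbs
remote=/tmp/demo_remote_pools
mkdir -p "$vob/c" "$remote/cdft"      # pool lives on the "remote" file system
ln -s "$remote/cdft" "$vob/c/cdft"    # VOB storage directory points at it
ls -ld "$vob/c/cdft"
```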

Caution on Remote Source Pools

We recommend that you keep source pools local, within the VOB storage directory. This strategy optimizes data integrity—a single disk partition will contain all of the VOB's essential data. It will also simplify backup/restore procedures. This concern typically overrides performance considerations, since losing a source pool means that developers must recreate the lost versions.

If source pool access produces a significant processor or I/O bottleneck, you might temporarily move some elements into source pools on different hosts.

See “Creating Additional VOB Storage Pools” for a step-by-step procedure.

Changing the MVFS Configuration (SunOS Only)

This section describes procedures for reconfiguring the ClearCase multiversion file system (MVFS) on hosts running SunOS 4 or SunOS 5 (Solaris). By default, the MVFS is dynamically loaded at system startup with the following configuration:

  • MVFS-internal identifiers (mnodes) cached for up to 4096 MVFS objects

  • up to 900 unused mnode numbers cached

  • UNIX-internal identifiers (vnodes) cached for up to 100-400 cleartext files, depending on the system-wide maximum number of users (MAXUSERS kernel configuration parameter)

  • up to 1400 names of MVFS objects cached

You may wish to change the MVFS cache sizes to improve performance if your host performs builds that involve a large number of files, as indicated in Table 13-1.

Table 13-1. Selecting the Default or Alternative MVFS Cache Configuration

    Files Used in Typical Build    Recommended MVFS Cache Configuration
    < 400                          default
    > 400                          “largeinit” alternative

    (Choose between the configurations based also on the host's main
    memory and its MAXUSERS value.)

Note: Enlarging the MVFS caches reduces the amount of memory available to UNIX applications. If you use the “largeinit” MVFS configuration, you should also reconfigure each view that is used to access ClearCase data on that host, increasing its view_server cache size to 1Mb. See “Reconfiguring a View”.

To change the MVFS cache sizes, perform one of the changes described below.

Selecting Alternative Cache Size Defaults—SunOS 4 Only

This technique is mutually exclusive with the technique for modifying the virtual file system table, which is described in the ClearCase Notebook. Exactly one of the modload commands in the ClearCase startup script must be enabled; all others must be commented out.

You can revise the ClearCase startup script, /etc/rc.atria, to configure larger default sizes for the MVFS caches:

  • MVFS-internal identifiers (mnodes) cached for up to 4096 MVFS objects

  • Up to 1800 unused mnode numbers cached

  • UNIX-internal identifiers (vnodes) cached for up to 200-1000 cleartext files, depending on the system-wide maximum number of users (MAXUSERS kernel configuration parameter)

  • Up to 2800 names of MVFS objects cached

The larger caches add about 500Kb to the size of kernel (unpageable) memory, but provide better performance when the “working set” of objects in a build or command exceeds the default cache allocations.
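The 500Kb figure can be derived from the per-entry sizes listed in Table 13-2, assuming roughly 400 bytes per cached VOB object and 100 bytes per cached name:

```shell
# Extra kernel memory used by the larger caches, relative to the
# defaults: 900 additional VOB objects at 400 bytes each, plus
# 1400 additional cached names at 100 bytes each.
extra=$(( (1800 - 900) * 400 + (2800 - 1400) * 100 ))
echo "additional kernel memory: $extra bytes"
```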

Use the following procedure to configure the larger default caches:

  1. Shut down ClearCase.

    # /etc/rc.atria stop

  2. Revise the cache configuration—In the “customer-editable section”, uncomment the CONFIG 2 entry, and make sure that all other entries are commented out.

    # CONFIG 2: Configure larger caches for MVFS file system
        ENTRY="-entry _xxxlargeinit"
    # CONFIG 3 (DEFAULT): Use 'TFS' slot if no available VFS switch
    # entry
    # TFS must not be used by any application on your host

  3. Restart ClearCase.

    # /etc/rc.atria start

Compiling New Cache Sizes into the MVFS

You can customize cache sizes on SunOS 4 hosts or SunOS 5 hosts by recompiling the MVFS module that modload incorporates into the UNIX kernel.

Table 13-2 lists the cache parameters, with default values and suggested “larger-than-default” values. But before proceeding, be sure you will avoid the following pitfalls:

  • An mvfs_cvpfreemax value that exceeds the recommended maximum may cause inode table overflow errors (reported on the system console) and/or system hangs.

  • The mvfs_mnmax value must exceed the mvfs_vobfreemax value. We recommend that the value be about twice as large.

  • Larger MAXUSERS values cause increased operating system memory utilization.

    Table 13-2. Cache Parameters for MVFS module: `mvfs.o'

    MVFS Cache Parameter                             Default Value     Suggested Increased Value

    mvfs_mnmax
      system-wide maximum number of mnodes           4096              4096

    mvfs_vobfreemax
      maximum number of objects to cache
      (400 bytes/object)                             900               1800

    mvfs_cvpfreemax
      maximum number of cleartext files to cache     100-400           200-1000
                                                     (larger MAXUSERS  (larger MAXUSERS
                                                     values: linear    values: linear
                                                     scaleup)          scaleup)

    mvfs_dncdirmax
      directory names to cache (100 bytes/entry)     200               400

    mvfs_dncregmax
      regular file names to cache (100 bytes/entry)  800               1600

    mvfs_dncnoentmax
      names that produce ENOENT returns
      (100 bytes/entry)                              400               800

SunOS 4 Cache Override Procedure

Use this procedure to customize cache sizes on hosts running SunOS 4.

  1. Gather your tools—Make sure that the standard UNIX programs make(1), cc(1), and ld(1) are available on your host.

  2. Become the root user.

    % su
    Password: <enter root password>

  3. Shut down ClearCase.

    # /etc/rc.atria stop

  4. Edit the MVFS configuration file—Edit file /usr/atria/sun4-4.n/kvm/mvfs_param.c.

  5. Revise cache configuration parameters—Change one or more of the MVFS cache parameters listed in Table 13-2. (Other parameters in mvfs_param.c generally have no effect on ClearCase performance.)

    For example, this code implements the “larger-than-default” values in the table:

    int mvfs_mnmax = 4096;
    int mvfs_vobfreemax = 1800;
    int mvfs_cvpfreemax = 300;
    int mvfs_dncdirmax = 400;
    int mvfs_dncregmax = 1600;
    int mvfs_dncnoentmax = 800;

  6. Save the MVFS configuration file—Save the file and exit the text editor.

  7. Rebuild the mvfs.o file.

    # cd /usr/atria/sun4-4.n/kvm
    # make -f

  8. Restart ClearCase.

    # /etc/rc.atria start

SunOS 5 Cache Override Procedure

Use this procedure to customize cache sizes on hosts running SunOS 5:

  1. Become the root user.

    % su
    Password: <enter root password>

  2. Shut down ClearCase.

    # /etc/init.d/atria stop

  3. Edit file /etc/system—You can make the change in either, but not both, of the following ways:

    • Add this line:

      set mvfs:mvfs_largeinit = 1 

    • For one or more of the MVFS cache parameters listed in Table 13-2 in “Cache Parameters for MVFS module: `mvfs.o'”, create an entry of the form:

      set mvfs:parameter = value

      For example, you might establish the following parameter settings to increase cache sizes:

      set mvfs:mvfs_mnmax=4096
      set mvfs:mvfs_vobfreemax=1800
      set mvfs:mvfs_cvpfreemax=300
      set mvfs:mvfs_dncdirmax=400
      set mvfs:mvfs_dncregmax=1600
      set mvfs:mvfs_dncnoentmax=800

  4. Save the /etc/system file.

  5. Restart the operating system—Use reboot(1M) or any other standard means to restart the operating system.

Reconfiguring a View

To speed its performance, the view_server process associated with a view maintains a cache. The default size is 204800 bytes (200Kb). You can configure a larger cache size, in order to boost performance. This is particularly useful for views in which very large software systems are built by clearmake.

Follow these steps to reconfigure a view_server's cache:

  1. Add or revise a `-cache' line in the view's configuration file—This is file .view in the view storage directory. For example:

    -cache 1048576

  2. Kill the view_server process—On the host where the view storage directory resides, search the process table for a view_server that was invoked with the pathname of the view storage directory. For example:

    % cleartool lsview akp
    * akp             /net/neon/home/akp/views/akp.vws
    % ps -ax | grep 'view_server.*akp.vws'
    5011 ... view_server /net/neon/home/akp/views/akp.vws
    % kill 5011

  3. Restart the view_server process—Use a startview or setview command:

    % cleartool startview akp
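The edit in step 1 can be scripted so that the -cache line is replaced if present and appended otherwise. A sketch, using a hypothetical view storage path as a stand-in; only the -cache option itself comes from the procedure above:

```shell
# Add or replace the -cache line in a view's .view configuration file.
# The path is a hypothetical stand-in for a real view storage directory.
vws=/tmp/demo.vws
mkdir -p "$vws"
: > "$vws/.view"                          # stand-in for the existing file
if grep -q '^-cache' "$vws/.view"; then
    # Replace the existing -cache line in place.
    sed 's/^-cache .*/-cache 1048576/' "$vws/.view" > "$vws/.view.new" &&
        mv "$vws/.view.new" "$vws/.view"
else
    # No -cache line yet; append one.
    echo '-cache 1048576' >> "$vws/.view"
fi
cat "$vws/.view"
```

As in the manual procedure, the new cache size takes effect only after the view_server process is killed and restarted.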