This chapter presents techniques for improving ClearCase performance, addressing performance issues at the host level, the VOB level, and the view level.
Your organization's VOBs constitute a central data repository. Good VOB host performance ensures that the centralized resource does not become a bottleneck.
Although a VOB appears to be a version-smart file server, its implementation involves significant database access and computation. VOB usage patterns can greatly influence how many concurrent users experience good ClearCase performance. For example, many more users can read header files from a VOB directory with good performance than can produce derived objects in a similar directory.
The most effective measures for ensuring good performance from VOB hosts are also the easiest to implement (technically, if not organizationally):
Keep non-ClearCase processes off the VOB host—Don't have the VOB host also serve as a server host for another application (for example, a DBMS) or at the system level (for example, as an NIS server).
Keep ClearCase client processes off the VOB host—Make sure that no one is performing clearmake builds on any VOB host. Similarly, make sure no one is using other client tools: cleartool, xclearcase, xcleardiff, and so on.
Keep view_server processes off the VOB host—This recommendation may be harder to implement; many organizations create shared views on the same hosts as VOBs. If possible, minimize this double-usage of VOB hosts.
Exception: For reliable non-ClearCase access (avoiding “multihop” network access paths), place the VOB and the view through which it is exported on the same host. For more information, see “Setting Up an Export View for Non-ClearCase Access” and the exports_ccase manual page.
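As a quick spot check on these recommendations, you can scan a VOB host's process table for client and view_server processes. This is a sketch that assumes the BSD-style ps used elsewhere in this chapter; adjust the pattern to match the client tools your site actually uses:

% ps -ax | egrep 'clearmake|cleartool|xclearcase|view_server' | grep -v egrep

Any view_server processes reported here belong to views whose storage directories reside on this host.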
All the UNIX-based operating systems supported by ClearCase have a dynamic block buffer cache feature. As much main memory as possible is used to cache blocks of data files that have been updated by user processes. Periodically, the contents of the block buffer cache are flushed to disk.
This feature speeds up disk I/O significantly; making full use of it is a very important factor in good VOB host performance. An inadequate block buffer cache causes thrashing of VOB database files (the files in the db subdirectories of VOB storage directories). The result is a significant performance degradation, evidenced by:
extended periods required for scrubber and vob_scrubber execution
very slow clearmake builds
ClearCase clients getting RPC timeouts
We recommend that a VOB host's block buffer cache be about 200% of the size of the host's largest VOB database file; the minimum acceptable size is about 50%. You cannot directly control the size of the block buffer cache; its size increases automatically when you add more main memory to the host.
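For example, if the largest VOB database file on the host occupies 30Mb, aim for roughly a 60Mb block buffer cache, and treat 15Mb as the bare minimum. To find the largest database file, list the db subdirectories of the VOB storage directories. This is a sketch; the /vobstg path and the .vbs names are placeholders for your site's actual VOB storage locations:

% ls -l /vobstg/*.vbs/db

Size the cache against the largest file these listings report.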
If there is a substantial amount of non-ClearCase activity and/or ClearCase client activity on the host, you will need even more main memory to ensure good VOB database performance.
The standard UNIX System V sar(1M) utility reports block buffer cache activity. For example, this command reports activity over a 5-minute period, taking a sample every 60 seconds (the final line reports the cumulative average):
% sar -b 60 5

12:14:22 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
12:15:22       0       1     100       1       1       0       0       0
12:16:23       1       1     -60       2       2       0       0       0
12:17:24       0       4     100       4      17      77       0       0
12:18:25       0       6     100       3     145      98       0       0
12:19:25      17      91      81      28     335      92       0       0

Average        4      21      83       8     100      92       0       0

As a rule of thumb, %rcache (cache-reads) should be in the 90%–95% range, and %wcache (cache-writes) should be 75% or above.
Some UNIX variants provide special tools for monitoring buffer cache performance. For example, IRIX has osview; HP-UX has glance.
Performance of a ClearCase client host can be adjusted at the client program level, at the view_server level, and at the MVFS level.
Client workstations supporting a single user should have a minimum of 10–15 MIPS processing power, 16Mb of main memory, and 300Mb of disk storage. An additional 8–16Mb of main memory will further improve performance. Extra memory is especially recommended for users who run memory-intensive applications in the ClearCase environment, make extensive use of graphical user interfaces, or want their client workstations to serve double-duty as hosts for parallel distributed building.
The ClearCase default is to store all of a VOB's file system data in the default storage pools created by mkvob. These pools are located within the VOB storage directory. If a VOB host becomes I/O-bound, the cause is probably high storage pool traffic, from either “too many users” or “too many files”.
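To see where the traffic is concentrated, compare the disk usage of a VOB's storage pools. The sketch below assumes the default pool names created by mkvob (sdft, cdft, and ddft for the source, cleartext, and derived object pools) and a hypothetical /vobstg storage path:

% cd /vobstg/src.vbs
% du -s s/sdft c/cdft d/ddft

A disproportionately large cleartext or derived object pool is a good candidate for relocation, as described below.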
You can supplement (or replace) the default pools with remote storage pools, which effectively enable a VOB to outgrow its storage directory's disk partition. Remote pools need not be located on ClearCase hosts; they need only be accessible through NFS.
In some situations, remote storage pools can improve performance, as well:
If a particular view is being used heavily (perhaps by a group performing integration work), build performance may improve if the cleartext and derived object storage pools involved in the builds are located on the same host as the view storage directory.
Faster access to any storage pool may be achieved if it is located on a server host with a very fast file system.
We recommend that you keep source pools local, within the VOB storage directory. This strategy optimizes data integrity—a single disk partition contains all of the VOB's essential data. It also simplifies backup and restore procedures. This concern typically overrides performance considerations, since losing a source pool means that developers must recreate the lost versions.
If source pool access produces a significant processor or I/O bottleneck, you might temporarily move some elements into source pools on different hosts.
See “Creating Additional VOB Storage Pools” for a step-by-step procedure.
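As an illustration of the element-moving tactic, the following sketch creates a supplementary source pool and reassigns one element to it. The pool name src_aux and the file name are hypothetical, and the commands must be issued from a view, with the current directory inside the VOB:

% cleartool mkpool -nc -source src_aux
% cleartool chpool -nc src_aux hello.c

Relocating the new pool's storage to another host follows the procedure referenced above.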
This section describes procedures for reconfiguring the ClearCase multiversion file system (MVFS) on hosts running SunOS 4 or SunOS 5 (Solaris). By default, the MVFS is dynamically loaded at system startup with the following configuration:
MVFS-internal identifiers (mnodes) cached for up to 4096 MVFS objects
up to 900 unused mnode numbers cached
UNIX-internal identifiers (vnodes) cached for up to 100-400 cleartext files, depending on the system-wide maximum number of users (MAXUSERS kernel configuration parameter)
up to 1400 names of MVFS objects cached
You may wish to change the MVFS cache sizes to improve performance if your host performs builds that involve a large number of files, as indicated in Table 13-1.
Table 13-1. Selecting the Default or Alternative MVFS Cache Configuration
| Main Memory | Files Used in Typical Build | Recommended MAXUSERS Value | Recommended MVFS Cache Configuration |
|---|---|---|---|
| 16Mb | any | 16 | default |
| 24Mb | any | 32 | default |
| 32Mb | < 400 | 32 | default |
| 32Mb | > 400 | 48 | “largeinit” alternative |
Note: Enlarging the MVFS caches reduces the amount of memory available to UNIX applications. If you use the “largeinit” MVFS configuration, you should also reconfigure each view that is used to access ClearCase data on that host, increasing its view_server cache size to 1Mb. See “Reconfiguring a View”.
To change the MVFS cache sizes, perform one of the changes described below.
This technique is mutually exclusive with the technique for modifying the virtual file system table, which is described in the ClearCase Notebook. Exactly one of the ENTRY definitions in the ClearCase startup script must be enabled; all others must be commented out.
You can revise the ClearCase startup script, /etc/rc.atria, to configure larger default sizes for the MVFS caches:
MVFS-internal identifiers (mnodes) cached for up to 4096 MVFS objects
Up to 1800 unused mnode numbers cached
UNIX-internal identifiers (vnodes) cached for up to 200-1000 cleartext files, depending on the system-wide maximum number of users (MAXUSERS kernel configuration parameter)
Up to 2800 names of MVFS objects cached
The larger caches add about 500Kb to the size of kernel (unpageable) memory (the name caches alone account for roughly 280Kb, at 100 bytes per entry), but provide better performance when the “working set” of objects in a build or command exceeds the default cache allocations.
Use the following procedure to configure the larger default caches:
Shut down ClearCase.
# /etc/rc.atria stop
Revise the cache configuration—In the “customer-editable section”, uncomment the CONFIG 2 entry, and make sure that all other entries are commented out. After editing, the section looks like this:

# CONFIG 2: Configure larger caches for MVFS file system
ENTRY="-entry _xxxlargeinit"
#
# CONFIG 3 (DEFAULT): Use 'TFS' slot if no available VFS switch entry
#   TFS must not be used by any application on your host
# ENTRY=""
Restart ClearCase.
# /etc/rc.atria start
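To confirm that ClearCase came back up after the restart, one minimal check (again assuming the BSD-style ps used elsewhere in this chapter) is to look for the ClearCase location broker daemon, albd_server, in the process table:

% ps -ax | grep albd_server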
You can customize cache sizes on SunOS 4 hosts by recompiling the MVFS module that modload incorporates into the UNIX kernel; on SunOS 5 hosts, you customize them by setting parameters in the /etc/system file.
Table 13-2 lists the cache parameters, with default values and suggested “larger-than-default” values. But before proceeding, be sure you will avoid the following pitfalls:
An mvfs_cvpfreemax value that exceeds the recommended maximum may cause inode table overflow errors (reported on the system console) and/or system hangs.
The mvfs_mnmax value must exceed the mvfs_vobfreemax value. We recommend that mvfs_mnmax be about twice mvfs_vobfreemax (for example, 4096 is roughly twice 1800).
Larger MAXUSERS values cause increased operating system memory utilization.
Table 13-2. Cache Parameters for MVFS module: `mvfs.o'

| MVFS Cache Parameter | Description | Default Value | Suggested Increased Value |
|---|---|---|---|
| mvfs_mnmax | system-wide maximum number of mnodes | 4096 | 4096 |
| mvfs_vobfreemax | maximum number of unused mnodes to cache | 900 | 1800 |
| mvfs_cvpfreemax | maximum number of cleartext files to cache | MAXUSERS 16: 100; 32: 200; 48: 300; 64: 400; larger values: linear scaleup | MAXUSERS 16: 250; 32: 500; 48: 750; 64: 1000; larger values: linear scaleup |
| mvfs_dncdirmax | directory names to cache (100 bytes/entry) | 200 | 400 |
| mvfs_dncregmax | regular file names to cache (100 bytes/entry) | 800 | 1600 |
| mvfs_dncnoentmax | names that produce ENOENT returns | 400 | 800 |
Use this procedure to customize cache sizes on hosts running SunOS 4.
Gather your tools—Make sure that the standard UNIX programs make(1), cc(1), and ld(1) are available on your host.
Become the root user.
% su
Password: <enter root password>
Shut down ClearCase.
# /etc/rc.atria stop
Edit the MVFS configuration file—This is file /usr/atria/sun4-4.n/kvm/mvfs_param.c.
Revise cache configuration parameters—Change one or more of the MVFS cache parameters listed in Table 13-2. (Other parameters in mvfs_param.c generally have no effect on ClearCase performance.)
For example, this code implements the “larger-than-default” values in the table:
int mvfs_mnmax = 4096;
int mvfs_vobfreemax = 1800;
int mvfs_cvpfreemax = 300;
int mvfs_dncdirmax = 400;
int mvfs_dncregmax = 1600;
int mvfs_dncnoentmax = 800;
Save the MVFS configuration file—Save the file and exit the text editor.
Rebuild the mvfs.o file.
# cd /usr/atria/sun4-4.n/kvm
# make -f config.mk
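Before restarting, you can verify that the module was actually rebuilt by checking its timestamp; the path is the same kvm directory used above:

# ls -l /usr/atria/sun4-4.n/kvm/mvfs.o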
Restart ClearCase.
# /etc/rc.atria start
Use this procedure to customize cache sizes on hosts running SunOS 5:
Become the root user.
% su
Password: <enter root password>
Shut down ClearCase.
# /etc/init.d/atria stop
Edit file /etc/system—You can make the change in either, but not both, of the following ways:
Add this line:
set mvfs:mvfs_largeinit = 1
For one or more of the MVFS cache parameters listed in Table 13-2 in “Cache Parameters for MVFS module: `mvfs.o'”, create an entry of the form:
set mvfs:parameter = value
For example, you might establish the following parameter settings to increase cache sizes:
set mvfs:mvfs_mnmax=4096
set mvfs:mvfs_vobfreemax=1800
set mvfs:mvfs_cvpfreemax=300
set mvfs:mvfs_dncdirmax=400
set mvfs:mvfs_dncregmax=1600
set mvfs:mvfs_dncnoentmax=800
Save the /etc/system file.
Restart the operating system—Use reboot(1M) or any other standard means to restart the operating system.
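If you want to confirm that the settings took effect after the reboot, one approach on many SunOS 5 systems is to read the kernel variables with adb(1); treat this as a sketch, since the availability of /dev/ksyms and kernel debugging varies by release:

# echo 'mvfs_mnmax/D' | adb -k /dev/ksyms /dev/mem

This prints the current value of mvfs_mnmax in decimal; repeat it for each of the other parameters you set.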
To speed its performance, the view_server process associated with a view maintains a cache; the default size is 204800 bytes (200Kb). You can configure a larger cache size to boost performance. This is particularly useful for views in which very large software systems are built by clearmake.
Follow these steps to reconfigure a view_server's cache:
Add or revise a `-cache' line in the view's configuration file—This is file .view in the view storage directory. For example, this line sets a 1Mb (1048576-byte) cache:

-cache 1048576
Kill the view_server process—On the host where the view storage directory resides, search the process table for a view_server that was invoked with the pathname of the view storage directory. For example:
% cleartool lsview akp
* akp   /net/neon/home/akp/views/akp.vws
% ps -ax | grep 'view_server.*akp.vws'
 5011 ...  view_server /net/neon/home/akp/views/akp.vws
% kill 5011
Restart the view_server process—Use a startview or setview command:
% cleartool startview akp
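To confirm that the view_server restarted with the new cache configuration, repeat the process table search shown above; the new process's PID differs from the one you killed:

% ps -ax | grep 'view_server.*akp.vws'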