Chapter 7. Setting Up ClearCase Views

This chapter discusses setting up views for individual users, views to be shared by groups of users, and views through which VOBs are made available to non-ClearCase hosts.

Setting Up an Individual User's View

In a typical ClearCase development environment, most views are created by individual developers, on their individual workstations, for their personal use. This model fits ClearCase's client-server architecture well and scales naturally: as new users join the environment, they bring with them the processing power and disk storage of additional workstations. If a user's workstation has local storage, it makes sense for the user's view(s) to reside within his or her home directory. Alternatively, you can place the storage for some or all views on a central, well-backed-up file server host.
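
For example, a developer with local disk space might create a personal view in his or her home directory and then start working in it; the view-tag david and the storage pathname here are illustrative:

% cleartool mkview -tag david ~/david.vws
% cleartool setview david

The setview command spawns a new shell process that is set to the newly created view.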

In deciding where to place views, keep in mind these architectural constraints:

  • Each view has an associated server process, its view_server, which executes on the host where the view's storage directory is created.

  • ClearCase must be installed on the host where a view storage directory is created and the view_server process runs.

  • If a host is to keep several views (and thus several view_server processes) active concurrently, it should be configured with additional main memory.

View Storage Requirements

Each ClearCase view is implemented as a view storage directory, a directory tree that holds a small database, along with a private storage area that contains view-private files, checked-out versions of elements, and unshared derived objects.

View Database

The view database is a set of UNIX files, located in subdirectory db of the view storage directory. Typically, this database is quite small (less than 1 MB), and presents no significant disk space problems.

View's Private Storage Area

A view's private storage area is implemented as a directory tree named .s in the view storage directory. By default, .s is an actual subdirectory, so that all data stored in the view will occupy a single disk partition.

If you anticipate that a view will need a great deal of private storage, you can use the mkview -ln command to create .s as a symbolic link, pointing to a location in another disk partition, perhaps on another host:

% cleartool mkview -tag david -ln /net/sirius/viewstore/1 ~/my.vws
    (remote data storage can be on any NFS-accessible host)
    (view storage directory must be on a ClearCase host)
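
After creating a view this way, you can confirm that the private storage area is a symbolic link rather than a local subdirectory; the storage path matches the example above:

% ls -ld ~/my.vws/.s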

In making this decision, consider that unshared derived objects typically make the greatest storage demand on a view. To obtain a useful estimate of the maximum disk space a view will require, calculate the total size of all the binaries, libraries, and executables of the largest software system to be built in that view. If several ports (or other variants) of a software system will be built in the same view, the view must be large enough to accommodate all the variant binaries.
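
One rough way to obtain such an estimate is to measure the build outputs of an existing, fully built copy of the system with standard UNIX tools; the pathname below is purely illustrative:

% du -sk /net/ccsvr05/builds/proj_r1.2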

Setting Up a Shared View

Views can be shared by multiple users. For example, a project might designate a shared view in which all of its software components are built in preparation for a release. The entire application might be built each night in such a view.

An ideal shared view is located on a dedicated host that is configured similarly to a client workstation. If no dedicated host is available, distribute shared views around the network on the least-heavily-used (or most richly configured) client workstations. Avoid placing too many views on any single machine; avoid placing shared views on VOB hosts (but see the next section for an exception).

Here is a simple procedure for setting up a shared view; a complete sample session appears after the procedure:

  1. Determine who will be using the view—In particular, determine whether all of the view's prospective users belong to the same group.

  2. (if necessary) Change your group—If all of the view's prospective users belong to the same group, make sure that you are logged in as a member of that group. You may need to use a newgrp(1) command to switch your group.

  3. Set your umask appropriately—A view's accessibility is determined by the umask(1) of its creator. If the view's users are all members of the same group, temporarily set your umask to allow “write by group members”:

    % umask 2
    

    Otherwise, you must set your umask to allow any user write access:

    % umask 0
    

  4. Determine a location for the view storage directory—Use the discussion in “View Storage Requirements” to decide whether the view's private storage area should be local or remote.

  5. Choose a view-tag—Select a name that indicates the nature of the work that will be performed in the view. For example, you might select integ_r1.3 as the tag for a view that will be used to produce Release 1.3 of your application.

  6. Create the view storage directory—Enter a “create new view” command:

    % cleartool mkview -tag integ_r1.3 \
        /net/ccsvr05/viewstore/integr13.vws
    Created view.
    Host-local path: ccsvr05:/viewstore/integr13.vws
    Global path:     /net/ccsvr05/viewstore/integr13.vws
    It has the following rights:
    User : vobadm   : rwx
    Group: dvt      : rwx
    Other:          : r-x
    

    (See “View's Private Storage Area” for a command that creates a view with a remote private storage area.)

  7. Verify your work—Examine the mkview command's output to verify that the access permissions are in accordance with your decisions in Steps #1–#3. In addition, examine the “host-local path” and “global path”. You may need to make adjustments similar to those discussed in “Ensuring the VOB's Global Accessibility”.

  8. Restore your original umask and/or group—Enter a umask command to restore your original umask setting; or just exit the shell process. Exiting the shell is also the easiest course to take if you've changed your group setting with a newgrp command.
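
Pulling these steps together, a complete session for creating the integ_r1.3 view of Step 6 might look like this (dvt is the illustrative group shown in the mkview output; the lsview command is an optional extra check):

% newgrp dvt
% umask 2
% cleartool mkview -tag integ_r1.3 \
    /net/ccsvr05/viewstore/integr13.vws
% cleartool lsview -long integ_r1.3
% exit

Exiting the shell started by newgrp restores both your original group and your original umask, as suggested in Step 8.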

Setting Up an Export View for Non-ClearCase Access

A ClearCase VOB can be made available to hosts on which ClearCase is not installed. This non-ClearCase access feature involves setting up an export view, through which the VOB will be seen on the non-ClearCase host (Figure 7-1); a sample setup is sketched after the figure:

  1. A ClearCase client host—one whose kernel includes the MVFS—activates (mounts) the VOB.

  2. The host starts an export view, through which the VOB will be accessed by non-ClearCase hosts.

  3. The host uses a ClearCase-specific exports file to export a view-extended pathname to the VOB mount point—for example, /view/exp_vu/vobs/proj.

  4. One or more non-ClearCase hosts in the network perform an NFS mount of the exported pathname.

    Figure 7-1. Export View for Non-ClearCase Access
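
The following sketch shows what such a setup might look like, using the exp_vu view-tag and /vobs/proj VOB-tag from the pathname above; the host names saturn and galileo are illustrative, and the command that processes the ClearCase exports file varies by platform, so only the file entry is shown (see the exports_ccase manual page for the exact procedure):

(on the ClearCase host, saturn)
% cleartool mount /vobs/proj
% cleartool startview exp_vu

(entry added to the ClearCase exports file, /etc/exports.mvfs)
/view/exp_vu/vobs/proj -access=galileo

(on the non-ClearCase host, as root)
# mount saturn:/view/exp_vu/vobs/proj /vobs/proj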


The exports_ccase manual page describes the simplest (and recommended) setup, in which the VOB and the export view are located on the same host. The following sections discuss this issue in greater detail, including advice on how to proceed if you don't wish to co-locate the VOB and export view.

Exporting Multiple VOBs

If you adopt the recommendation to co-locate VOBs and their export views, it is likely that developers working on a non-ClearCase host will access several export views at the same time. For example, a project might involve three VOBs located on three different hosts. Since each VOB and its export view are located on the same host, three different export views are involved. On the non-ClearCase host, the NFS mount entries might be:

saturn:/view/beta/vobs/proj /vobs/proj nfs rw,hard 0 0
neptune:/view/exp_vu/vobs/proj_aux /vobs/proj_aux nfs rw,hard 0 0
pluto:/view/archive_vu/vstore/tools /vobs/tools nfs rw,hard 0 0

The three VOBs can be accessed on the non-ClearCase host as subdirectories of /vobs. Developers must keep in mind, however, that three different views are involved in operations such as checkouts. They need not be concerned with multiple-view issues when building software on the non-ClearCase host.
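
For example, a developer who needs to modify a file under /vobs/proj_aux must remember that that VOB is seen through the exp_vu view; one approach is to log in to a ClearCase host and perform the checkout there (the element pathname is hypothetical):

% cleartool setview exp_vu
% cleartool checkout -nc /vobs/proj_aux/src/util.c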

Multihop Export Configurations

In a non-ClearCase access situation, a single data access can involve three hosts:

  • the host on which the VOB storage directory resides

  • the host on which the storage directory of the export view resides

  • the non-ClearCase host

This multihop situation is not supported in pure-NFS environments; it is made possible by MVFS-level communication between the two ClearCase hosts. Creating a multihop configuration, however, introduces the possibility of “access cycles”, in which two of the hosts depend on each other for network-related services, or in which such a dependency is created through “third-party” hosts. Such situations result in timeouts (if VOBs are soft-mounted) or deadlocks (if VOBs are hard-mounted).

A sure way to avoid access cycles is to avoid multihop configurations altogether, as described in the exports_ccase manual page (a verification sketch follows this list):

  • Locate the storage directory of the export view on the same host as the storage directory for the VOB.

  • Make sure that neither the VOB nor view has remote data storage. That is, the VOB should not have any remote storage pools, and the view's private storage area (.s directory tree) must be an actual subdirectory, not a symbolic link to another host.
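
To confirm that an existing VOB and export view satisfy these conditions, you can check where each is registered, whether the VOB has remote storage pools, and whether the view's .s directory is a real subdirectory; the tags and storage path below are illustrative:

% cleartool lsvob -long /vobs/proj
% cleartool lspool -long -invob /vobs/proj
% cleartool lsview -long exp_vu
% ls -ld /viewstore/exp_vu.vws/.s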

If you wish to use a multihop configuration, you must ensure that the VOB host (and its “pool hosts”, if any) never requests services from the view host. This guarantees that no process on the VOB or pool hosts creates an access cycle with the view host. Figure 7-2 illustrates an access-layering scheme that avoids access cycles.

Figure 7-2. Avoiding Access Cycles in Non-ClearCase Access


In this scheme, higher-layer hosts always request services from lower-layer hosts. A request for any network service (not just ClearCase services) must never be made back to the view host, where the view_server for the export view runs, either directly or through some other host.

You might achieve the correct layering by never allowing users to run processes on a host used for an export view, either directly or indirectly: no home directories, and no remote logins (except from non-ClearCase hosts). In addition, make sure that no over-the-network backups of the view server hosts are performed on the VOB server or pool hosts.

Restricting Exports to Particular Hosts

In a multihop situation, we recommend using an -access option in each entry in the ClearCase exports file, /etc/exports.mvfs. Restricting the export to specified non-ClearCase hosts and/or netgroups greatly reduces the likelihood of creating access cycles. For example:

/view/exp_vu/usr/src/proj -access=galileo:newton:bohr:pcgroup

When combining -access with other export options, be sure to specify them all as a comma-separated list after a single hyphen.
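
For example, an entry that combines -access with a read-only export might look like this (assuming your platform's exports file supports the standard ro option):

/view/exp_vu/usr/src/proj -ro,access=galileo:newton:bohr:pcgroup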