Chapter 3. Network-Wide Access to ClearCase Data

This chapter describes the mechanisms by which ClearCase data structures—VOBs and views—are made available throughout the local area network.

Storage Directories and Access Paths

Each ClearCase VOB and view has both a physical location and a logical location:

  • Physical location—Each VOB storage directory and view storage directory is actually a directory tree, located on some ClearCase host. For day-to-day work, developers need not know the actual locations of these storage directories.

  • Logical location—Each VOB and view also has a tag, which specifies its logical location. In their day-to-day work, developers use VOB-tags and view-tags to access the data structures.

Distributed VOBs and Views

ClearCase allows you to distribute the data storage for a given VOB or view to more than one host. You can create any number of additional VOB storage pools that are remote to the VOB storage directory (mkpool command); similarly, you can place a view's private storage area on a remote host (mkview -ln command).

In both cases, remote data storage is implemented at the UNIX level. As far as ClearCase servers are concerned, the data is located within the VOB or view storage directory—standard UNIX symbolic links cause the reference to “go remote”.
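For example, this command creates a view whose private storage area resides on host jupiter, remote from the view storage directory on saturn. This is a minimal sketch: the hostnames, pathnames, and view-tag are illustrative, and the exact option syntax is described in the mkview and mkpool manual pages.

% cleartool mkview -tag gamma -ln /net/jupiter/viewstore/gamma.priv \
      /net/saturn/views/gamma.vws

Internally, ClearCase creates a standard symbolic link within the gamma.vws storage directory that points at the remote private storage area.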


Note: Remote data storage is outside the scope of this chapter; see Chapter 2, “ClearCase Data Storage”, for a discussion. In particular, the ClearCase storage registries discussed in the remainder of this chapter are not used to resolve the symbolic links that implement distributed data storage.


Storage Registries

All VOB storage directories are registered in a set of files that constitute the VOB registry; all view storage directories are registered in files that constitute the view registry. These storage registries record physical locations—hostnames and pathnames on those hosts; they also record the logical access paths used by clients and servers to access VOB and view data.

A storage registry has two parts, implemented in separate files: an object registry and a tag registry. The following sections provide an overview of these components; for details, see the registry_ccase manual page.

Object Registries

The VOB object and view object registries record the location of each VOB and view using a host-local pathname. That is, the pathname to the data structure is one that is valid on the host where the storage directory resides. These pathnames are used by the ClearCase server processes (view_server, vob_server, and so on), which run on that host.

An entry is placed in the appropriate object registry when a VOB or view is first created (mkvob, mkview). The entry is updated automatically whenever a reformatting is performed (reformatvob, reformatview); you can also update or remove the entry manually (register, unregister).
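For example, after moving a VOB storage directory to a new location on the same host, you might update its object registry entry as follows (a sketch; the pathnames are illustrative, and the commands are normally run on the host where the storage resides; see the register and unregister manual pages for exact options):

% cleartool unregister -vob /oldstore/vega_project.vbs
% cleartool register -vob /newstore/vega_project.vbs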

Object registry entries are used mostly by ClearCase server processes.

Tag Registries

For most purposes (including virtually all day-to-day development activities), VOBs and views are not referenced by their physical storage locations. Instead, they are referenced by their VOB-tags and view-tags:

  • A VOB's VOB-tag is its mount point as a file system of type MVFS. Developers access all ClearCase data (MVFS files and directories) at pathnames below VOB mount points.

  • A view's view-tag appears as a subdirectory entry in a host's viewroot directory, /view. For example, a view with tag oldwork appears in the host's file system as directory /view/oldwork. To access ClearCase data, developers must use a view—either implicitly (by setting the view) or explicitly (by using a view-extended pathname).

Thus, any reference to a ClearCase file system object involves both a VOB-tag and a view-tag. These logical locations are resolved to physical storage locations through lookups in the network-wide VOB-tag and view-tag registries. Each tag registry entry includes a global pathname to the storage area—a pathname that is valid on all ClearCase client hosts. Figure 3-1 illustrates how tag registries and object registries are used to access the network's set of data storage areas.
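For example, a developer can reach the same version-controlled file either by setting a view and using a VOB-tag pathname, or by using a view-extended pathname. (The VOB-tag /vobs/vega and the file pathname are illustrative.)

% cleartool setview oldwork
% cat /vobs/vega/src/main.c

% cat /view/oldwork/vobs/vega/src/main.c      (view-extended pathname)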


Note: In some networks, it is not possible to devise global pathnames to all ClearCase storage areas. The ClearCase network region facility handles such situations—see “Network Regions”. For simplicity, Figure 3-1 illustrates a network that has a single network region.

Figure 3-1. ClearCase Object and Tag Registries (Single Network Region)


Network-Wide Accessibility of VOBs and Views

ClearCase's network-wide storage registries make all VOBs and views visible to all users. You can use the lsvob and lsview commands to list them all. But typically, VOBs and views have different usage patterns:

  • Most users require access to most (or all) VOBs.

  • Most users need to access only a small number of views.

Accordingly, there are different schemes for activating VOBs and views on each client host. A set of public VOBs is activated automatically by the ClearCase startup script on a client host. By contrast, no views are activated automatically at ClearCase startup; users on a client host must activate their views with explicit commands.
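For example, here is a representative lsvob listing. (The entries are illustrative, and the exact output format varies from release to release; an asterisk marks a VOB that is currently activated on the local host.)

% cleartool lsvob
* /vobs/vega    /net/neptune/public/vega_project.vbs   public
  /vobs/docs    /net/pluto/vobstore/docs.vbs           public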

Public and Private VOBs

To provide control over which VOBs are activated automatically, each VOB is designated as public or private when it is created. More precisely, each VOB-tag is either public or private. Only public VOBs are activated automatically; a private VOB becomes active only when its owner enters an explicit mount command.

Public VOBs can be activated and deactivated (mounted and unmounted) by any ClearCase user. The actual mounting is performed by a short-lived server process, mntrpc_server, which runs as the root user. A password facility controls creation of these mountable-by-anyone data structures: when a VOB-tag is created (during execution of a mkvob or mktag -vob command), you must enter a password matching the one stored in the VOB-tag password file, /usr/adm/atria/rgy/vob_tag.sec, on the registry server host.
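For example (the VOB-tags are illustrative):

% cleartool mount /vobs/vega          (activate one VOB)
% cleartool mount -all                (activate all public VOBs)
% cleartool umount /vobs/vega         (deactivate the VOB)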


Note: Be careful when making public VOBs. Each ClearCase client host will attempt to mount all public VOBs whenever the operating system is started (and whenever ClearCase processing is restarted with an explicit command).


Network Regions

Ideally, your network's VOB and view storage directories should be accessible at the same pathnames throughout the network. Automatic file-system mount utilities, such as automount(1M), are intended to achieve the ideal of uniform, global naming. Figure 3-2 shows a simple network in which global naming has been achieved.

Figure 3-2. Network with Global Naming


Uniform, global naming may not be achievable, however. The most common reasons are:

  • Multiple network interfaces—A VOB host or view host may have two or more interfaces to the network, each corresponding to a different UNIX-level hostname. For example, a host might be known to some hosts (and their automounter programs) as neptune, and to other hosts as neptune-gw. (The “gw” suffix is commonly used, standing for “gateway”.) In this case, the same VOB might have two “global” storage pathnames:

    /net/neptune/public/project.vbs
    /net/neptune-gw/public/project.vbs
    

  • Multiple aliases—The standard UNIX facilities for assigning names to hosts—file /etc/hosts or NIS map hosts—allow each host to have any number of alternate names, or aliases. This is a possible hosts entry:

    195.34.208.17 betelgeuse bg          (“gratuitous” alias)
    

    If shared storage resides on this host, ClearCase clients might be able to access the storage using either a “/net/betelgeuse/...” pathname or a “/net/bg/...” pathname.

  • Multiple architectures—A heterogeneous network may include hosts that support very different file systems. For example, a VOB that is accessed as /net/neptune/vobstore/incl.vbs on a UNIX host may be accessed as X:\vobstore\incl.vbs on a Windows/NT host.

ClearCase servers require consistent pathnames to shared storage areas. If you cannot achieve global consistency, then you must partition your network into a set of network regions, each of which is a consistent naming domain:

  • Each ClearCase host must belong to a single network region.

  • All hosts in a given network region must be able to access ClearCase physical data storage (that is, all VOB storage directories and the storage directories of shared views) using the same full pathnames.

  • Developers access VOBs and views through their VOB-tags (mount points) and view-tags. All hosts in a given network region use the same tags.

For example, a VOB and a view might be accessed in different network regions as follows:

Region: core_dvt
  VOB storage:   /net/neptune/public/vega_project.vbs
  VOB-tag:       /vobs/vega
  View storage:  /net/saturn/shared_views/int_43.vws
  View-tag:      int_43

Region: lib_dvt
  VOB storage:   /net/neptune-gw/public/vega_project.vbs
  VOB-tag:       /vobs/vega
  View storage:  /net/saturn-gw/shared_views/int_43.vws
  View-tag:      int_43
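Such region-specific tags are created with the mktag command, one region at a time. A sketch, using the pathnames above (for a public VOB-tag, you are prompted for the password described in “Public and Private VOBs”):

% cleartool mktag -vob -tag /vobs/vega -region lib_dvt -public \
      /net/neptune-gw/public/vega_project.vbs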

Registries in a Multiple-Region Network

Conceptually, each network region has its own view-tag registry and VOB-tag registry. Each VOB can have at most one tag in a region; views can have multiple tags in a region. In a typical network with N regions, each VOB or view storage directory has N tag entries.


Note: A VOB or view need not have a tag in every region. However, a VOB or view is inaccessible for development work on hosts in any region for which it is “tagless”. This suggests that you might use network regions as “access domains” instead of “naming domains”.

If possible, keep the tag itself constant over all the regions. For example:

Region    VOB-tag          Pathname to Storage Area in Region
uno       /vobs/project    /net/neptune/public/vega_project.vbs
dos       /vobs/project    /net/neptune-gw/public/vega_project.vbs
tres      /vobs/project    /netstorage/vega_project.vbs
This set of tags provides a single developer-visible name for the VOB (/vobs/project), even though network file system idiosyncrasies require several different names for the VOB's physical storage location.

Tag Registry Implementation

All view-tag registries are actually implemented in a single file, view_tags, on the registry server host. Each view-tag entry has a region field, which places the entry in a particular region. Similarly, a single vob_tags file implements all the logically distinct VOB-tag registries.
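You typically examine these entries with the lsvob and lsview commands rather than by reading the registry files directly. The listing below is representative only; the exact fields and layout vary from release to release.

% cleartool lsvob -long /vobs/vega
Tag: /vobs/vega
  Global path: /net/neptune/public/vega_project.vbs
  Server host: neptune
  Region: core_dvt
  Active: YES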

Figure 3-3 illustrates a simple two-region network; each region has its own logical set of tag registries. All hosts in a network region use the same VOB-tags and view-tags, and access ClearCase data storage areas using the same pathnames, provided by registry lookups.

Figure 3-3. Network Regions and Their Tag Registries


Establishing Network Regions

Just after you load a ClearCase release from its distribution medium, you run a site_prep program. This program prompts you to specify the name of a network region. This name becomes the default region, which can be accepted or overridden during ClearCase installation on individual hosts. A host's network region assignment is recorded in file /usr/adm/atria/rgy/rgy_region.conf on that host.
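To check a host's region assignment, display that file. (A sketch, assuming the file contains just the region name.)

% cat /usr/adm/atria/rgy/rgy_region.conf
core_dvt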

There is no formal mechanism for “defining” additional network regions. Nor is there any centralized list of region names or assignments of hosts to regions. For procedures relating to network regions, see Chapter 18, “Adjusting ClearCase Registry Information”.

Recording Multiple Network Interfaces


Note: This section applies to all ClearCase hosts, not just to hosts where VOB and view storage directories reside.

If a host has two or more network interfaces (two or more separate lines in the /etc/hosts file or the hosts NIS map), it must have a file called /usr/adm/atria/config/alternate_hostnames, which records its multiple entries. For example, suppose that the /etc/hosts file includes these entries:

 .
 .
159.0.10.16 widget sun-005 wid
 .
159.0.16.103 widget-gte sun-105
 .

In this case, the alternate_hostnames file should contain:

widget
widget-gte

Note that only the first hostname in each hosts entry need be included in the file. In general, the file must list each alternative hostname on a separate line. There is no commenting facility—all lines are significant. If a host does not have multiple network interfaces, this file should not exist at all on that host.

ClearCase Data and Non-ClearCase Hosts

In large development shops, some groups might adopt ClearCase before others. There is no problem with such “incremental adoption”—a host on which ClearCase has not yet been installed can still mount VOBs and access their data.

  • A ClearCase host must use file /etc/exports.mvfs to explicitly export a view-extended pathname to the VOB mount point (for example, /view/exportvu/vobs/vegaproj).

  • One or more non-ClearCase hosts mount the VOB through a view-extended pathname. For example, a host might have an entry in its file system table that begins:

    mars:/view/exportvu/vobs/vegaproj /usr/vega nfs ...
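Putting the two steps together (the hostnames, view-tag, and pathnames are illustrative, and the exports.mvfs entry is assumed to use the same general syntax as /etc/exports): on ClearCase host mars, /etc/exports.mvfs contains the line

/view/exportvu/vobs/vegaproj

and the non-ClearCase host mounts the exported pathname with its native mount command:

# mount mars:/view/exportvu/vobs/vegaproj /usr/vega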
    

Usage Restrictions

Users on the non-ClearCase host can only read data from such VOBs—they cannot modify the VOB in any way. Moreover, they are restricted to using the element versions selected by the specified view. They cannot use version-extended or view-extended pathnames to access other versions of the VOB's elements.

There are techniques for relaxing these restrictions in practice. A user who also has an account on the ClearCase host can reconfigure the “mounted” view by performing an rlogin(1) there and modifying the view's config spec. And the same VOB can be mounted at several locations on a non-ClearCase host, each mount using a different view.
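For example, a non-ClearCase host might mount the same VOB twice, through two different views (both view-tags and mount points are illustrative):

# mount mars:/view/exportvu/vobs/vegaproj /usr/vega
# mount mars:/view/gammavu/vobs/vegaproj /usr/vega.gamma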

Building on a Non-ClearCase Host

Although users cannot modify VOBs that are mounted through a view, they can write to view-private storage. This enables editing and building—with a native make program or with scripts, not with clearmake. Files created by builds in the VOB's directories do not automatically become derived objects; they will be view-private files unless developers take steps to convert them to derived objects. (For more on this topic, see the CASEVision/ClearCase User's Guide.)

Since clearmake does not run on the non-ClearCase host, configuration lookup and derived object sharing are not available to the make utility or script that performs the native build.

Using automount with ClearCase

This section discusses use of the standard UNIX automount(1M) program with ClearCase. Implementations of the facility vary from architecture to architecture; be sure to consult the documentation supplied by your hardware vendor.

Use of -hosts Map Required

You can use any automount maps, including both “direct” and “indirect” maps, to access remote disk storage where VOB storage areas reside. For proper ClearCase operation, every ClearCase host must also use the special “-hosts” map to provide paths to remote VOB and view storage. ClearCase looks for symbolic links to the mount points created through the “-hosts” map in any of these directories:

/net       (the automount default)
/hosts
/nfs

If your site uses another directory for this purpose (for example, /remote), create a UNIX symbolic link to provide access to your directory through one of the expected pathnames. For example:

# ln -s /remote /net

Specifying a Non-Standard Mount Directory

By default, automount mounts directories under /tmp_mnt. If a ClearCase host uses another location for its automatic mounts (for example, if you use automount -M), you must specify it in file /usr/adm/atria/config/automount_prefix. For example, if your automatic mounts take place within directory /autom, place this line in the automount_prefix file:

/autom