This chapter presents a system administrator's overview of a local area network using ClearCase. It also serves as a roadmap to other chapters in this manual, and to detailed reference information in the CASEVision™/ClearCase Reference Pages.
Hosts—ClearCase can be installed and used on any number of hosts in a network. Different hosts use ClearCase software in different ways; for example, one host might be used only to store version-controlled data; another might be used only to run ClearCase software development tools.
Data storage—ClearCase data is stored in VOBs and views, which can be located on any or all the hosts where ClearCase is installed. A VOB or view can have auxiliary data storage on hosts where ClearCase is not installed; such storage is accessed through standard symbolic links.
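Such auxiliary storage is ordinary symbolic-link indirection, which can be sketched with standard UNIX commands. The following is a minimal illustration only; every path in it is a hypothetical stand-in (under /tmp so the sketch is self-contained), not a ClearCase default. In a real network the link would live inside a VOB or view storage directory and point at a disk on the remote host:

```shell
# Hypothetical stand-ins for the two storage locations:
AUX=/tmp/auxdemo/bigdisk/proj_pool          # auxiliary storage on a remote disk
VOBSTORE=/tmp/auxdemo/vobstore/project.vbs  # VOB storage directory

mkdir -p "$AUX" "$VOBSTORE"

# 'pool' is an illustrative subdirectory name. After the link is made,
# any program opening $VOBSTORE/pool transparently reaches $AUX.
ln -sfn "$AUX" "$VOBSTORE/pool"

readlink "$VOBSTORE/pool"
```

Because the indirection is resolved by the operating system, programs accessing the storage directory need not know that part of it resides elsewhere.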
For many organizations, the set of all VOBs constitutes a central data repository, which you may need to administer as a unit. Most views are used by individuals; it is likely, however, that one or more shared views will be created, requiring some central administration.
User base—ClearCase is used by a set of people, each of whom has a username and is assigned to one or more groups. Any number of people can use ClearCase on any number of hosts; the licensing scheme limits the number of concurrent users, but does not limit the number of hosts.
ClearCase is a distributed application with a client-server architecture. This means that any particular development task (for example, execution of a single ClearCase command) may involve programs and data on several hosts. It is important to classify hosts by the roles they play, because different kinds of hosts require different administrative procedures. But keep in mind that any particular host may play different roles at different times, or several roles at once.
Network-wide release host—One host in the network acts as the network-wide release host. This host stores the entire ClearCase release, exactly as it is supplied on the distribution medium (magnetic tape, CD-ROM). Note that this release area is active storage, not archival storage—some individual developers' workstations may access ClearCase programs and/or data through symbolic links to the release area.
License server host(s)—One or more hosts in the network act as ClearCase license server hosts, authorizing and limiting ClearCase usage according to the terms of your license agreement. Each host on which ClearCase is installed is assigned to a particular license server host, and periodically communicates with that host. (The albd_server process on a license server host acts as the “license server process”.)
Registry server host—One host in the network acts as the ClearCase registry server host. This host stores a set of files that contain essential access-path information concerning all the VOBs and views in the network. ClearCase client and server programs on all other hosts occasionally communicate with the registry server host, in order to determine the actual storage location of ClearCase data. (The albd_server process on the registry server host acts as the “registry server process”.)
Client hosts—Each user typically has his or her own workstation, called a client host because it runs ClearCase client programs: the programs installed in /usr/atria/bin, including cleartool, clearmake, and xclearcase.

ClearCase must be explicitly installed on each client host; installation must include the multiversion file system (MVFS), a ClearCase virtual file system extension. All access by client programs to ClearCase data (in VOBs and views) goes through the host's MVFS.
Server hosts—Some hosts may be used only as data repositories for VOB storage directories and/or view storage directories. Such server hosts run ClearCase server programs only: albd_server, vob_server, view_server, and other programs installed in /usr/atria/etc.
ClearCase must be explicitly installed on each server host; installation need not include the MVFS—it is required only for running client programs.
Non-ClearCase hosts—ClearCase need not be installed on every host in your network. In fact, it may not even be possible to install it on some hosts—those whose architectures are not (yet) supported by ClearCase. Such hosts cannot run ClearCase programs, but they can access ClearCase data, through standard UNIX network file system facilities. You administer these “export” (or “share”) mechanisms using standard UNIX tools.
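As a hedged illustration, such an export might look like the following. The exports file syntax shown here is only one variant (it differs from one UNIX implementation to another), and the host and path names are hypothetical:

```
# /etc/exports entry on the server host, making a VOB storage area
# available to two client hosts (syntax varies by UNIX variant):
/vobstore    -rw=neptune:saturn

# On the non-ClearCase host, the area is then mounted with standard
# tools, for example:
#   mount neptune:/vobstore /vobstore
```

Consult your operating system's exports and mount documentation for the exact syntax on each host.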
All ClearCase data is stored in VOBs and views. These data structures can be distributed throughout the local area network—even an individual VOB or view can be distributed. Users see these structures as global resources; after a VOB or view is explicitly activated, users access it through its VOB-tag or view-tag.
The network's permanent data repository is conceptually a centralized resource. Typically, however, the repository is distributed among multiple versioned object bases (VOBs), located on multiple hosts. Each VOB is implemented as a VOB storage directory (actually a directory tree), which holds both developers' file system objects and an embedded database.
Registration—All VOBs are listed in a network-wide storage registry. In a typical network, registry maintenance is minimal—certain ClearCase commands update the registry automatically. You'll need to adjust the registry if you move a VOB to another location (for example, to another host). You'll also need to do some registry work if different pathnames must be used on different hosts to access the same VOBs.
Periodic maintenance—Administering the central repository requires continual balancing of the need to preserve important data with the need to conserve disk space. ClearCase includes tools for occasional scrubbing of unneeded data. You can control the meaning of “unneeded” on a per-VOB basis.
ClearCase installation automatically sets up crontab(1) scripts for a host's root user. By default, the scripts perform daily and weekly scrubbing of all VOBs on that host. You can fine-tune VOB maintenance by revising scrubbing parameters, by revising the scripts themselves, or by adding your own scripts.
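For illustration only, such crontab entries might look like the following. The script names and schedules below are placeholders, not the actual names installed by ClearCase; run crontab -l as root on each VOB host to see the real entries before revising them:

```
# Hypothetical root crontab entries (placeholder script names):
30 4 * * *   /usr/atria/etc/scrub_daily    # daily scrubbing of this host's VOBs
30 3 * * 0   /usr/atria/etc/scrub_weekly   # weekly scrubbing, on Sundays
```

Revising the schedule times in these entries is the simplest form of fine-tuning; revising the scripts themselves gives per-VOB control.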
Access control—Each VOB has a principal group and a supplementary group list. Together, these control which developers can use the VOB. As your organizational structure changes (for example, a new project is launched), you may need to adjust a VOB's group list.
Growth—As new projects are launched (or existing projects are brought under ClearCase control), you'll need to create new VOBs, define their accessibility to various groups, and incorporate them into your data backup and periodic maintenance schemes.
Short-term storage for data created during the development process is provided by ClearCase views. A view stores checked-out versions of file elements, view-private files that have no counterpart in the VOB (for example, text editor backup files), and newly-built derived objects.
Developers think of views and VOBs as being very different: briefly, a VOB is where data resides; a view is a “lens” through which a developer sees VOB data. From an administrator's standpoint, however, views and VOBs are quite similar. Each view is implemented as a view storage directory (actually a directory tree), which holds both developers' file system objects and an embedded database. View administration is similar to that for VOBs, including registration, backup, periodic maintenance, and access control.
Like a VOB, a view includes both a storage area for file system data, and an associated database:
The view's private storage area (subdirectory .s of the top-level view storage directory) holds all view-private objects. It also holds the data files (data containers) for derived objects built in the view.
Most VOBs are long-lived structures, created by an administrator; views are usually created by individual developers, and tend to be shorter-lived.
Successful use of ClearCase depends on network-wide consistency in the user base: users should have the same user-IDs and group-IDs on all hosts. Consistency is usually achieved by using network-wide databases at the operating system level, such as NIS passwd and group maps.
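A quick way to see the identities a host assigns to the current user is the standard id(1) command; comparing its output across hosts (or against your NIS passwd and group maps) confirms network-wide consistency. This is a generic UNIX check, not a ClearCase command:

```shell
#!/bin/sh
# Print the identities that control this user's access to ClearCase data.
echo "user-id:           $(id -u) ($(id -un))"
echo "principal group:   $(id -g) ($(id -gn))"
echo "supplementary IDs: $(id -G)"
```

If the numeric IDs printed on two hosts differ for the same username, files created on one host will appear to belong to a different user on the other.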
Each user has a user-ID, a principal group-ID (specified in the OS-level password database), and a supplementary list of group-IDs (specified in the OS-level group database). These identities control the user's permission to read and create ClearCase data:
For example, an object registry entry might record the fact that a VOB's storage directory is located on host neptune, at pathname /vobstore/project.vbs; a corresponding tag registry entry might record the fact that on each developer's workstation, the VOB is to be activated (mounted) at the location specified by its VOB-tag, /vobs/proj.
Similarly, an object registry entry might indicate that a view storage directory is /usr/shared/integ.vws on host einstein; a corresponding tag registry entry might enable developers to access the view using the view-tag integration.
In an ideal network, all hosts would access ClearCase data storage areas using exactly the same “global” pathnames. Many networks fall short of this ideal, however. To address this situation, a network can be logically partitioned into multiple network regions. Conceptually, each region has its own tag registry—ClearCase data structures can be accessed with different “global” pathnames in different regions. (Physically, all tag registries are implemented in a single file.)
Figure 1-1 illustrates how registries mediate access to ClearCase data structures. The administrative benefits of network-wide registries include:
centralized control over all components of the network's distributed data repository
independence from architecture-specific mechanisms for mounting file systems
ability to accommodate heterogeneous networks, and networks in which hosts have multiple names and/or multiple network interfaces
global accessibility of VOBs and views
ClearCase is a distributed client-server application. This means that multiple processes, running on multiple hosts, can play a role in the execution of ClearCase commands—even the simplest ones. For example, when a user checks out a file element:
The user's client process and the view may be on different hosts.
The view might have a private storage area that is located on a different host from the view storage directory.
The VOB storage pool that holds the version being checked out may be located on a different host from the VOB storage directory.
Fortunately, the ClearCase server processes handle all of this automatically and reliably. Users need not be concerned with server-level processing at all. As an administrator, your involvement in this area is typically limited to entering an occasional "stop all ClearCase processing" command on one or more hosts; this terminates all ClearCase server processes currently active on the host.
Each ClearCase host runs a single Atria Location Broker Daemon process, albd_server, which is invoked by the ClearCase startup script. (See “ClearCase Startup and Shutdown”.) Other ClearCase server processes are started, as needed, by the albd_server processes.
Most server processes manage a particular data structure; for example, a view_server process manages a particular view storage directory. Such servers always run on the host where that data structure resides. This kind of ClearCase server includes:
view_server—Manages the view storage directory of a particular view
vob_server—Manages the storage pools of a particular VOB
db_server—Fields requests from one ClearCase client program, destined for one or more VOB databases on a host
vobrpc_server—Fields requests from one or more view_server processes, destined for a particular VOB database
Figure 1-2 shows the communications paths connecting a client process with server processes.
Each ClearCase server maintains an error log on the host where it executes, in directory /usr/adm/atria/log. Given ClearCase's distributed architecture, a user can enter a command on one host that logs an error message on another host. In such cases, the user is directed to the appropriate log file on the appropriate host.
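A simple way to review a host's logs is to list the directory named above. This sketch assumes only the log directory path given in this chapter, and degrades gracefully on hosts where ClearCase is not installed; the individual log file names vary by server:

```shell
#!/bin/sh
# List the ClearCase server error logs on this host, if any.
LOGDIR=/usr/adm/atria/log
if [ -d "$LOGDIR" ]; then
    ls -l "$LOGDIR"
else
    echo "no ClearCase log directory on this host"
fi
```

When a command on one host logs an error on another, run the same listing on the host named in the user's error message.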
See the errorlogs_ccase manual page for details on the error logs.
When UNIX is bootstrapped, the ClearCase startup script is executed automatically by init(1M). On some systems, this script is named /etc/rc.atria; on others, it is named /etc/init.d/atria. The startup script:
You can also invoke this script manually, as root. For example:
# /etc/rc.atria stop
Stops all ClearCase processing on a host: terminates the albd_server and lockmgr processes, along with any other ClearCase server processes running on the host. User processes that are set to views on that host will also be terminated.
# /etc/rc.atria start
Restarts ClearCase processing on a host, by starting an albd_server process and a lockmgr process.
See Chapter 16, “Adjusting the ClearCase Startup/Shutdown Script”, and the init_ccase manual page for more on this topic.