This guide provides an example of how to configure an SGI® COPAN™ native massive array of idle disks (MAID) system for use with the SGI InfiniteStorage™ Data Migration Facility (DMF) and the OpenVault™ mounting service. The procedures in this guide use the following sample DMF configuration files, making as few modifications as possible:
dmf.conf.copan_maid, for using COPAN MAID as permanent secondary storage.
dmf.conf.fmc, for using COPAN MAID as a fast-mount cache in conjunction with at least one permanent migration target. The example file specifies physical tape as the permanent migration target, but you could use another target.
You can divide the COPAN MAID resource into one set of volume groups (VGs) for permanent storage and another set of VGs for fast-mount cache, but that is not directly addressed by this guide.
Note: SGI always recommends that you migrate at least two copies to permanent storage targets in order to prevent data loss in the event that a migrated copy is lost. When using a fast-mount cache, SGI therefore recommends that you migrate at least three copies (one to the temporary cache and two to permanent storage targets).
This guide documents the supported procedures for configuring DMF with COPAN MAID. If you deviate from these procedures, DMF may not function.
This chapter discusses the following:
For complete details about DMF and its configuration, see the DMF 6 Administrator Guide for SGI InfiniteStorage and the ov_copan(8) man page.
This section discusses the following:
A COPAN cabinet has up to eight MAID shelves, shelf 0 (bottom) through shelf 7 (top). A site can have multiple cabinets with differing numbers of shelves. Figure 1-1 shows a conceptual drawing of DMF using COPAN MAID with eight shelves as permanent storage.
If you use COPAN MAID in a fast-mount cache configuration, you also need a permanent migration target, such as physical tape. Figure 1-2 shows a conceptual drawing of DMF using COPAN MAID with four shelves as fast-mount cache.
For users with higher throughput requirements, the Parallel Data Mover Option allows additional data movers on dedicated nodes to operate in parallel with the integrated data mover functionality on the DMF server, increasing data throughput and enhancing resiliency. The dedicated function of a parallel data mover node is to move data from the filesystem to secondary storage or from secondary storage back into the primary filesystem. Offloading the majority of I/O from the DMF server improves I/O throughput performance.
When DMF migrates or recalls files, it issues a mount request to the OpenVault mounting service. OpenVault mounts the correct COPAN MAID storage filesystem and DMF performs I/O to the appropriate volume on that filesystem.
DMF continuously monitors the managed user filesystem according to the policies established in the DMF configuration file. Only the most timely data resides on the higher performance primary filesystem; less timely data is automatically migrated to the secondary storage. However, data always appears to be online to end users and applications, regardless of its actual location.
DMF moves data to secondary storage on the COPAN MAID disk, but it leaves critical metadata (such as index nodes, or inodes, and directories) in the primary filesystem. A user retrieves a file simply by accessing it normally through NFS; DMF automatically recalls the file's data from the secondary storage, caching it on the primary filesystem. From a user's perspective, all content is visible all of the time.
You can use COPAN MAID as either of the following:
Figure 1-3 and Figure 1-4 show the concepts of migrating and recalling data when using COPAN MAID as permanent storage.
The figures describe the migration concept, showing that data is removed from the primary filesystem after migrating (represented by the dashed lines) and moved to the storage on MAID, but the inode remains in place in the primary filesystem. During the period when the data has been copied to the secondary storage on MAID but has not yet been deleted from the primary filesystem, the file is considered to be dual-state. After the data has been deleted from the primary filesystem, the file is considered to be offline. SGI recommends that you migrate two copies of a file to prevent data loss in the event that a migrated copy is lost.
Note: For simplicity, these diagrams do not address a second copy. Data will be recalled from a second copy only if necessary.
Because the inodes and directories remain online, users and applications never need to know where the data actually resides; migrated files remain cataloged in their original directories and are accessed as if they were still online. In fact, when drilling into directories or listing their contents, a user cannot determine whether a file is online or offline; determining the data's actual residence requires special commands or command options. The only difference users might notice is a slight delay in access time. Therefore, DMF allows you to oversubscribe your online disk in a manner that is transparent to users.
The fast-mount cache configuration migrates data simultaneously to COPAN MAID as a temporary cache and to permanent storage on another migration target (such as physical tape). Volumes on the cache can be freed immediately when the fullness threshold is reached. SGI always recommends that you migrate at least two copies to permanent storage targets in order to prevent data loss in the event that a migrated copy is lost. SGI therefore recommends that you migrate at least three copies for this configuration (one to the temporary cache and two to permanent storage targets).
Figure 1-5 and Figure 1-6 show the concepts of migrating and recalling data when using COPAN MAID as a fast-mount cache in conjunction with permanent storage, in this case on physical tape.
Note: For simplicity, these diagrams do not address a second permanent storage copy. Data will be recalled from a second permanent storage copy only if necessary.
COPAN MAID uses redundant array of independent disks (RAID) technology. Each MAID shelf has a shelf controller and 27 groups of disks; each group is referred to as a RAID set:
The first 26 RAID sets each appear to the DMF server and any parallel data mover nodes as a SCSI disk device logical unit (LUN). Each RAID set has three data disks plus one parity disk (known as RAID 5). These RAID sets are named A-Z, and the associated LUNs are numbered 1-26. (LUN 0 is the shelf controller.)
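The letter-to-LUN correspondence can be computed directly. The following bash sketch (no SGI tools assumed; purely an illustration of the numbering scheme) maps a RAID-set letter to its LUN number:

```shell
# Map a RAID-set letter (A-Z) to its SCSI LUN number (1-26).
# LUN 0 is the shelf controller, so RAID set A is LUN 1, B is LUN 2, etc.
raidset_to_lun() {
    letter=$1
    # printf '%d' "'X" yields the ASCII code of X; A is 65, so subtract 64.
    echo $(( $(printf '%d' "'$letter") - 64 ))
}

raidset_to_lun A   # 1
raidset_to_lun Z   # 26
```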
The 27th RAID set contains the always on region (AOR), which consists of specially mapped ranges of blocks from other host-accessible LUNs (see “Contact SGI to Apply the AOR Files”). This RAID set is not presented to the host as a LUN and is not consumed by DMF during data migration.
Figure 1-7 shows a conceptual example of a MAID shelf.
After you complete the procedures in this guide, each RAID set A-Z will be formatted with an XFS filesystem. Large files, known as volumes, will be preallocated on the filesystem. From the point of view of DMF and OpenVault, these volumes are the logical equivalent of a tape cartridge (and therefore are sometimes referred to as tapes in command output and documentation).
COPAN MAID contains power-management features that spin down and power off the least-recently-used individual RAID sets after a period of disuse or when a fixed power budget is exceeded.
DMF carefully controls how many RAID sets are accessed at once and groups migrate/recall requests in order to move data efficiently. If data is required from a disk that is powered off, the drives in the RAID set that contains the data being requested will be powered on; if necessary, the drives in another RAID set will be powered off so that the power budget is maintained.
Each MAID shelf is managed by one system at a time, either the DMF server or one of the DMF parallel data mover nodes.
DMF does not provide backup services for the primary disks, but instead provides a migration service for data. DMF moves only the data associated with files, not the file inodes or directories. Therefore, you must still perform regular backups to protect files that have not been migrated, as well as inodes and directory structures. You must store the backups on physical tape or in a filesystem that is not managed by DMF, such as on a RAID set that has been formatted for non-DMF use.
You will use the ov_copan(8) command to format the RAID sets for DMF-managed data or (optionally) for backups.
Note: If you wish to use the COPAN MAID for third-tier permanent storage as well as second-tier fast-mount cache storage, you can format one or more RAID sets for a filesystem (that is not managed by DMF) that can then be used for backups. For more information about tiers, see the DMF 6 Administrator Guide for SGI InfiniteStorage.
The ov_copan command divides each RAID set into two regions (one of which is usually of zero size):
DMF uses a volume serial number (VSN) to uniquely identify a specific volume at your site. A given volume is contained within a single RAID set. The set of volumes may be scattered across any number of RAID sets, MAID shelves, and COPAN cabinets. All VSNs that are managed by a single OpenVault instance must be unique.
When used with COPAN MAID, the VSN has six characters and a specific structure. The following string represents the components of the COPAN MAID VSN:
CCSRVV
where:
CC is the cabinet identifier that uniquely represents a cabinet at your site. This guide uses C0 (for “Cabinet 0”). You may use a different naming convention if you prefer, using two uppercase alphanumeric characters (that is, [0-9A-Z][0-9A-Z]; see “Selecting Appropriate Cabinet Identifiers” in Chapter 2).
S represents the shelf position in the cabinet. There are up to eight shelves in a COPAN cabinet, shelf 0 (bottom) through shelf 7 (top). The three-character string CCS (such as C01) is called the shelf identifier and can be used as an argument in some OpenVault commands.
R represents the RAID set. Each shelf contains 26 RAID sets available for DMF use, named A through Z. The four-character string CCSR (such as C01B) is called the RAID-set identifier and can be used as an argument in some OpenVault commands.
VV represents an individual volume within a given RAID set. Each RAID set accommodates up to 1,296 volumes, named 00 through ZZ.
For example, the VSN C01B03 identifies the fourth volume (03) on the second RAID set (B) of shelf 1 in COPAN cabinet 0, as shown in Figure 1-8.
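As an illustration, the example VSN can be decomposed with bash string operations; the base-36 conversion also shows why each RAID set holds exactly 1,296 volumes (36 × 36). This is only a sketch of the naming scheme, not an SGI tool:

```shell
# Decompose a COPAN MAID VSN (CCSRVV) into its components.
vsn=C01B03
cabinet=${vsn:0:2}    # CC - cabinet identifier (C0)
shelf=${vsn:2:1}      # S  - shelf position (1)
raidset=${vsn:3:1}    # R  - RAID set (B)
volume=${vsn:4:2}     # VV - volume within the RAID set (03)

# Volumes 00-ZZ form a base-36 sequence, so VV converts to a
# zero-based index; 03 is index 3, i.e. the fourth volume.
index=$(( 36#$volume ))

echo "shelf identifier:    $cabinet$shelf"          # C01
echo "RAID-set identifier: $cabinet$shelf$raidset"  # C01B
echo "volume index:        $index"                  # 3
```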
You will use the ov_admin(8) command to add the DMF application to OpenVault and the ov_shelf(8) command to configure OpenVault to use the shelves. See Chapter 3, “Configure OpenVault”.
As an example, the procedures in this guide will lead you through the process of configuring a single cabinet with multiple shelves. You must adjust the procedures and the DMF configuration file to fit your site-specific situation.
After you complete the procedures in this guide, the cabinet will have up to eight independent OpenVault libraries, C00 through C07, and a set of up to sixteen drives per shelf, such as C00d00 through C00d15 for shelf 0. Each library will be used with a corresponding volume group, such as vg_c00 through vg_c07. (All volumes in a given volume group must reside in the RAID sets on the same COPAN MAID shelf.) The volume groups will be managed as two DMF migrate groups (mg0 and mg1 for COPAN MAID as permanent storage or copan_fmc1 and copan_fmc2 for fast-mount cache).
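The naming convention above can be sketched in shell. This hypothetical loop simply prints the expected OpenVault names for shelf 0 of cabinet C0; the real names are created by the OpenVault configuration procedures later in this guide:

```shell
# Print the OpenVault library, drive, and volume-group names for one
# COPAN MAID shelf, following the naming convention described above.
shelf_id=C00   # shelf 0 of cabinet C0

echo "library: $shelf_id"
for d in $(seq 0 15); do
    printf 'drive:   %sd%02d\n' "$shelf_id" "$d"   # C00d00 .. C00d15
done
echo "volume group: vg_$(echo "$shelf_id" | tr 'A-Z' 'a-z')"   # vg_c00
```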
Figure 1-9 and Figure 1-10 describe the parts of the configuration that relate to COPAN MAID.
Note: Figure 1-9 corresponds to the sample configuration file dmf.conf.copan_maid. In that sample file, conforming to DMF best practices, one copy of migrated data is stored in each of the two migrate groups.
Before you configure the COPAN MAID system and DMF, do the following:
You must ensure that the DMF server and the COPAN MAID system are on the network and zoned appropriately. The RAID sets must be visible only to the active DMF server, the passive DMF server (if applicable), and the parallel data mover nodes; the RAID sets must not be visible to any other nodes. You must also install the required software for COPAN MAID and DMF. For more information, see the manuals and release notes listed in “About This Guide”.
The always on region (AOR) RAID set allows concurrent access to small, critical sections of all the other LUNs, whether their underlying RAID sets are spinning or not. The special mapping is applied to the MAID shelf from a predefined AOR configuration file that is specific to the host operating system, volume manager, filesystem, application, and capacity of the disks in the RAID set.
Before you can configure DMF for use with COPAN MAID, the AOR configuration files must be applied by SGI service personnel. For assistance, contact SGI Support.
Use the information about configuring DMF administrative directories appropriately in the “Best Practices” chapter of DMF 6 Administrator Guide for SGI InfiniteStorage to create the required filesystems and directories of the appropriate size on a general-purpose RAID storage system.
In a production system, SGI in most cases recommends that you restrict these directories to DMF use and make them the mountpoint of a filesystem, in order to limit the loss of data in the case of a filesystem failure.
Note: A DMF administrative directory must not be in a DMF-managed filesystem.
You specify the location of these directories by using parameters in the DMF configuration file. The following lists show the directory names used by the sample configuration file (using these names will minimize the number of changes you must make):
Required to be dedicated to DMF use and to be a filesystem mountpoint:
Recommended to be dedicated to DMF use and to be a filesystem mountpoint:
/dmf/home for directories in which the DMF daemon database, library server (LS) database, and related files reside
/dmf/journals for directories in which the journal files for the daemon database and LS database will be written
/dmf/tmp for directories in which DMF puts temporary files for its own internal use
(If used) /dmf/backups for DMF backup files created by DMF backup tasks
You can use the df(1) command to verify that the filesystems required for the DMF administrative directories and the DMF-managed filesystems are mounted.
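As a sketch, the following loop checks that each directory is a filesystem mountpoint. It assumes the util-linux mountpoint(1) command and the directory names used by the sample configuration file; adjust the paths for your site:

```shell
# Verify that each DMF administrative filesystem is mounted.
# Paths match the sample DMF configuration file.
for dir in /dmf/home /dmf/journals /dmf/tmp /dmf/backups; do
    if mountpoint -q "$dir"; then
        echo "OK: $dir is a mounted filesystem"
    else
        echo "MISSING: $dir is not a mountpoint" >&2
    fi
done
```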
This section discusses the following:
You can configure backups to tape or to disk, such as on a reserved portion of the RAID set that will not be managed by DMF. When backing up to disk, the approximate formula for the amount of disk space that you must reserve for backups is as follows:
Backup_Space_Needed_Per_Day * (Retention_Period_In_Days + 1) = Reserved_Disk_Space
where Backup_Space_Needed_Per_Day is a factor of:
The amount of data that is not migrated at the time a backup takes place
The size of the DMF databases, which is a function of the number of migrated files and the number of DMF copies of each file
The number of full and/or partial backups to retain, as determined by the frequency of backups and the backup retention period
For more information, see the information about configuring DMF directories appropriately in the “Best Practices” chapter of DMF 6 Administrator Guide for SGI InfiniteStorage, particularly the information about HOME_DIR size.
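For example, using the 28-day retention period set by the sample configuration file and a hypothetical daily backup size of 100 GiB, the formula works out as follows:

```shell
# Reserved_Disk_Space = Backup_Space_Needed_Per_Day * (Retention_Period_In_Days + 1)
backup_space_per_day_gib=100   # hypothetical daily backup size
retention_days=28              # retention period used by the sample configuration

reserved_gib=$(( backup_space_per_day_gib * (retention_days + 1) ))
echo "Reserve at least ${reserved_gib} GiB for backups"   # 2900 GiB
```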
The following requirements apply:
The backup filesystem must not be a DMF-managed filesystem.
The backup filesystem must be visible to the DMF server. In an HA environment, it must be a filesystem that is visible from or can be moved to the active DMF server in the event of a failover.
If the backups are placed on a RAID set, the RAID set must be owned by the DMF server. For details on configuring a RAID set for backups, see “Format a Shelf for Both DMF and Backups (Optional)” in Chapter 2.
The sample DMF configuration file for COPAN MAID does the following:
Writes the backups to disk. The disk can be on a non-DMF region of a RAID set or it can be some existing (non-DMF-managed) filesystem in your environment.
Note: You can choose to use physical tapes instead if you edit the DMF configuration file.
Performs a full backup once a week (Sundays at 00:00) and a partial backup on the remaining days (Monday through Saturday at 00:00), creating backups of all DMF-managed filesystems.
Causes all of the data in the DMF-managed filesystems to be migrated before the backups take place (except for files that do not meet the site migration policy).
Removes the bit-file identifiers from the DMF databases for permanently deleted files.
Retains the backups for four weeks (28 days). The disk space used by backups is recycled after the retention period is completed.
If you want to change these backup policies, you must modify the DMF configuration file and the procedures in this guide accordingly.
To install the DMF licenses, copy the DMF license keys into the /etc/lk/keys.dat file on the DMF server. For more information, see the chapter about licensing in DMF 6 Administrator Guide for SGI InfiniteStorage.
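A minimal sketch of the key installation, assuming the keys were delivered in a file named dmf-keys.txt (a hypothetical name):

```shell
# Append the DMF license keys received from SGI to the license file
# on the DMF server. "dmf-keys.txt" is a hypothetical file name.
keyfile=dmf-keys.txt
licfile=/etc/lk/keys.dat

if [ -r "$keyfile" ]; then
    cat "$keyfile" >> "$licfile"
else
    echo "key file $keyfile not found" >&2
fi
```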
If this is a new DMF installation, you must use the ov_admin(8) command to do the following, as documented in the chapter about mounting service configuration tasks in DMF 6 Administrator Guide for SGI InfiniteStorage:
Initially configure the OpenVault server.
Configure OpenVault for each parallel data mover node.
Note: The other sections in that chapter do not apply to a COPAN MAID environment. You will use the instructions later in this guide instead; see “Activate the dmf Application Instances in OpenVault” in Chapter 3.
This guide leads you through the following steps, which you will perform as the root user:
Select appropriate COPAN cabinet identifiers to be used in the naming convention.
Format the RAID sets for each shelf, optionally including non-DMF regions for backups.
Create the volumes.
Configure OpenVault.
Configure DMF and import the DMF volumes into OpenVault.
(Optional) Configure for HA. See High Availability Guide for SGI InfiniteStorage.
Caution: After your system is configured and running, you should not stop the disks in a COPAN MAID shelf if those disks are currently being used by an OpenVault LCP. First view the output from dmstat(8) or ovstat(8) to ensure that DMF is not using the disks in that shelf, then stop the OpenVault LCP associated with that shelf, and finally stop the disks. For more information, see the “Best Practices” chapter of the DMF 6 Administrator Guide for SGI InfiniteStorage.