The volume serial numbers (VSNs) for all volumes used by DMF must be unique, whether they are in a library managed by TMF or OpenVault. The following OpenVault component names must also be unique across the site:
Library names
Drive names
Drive groups
When you format the RAID sets on the MAID shelves, the characters you choose for the CC component of the VSN naming convention for each cabinet (see “Naming Conventions” in Chapter 1) also dictate the names chosen for the shelves and the OpenVault components (described in “Summary of the Completed Configuration” in Chapter 1).
You must consider your site's upgrade plans for additional COPAN MAID cabinets and tape libraries when choosing the CC component.
If you have an existing DMF environment, you must also ensure that there will be no conflicts with existing OpenVault components. Use the precheck feature of the ov_shelf(8) command to examine the existing OpenVault VSNs and components at your site and report any potential conflicts with the cabinet ID you have chosen:
# ov_shelf precheck cabinetID
This command will report if any LCP, DCP, or cartridge name collisions would occur with future shelves named with the specified cabinet ID.
If there are no conflicts, no output will appear. For example, to check all of the shelves associated with a cabinet ID of C0 (that is, the eight shelf IDs C00 through C07):
# ov_shelf precheck C0
#
However, if there are any potential conflicts, ov_shelf will report that there are unexpected results. For example, in the following check of the cabinet ID C1, the output shows that there are potential conflicts with an already existing OpenVault library named C13 and already existing drives such as C13d00:
# ov_shelf precheck C1
There is an unexpected "C13" LIBRARY record in the openvault database
There is an unexpected "[email protected]" LCP record in the openvault database
There is an unexpected "C13d00" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
There is an unexpected "C13d01" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
There is an unexpected "C13d02" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
There is an unexpected "C13d03" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
There is an unexpected "C13d04" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
There is an unexpected "C13d05" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
There is an unexpected "C13d06" DRIVE record in the openvault database
There is an unexpected "[email protected]" DCP record in the openvault database
#
In this case, you must choose a different cabinet ID and rerun the test, repeating until no conflicts appear. For example, to test the cabinet ID M1:
# ov_shelf precheck M1
#
Because the preliminary check of M1 did not show any potential conflicts, you could use M1 as the cabinet ID when applying the naming convention in Chapter 2, “Format the RAID sets and Create Volumes”.
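If you want to evaluate several candidate cabinet IDs at once, you could run the precheck for each in a simple shell loop. For example (the candidate IDs C0, C2, and M1 below are placeholders; substitute your own):

# for id in C0 C2 M1; do echo "== checking $id =="; ov_shelf precheck $id; done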
Note: If you are also using TMF, you must manually verify that the cabinet ID will not conflict with any VSNs.
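One possible approach, assuming your site maintains an exported list of its TMF VSNs in a text file (the file name below is purely illustrative), is to search that list for the candidate cabinet ID prefix; a count of 0 means no existing TMF VSN begins with that prefix:

# grep -c '^M1' /var/local/tmf_vsns.txt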
To determine the available devices on the COPAN MAID shelves that are visible on the SCSI bus, do the following:
Log in to the Platform Service Manager (PSM) GUI, where ipaddress is the IP address of the PSM server:
http://ipaddress:8180/psmweb/
The MAID shelves will be displayed in order from shelf 7 (top) through shelf 0 (bottom), as applicable.
Drill down to the shelf details and note the WWPN of both ports on each shelf.
For example, suppose the port numbers for shelf 0 are as follows:
Port 0 WWPN: 50:00:ED:5E:35:C3:43:03
Port 1 WWPN: 50:00:ED:5E:35:C3:43:04
You will look for numbers similar to the above (but using lowercase and without colon delimiters) in step 4 below. Only one of the port WWPNs will be found, even if both ports are connected to the system, because ov_copan chooses one as the primary port.
For more information about PSM, see the Platform Service Manager Online Help available from the PSM Help menu.
On each node in the DMF configuration, use the list action of ov_copan and note the sg devices:
# ov_copan list
For example, the following output from node1 shows that there are three shelves (/dev/sg4, /dev/sg71, and /dev/sg99):
node1# ov_copan list
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg4  COPANSYS 8814    COPANSYS0000000080900283 ---  0   0   0   26  0    16
0   /dev/sg71 COPANSYS 8814    COPANSYS0000000063800042 ---  0   0   0   26  0    16
0   /dev/sg99 COPANSYS 8814    COPANSYS0000000051200018 ---  0   0   0   26  0    16
This output also shows that all 26 RAID sets on each shelf are unformatted, and that the power budget (PB) allows 16 RAID sets to have disks spinning at any given time. (LUN 0 refers to the location of the shelf controller.)
For each /dev/sgN device on each node, obtain its WWPN and match it with the corresponding shelf WWPN obtained from the PSM. Use the sg_inq(8) command and search for the relevant information in the output:
# sg_inq --page=0x83 device | grep -A2 "NAA 5"
For example, for /dev/sg4 on node1, suppose the output is as follows:
node1# sg_inq --page=0x83 /dev/sg4 | grep -A2 "NAA 5"
      NAA 5, IEEE Company_id: 0xed5
      Vendor Specific Identifier: 0xe35c34303
      [0x5000ed5e35c34303]          <<<<<<< WWPN location
In this case, the WWPN for /dev/sg4 is 5000ed5e35c34303.
Match the device WWPN obtained in step 4 with the shelf WWPN obtained in step 2. For example, the /dev/sg4 WWPN 5000ed5e35c34303 corresponds to the shelf 0 port 0 WWPN of 50:00:ED:5E:35:C3:43:03. Therefore, in this example, device /dev/sg4 equates to shelf 0.
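If a node sees several shelf devices, you could print the WWPN line for all of them in one pass rather than running sg_inq separately for each. The following sketch assumes the three device names from the earlier ov_copan list output; adjust the list for your node:

node1# for dev in /dev/sg4 /dev/sg71 /dev/sg99; do echo -n "$dev: "; sg_inq --page=0x83 $dev | grep -A2 "NAA 5" | tail -1; done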
For more details, see the ov_copan(8) man page.
Caution: Formatting a RAID set will destroy any existing data on that RAID set. The ov_copan command will prompt you for confirmation if the operation is about to destroy existing data. When you initially configure DMF and COPAN MAID, this is not an issue because there is no data on the RAID sets.
If there are unrecognized XVM slices on the RAID set, the format operation will fail rather than destroy the data. You must use the xvm command to remove any such XVM slices before you can format the RAID set with ov_copan.
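If you suspect that a RAID set already carries XVM slices (for example, from a previous configuration), you could inspect its physvol before formatting by using the same show syntax that appears later in this chapter; the physvol name below is a placeholder for whatever name XVM reports:

dmfserver# xvm -domain local show -v phys/physvol_name | grep -A5 "Physvol Usage"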
For more information, see “Naming Conventions” in Chapter 1 and the ov_copan(8) man page.
When you format a RAID set for DMF use, the formatting process initializes an XFS filesystem on the RAID set.
Note: This filesystem is reserved exclusively for use with DMF; do not modify it directly.
To format RAID sets on a shelf, use the following command:
ov_copan format device_or_shelfID [-N shelfID] [-m raidlist] [-t type [-r reserved_nonDMFsize]]
where:
device_or_shelfID is the device name associated with a shelf (such as /dev/sg4), or, after a shelf has been named in ov_copan, the shelf identifier (such as C00). See “Matching Device Names to MAID Shelves”.
Note: You should use the recommended naming convention unless the ov_shelf precheck command found potential conflicts, as documented in “Selecting Appropriate Cabinet Identifiers”.
shelfID is the shelf identifier (such as C00). You will only use the -N option the first time that you format a RAID set for a given shelf.
raidlist is the list of RAID sets on the shelf to be acted upon. By default, all RAID sets are chosen. raidlist is a comma-separated list of ranges, such as:
A,D-F,L-P
type is the type of formatting, either:
dmf for DMF-managed filesystems and filesystems for DMF administrative directories (default)
other for non-DMF regions, such as regions used for copies from the daily DMF backup
Note: To use a RAID set for backups, the filesystem must not be managed by DMF.
If you specify dmf, you can choose to reserve a portion of the disk for non-DMF use by specifying -r. If you do not specify -r, the entire disk is formatted for DMF use.
If you specify other, the entire disk is formatted for non-DMF use.
reserved_nonDMFsize is the amount of the disk to be reserved for non-DMF use. (The -r option only applies when the type is dmf.) By default, no space is reserved for non-DMF use. If you specify -r, you must specify a size. You can specify the following units (the default unit is b):
b = bytes
s = sectors (512 bytes)
k = kilobytes (2^10 bytes)
m = megabytes (2^20 bytes)
g = gigabytes (2^30 bytes)
t = terabytes (2^40 bytes)
p = petabytes (2^50 bytes)
e = exabytes (2^60 bytes)
% = percentage
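For example, the following illustrative command would format RAID set Z of a shelf for DMF while reserving 25% of the disk for non-DMF use (the shelf ID, RAID set, and percentage are examples only):

dmfserver# ov_copan format shelfID -m Z -r 25%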
A DMF volume must be at least 512 MB, and there can be at most 1,296 DMF volumes in a RAID set. To create DMF volumes on the formatted RAID sets of the specified shelf, use the following command:
ov_copan create shelfID [-m raidlist] [-c count_of_volumes] [-s size_of_volumes]
where:
raidlist is the list of RAID sets on the shelf to be acted upon. By default, all RAID sets are chosen. raidlist is a comma-separated list of ranges, such as:
A,D-F,L-P
count_of_volumes is the number of volumes to create on each of the specified RAID sets. Normally, you specify either -c or -s but not both.
size_of_volumes is the size of volumes to be created (using the same unit specifications as listed above for reserved_nonDMFsize). The value can contain a trailing modifier of "+" or "-", which means to adjust the given size up or down, respectively, in order to use the remaining available space on the RAID set.
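For example, the following illustrative command would create exactly 10 volumes on each of RAID sets A through C of a shelf, letting ov_copan choose the volume size (the shelf ID, RAID set range, and count are examples only):

dmfserver# ov_copan create shelfID -m A-C -c 10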
To format an entire shelf for use as DMF secondary storage, do the following:
Format all 26 RAID sets for the shelf and name the shelf.
For example, to format all 26 RAID sets for device /dev/sg4 and name the shelf C00 (because device /dev/sg4 equates to MAID shelf 0, as determined in “Matching Device Names to MAID Shelves”):
ownernode# ov_copan format /dev/sg4 -N C00
format /dev/sdj as C00A fmt DMF
format /dev/sdk as C00B fmt DMF
format /dev/sdl as C00C fmt DMF
format /dev/sdm as C00D fmt DMF
...
Verify that the RAID sets have been formatted.
For example, the following output shows that device /dev/sg4 (shelf 0) has the shelf name C00 and 26 RAID sets that are formatted for DMF:
ownernode# ov_copan list
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg4  COPANSYS 8814    COPANSYS0000000080900283 C00  26  0   0   0   0    16
0   /dev/sg71 COPANSYS 8814    COPANSYS0000000063800042 ---  0   0   0   26  0    16
0   /dev/sg99 COPANSYS 8814    COPANSYS0000000051200018 ---  0   0   0   26  0    16
Create DMF volumes on the shelf. The recommended size is 128 GB, specified with a trailing "+" or "-" to allow ov_copan to adjust the size to fill all available space in the DMF region. For example, the following will create volumes that are at least 128 GB:
ownernode# ov_copan create C00 -s 128g+
Verify that the volumes have been created. For example, the following output shows that there are 1,040 volumes on shelf C00 (26 RAID sets with 40 volumes each):
ownernode# ov_copan list
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg4  COPANSYS 8814    COPANSYS0000000080900283 C00  26  0   0   0   1040 16
0   /dev/sg71 COPANSYS 8814    COPANSYS0000000063800042 ---  0   0   0   26  0    16
0   /dev/sg99 COPANSYS 8814    COPANSYS0000000051200018 ---  0   0   0   26  0    16
If you want to put DMF backups on a MAID shelf, you must format an entire RAID set, or a portion of one, as non-DMF. Which method you choose depends upon the amount of space needed, as determined in “Determine the Backup Requirements for Your Site” in Chapter 1. The following subsections describe both methods.
To use physical tapes for DMF backups, see the DMF 6 Administrator Guide for SGI InfiniteStorage.
If you will store DMF backups on a RAID set, note the following:
The shelf containing the RAID set must be owned by the DMF server.
You may format an entire RAID set or reserve just a portion of a RAID set for non-DMF use. When formatting a given RAID set for both DMF and non-DMF use, the DMF region must be at least 200 GB. You must use the mkfs.xfs(8) and xvm(8) commands to make use of the non-DMF regions.
The backup task can make multiple copies of the backups (specified by the DUMP_MIRRORS parameter in the DMF configuration file). You can choose to increase redundancy by formatting multiple RAID sets to store backups.
SGI recommends that you leave the backup filesystem unmounted when not in use; the backup task will mount it automatically if it is in /etc/fstab.
Note: If the backup filesystem is always mounted, OpenVault will assume that the RAID set is in use and will throttle DMF's use of the shelf to avoid exceeding the power budget. The same is true for any non-DMF RAID set. SGI recommends that you dedicate an entire RAID set for use with DMF.
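For example, when you need to work with the backups manually, you might mount the backup filesystem only for the duration of the task and unmount it afterward (the /dmf/backups mount point matches the fstab examples later in this chapter):

dmfserver# mount /dmf/backups
dmfserver# umount /dmf/backups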
To dedicate one or more entire RAID sets on a shelf for backups, do the following:
Format most of the RAID sets on the shelf for DMF use by using the -m raidlist option.
For example, to format the first 25 RAID sets (named from A through Y) of device /dev/sg71 for DMF use and name the shelf C01 (for cabinet 0, shelf 1):
dmfserver# ov_copan format /dev/sg71 -N C01 -m A-Y
format /dev/sdj as C01A fmt DMF
format /dev/sdk as C01B fmt DMF
format /dev/sdl as C01C fmt DMF
format /dev/sdm as C01D fmt DMF
...
Format the remaining RAID sets entirely as non-DMF for use with backups. For example, to format the one remaining RAID set (Z) as a non-DMF region suitable for backups:
dmfserver# ov_copan format C01 -m Z -t other
format /dev/sdab as C01Z fmt Other
Verify by viewing the output from the list action. For example, the following output shows that for shelf C01, there are 25 RAID sets that are formatted for DMF use and 1 RAID set formatted for other use:
dmfserver# ov_copan list C01
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg71 COPANSYS 8814    COPANSYS0000000063800042 C01  25  1   0   0   0    16
Create DMF volumes on most of the RAID sets by using the -c or -s option. The recommended size is 128 GB, specified with a trailing "+" or "-" to allow ov_copan to adjust the size to fill all available space in the DMF region. For example, to create as many 128-GB volumes as will fit on RAID sets A-Y of shelf C01:
dmfserver# ov_copan create C01 -s 128g+
create 40 VSNs of size 128.92g on C01A
create 40 VSNs of size 128.92g on C01B
create 40 VSNs of size 128.92g on C01C
create 40 VSNs of size 128.92g on C01D
...
skip C01Z: not formatted for DMF
...
Verify that the volumes have been created. For example, the following output shows that there are 1,000 volumes on shelf C01 (25 RAID sets with 40 volumes each):
dmfserver# ov_copan list C01
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg71 COPANSYS 8814    COPANSYS0000000063800042 C01  25  1   0   0   1000 16
Create a local-domain XVM slice that covers the whole usable space of the non-DMF RAID set and name the XVM volume that will be used for backups. The physical volume (or physvol ) will be named with the RAID set identifier:
dmfserver# xvm -domain local slice -volname backup_volume_name -all phys/copan_RAID_set_ID
For example, to create a slice for the last RAID set in cabinet 0, shelf 1 (which has a RAID_set_ID of C01Z) and name the backup volume dmf_backups:
dmfserver# xvm -domain local slice -volname dmf_backups -all phys/copan_C01Z
</dev/lxvm/dmf_backups> slice/copan_C01Zs0
For more information, see the xvm(8) man page and the XVM Volume Manager Administrator Guide.
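Optionally, you could confirm that the new volume exists and covers the expected space by displaying it with xvm. The following sketch assumes the dmf_backups volume name from the previous step:

dmfserver# xvm -domain local show -v vol/dmf_backups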
Make the filesystem for the backup volume.
For example, to make the filesystem for dmf_backups in the /dev/lxvm directory:
dmfserver# mkfs.xfs /dev/lxvm/dmf_backups
meta-data=/dev/lxvm/dmf_backups isize=256    agcount=32, agsize=11443303 blks
         =                      sectsz=512   attr=2
data     =                      bsize=4096   blocks=366185696, imaxpct=5
         =                      sunit=0      swidth=0 blks
naming   =version 2             bsize=4096   ascii-ci=0
log      =internal log          bsize=4096   blocks=178801, version=2
         =                      sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                  extsz=4096   blocks=0, rtextents=0
For more information, see the mkfs.xfs(8) man page.
Add the filesystem to /etc/fstab with the noauto mount option. For example:
/dev/lxvm/dmf_backups /dmf/backups xfs defaults,noauto |
To reserve a portion of one RAID set for backups and allow DMF use on the remainder of that RAID set and all other RAID sets on the shelf, do the following:
Format most of the RAID sets on the shelf for DMF use by using the -m raidlist option.
For example, to format the first 25 RAID sets (named from A through Y) of device /dev/sg99 for DMF use and name the shelf C02 (for cabinet 0, shelf 2):
dmfserver# ov_copan format /dev/sg99 -N C02 -m A-Y
format /dev/sdj as C02A fmt DMF
format /dev/sdk as C02B fmt DMF
format /dev/sdl as C02C fmt DMF
format /dev/sdm as C02D fmt DMF
...
Reserve a portion of the remaining RAID sets for backups by using the -m and -r options.
For example, to reserve 500 GB for backups of the last RAID set (Z) of shelf C02:
dmfserver# ov_copan format C02 -m Z -r 500g
format /dev/sdac as C02Z fmt DMF
Verify by viewing the output from the list action. For example, the following shows that all 26 RAID sets are formatted at least in part for DMF use:
dmfserver# ov_copan list C02
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg99 COPANSYS 8814    COPANSYS0000000051200018 C02  26  0   0   0   0    16
Create DMF volumes on the DMF regions of the RAID sets by using the -c or -s option. The recommended size is 128 GB, specified with a trailing "+" or "-" to allow ov_copan to adjust the size to fill all available space in the DMF region. For example, to create as many 128-GB volumes as will fit on RAID sets A-Y and the DMF region of RAID set Z of shelf 2:
dmfserver# ov_copan create C02 -s 128g+
create 40 VSNs of size 128.92g on C02A
create 40 VSNs of size 128.92g on C02B
create 40 VSNs of size 128.92g on C02C
create 40 VSNs of size 128.92g on C02D
...
create 20 VSNs of size 129.48g on C02Z
Note: The size and number of volumes are different for C02Z because the DMF region on that RAID set is smaller.
Verify by viewing the output from the list action. For example, the following output shows 1,020 volumes on shelf C02 (25 RAID sets with 40 volumes each, plus 20 volumes on the partially reserved RAID set Z):
dmfserver# ov_copan list C02
LUN Device    Vendor   Product Serial Number            Name DMF OTH FOR UNF VSNs PB
0   /dev/sg99 COPANSYS 8814    COPANSYS0000000051200018 C02  26  0   0   0   1020 16
Find the sector where the slice will start by examining the output of the xvm show action:
dmfserver# xvm -domain local show -v phys/copan_RAID_set_ID | grep -A5 "Physvol Usage"
For example, for the C02Z RAID set:
dmfserver# xvm -domain local show -v phys/copan_C02Z | grep -A5 "Physvol Usage"
Physvol Usage:
Start        Length       Name
---------------------------------------------------
0            1880909568   slice/copan_C02Zs0
1880909568   1048576000   (unused)
For more information, see the xvm(8) man page and the XVM Volume Manager Administrator Guide.
Create a local-domain XVM slice that begins at the start of the unused region displayed in the previous step and name the XVM volume that will be used for backups. The physical volume (or physvol) will be named with the RAID set identifier:
dmfserver# xvm -domain local slice -volname backup_volume_name -start sector phys/copan_RAID_set_ID
For example, to create a slice for the last RAID set in shelf 2 (which has a RAID_set_ID of C02Z) beginning at sector 1880909568 and name the backup volume dmf_backups:
dmfserver# xvm -domain local slice -volname dmf_backups -start 1880909568 phys/copan_C02Z
</dev/lxvm/dmf_backups> slice/copan_C02Zs1
Make the filesystem for the backup volume. For example:
dmfserver# mkfs.xfs /dev/lxvm/dmf_backups
meta-data=/dev/lxvm/dmf_backups isize=256    agcount=16, agsize=8192000 blks
         =                      sectsz=512   attr=2
data     =                      bsize=4096   blocks=131072000, imaxpct=25
         =                      sunit=0      swidth=0 blks
naming   =version 2             bsize=4096   ascii-ci=0
log      =internal log          bsize=4096   blocks=64000, version=2
         =                      sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                  extsz=4096   blocks=0, rtextents=0
For more information, see the mkfs.xfs(8) man page.
Add the filesystem to /etc/fstab with the noauto mount option. For example:
/dev/lxvm/dmf_backups /dmf/backups xfs defaults,noauto |