This chapter discusses the following:
To delete a volume from a RAID set, do the following:
Determine if the volume has any data on it by executing the dmvoladm(8) command on the DMF server and examining the output for the DATA LEFT column:
dmvoladm -l libraryserver_name -c "list VSNlist"
For example, the following shows that C00A00 has data but C00A01 does not:
dmfserver# dmvoladm -l maid_ls -c "list C00A00-C00A01" DATA EOT EOT WR/FR VSN VOLGRP LB DATA LEFT WRITTEN CHUNK ZONE HFLAGS AGE -------------------------------------------------------------------------- C00A00 vg_c00 al 4847.408715 14932.408715 9 5 --------- 3d C00A01 vg_c00 al 0.000000 5500.000000 4 3 --------- 7d Database was not modified. |
If the volume is empty, skip to step 3.
If the volume has data on it, do the following on the DMF server:
Flag the volume as sparse:
dmfserver# dmvoladm -l LCPname -c "update VSN to hsparse on hro on" |
For example:
dmfserver# dmvoladm -l maid_ls -c "update C00A00 to hsparse on hro on" Updated 1 record. |
Verify the flag settings, which should show r and s. For example:
dmfserver# dmvoladm -l maid_ls -c "list C00A00" DATA EOT EOT WR/FR VSN VOLGRP LB DATA LEFT WRITTEN CHUNK ZONE HFLAGS AGE -------------------------------------------------------------------------- C00A00 vg_c00 al 4847.408715 4847.408715 9 5 ---r---s- 3d |
Wait for the library server to merge data off the volume. This will take some time, depending on the amount of data on the volume and the number of available drives. Repeat step 1 until there is no data left on the VSN. (A sketch that automates this polling appears after this procedure.)
Note: If the volume is damaged and the merge fails, use the dmemptytape(8) command to attempt to recover as much data as possible from the volume.
Delete the empty volume from the DMF database:
dmfserver# dmvoladm -l LCPname -c "delete VSN" |
For example, for VSN C00A01:
dmfserver# dmvoladm -l maid_ls -c "delete C00A01" Deleted 1 record. |
Delete the empty volume from the OpenVault database by using the ov_vol(8) command from the OpenVault server:
dmfserver# ov_vol -D -a dmf -v VSN
For example, if dmfserver is the OpenVault server:
dmfserver# ov_vol -D -a dmf -v C00A00
Volume deleted:
        volume name = 'C00A00', application name = 'dmf'
        cartridgeID = 'ooBC9k53kkIADV25', side = 'SideA', partition = 'PART 1'
Take note of the cartridge ID number for this VSN, which you will use in the following step.
Purge the cartridge ID from the OpenVault database by executing the ov_purge(8) command from the OpenVault server:
dmfserver# ov_purge -C 'CartridgeID'
For example, step 4 above shows that the cartridge ID for VSN C00A00 is ooBC9k53kkIADV25:
dmfserver# ov_purge -C 'ooBC9k53kkIADV25'
Are you sure you want to purge all information for cartridge with
cartridge ID = ooBC9k53kkIADV25 (Y/N)? y
Deleted partition PART 1
Deleted cartridge ooBC9k53kkIADV25
Delete the volume from a formatted RAID set by executing the delete operation of the ov_copan command on the node that owns the shelf:
ov_copan delete VSN
For example, if node1 owns shelf C00, to delete the volume with VSN C00A00:
node1# ov_copan delete C00A00
delete VSNs on C00A
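While waiting for the merge described above, you can poll the DATA LEFT column rather than rerunning the listing by hand. The following shell sketch is illustrative only and is not part of DMF; the maid_ls library server name, the VSN, and the ten-minute polling interval are assumptions to adjust for your site. Run it on the DMF server.

#!/bin/sh
# Illustrative sketch: wait for the library server to merge all data off a
# sparse volume by polling the DATA LEFT column (fourth field) of the
# dmvoladm listing until it reaches zero.  The library server name, VSN,
# and polling interval are examples only.
LS=maid_ls
VSN=C00A00
while :; do
    left=$(dmvoladm -l "$LS" -c "list $VSN" | awk -v v="$VSN" '$1 == v {print $4}')
    if [ -z "$left" ]; then
        echo "warning: no dmvoladm record found for $VSN" >&2
        break
    fi
    echo "$(date): DATA LEFT for $VSN is $left"
    # Treat a numeric value of zero as "merge complete"
    if awk -v x="$left" 'BEGIN { exit (x + 0 == 0) ? 0 : 1 }'; then
        break
    fi
    sleep 600
done
echo "$VSN is empty and can now be deleted from the DMF database"

Once the volume is empty, the remaining deletion steps can be chained in a single script. The following sketch is likewise illustrative only and is not a supported tool; it assumes the maid_ls library server, that the host on which it runs is the DMF server, the OpenVault server, and the owner of the shelf, and that the ov_vol output matches the format shown in the earlier example (from which the cartridge ID is extracted).

#!/bin/sh
# Illustrative sketch: remove an already-empty volume from the DMF database,
# the OpenVault database, and the formatted RAID set, in that order.
# Assumes this host is the DMF server, the OpenVault server, and the owner
# of the shelf, and that the ov_vol output format matches the example above.
set -e
LS=maid_ls          # library server name (example)
VSN=C00A00          # volume to remove (example)

# Delete the empty volume from the DMF database
dmvoladm -l "$LS" -c "delete $VSN"

# Delete the volume from the OpenVault database and capture the cartridge ID
out=$(ov_vol -D -a dmf -v "$VSN")
echo "$out"
cartid=$(echo "$out" | sed -n "s/.*cartridgeID = '\([^']*\)'.*/\1/p")
if [ -z "$cartid" ]; then
    echo "could not determine cartridge ID; purge it manually with ov_purge -C" >&2
    exit 1
fi

# Purge the cartridge ID from the OpenVault database.  The confirmation is
# piped in; if ov_purge reads the answer from the terminal rather than from
# standard input, answer the prompt manually instead.
echo y | ov_purge -C "$cartid"

# Delete the volume from the formatted RAID set (owner node only)
ov_copan delete "$VSN"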
After you delete all of the volumes on a RAID set (see “Deleting a Volume from a RAID set”), you can unformat the RAID set to remove all label and GPT partition information.
Enter the following:
# ov_copan unformat device_or_shelfID [-m raidlist]
For example, to unformat all RAID sets on shelf 1:
# ov_copan unformat C01
Device C01A is a DMF partition with 40 VSNs
Continuing will destroy existing data. Continue (y or n)? y
Device C01B is a DMF partition with 40 VSNs
Continuing will destroy existing data. Continue (y or n)? y
...
unformat C01A
unformat C01B
...
For example, to unformat RAID sets W and X on shelf 1:
ownernode# ov_copan unformat C01 -m W,X
Device C01W is a DMF partition with 40 VSNs
Continuing will destroy existing data. Continue (y or n)? y
Device C01X is a DMF partition with 40 VSNs
Continuing will destroy existing data. Continue (y or n)? y
unformat C01W
unformat C01X
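If you are decommissioning several shelves, the unformat can be run for each shelf in turn. The following sketch is illustrative only; ov_copan still asks for confirmation for each RAID set, so it is meant to be run interactively, and the shelf list is an example.

#!/bin/sh
# Illustrative sketch: unformat every RAID set on several shelves, one shelf
# at a time.  Run it on the node that owns the shelves, only after all of the
# volumes have been deleted; answer the ov_copan confirmation prompts by hand.
for shelf in C01 C02 C03; do
    echo "Unformatting RAID sets on shelf $shelf"
    ov_copan unformat "$shelf"
done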
Before you add a new COPAN MAID cabinet or additional shelves to your environment, you should verify that there are no existing OpenVault components that will conflict with the new shelf IDs you want to use. Do the following:
For more details, see “Selecting Appropriate Cabinet Identifiers” in Chapter 2.
To verify the existing OpenVault components for a formatted MAID shelf, enter the following from the node that owns the shelf:
ownernode# ov_shelf check shelfID
For example, if node1 owns shelf C07 and there are no errors, there will be no output:
node1# ov_shelf check C07
node1#
If there are errors, you may need to update the OpenVault configuration. See “Updating OpenVault Components After a Power Budget Change”.
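If you are adding shelves for a whole cabinet, you can run the check for each candidate shelf ID in a loop. The following sketch is illustrative only; it assumes candidate shelf IDs C00 through C07 and relies on the behavior noted above, where a clean check produces no output.

#!/bin/sh
# Illustrative sketch: check several candidate shelf IDs for existing
# OpenVault components before adding a new cabinet.  A clean check prints
# nothing, so any output is flagged for review.  The shelf IDs are examples.
for shelf in C00 C01 C02 C03 C04 C05 C06 C07; do
    out=$(ov_shelf check "$shelf" 2>&1)
    if [ -n "$out" ]; then
        echo "$shelf: ov_shelf check reported the following; review before use:"
        echo "$out"
    else
        echo "$shelf: no errors reported"
    fi
done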
If you increase or decrease the power budget, run the following command to update the OpenVault components:
ownernode# ov_shelf update shelfID
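If the power budget change affects more than one shelf, you can update and then re-check each shelf in one pass. The following sketch is illustrative only; the shelf list is an assumption, and it must be run on the node that owns those shelves.

#!/bin/sh
# Illustrative sketch: after a power budget change, update the OpenVault
# components for each affected shelf and then verify them with ov_shelf
# check, which prints nothing when there are no errors.  The shelf IDs are
# examples; run this on the node that owns the shelves.
for shelf in C00 C01 C02 C03; do
    echo "Updating OpenVault components for shelf $shelf"
    ov_shelf update "$shelf"
    out=$(ov_shelf check "$shelf" 2>&1)
    if [ -n "$out" ]; then
        echo "$shelf: ov_shelf check reported:"
        echo "$out"
    fi
done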
If you want to remove the OpenVault components for a shelf, execute the following command from the node that owns the shelf:
ownernode# ov_shelf delete shelfID
To also delete any cartridge/partition/volume/side records, include the -c option:
ownernode# ov_shelf -c delete shelfID
Removing these records can be time-consuming. The ov_shelf command will report its progress by printing a dot character every few seconds. For example:
ownernode# ov_shelf -c delete C01
.....
If the command is successful, there will be no other output.
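To remove the components and cartridge records for every shelf in a cabinet, the delete can also be looped. The following sketch is illustrative only; it assumes shelves C00 through C07, all owned by the node on which it runs, and each deletion may take some time, as noted above.

#!/bin/sh
# Illustrative sketch: remove the OpenVault components and the cartridge/
# partition/volume/side records for every shelf in a cabinet.  Assumes
# shelves C00 through C07, all owned by this node; each delete may take a
# while and prints progress dots, as described above.
for shelf in C00 C01 C02 C03 C04 C05 C06 C07; do
    echo "Deleting OpenVault components for shelf $shelf"
    ov_shelf -c delete "$shelf"
    echo
done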
Note: For instructions about starting and stopping services in an HA environment, see High Availability Guide for SGI InfiniteStorage.
To stop all of the local LCPs and DCPs, along with the OpenVault server (if it is running on the node where you issue the command):
dmfserver# service openvault stop
Stopping OpenVault ...
OpenVault stopped
To stop individual LCPs or DCPs, run ov_admin on the owner node and use the relevant menu selections:
ownernode# ov_admin
Name where the OpenVault server is listening? [dmfserver]

   OpenVault Configuration Menu for server "dmfserver"

       Configuration on Machines Running LCPs and DCPs
        1 - Manage LCPs for locally attached Libraries
        2 - Manage DCPs for locally attached Drives
...
For more information, see the ov_admin(8) man page.