Under IRIX or SGI ProPack 6 for Linux, the XVM snapshot feature provides the ability to create virtual point-in-time images of a filesystem without causing a service interruption. The snapshot feature requires a minimal amount of storage because it uses a copy-on-write mechanism that copies only the data areas that change after the snapshot is created.
Snapshot copies of a filesystem are virtual copies, not actual media backup for a filesystem. You can, however, use a snapshot copy of a filesystem to create a backup dump of a filesystem, allowing you to continue to use and modify the filesystem while the backup runs.
You can also use a snapshot copy of a filesystem to provide a recovery mechanism in the event of data loss due to user errors such as accidental deletion. A full filesystem backup, however, is necessary in order to protect against data loss due to media failure.
Caution: Do not mount an XVM snapshot volume or an XVM snapshot base volume for use with SGI DMF or any other DMAPI-compliant hierarchical storage manager, and do not run incremental dumps of an XVM snapshot filesystem. If you require either of these features, contact your SGI system support engineer (SSE) or other authorized support organization representative.
Note: Use of the snapshot feature of the XVM Volume Manager requires a FLEXlm license on IRIX or an LK license on SGI ProPack for Linux. The snapshot feature is supported in the local domain only.
To create snapshot volumes of a filesystem, use the following procedure:
Before creating a snapshot, you must set up an XVM volume to use as a repository volume, in which original copies of regions of data that have changed on the filesystem are stored. You must then create the filesystem that will be used as the repository; you can do this with the XVM repository command, as described in “Setting up a Repository Volume”.
The repository may be used for more than one base volume within the same domain.
Create the snapshot of the filesystem. You must use the same repository volume to create additional snapshots of the same filesystem. You can use a different volume for snapshots of other filesystems.
When you create a snapshot, XVM creates a snapshot volume, which is a virtual volume containing the regions that have changed in the base filesystem volume since the snapshot was created. To access the snapshot, mount the snapshot volume.
There are two volume element types that are specific to the snapshot feature: the snapshot volume element and the copy-on-write volume element. Each snapshot volume contains a snapshot volume element below the subvolume. When you create a snapshot volume, a copy-on-write volume element is inserted below the subvolume in the base volume.
These procedures are described in “XVM Snapshot Administration”.
You can also perform the following procedures when administering XVM snapshot volumes:
Grow the repository
Delete oldest snapshot
List current snapshots
Show available repository space
Delete a repository volume
The procedures are described in “XVM Snapshot Administration”.
This section provides information on the following tasks:
Setting up a repository volume
Growing a repository volume
Creating a snapshot volume
Deleting a snapshot volume
Listing the current snapshots
Showing available repository space
Deleting a repository volume
To set up a repository volume, first you create an XVM logical volume. A repository volume can have any legal XVM topography.
The size of the XVM volume that you will need will depend on several factors:
The size of the filesystem for which you are creating a snapshot. A repository volume that is approximately 10% of this size could be a starting estimate.
The volatility of the data in the volume. The more of the data that changes, the more room you will need in the repository volume.
The length of time you will be keeping each snapshot before deleting it.
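The sizing factors above can be turned into a back-of-the-envelope estimate; the sketch below is an illustration only (the filesystem size is taken from this guide's stripedvol example, and the doubling factor for volatile data is an arbitrary assumption, not a recommendation). Sizes are in 512-byte blocks, the unit used by the xvm commands in this guide.

```shell
#!/bin/sh
# Rough repository sizing sketch, not an XVM tool.

FS_BLOCKS=106627968                 # size of the filesystem to snapshot
ESTIMATE=`expr $FS_BLOCKS / 10`     # ~10% starting estimate

# For highly volatile data or long-lived snapshots, scale the
# estimate up (doubling here is an illustrative factor).
VOLATILE=1
if [ $VOLATILE -eq 1 ]
then
    ESTIMATE=`expr $ESTIMATE \* 2`
fi
echo "repository estimate: $ESTIMATE blocks"
```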
After you have created the XVM logical volume that you will use for the repository, you must initialize the repository volume. You can do this with the following command, where repository_volume is the name of the XVM logical volume:
xvm:local> repository -mkfs vol/repository_volume
Caution: Executing the -mkfs option destroys data. This command should be used only once, to create the repository filesystem.
This command issues an mkfs command on the repository volume. The repository command no longer mounts the repository; opening any snapshot or base volume that uses the repository causes it to be mounted, and it is unmounted again when the last snapshot or base volume using it is closed.
Use the following command to create the snapshot, where basevolume is the name of the volume of which the snapshot is to be made and repository_volume is the volume to be used for the repository filesystem.
xvm:local> vsnap -create -repository vol/repository_volume vol/basevolume
The snapshot volume name is the base volume name with %n appended, where n is the snapshot number, starting with 0. For example, the first snapshot of volume basevol is basevol%0, the second snapshot is basevol%1, and so on.
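The naming scheme can be illustrated with a small shell sketch (basevol is a placeholder name, not a volume from this guide's examples):

```shell
# Snapshot names are the base volume name with %n appended,
# where n starts at 0 and increments with each snapshot.
BASE=basevol
SNAP0=$BASE%0
SNAP1=$BASE%1
echo $SNAP0 $SNAP1
# The corresponding device nodes appear under /dev/lxvm, e.g.:
echo /dev/lxvm/$SNAP0
```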
The default size of the data regions that are copied is 128 blocks (64k). You can set the region size when you create the first snapshot of a volume with the -regsize n parameter of the vsnap -create command, where n is the desired region size in filesystem blocks (in units of 512 bytes).
For information on setting snapshot region size, see “Snapshot Region Size and System Performance”.
You cannot change the region size after the first snapshot has been created. To change the region size, you must delete all snapshots of a volume (vsnap -delete -all vol) and create new snapshots.
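Since the region size is given in 512-byte filesystem blocks, the unit conversion for the default is a one-line calculation; a minimal sketch:

```shell
# Region size is specified in 512-byte filesystem blocks, so the
# 128-block default works out to 64 KiB.
REGSIZE_BLOCKS=128
REGSIZE_BYTES=`expr $REGSIZE_BLOCKS \* 512`
echo "$REGSIZE_BYTES bytes"
```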
You can delete the oldest snapshot of a volume with the following command:
xvm:local> vsnap -delete -repository vol/repository_volume vol/basevolume
The -all parameter deletes all snapshots on the indicated volumes.
You can display the current snapshot and copy-on-write volume elements with the show command.
The following command lists the snapshot volume elements:
xvm:local> show snapshot
The following command lists the copy-on-write volume elements:
xvm:local> show copy-on-write
You can use the df command to view the available repository space (if you are running on a Linux system, you must mount the filesystem before running the df command, and unmount it afterwards):
df /dev/lxvm/volname
The following example shows the results of a df command on a repository filesystem:
bayern2 # df /dev/lxvm/dks0d4s0
Filesystem            Type  blocks      use     avail     %use  Mounted on
/dev/lxvm/dks0d4s0    xfs   35549600    1376    35548224  1
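As a quick sanity check on the figures in the example output, the avail column is simply blocks minus use (values copied from the df output above):

```shell
# df reports sizes in 512-byte blocks; avail = blocks - use.
BLOCKS=35549600
USE=1376
AVAIL=`expr $BLOCKS - $USE`
echo "$AVAIL blocks available"
```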
To delete a repository volume, first remove the repository designation from the volume. You can then delete the volume as you would a standard XVM volume, as described in “Deleting Volume Elements: Using the delete Command” in Chapter 4.
Use the following command to remove the repository designation and to disable a repository volume named repository_vol:
xvm:local> repository -delete repository_vol
This section provides an example of basic snapshot configuration. It includes the following procedures:
Configure an XVM logical volume and create and mount the filesystem. In this example, the volume is named stripedvol.
Create an XVM volume to use as the repository volume for snapshots of stripedvol.
Create a snapshot of stripedvol.
This example uses the first configuration example provided in Chapter 5 of the XVM Volume Manager Administrator's Guide, “Creating a Logical Volume with a Three-Way Stripe.” In this case, however, we will create the XVM logical volume as a local volume. Refer to the XVM manual for explanations of each step in this example.
This example creates a simple logical volume that stripes data across three disks, using the entire usable space of each disk to create a single slice on the disk.
# xvm
xvm:local> label -name disk0 dks2d70
disk0
xvm:local> label -name disk1 dks2d71
disk1
xvm:local> label -name disk2 dks2d72
disk2
xvm:local> slice -all disk*
</dev/lxvm/disk0s0> slice/disk0s0
</dev/lxvm/disk1s0> slice/disk1s0
</dev/lxvm/disk2s0> slice/disk2s0
xvm:local> stripe -volname stripedvol slice/disk0s0 slice/disk1s0 slice/disk2s0
</dev/lxvm/stripedvol> stripe/stripe0
xvm:local> show -top stripedvol
vol/stripedvol                  0 online
    subvol/stripedvol/data      106627968 online
        stripe/stripe0          106627968 online,tempname
            slice/disk0s0       35542780 online
            slice/disk1s0       35542780 online
            slice/disk2s0       35542780 online
xvm:local> exit
# mkfs /dev/lxvm/stripedvol
meta-data=/dev/lxvm/stripedvol   isize=256    agcount=51, agsize=262144 blks
data     =                       bsize=4096   blocks=13328496, imaxpct=25
         =                       sunit=16     swidth=48 blks, unwritten=1
naming   =version 2              bsize=4096   mixed-case=Y
log      =internal log           bsize=4096   blocks=1632
realtime =none                   extsz=65536  blocks=0, rtextents=0
You can now mount the filesystem:
# mkdir /stripedvol
# mount /dev/lxvm/stripedvol /stripedvol
Configure an XVM logical volume that you will use as the repository volume for snapshots of stripedvol.
Label the XVM physvol where the repository volume will reside. In this example, the XVM physvol is named repdisk.
# xvm
xvm:local> label -name repdisk dks3d70
repdisk
When creating the XVM volume that you will use as a repository volume, you should ensure that it is contained on a separate XVM physvol from the filesystem for which it will serve as a repository; otherwise you will see performance degradation.
Create the slice on repdisk that you will use for the repository volume. In this example, the slice is about 10% of the size of the stripedvol filesystem. Since stripedvol is 106627968 blocks, the repository volume earmarked for stripedvol is 10700000 blocks.
In this example, the volume that contains the slice is named reposvol.
xvm:local> slice -volname reposvol -start 0 -length 10700000 repdisk
</dev/lxvm/reposvol> slice/repdisks0
When you configure your repository volume, you should ensure that the volume exhibits the same general performance as the filesystem for which it will serve as a repository in terms of the underlying hardware. This helps avoid I/O bottlenecks when making repository copies of filesystem changes.
When determining the size of the repository volume, you should consider how much data will be changing and how often the data will change over the lifetime of the snapshot. You should take into account how many snapshot regions will be changing and the size of the snapshot regions. For information on setting snapshot region size, see “Snapshot Region Size and System Performance”.
Initialize the repository volume. This command also issues an mkfs command of the repository volume.
xvm:local> repository -mkfs reposvol
meta-data=/hw/vol/local/reposvol isize=256    agcount=8, agsize=167188 blks
data     =                       bsize=4096   blocks=1337500, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096   mixed-case=Y
log      =internal log           bsize=4096   blocks=1200
realtime =none
Caution: Executing the -mkfs option of the repository command destroys data. Use this option only once on a repository volume, when you create the repository filesystem.
Use the following command to create a snapshot of stripedvol in the repository volume reposvol:
xvm:local> vsnap -create -repository reposvol stripedvol
repository name = reposvol
writing all SBs
new uuid = 9db5572c-5fe4-1028-8e9e-08006911cbcb
The first snapshot volume that this command creates is stripedvol%0.
After you have created a snapshot, you can use the snapshot copy of a filesystem to create a backup dump of the filesystem, allowing you to continue to use and modify the filesystem while the backup runs.
The following commands create a backup dump using stripedvol%0:
# mkdir /snap
# mount /dev/lxvm/stripedvol%0 /snap
# xfsdump -f dumpfile_path /snap
It is possible to mount a snapshot within the base filesystem being snapped. For example, if /base is the mountpoint of the base filesystem, you could mount the snapshot at /base/snap.
For further information on creating backups using XVM snapshots, see “Snapshot Backup Considerations”.
This section provides an example of a shell script that creates hourly snapshots of a filesystem. In this example, snapshots are retained for twenty-four hours, after which the oldest snapshot is deleted before a new snapshot volume is created.
Note that XVM snapshots do not provide backup against media failure; a full filesystem backup is necessary to protect against this. You can use snapshot filesystems in the event of data loss due to user errors such as accidental deletion.
![]() | Note: A snapshot volume name is the base volume name with %n appended, where n is the snapshot number, starting with 0. The value of n will continue to increase with each new snapshot. To begin the numbering scheme over again, you must delete all snapshots of a volume. |
#!/bin/sh
#
# Create hourly snapshot and mount it.
#
# This script is intended to be called from a cronjob at certain times
# per day. It will cycle through 24 snapshots, allowing one snapshot per
# hour.
#
# Note that the name of the mount point is identified by the hour the
# script is run. If the script is run more than once in an hour, the
# first snapshot created that hour will be unmounted and the new snapshot
# will be mounted in its stead.
#
# This script will take care of creating the mount directory and mount
# points for the hourly snapshots, if they don't already exist. It will
# also not delete any snapshots until after the 24th snapshot. Thus,
# it's not necessary to run a different script for the first 24
# snapshots.
#
# To call this script:
#
#   sh ./snap basevol repvol regsize
#
# where basevol is the name of the base volume
#       repvol  is the name of the repository volume
#       regsize is the region size
#
# Once the first snapshot has been created, the repvol and regsize
# parameters will default to whatever was set for the first snapshot.
#
# If regsize isn't specified for the first snapshot, it defaults to 128.
#
BASEVOL=$1
REPVOL=$2
REGSIZE=$3
MAXSNAPSHOTS=24
MOUNTDIR=/snapshot
XVM=/sbin/xvm

if [ "`$XVM show -t vol/$BASEVOL 2> /dev/null | grep copy-on-write`" ]
then
    COW=`$XVM show -t vol/$BASEVOL 2> /dev/null | grep copy-on-write | awk '{print $1}'`
    if [ "x$REGSIZE" = "x" ]
    then
        REGSIZE=`$XVM show -v $COW 2> /dev/null | grep "blks/reg:" | awk '{print $10}'`
    fi
    FIRST=`$XVM show -v $COW 2> /dev/null | grep "first snap idx" | awk '{print $4}'`
    NEXT=`$XVM show -v $COW 2> /dev/null | grep "next snap idx" | awk '{print $8}'`
    if [ $FIRST -eq $NEXT ]
    then
        CUR=$NEXT
    else
        CUR=`expr $NEXT - 1`
    fi
    if [ $CUR -lt 0 ]
    then
        CUR=`expr $CUR + 65536`
    fi
else
    FIRST=0
    NEXT=0
    CUR=0
    if [ "x$REGSIZE" = "x" ]
    then
        REGSIZE=128
    fi
fi

SNAPSHOTS=`expr $NEXT - $FIRST`
if [ $SNAPSHOTS -lt 0 ]
then
    SNAPSHOTS=`expr $SNAPSHOTS + 65536`
fi

NEXTPATH=$1%$NEXT
CURPATH=$1%$CUR
FIRSTPATH=$1%$FIRST
OLDEST=/dev/lxvm/$FIRSTPATH
MOUNTPT=$MOUNTDIR/$BASEVOL.`date +%H`
NEWEST=/dev/lxvm/$NEXTPATH

# Now do the work.
#
# Make sure the mount point is there.
if [ ! -d $MOUNTDIR ]
then
    if [ -e $MOUNTDIR ]
    then
        echo "Snapshot directory $MOUNTDIR not a directory" >&2
        exit
    fi
    mkdir $MOUNTDIR
fi
if [ ! -d $MOUNTPT ]
then
    if [ -e $MOUNTPT ]
    then
        echo "Snapshot mount point $MOUNTPT not a directory" >&2
        exit
    fi
    mkdir $MOUNTPT
fi

# Unmount and delete the oldest snapshot, but only if we've already made the
# initial snapshots.
# Unmount $MOUNTPT in case we're out of sync
umount $MOUNTPT
if [ $SNAPSHOTS -ge $MAXSNAPSHOTS ]
then
    umount $OLDEST
    $XVM vsnap -delete vol/$BASEVOL
fi

# Now create a new snapshot.
$XVM vsnap -regsize $REGSIZE -repository vol/$REPVOL -create vol/$BASEVOL
mount $NEWEST $MOUNTPT
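One subtle point in the script above is the index arithmetic: XVM snapshot indices wrap modulo 65536, so a negative difference must be brought back into range. A minimal sketch of that wraparound handling in isolation:

```shell
# When the next snapshot index has just wrapped past 65535, the
# "current" index (next - 1) goes negative and must be brought back
# into the 0..65535 range.
NEXT=0
CUR=`expr $NEXT - 1`
if [ $CUR -lt 0 ]
then
    CUR=`expr $CUR + 65536`
fi
echo $CUR
```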
If your repository volume fills, you will not be able to perform I/O. You should take care to ensure that this does not occur. You can, however, determine in advance what action XVM should take in this circumstance, as described in “Determining System Behavior on Full Repository”.
You can use the df command to view the available repository space:
# df /dev/lxvm/reposvol
Filesystem            Type  blocks      use     avail     %use  Mounted on
/dev/lxvm/reposvol    xfs   10690400    288     10690112  1
If you find that your repository volume is filling up, you can grow your repository volume. To grow a repository volume, you add slices to the XVM logical volume by inserting a concat into the existing volume and then executing the xvm repository -grow command.
The following procedure grows the repository volume reposvol created in the procedure described in “Creating the Repository Volume”.
Display the logical volume reposvol, showing the topology of the volume:
xvm:local> show -top reposvol
vol/reposvol                  0 online,repository
    subvol/reposvol/data      10700000 online,repository
        slice/repdisks0       10700000 online,repository
Change the volume reposvol to include a concat container:
xvm:local> insert concat slice/repdisks0
</dev/lxvm/reposvol> concat/concat0
Display the results of the insert command:
xvm:local> show -top reposvol
vol/reposvol                  0 online,repository
    subvol/reposvol/data      10700000 online,repository
        concat/concat0        10700000 online,tempname,repository
            slice/repdisks0   10700000 online,repository
Create a free slice to attach to the concat. This example creates a second slice on repdisk, the same XVM physvol that contains the first slice.
xvm:local> slice -start 10700000 -length 10700000 repdisk
</dev/lxvm/repdisks1> slice/repdisks1
Attach the slice to reposvol.
xvm:local> attach slice/repdisks1 concat0
</dev/lxvm/reposvol> concat/concat0
Display the results of the attach command:
xvm:local> show -top reposvol
vol/reposvol                  0 online,repository
    subvol/reposvol/data      21400000 online,repository
        concat/concat0        21400000 online,tempname,repository
            slice/repdisks0   10700000 online,repository
            slice/repdisks1   10700000 online,repository
Grow the repository volume:
xvm:local> repository -grow reposvol
meta-data=/xvm/repositories/local/9db55708-5fe4-1028-8e9e-08006911cbcb isize=256 agcount=8, agsize=167188 blks
data     =                       bsize=4096   blocks=1337500, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096   mixed-case=Y
log      =internal               bsize=4096   blocks=1200
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 1337500 to 2675000
If your repository volume fills, you will not be able to perform I/O. This will likely lead to a filesystem shutdown when the xfs filesystem attempts to write metadata. You can, however, use the XVM change repfull command to determine in advance that XVM will delete the oldest snapshot of a volume when the repository volume fills, causing you to lose access to the snapshot but allowing the filesystem I/O to continue.
In short, when determining what the system should do when the repository volume fills, you can choose between one of these two options:
Delete the oldest snapshot in the repository volume, keeping the filesystem active
Shut the filesystem down but keep the existing snapshot
Caution: Use extreme care when determining whether the XVM volume manager will automatically delete a snapshot volume when the repository is full, as this may cause you to lose needed snapshot data. In general, you should ensure that the repository volume does not fill, as described in “Growing a Repository Volume”.
The following command configures the repfull parameter of the XVM volume stripedvol to specify that the oldest snapshot of stripedvol should be deleted automatically in order to free repository space when an I/O operation can't complete due to a full repository:
xvm:local> change repfull deloldest vol/stripedvol
vol/stripedvol
After setting this parameter, the oldest stripedvol snapshot will be deleted if the repository volume for stripedvol snapshots fills. Once enough space is freed, the I/O operation will continue. From that point, any I/O operation to the deleted snapshot will return an error, but the stripedvol filesystem will not shut down and there will be no xfs error message.
To specify that XVM return an error when I/O cannot be completed on stripedvol due to a full repository, use the following command:
xvm:local> change repfull error vol/stripedvol
vol/stripedvol
In this case, if the repository volume for stripedvol snapshots fills, XVM will return an error and it is likely that the stripedvol filesystem will eventually shut down.
The default repfull parameter for an XVM volume is error. The repfull parameter is associated with an XVM volume, so you can set it even if you have not created any snapshots for that volume or if you have deleted all the snapshots for that volume.
When you create a snapshot volume with the XVM vsnap -create command, the default size of the data regions that are copied to the snapshot volume is 128 blocks (64k). It may be possible to tune your system performance by setting the region size to a different value.
You can set the region size when you create the first snapshot of a volume with the -regsize n parameter of the vsnap -create command, where n is the desired region size in filesystem blocks (in units of 512 bytes).
You cannot change the region size after the first snapshot has been created. To change the region size, you must delete all snapshots of a volume (vsnap -delete -all vol) and create new snapshots.
You may want to take the following factors into consideration when determining the optimal region size for your snapshot:
If the XVM volume of which you are taking a snapshot is striped, the region size should be a multiple of the stripe unit.
You may need to take into account I/O size and ensure that the region size will not cross disk boundaries in an underlying RAID unit, just as you would when you are setting up the base volume for optimal performance.
If the data that is likely to change is not contiguous on the disk, a larger region size will cause unchanged data to be copied unnecessarily. If the data is contiguous on the disk, the region size can be larger.
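For a striped base volume, the first consideration above amounts to a divisibility check; a minimal sketch (both values are in 512-byte blocks, and the stripe unit of 16 here is an assumed example, matching the sunit shown in the stripedvol mkfs output):

```shell
STRIPE_UNIT=16    # stripe unit of the base volume, in blocks (assumed)
REGSIZE=128       # candidate region size, in blocks
REM=`expr $REGSIZE % $STRIPE_UNIT`
if [ $REM -eq 0 ]
then
    echo "region size is a multiple of the stripe unit"
else
    echo "region size would split stripe units" >&2
fi
```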
You should take the following factors into consideration when you are using xfsdump to perform backup dumps of a snapshot filesystem:
The XVM snapshot feature creates a unique UUID for each snapshot. In general, xfsdump stores the history of a filesystem's dumps based on its UUID. However, xfsdump is able to detect whether it is dumping a snapshot and treats a snapshot dump as a dump of the base filesystem. This means that xfsdump can perform incremental dumps on snapshots. The dump history is updated using the base filesystem's UUID, mount point, and character device rather than the snapshot's.
The xfsdump command includes a -x option that you can use if you want to dump a snapshot as a new filesystem rather than treating it as a dump of the base filesystem.
When dumping quotas using xfsdump, xfsdump stores the quotas in a file called xfsdump_quotas in the root of the filesystem. Since snapshots are generally mounted read-only, this precludes users from saving quota information when dumping snapshots. If quota information must be dumped, it should be possible to mount the snapshot read-write and perform the dump. No other changes should be made to the snapshot, however, so use this procedure with caution.
You should take the following factors into consideration when you are using a third-party backup application package to perform backup dumps of a snapshot filesystem:
Most backup packages identify a filesystem as a directory (which may or may not be a mount point). For this reason, snapshots must always be mounted in the same locations while they are being backed up.
Due to the way some backup packages determine whether or not to include a given file in an incremental backup, some files which should be in the incremental backup will be skipped. To minimize the chance of this occurring, the amount of time between when the snapshot is taken and when the backup is performed should be minimized. Most backup packages support running a pre-backup command. SGI recommends setting the pre-backup command to run a script to create and mount the snapshot.
You should take the following factors into consideration when you are using either xfsdump or a third-party backup application package to perform backup dumps of a snapshot filesystem:
If you are using a backup package you must ensure that you do not dump snapshots out of order.
In general, you should not use DMF in conjunction with the XVM Snapshot feature. Specifically, you should consider the following caveats:
DMF files in a snapshot should not be modified.
Snapshots of a DMF-managed filesystem should not be added to the DMF config file.
Offline DMF files in a snapshot are not recallable.
The only case where DMF and snapshot filesystems should be used together is when using xfsdump to back up the snapshot of a DMF-managed filesystem.