You configured a LUN initially when you created it, as described in Chapter 2. After you create a LUN, you can change its bind parameters, except for its RAID type, without unbinding the disk modules that constitute the LUN. You can also change the LUN read-caching (prefetch) parameters without unbinding; the array automatically set them to their default values when you created the LUN.
However, if you want to change the LUN RAID type or the number or capacity of the disk modules in the LUN, you must unbind the disk modules that constitute it, and then rebind the desired disk modules into another LUN.
After you configure a LUN, you might have to adjust parameters such as cache size or read-ahead caching parameters. This chapter has guidelines for this kind of fine-tuning.
This chapter contains the following sections:
![]() | Note: For optimum system performance, enable command-tagged queuing as explained in “Enabling Command-Tagged Queuing” in Chapter 2. |
You can use the LUN toolbar in the Array Configuration window for many steps explained in this chapter. Figure 3-1 shows LUN toolbar buttons and their functions.
You may need to move disk modules. Generally, modules should not be moved from one slot to another, but if moving one is absolutely necessary, the system operator or service person can do so, observing the following requirements:
Do not move disk modules in slots 00, 01, and 02, which contain the licensed internal code.
The disk module must be unbound. Moving a module that is part of a LUN to another slot makes all information on the LUN inaccessible. “Unbinding a LUN” explains how to unbind LUNs.
You must remove and install the disk module while the array is powered on. Use the procedure explained in the Origin FibreVault and Fibre Channel RAID Owner's Guide.
In a Fibre Channel RAID enclosure, you can choose any disk modules for a LUN of any RAID type without affecting the performance or high availability of the LUN. For simplicity, however, choose consecutive disk modules.
You can change these LUN parameters for any LUN except a hot spare without unbinding the LUN, provided that it is available to the operating system:
default SP (any LUN)
read and write cache state (any LUN except RAID 3)
rebuild time (RAID 5, 3, 1, or 1/0 LUNs only)
verify time (RAID 5, 3, 1, or 1/0 LUNs only)
auto assignment state (any LUN)
![]() | Note: It is recommended that auto assign be disabled (unselected) for all LUNs except hot spares. |
minimum latency reads state (RAID 3 LUNs only)
prefetch (read-caching) parameters (any LUN)
![]() | Note: Changing LUN bind parameters does not affect the data stored on the LUN. You can determine LUN parameters except for prefetch by looking at the LUN Information window; see “Using the LUN Information Windows” in Chapter 4. |
This section explains how to change each of these parameters except default SP and prefetch. For information on changing the default SP, see “Transferring Control of a LUN (Manual Trespass)”; for information on changing prefetch parameters, see “Changing Prefetch (Read-Ahead Caching) Parameters”.
For details on the meaning of these settings, see “Planning the Bind” in Chapter 2.
You can change a specific LUN parameter from the Array Configuration window using options in the LUN menu, or you can use the Change Bind Parameters window to change several parameters at once.
In the Storage System Manager window, select the array with the LUN whose rebuild time, verify time, auto assignment state, minimum latency reads, read cache state, or write cache state you want to change.
Click the configure button at the left end of the toolbar, or choose Configure from the Array menu to open the Array Configuration window for the selected array. Figure 3-2 shows this window.
In the Array Configuration window, select the LUN whose parameters you want to change.
![]() | Tip: When you select a LUN, clicking the right mouse button in its LUN selection area brings up a menu with the same functions as the LUN toolbar buttons. |
Change the desired parameters: from the LUN menu, choose the appropriate option, such as Change Rebuild Time, and then select the desired value from the list that opens. Any change to the rebuild time takes effect immediately.
Choices that are not available for the type of the LUN you selected are grayed out; for example, minimum latency reads is not available for any LUN other than RAID 3.
![]() | Note: The LUN can use read or write caching only if the read or write cache, respectively, is enabled for its default SP and certain hardware requirements are met. See “Enabling or Disabling Array Caching”. |
Follow these steps:
In the Storage System Manager window, select the array with the LUN whose rebuild time, verify time, auto assignment state, minimum latency reads, read cache state, or write cache state you want to change. Open the Array Configuration window (see step 2 on page 66).
In the Array Configuration window, select the LUN whose parameters you want to change.
Choose Change Bind Parameters from the LUN menu. The Change Bind Parameters window for the selected LUN opens; Figure 3-3 shows an example.
![]() | Caution: Although you can use the Change Bind Parameters window to change the default SP, read the information in “Transferring Control of a LUN (Manual Trespass)” before doing so. The process requires powering the storage system off and then on again for the change to take effect. |
Change the desired parameter by clicking the appropriate button and selecting a value from the list.
Choices that are not available for the type of the LUN you selected are grayed out; for example, minimum latency reads is not available for any LUN other than RAID 3.
![]() | Note: The LUN can use read or write caching only if the read or write cache, respectively, is enabled for its default SP. See “Enabling or Disabling Array Caching”. |
When the bind parameters for the LUN are set as you want, click OK.
![]() | Note: Any change to the rebuild time takes effect immediately. |
This section explains how to transfer control of a LUN from the SP that owns it (the primary route to the LUN) to the other SP (the secondary route to the LUN). Transferring control of a LUN is also known as changing SP ownership of a LUN, or as manual trespass.
Use the procedure in this section when the system operator or service person has installed a second SP and you want to assign some of the LUNs to the new SP. Another use is to balance LUNs between two SPs.
Depending on the type of server, you might want to use the procedure if any of the following failure situations occurs:
A failed SP has been replaced, and you want to transfer control to the working SP.
In an array connected to two host bus adapters, one adapter or the connection to it fails, and you want the working adapter to access the LUNs owned by the failed adapter.
One server in a dual-server configuration fails, and you want the working server to access the failed server's LUNs.
XLV volumes and the auto-assignment parameter can also transfer control of a LUN from one SP to another. For information on auto assignment, see page 47 in Chapter 2; for information on XLV volumes, see Getting Started with XFS Filesystems.
Transferring control of a LUN from one SP to the other can affect how the operating system accesses the LUN. However, any change you make in ownership does not take effect until the SP is rebooted.
To transfer control of a LUN (manual trespass), you can use these commands:
trespass: See “trespass” in Chapter 6. This procedure does not require rebooting.
scsifo: See its man page, scsifo(1M). This procedure does not require rebooting.
ssmgui: See the following steps. This procedure requires rebooting.
To use ssmgui to transfer control of a LUN, follow these steps:
Because this procedure requires rebooting the SP (either by powering the array off and then on again, or by using the rebootSP command), make sure all users are off the system before you begin and that no I/O operations are in progress.
In the Storage System Manager window, select the array with the LUN for which you want to change the default SP. Open the Array Configuration window (see step 2 on page 66).
In the Array Configuration window that opens, choose Change Default SP from the LUN menu.
In the list that appears, select the SP that you want to control the LUN.
In the confirmation window that opens, click Yes.
Unmount all filesystems associated with the LUN.
Power the array off and then on again, following procedures explained in the Origin FibreVault and Fibre Channel RAID Owner's Guide. Alternatively, you can use the rebootSP command; see “rebootSP” in Chapter 6.
Make the LUNs available to the server's operating system:
scsiha -lp controllernumber; ioconfig -f /hw
For example:
scsiha -lp 3; ioconfig -f /hw
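Putting the surrounding steps together, the server-side command sequence might resemble the following sketch. The mount point /raid/lun3, the controller number 3, and the assumption of a single filesystem on the LUN are hypothetical; substitute values from your own configuration.

umount /raid/lun3                 # quiesce and unmount the LUN's filesystem before changing the default SP
# ... change the default SP in ssmgui, then power-cycle the array or use rebootSP ...
scsiha -lp 3; ioconfig -f /hw     # rediscover the LUNs after the array comes back up
mount /raid/lun3                  # remount (assumes an entry in /etc/fstab)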
You can also change LUN ownership in the Change Bind Parameters window; see “Changing LUN Parameters in the Change Bind Parameters Window”.
Typically, you unbind a LUN only if you want to add disk modules to it or use its disk modules in a different LUN. In either of these situations, make sure that the LUN contains the disk modules you want. To determine the disk modules that make up a LUN, follow these steps:
In the Storage System Manager window, select the array with the LUN you want to unbind. Open the Array Configuration window (see step 2 on page 66).
In the LUN selection area, select the LUN whose disk modules you want to identify.
In the Disk field, each disk module in the LUN is surrounded by a box of the same color as the box surrounding the selected LUN.
You can also determine the disk modules in a LUN by using the LUN IDs. If the LUN IDs do not appear in any disk modules in the Disk field, choose Show LUN IDs from the View menu.
![]() | Caution: Unbinding a LUN destroys any data on it. Observe these precautions: |
Back up any data you want to retain from the LUN before beginning the unbind procedure.
If you plan to unbind a LUN, make sure no users are conducting I/O with any filesystems or partitions on the array (a server-side check is sketched after this list).
Unbind only one LUN at a time.
Unbinding the last LUN may prevent you from accessing the storage system.
If you unbind all of the LUNs and lose access to the storage system, see “Re-Establishing Communication With an Array” in Appendix A.
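Before you start the unbind, you can confirm from the server that nothing is mounted from the LUN. The following is a minimal sketch; it assumes the LUN holds a single XFS filesystem at the hypothetical mount point /raid/lun2.

df                    # list mounted filesystems and confirm which ones reside on the LUN
umount /raid/lun2     # unmount the LUN's filesystem; this fails if files on it are still in use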
To unbind a LUN, follow these steps:
Make sure that no I/O operations to this LUN are in progress. If appropriate, back up data from the LUN.
In the Storage System Manager window, select the array with the LUN that you want to unbind. Open the Array Configuration window (see step 2 on page 66).
In the LUN selection area, select the LUN that you want to unbind.
Click the unbind LUN button near the middle of the LUN toolbar, or choose Unbind from the LUN menu. A window displays a warning message and asks you to confirm the unbind operation.
If you are sure, click Yes in the warning window to confirm the operation.
A window opens telling you that the LUN is unbound. The LUN icon disappears from the LUN selection area of the Storage System Manager window.
If you want to change the LUN RAID type or the number or capacity of its disk modules, you must change the bound configuration of a LUN, which means that you destroy the LUN and recreate it.
To change the number of disk modules in a LUN, you must follow the same guidelines as you do to create a new configuration; see “Binding Disk Modules” in Chapter 2.
All disk modules in a LUN must have the same capacity to utilize disk space fully. Binding other types of LUNs with RAID 3 LUNs in the same array is not supported.
![]() | Caution: Before you change LUN type, you must unbind the LUN, which destroys any data on it. Observe these precautions: |
Back up any data you want to retain from the LUN before beginning the unbind procedure.
If you plan to unbind a LUN, make sure no users are conducting I/O with any filesystems or partitions on the array.
Unbind only one LUN at a time.
Unbinding the last LUN may prevent you from accessing the storage system.
If you unbind all of the LUNs and lose access to the storage system, see “Re-Establishing Communication With an Array” in Appendix A.
To change the LUN RAID type, follow these steps:
Make sure that no I/O operations to this LUN are in progress.
If appropriate, back up data from the LUN.
Unmount all affected filesystems associated with the LUN.
Unbind the LUN following instructions in “Unbinding a LUN”.
If you are adding new disk modules for the LUNs, you do not need to unbind them.
If you are moving data from RAID 3 LUNs to RAID 5 LUNs, enable mixed mode.
Move disk modules or install new disk modules if needed; follow instructions in the Origin FibreVault and Fibre Channel RAID Owner's Guide.
If you are using newly installed disk modules for the LUNs, copy the data from the old disk modules in the LUN onto them.
Bind the disk modules into the desired LUNs, following instructions in “Binding Disk Modules” in Chapter 2.
Make the newly created LUNs available to the operating system as described in “Making LUNs Available to the Server Operating System” in Chapter 2.
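As an illustration of the server-side work that surrounds these steps, assume the old LUN held an XFS filesystem mounted at the hypothetical mount point /raid/lun1 and that you back up to a file on another filesystem. The device name lundevice and all paths are placeholders, not values from your configuration.

xfsdump -l 0 -f /backup/lun1.dump /raid/lun1    # back up the data before unbinding
umount /raid/lun1                               # unmount before unbinding the LUN
# ... unbind, rebind, and make the new LUN available as described in the steps above ...
mkfs /dev/rdsk/lundevice                        # make a new XFS filesystem on the new LUN
mount /dev/dsk/lundevice /raid/lun1
xfsrestore -f /backup/lun1.dump /raid/lun1      # restore the data onto the new LUN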
You can replace any disk modules in an array with higher capacity modules, as long as you do not replace all three disk modules that contain the licensed internal code database at the same time. These disk modules (also known as database disk modules) have disk IDs 00, 01, and 02 in the first enclosure (DPE) in a chain, or the only enclosure in the array. At least two are required for DPE operation. If all are removed at the same time, contact with the array is lost.
![]() | Caution: Do not power off the array during this procedure. |
To install higher capacity disk modules, follow these steps:
Unbind the LUNs whose disk modules you want to replace, following instructions in “Unbinding a LUN”.
Replace all disk modules you want to replace.
If you are replacing the first three disk modules in a DPE (the first or only enclosure in a chain), replace the disk modules in slots 01 and 02, but leave the disk module in slot 00 in place.
Bind the disk modules into the desired LUNs, following instructions in “Binding Disk Modules” in Chapter 2.
![]() | Tip: Use a minibind in this case: ssmcli bind with the -z option. For information on the -z option, see page 146 in Chapter 6. |
When you bind the LUNs, the SP copies the licensed internal code from the 00 disk module onto the other two database disk modules (01 and 02).
This section contains the following topics:
At power-on, an array enables the read and write caches on each SP if the required hardware is working and the read and write cache partitions have nonzero sizes. Array read and write caching is enabled when the SP read and write caches are enabled. (For more information on caching, see “Caching” in Chapter 2.)
You can disable or enable array read or write caching without affecting the information stored on the LUNs:
Enable array read or write caching so that the array uses caching for LUNs whose read or write caches are enabled.
Disable the array read and write caches before changing the memory partitions and most of the LUN caching parameters.
Disable the array write caching before an SP is replaced.
You disable or enable array write caching by setting the state of each SP's write cache to Disable or Enable, respectively.
![]() | Note: Write cache state on one SP is independent of that on the other SP, so you must set the state of each cache separately. |
To disable or enable array write caching, open the Array Configuration window for the arrays whose write caching you want to disable or enable. (Figure 3-2 shows this window.)
Use the Array Configuration window as follows:
To disable write caching, click the write cache disable button near the left end of the array toolbar, or choose Array > Write Cache State > Disable.
To enable write caching, click the write cache enable button, or choose Array > Write Cache State > Enable.
To determine whether write caching is enabled or disabled, look at the Write Cache State field in the SP's Cache Information window; see “SP Cache Information” in Chapter 4.
It may take the array a while to disable write caching if the SPs need to write data in the write cache to disk. As a result, the Write Cache State field in the SP's Cache Information window may continue to show Enabled. Poll the array every few seconds to get the latest status: click the manual poll button (toward the right in the array toolbar, as shown in Figure 2-8) or choose Poll from the Array menu.
To disable read caching for an SP, such as SP A, click the SP A disable read cache button near the left end of the array toolbar, or choose Array > Read Cache State > SP A > Disable.
To enable read caching for an SP, such as SP A, click the SP A enable read cache button, or choose Array > Read Cache State > SP A > Enable.
To determine whether read caching is enabled or disabled, look at the Read Cache State field in the SP's Cache Information window; see “SP Cache Information” in Chapter 4. If desired, poll the array every few seconds to get the latest status: click the manual poll button (toward the right in the array toolbar, as shown in Figure 2-8) or choose Poll from the Array menu.
The cache is managed in pages rather than sectors. The cache page size specifies the number of kilobytes of storage in one cache page. Available page sizes are 2, 4, 8, and 16 KB; the default is 2 KB.
The larger the page size, the more contiguous sectors the cache stores in a single page; with 512-byte sectors, for example, an 8 KB page holds 16 contiguous sectors. Generally, page size should be 8 KB for general fileserver applications and 2 or 4 KB for database applications.
You can determine the cache page size by looking at either SP's Cache Information window; see “SP Cache Information” in Chapter 4.
To change cache page size, follow these steps:
Select the arrays whose cache page size you want to set or change.
Disable array write caching for the SPs in the array, as explained in “Disabling or Enabling Array Write Caching”.
Disable read caching for each SP in the array, as explained in “Disabling and Enabling Read Caching for an SP”.
In the Array Configuration window, choose Set Page Size from the Array menu.
In the menu that appears, select the value for the page size.
Re-enable array write and read caching.
The write cache high and low watermarks determine when the SPs start and stop flushing their write caches, respectively. When an SP flushes its write cache, it writes its dirty pages to disk. A dirty page is a write cache page with modified data that has not been written to disk.
The high watermark is the percentage of dirty pages in the write cache at which the SPs begin flushing the cache. The default value is 96 percent. If you specify a lower value, the SPs start flushing the write cache sooner.
The low watermark is the percentage of dirty pages at which the SPs stop flushing the write cache during a flush operation. The default value is 80 percent. For example, with the default watermarks, flushing begins when 96 percent of the write cache pages are dirty and continues until the proportion of dirty pages falls to 80 percent.
Note the following:
Available watermark values are 0% through 100%; default values are 96% for high watermark and 80% for low watermark.
The high watermark cannot be a smaller percentage than the low watermark.
To turn off watermark processing, set both the low and high watermarks to 100%.
To change the watermarks, follow these steps:
Check that all read and write caching for the array is disabled. See “Enabling or Disabling Array Caching”.
To change the high watermark:
In the Array Configuration window, choose Array > Set Watermark > High Watermark.
In the window that opens, enter the high watermark value you want.
To change the low watermark:
In the Array Configuration window, choose Array > Set Watermark > Low Watermark.
In the window that opens, enter the low watermark value you want.
Prefetching (read-ahead caching) allows the SP to anticipate the data that an application will request so that the SP can read the data into its read cache before the data is needed. The SP monitors I/O requests to each LUN for sequential reads; if it finds any, it prefetches data automatically from the LUN. You can define a specific type of prefetch operation by specifying the values of the prefetch parameters.
You set prefetch parameters in the Prefetch window; see Figure 3-4.
Prefetch parameters are as follows:
Prefetch type | Determines whether to prefetch data of a variable length (Variable, the default) or a constant length (Constant), or to disable prefetching (None). Your selection determines whether the Constant Parameters, Variable Parameters, or no parameters are accessible. Prefetching works best when the host issues many large sequential read requests to a physical unit. The SP monitors I/O for sequential reads; if it finds any, it prefetches the data automatically. If the amount of data to be read varies from request to request, specify variable prefetch type (the default). If the amount of data read is fairly constant, specify Constant. |
Retention | Determines whether prefetched data has equal or favored priority over host-requested data when the read cache becomes full. Specify Favor Prefetch for most applications. | |
Disable size | Determines when a read request is so large that prefetching data would not be beneficial; for example, if the amount of requested data is equal to or greater than the size of the read cache. Recommended sizes: 129 (the default), 257, 513, 1025, 2049. A prefetch operation is not performed for a read request if the amount of data requested is equal to or greater than the disable size. | |
Idle count | Amount of time in 100-ms units that a LUN must be below the idle threshold in order to be considered idle. For example, 40 equals 4000 ms. Once a LUN is considered idle, any dirty pages in the cache can begin idle time flushing. Determine the idle threshold with ssmcli getcontrol (see “getcontrol” in Chapter 6). Leave this parameter set to its default value of 40 (or set it higher). |
Constant Parameters: |
Prefetch Size: number of blocks of data to prefetch for each read request. |
Segment Size: number of blocks of prefetch data that the SP reads from the LUN in a single operation. An SP reads one segment at a time from the LUN because smaller prefetch requests interfere less with other host requests. |
Variable Parameters: |
Prefetch Multiplier: determines the amount of data to prefetch for each read request. The amount of data prefetched is equal to or less than the number of blocks of data requested multiplied by the prefetch multiplier; for example, with the default prefetch multiplier of 4, a sequential read request for 64 blocks results in up to 256 blocks being prefetched (never more than the maximum prefetch). |
Segment Multiplier: determines the amount of data to retrieve from the LUN in a single prefetch operation. Specify a factor equal to or less than the prefetch multiplier; if the segment multiplier is equal to the prefetch multiplier, prefetch operations are not divided into segments. |
Maximum Prefetch: maximum number of blocks to prefetch for variable-length prefetching. |
Table 3-1 summarizes values and ranges for prefetch parameters.
Table 3-1. Prefetch Parameter Values and Ranges
Parameter Type | Parameters | Valid Values
---|---|---
General | Prefetch Type | Constant, Variable (default), or None
| Retention | Equal Priority or Favor Prefetch
| Disable Size | 0 to 4097 sectors; recommended sizes: 129 (default), 257, 513, 1025, 2049
| Idle Count | 1 to 100; recommended: 40 (default) or higher
Constant | Prefetch Size | 0 to 2048 blocks (default: 4)
| Segment Size | 0 to 254 blocks (default: 4)
Variable | Prefetch Multiplier | 0 to 32 (default: 4)
| Segment Multiplier | 0 to 32 (default: 4); must be equal to or less than the prefetch multiplier
| Maximum Prefetch | 1 to 1024 blocks (default: 512)
Use the default values unless you are certain that the applications that access the LUN will benefit from different values. To change prefetch parameters, follow these steps:
Display the Array Configuration window for the array with the LUN whose parameters you want to set.
In the LUN selection area of the Array Configuration window, select the LUN whose parameters you want to change.
Select LUN > Prefetch > Change Prefetch Parameters. The Change Prefetch Parameters window appears; Figure 3-4 shows an example.
Change prefetch values according to the needs of the applications that access the LUN. Change them from the defaults only if you are certain that the applications will benefit from different values.
Do not change the idle count; leave it at the default of 40 (or set it higher).
When the parameters are the way you want them, click OK. In the confirmation window that appears, click Yes.
You can also change individual prefetch parameters using options in the Prefetch submenu of the LUN menu.
To reset all prefetch parameters to the defaults, choose LUN > Prefetch > Set Default Prefetch Values, or click Use Default Prefetch Values at the upper left corner (see Figure 3-4).
![]() | Note: The LUN cannot use read caching with the prefetch parameters you specify until you enable the SP's read cache, as explained in “Disabling and Enabling Read Caching for an SP”. |
You can use a LUN's cache information window to determine how caching and prefetching are affecting LUN performance. The LUN Cache Information window provides the following information:
the hit ratios for the read and write caches, which can tell you if caching is helping LUN performance
number of prefetched blocks and unused prefetched blocks, which tells you if prefetching is efficient for this LUN
number of forced flushes, which suspend other I/O
To display the LUN Cache Information window, see “Using the LUN Information Windows” in Chapter 4. To interpret the caching information, see “LUN Cache Information” in Chapter 4.
To upgrade an array to support caching, necessary hardware must be installed and you must set up array caching. Hardware requirements for caching are as follows:
Read caching requires a storage processor (SP) with at least 16 MB of read cache memory.
Write caching requires a Fibre Channel RAID enclosure with
a link controller card (LCC) for each SP in the Fibre Channel RAID enclosure
two functional power supplies in the Fibre Channel RAID enclosure
disk modules in slots 00 through 08 (no failed disk modules)
a fully functional (charged, plugged in, and connected) standby power supply with a fully charged battery
two SPs, each with at least 16 MB of write cache memory
Each SP must have the same number of DIMMs; all DIMMs for the read and write memory module locations must be the same size.
![]() | Note: Read cache DIMMs are physically separate from write cache DIMMs. |
The amount of write cache you can allocate depends on the SP part number; see Table 3-2 and Table 3-3.
Table 3-2. Cache Capability for SP 9470198
Cache Page Size | User-Configurable Write Cache | User-Configurable Read Cache | Cache Configuration
---|---|---|---
2 KB | 14 MB | 0 | 100% write cache
4 KB | 28 MB | 0 | 100% write cache
8 KB | 56 MB | 0 | 100% write cache
16 KB | 113 MB | 0 | 100% write cache
Table 3-3. Cache Capability for SP 9470207
Cache Page Size | User-Configurable Write Cache | User-Configurable Read Cache | Cache Configuration
---|---|---|---
2 KB | 194 MB | 0 | 100% write cache
4 KB | 388 MB | 0 | 100% write cache
8 KB | 504 MB | 0 | 100% write cache
16 KB | 504 MB | 0 | 100% write cache
Necessary hardware is installed by a qualified Silicon Graphics System Support Engineer; contact your authorized service provider.
To set up caching after the required hardware is installed, enable array write and read caching for the desired SPs, as explained in “Enabling or Disabling Array Caching”. Then enable read or write caching for the LUNs that are to use it, as explained in “Changing LUN Bind Parameters That Do Not Require Unbinding”; also see page 55 in “Binding Disk Modules” in Chapter 2.