This chapter explains how to use the RAID GUI to configure an array. It contains the following sections:
When you start a RAID GUI session and no server hostnames are in the ssmhosts file, the list of host identifiers in the window is empty. The Host Administration window appears, as shown in Figure 2-1.
(You can open the Host Administration window during an ssmgui session by choosing Select Hosts in the Storage System Manager window's File menu. In this situation, the list contains the hostnames of the servers with currently managed arrays. You can add or remove server hostnames from this list.)
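The ssmhosts file itself is plain text; in a typical installation it is simply a list of managed server hostnames, one per line. A hypothetical example (placeholder hostnames, with the file shown in the home directory as described below) might look like this:

```
% cat ~/ssmhosts
raidserver1
raidserver2.engr.example.com
```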
To add a server to the list, follow these steps:
In the Host Identifier field, type the server hostname.
Click Add.
Continue to add server hostnames as desired.
Servers on this list are called managed servers.
**Note:** If you include the same server under two names (for example, as machinename and as machinename.domain.com), the GUI opens each as a separate host, which thus takes twice as long as necessary.
When the list contains all the server hostnames you want, click OK. A window opens asking if you want to save the new host list in the ssmhosts file in the home directory.
Register your choice:
Click Yes if the list contains only servers with storage systems that you want to manage in most future ssmgui sessions.
Click No if you added servers that you want to manage in the current session only.
ssmgui creates an icon for each array that it finds connected to each server whose hostname you added to the list (unless the icon already exists). ssmgui displays these icons in the array selection area of the Storage System Manager window (see Figure 2-3). If an array is connected to two servers, only one icon appears.
Figure 2-2 shows a Host Administration window with a server hostname.
If ssmgui cannot communicate with a server you added to the list, it displays an error message.
To remove a server hostname, select it in the Host Administration window and click Remove. The RAID GUI asks for confirmation.
Save the list as explained in “Using the Host Administration Window to Add Servers”. ssmgui removes the icons for the arrays connected to that server, unless an array is also connected to another server on the list. It also removes the hostname from the Host Administration window.
This section explains how to use RAID GUI features to select an array to configure:
Use the array selection filter (Arrays Accessible By field) near the top of the Storage System Manager window to display only those arrays on a server. Figure 2-3 points out this portion of the Storage System Manager window, and other features.
Other ways to specify arrays are as follows:
To display the icon for a specific array for all managed servers, follow these steps:
**Note:** This display does not affect the list of managed servers and their arrays as listed in the Host Administration menu.
To display icons for all arrays for one managed server:
If the Arrays Accessible By field (see Figure 2-3) shows the hostname of the server whose arrays you want represented in the array selection area, do nothing.
If this field does not show the server hostname, pull down the selection list and select the desired server hostname. In the array selection area, ssmgui shows icons for arrays that it finds connected to this server.
To display icons for specific arrays for one managed server only, follow these steps:
If necessary, select the desired server hostname in the Arrays Accessible By field (see Figure 2-3). In the array selection area, ssmgui shows icons for arrays that it finds connected to this server.
In the array selection area, select the icon for each array that you do not want to manage.
From the Array menu, choose Unmanage. ssmgui removes the icons for the selected arrays from the array selection area.
To convert unmanaged arrays to managed arrays, follow these steps:
Choose Manage Arrays from the File menu. The Manage Arrays window opens, as shown in Figure 2-4.
If the Host Identifier field does not contain the name of the server with the unmanaged arrays that you want to start managing, pull down its selection list and select the server's hostname (or type it in the Host Identifier field). The names of the server's unmanaged arrays appear in the Unmanaged Arrays list on the left, as shown in Figure 2-4.
For each array you want to start managing:
Select the array name in the Unmanaged Arrays list.
Click the right arrow button. The array's name moves to the Selected Arrays list, on the right.
When the Selected Arrays list contains the names of only those unmanaged arrays that you want to start managing, click OK. An array icon for each selected array appears in the array selection area of the Storage System Manager window.
The array selection area (see Figure 2-3) contains an icon for each array specified by the array selection filter. The icon for an array consists of the array name and a graphic.
The default name for an array is determined by the host and device entries in the ssmagent.config file on the server connected to the array. You can change the array's default name by choosing Name in the Array menu or by using the name array button at the left end of the toolbar. Changing the name does not affect the configuration file.
The color of an icon for an array indicates its health:
Gray indicates that no failure is detected.
Amber indicates a fault in some part of the array.
The graphic for an array icon indicates the status of the array:
Accessible: ssmgui can communicate with the array.
Inaccessible: ssmgui cannot communicate with the array because its name is wrong in the ssmagent configuration file on its server, or because the ssmagent on its server was started by a user who was not logged in as root.
The icon for an inaccessible array has a shield over it:
Unsupported: The device entry in the ssmagent configuration file on its server is for a device that ssmgui does not support (for example, an internal disk on the server).
The icon for an unsupported array has a question mark over it:
Accessible, but faulted: ssmgui can communicate with the array, but one or more components is faulty. Chapter 5, “Identifying and Correcting Failures,” explains how to investigate and correct faulted components.
The icon for a faulted array is amber and contains an F:
Only one icon for each array appears in the array selection area, even if the array is connected to more than one server.
To determine if an array is connected to more than one server, use the Array Information button or choose Information from the Array menu. This process is explained in “Using the Array Information Window” in Chapter 4.
You can use the toolbar in the Storage System Manager window for many steps explained in this guide. Figure 2-5 shows toolbar buttons and their functions.
These buttons duplicate functions available in the Storage System Manager window's Array menu.
**Tip:** When you select an array, click the right mouse button in the array selection area to bring up a menu with the same functions as the toolbar buttons.
In the array selection area, click the icon for the array. A black box appears around the icon, as shown in Figure 2-6.
To display the Array menu when you select an array, press and hold the right mouse button. The black box appears around the icon, and the Array menu drops down to the right of the icon.
To select more than one array at a time:
Click the icon for one array, and hold down the Shift key and click the icon for each of the other arrays you want to select.
or
Drag the cursor to create a box around the icons for the arrays you want to select.
The black box appears around each icon you select.
In the Storage System Manager window, choose Configure from the Array menu. The Array Configuration window appears; Figure 2-7 shows important features.
The title bar identifies the array that is represented in the window.
The Array Configuration window has two toolbars, the array toolbar and the LUN toolbar. Figure 2-8 shows array toolbar buttons and their functions. (LUN toolbar buttons are shown in Figure 3-1 in Chapter 3.)
These buttons duplicate functions available in the Array Configuration window's Array menu.
**Note:** You can also use this window to reconfigure various parameters or to display information on the array or its components. These tasks are explained in Chapter 3, “Reconfiguring and Fine-Tuning,” and Chapter 4, “Monitoring Arrays and Displaying System Statistics,” respectively.
To use read or write caching or bind RAID 3 LUNs, you must have the required hardware, and specify the array memory partitions. This section explains
The read and write caches are dedicated DIMMs on the storage processor (SP); DIMMs are available in 8-MB and 128-MB sizes. With the maximum of four 128-MB DIMMs installed, the maximum for each type of cache is 512 MB.
You can enable or disable read cache for either SP in an array. The read cache memory that you allocate when you partition memory is shared by all the SP's LUNs for which the read cache is enabled. Read cache is independent on each SP.
Unlike read cache, write cache is always the same (mirrored) on both SPs in an array. You can enable and disable an SP's write cache or change its size; all such changes happen automatically for the array's other SP as well. The write cache memory is shared by all LUNs for which the write cache is enabled.
Besides the array caches, you can assign a specific cache to a particular LUN. However, LUN caching is not enabled until array caching is enabled.
Hardware requirements for caching are as follows:
Read caching requires an SP with at least 16 MB of read cache memory.
Write caching requires a Fibre Channel RAID enclosure with
a link controller card (LCC) for each SP in the Fibre Channel RAID enclosure, as well as in any FibreVault attached to it
two functional power supplies in the Fibre Channel RAID enclosure
disk modules in slots 00 through 08 (no failed disk modules)
a fully functional (charged, plugged in, and connected) standby power supply with a fully charged battery
two SPs, each with at least 16 MB of write memory
Each SP must have the same number of dual in-line memory modules (DIMMs); all DIMMs for the read and write memory module locations must be the same size.
**Note:** Read cache DIMMs are physically separate from write cache DIMMs.
The amount of write cache you can allocate depends on the SP part number; see Table 2-1 and Table 2-2.
Table 2-1. Cache Capability for SP 9470198
Cache Page Size | User-Configurable Write Cache | User-Configurable Read Cache | Cache Configuration |
---|---|---|---|
2 KB | 14 MB | 0 | 100% write cache |
4 KB | 28 MB | 0 | 100% write cache |
8 KB | 56 MB | 0 | 100% write cache |
16 KB | 113 MB | 0 | 100% write cache |
Table 2-2. Cache Capability for SP 9470207
Cache Page Size | User-Configurable Write Cache | User-Configurable Read Cache | Cache Configuration |
---|---|---|---|
2 KB | 194 MB | 0 | 100% write cache |
4 KB | 388 MB | 0 | 100% write cache |
8 KB | 504 MB | 0 | 100% write cache |
16 KB | 504 MB | 0 | 100% write cache |
Necessary hardware is installed by a qualified Silicon Graphics System Support Engineer; contact your authorized service provider.
For RAID 3 LUNs, each SP must have at least as much memory as the sum of the following:
16 MB each for read and write memory locations; total 32 MB per SP
and
if possible, 15 MB of memory (8 MB minimum) allocated for each RAID 3 LUN you want to bind
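As a rough worked example of this sum (the LUN count here is hypothetical), two RAID 3 LUNs at the recommended 15 MB each call for about 62 MB of memory per SP:

```
# Hypothetical planning arithmetic: per-SP memory for RAID 3
# 16 MB read + 16 MB write = 32 MB, plus about 15 MB for each RAID 3 LUN
raid3_luns=2
expr 32 + $raid3_luns \* 15     # prints 62 (MB per SP)
```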
SP memory is divided into three memory banks:
front-end memory: read and write memory banks on the SP board (at least 16 MB in each)
back-end memory: available memory in the disk enclosures
**Note:** For a Fibre Channel storage system, the front end consists of the host, the SPs, and the communication between them, and the back end consists of the disk modules.
CPU or control RAM: the licensed internal code (LIC) running on the SPs uses 1 MB (mixed mode disabled) or 8 MB (mixed mode enabled) from the read memory and 1 or 8 MB from the write memory on the SP board, plus a dedicated memory bank on the board. (Mixed mode is explained later in this chapter, in “Enabling and Disabling Mixed Mode”.)
The LIC allocates the read cache partition to the back-end memory, the write cache partition to front-end memory, and the RAID 3 partition across both. The LIC uses the control RAM to manage the read and write caches and the RAID 3 partition (for example, it stores the cache page tables in the control RAM).
Memory that you can allocate to the read and write caches and the RAID 3 partition is the unallocated memory in the front-end and back-end memory; it is known as user free memory.
Note the following points about allocating memory partitions to specific memory areas:
Changing the RAID 3 partition size causes ssmgui to reboot the array. Rebooting restarts the SPs in the array, terminating all outstanding I/O to the array. If you plan to change RAID 3 partition size, make sure that no users are conducting I/O with any filesystems or partitions on the array, and unmount these filesystems or partitions. After the process is completed, remount the filesystems or partitions and notify the users.
The minimum memory required for accessing a RAID 3 LUN is 4 MB on each SP (2 MB each for the read and write caches).
The default amount of memory in the read and write caches is 15 MB each with mixed mode disabled (the setting as shipped). With mixed mode enabled, the default is 8 MB each.
The LIC uses 8 MB from read cache and 8 MB from write cache with mixed mode enabled. With mixed mode disabled (the default as shipped), it uses 1 MB from read cache and 1 MB from write cache.
You might not be able to allocate all of the user free memory to a specific partition.
For example, because the read cache partition is only in the back-end memory (disk storage), you cannot allocate user free memory in the front end to the read cache partition. Similarly, because the write cache partition is only in the front end, you cannot allocate any user free memory in the back end to the write cache partition.
You might not be able to increase the size of one partition without decreasing the size of another partition.
When you increase the size of a partition, ssmgui tries to assign user free memory to the partition you are increasing. When ssmgui allocates all the user free memory that is available for the partition, it attempts to take memory from another partition to accommodate your request. As a result, when you increase the size of one partition, the size of another partition might decrease. This size change is shown graphically in the Memory Partition window, as discussed on page 41 in “Partitioning Array Memory.”
For example, the size of the RAID 3 partition influences the size of the read cache partition, so that when you increase the size of the read cache partition, the size of the RAID 3 partition might decrease.
Note that when you decrease the size of a partition, the deallocated memory returns to the user free partition; the sizes of the other partitions do not increase.
You might not be able to allocate all user free memory to a partition if the page tables for the new partition sizes would not leave enough free control RAM for the LIC.
The size of the page tables for the read or write cache partition is affected by the cache page size as well as by the size of the partition. A small cache page size results in more cache pages, and thus larger page tables and less free control RAM for the LIC. Conversely, a large cache page size results in fewer cache pages, and therefore smaller page tables and more free control RAM for the LIC. (You can set the cache page size using the Set Page Size option in the Array menu of the Array Configuration window.)
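A small worked example may make this trade-off concrete. The cache size and page sizes below are only illustrative (113 MB is the largest write cache in Table 2-1; 2 KB to 16 KB are the page sizes listed there): for a fixed cache size, halving the page size doubles the number of pages the LIC must track.

```
# Hypothetical illustration: pages the LIC must track for a 113 MB cache
cache_kb=`expr 113 \* 1024`
for page_kb in 2 4 8 16
do
    echo "page size ${page_kb} KB -> `expr $cache_kb / $page_kb` pages"
done
```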
You partition memory for each SP using the Memory Partition window, which depicts the enclosure's two SPs and the amount of memory allocated to various functions.
Specifying memory partitions consists of these basic steps:
**Caution:** If you change the RAID 3 partition size, ssmgui reboots the array. Rebooting restarts the SPs in the array, terminating all outstanding I/O to the array. If you plan to change RAID 3 partition size, make sure no users are conducting I/O with any filesystems or partitions on the array, and unmount these filesystems or partitions. After the process is completed, remount the filesystems or partitions and notify the users.
Before you allocate memory, you must disable write and read caching for the SPs in the array. To disable caching, follow these steps:
In the Storage System Manager window, select the arrays whose SP memory you want to partition: click the configure array button near the left end of the array toolbar, or choose Configure from the Array menu.
To specify memory for one array, double-click the array icon to open the Array Configuration window.
In the Array Configuration window (see Figure 2-7), disable array write and read caching:
To disable write caching for both SPs, click the write cache disable button (second from left in the array toolbar, as shown in Figure 2-8), or choose Array > Write Cache State > Disable.
To disable read caching for SP A, click the SP A disable read cache button on the array toolbar, or choose Array > Read Cache State > SP A > Disable.
To disable read caching for SP B, click the SP B disable read cache button on the array toolbar, or choose Array > Read Cache State > SP B > Disable.
Disabling write caching can take a while if the SPs must write the data in the write cache to disk. Before proceeding to allocate memory, confirm that array write and read caching are disabled, as follows:
Click the SP information button for an SP in the array (near the right end of the array toolbar). The SP Information window for the SP opens; Figure 2-9 shows an example.
In the SP Information window, click Cache. The SP Cache Information window opens; Figure 2-10 shows an example.
Check the Read Cache State and Write Cache State entries; they should be Disabled.
Disabling write caching may take a while if the SPs need to write data in the write cache to disks. If the entry is still Enabled, poll the array every few seconds to get the latest status: click the manual poll button (toward the right in the array toolbar, as shown in Figure 2-8) or choose Poll from the Array menu.
When the Read Cache State and Write Cache State entries are Disabled, click Close to close the SP Information window.
Repeat these steps for the other SP in the array.
To allocate memory, follow these steps:
In the Array Configuration window, click the partition memory button, or choose Partition Memory from the Array menu. The Memory Partition window for the array opens, as shown in Figure 2-11.
Below the pie charts, the fields—Read Cache, Write Cache, RAID 3, Extended, LIC System, and User Free—show the amount of SP memory allocated to those partitions of SP memory.
User free memory is the amount of memory not already allocated to the other memory partitions; its default size is equal to the total memory minus that required for the LIC. Thus, the User Free field shows the amount of memory you can allocate to the other partitions. See “Memory Partitioning and SP Memory” for an explanation of the interaction and requirements of memory partitions.
If desired, and if the array has the necessary hardware as explained in “Hardware Requirements for Caching”, set SP A read cache partition size by dragging the slider until the desired number of megabytes appears in the field to the right of the slider, or by typing the size directly in the field. As you move the slider, the pie chart shows the amount of memory allocated to read cache. (Memory partitions are color-coded; the legend near the bottom of the window explains color coding.)
**Note:** Array read caching for SP A is disabled if the SP's read cache partition is 0 MB. It stays disabled until you allocate memory to the partition and enable caching as explained in “Enabling Caching”.
Follow the same procedure to set SP B read cache partition size. Read caches for the two SPs are independent of each other.
If desired, and if the array has the necessary hardware as explained in “Hardware Requirements for Caching”, set SP A write cache partition size by dragging the slider until the desired number of megabytes appears in the field to the right of the slider, or by typing the size directly in the field.
As you move the slider, the pie charts for both SPs change to show the portion of memory allocated to each SP's write cache partition, because the write cache partition must be the same size on each SP (unlike the read cache partition).
**Note:** Array write caching for SP A is disabled if the SP's write cache partition is 0 MB. It stays disabled until you allocate memory to the partition. Use a minimum of 2 MB for the write cache partition on each SP.
If desired, and if the array has the necessary hardware as explained in “Hardware Requirements for RAID 3 LUNs”, set RAID 3 partition size.
Drag the slider until the desired number of MB appears in the field to the right of the slider, or type the size directly in the field.
As you move the slider, the pie charts for both SPs change to show the portion of memory allocated to each SP's RAID 3 partition, because it must be the same size on each SP.
**Note:** If valid RAID 3 LUNs have been bound and you reduce RAID 3 memory to 0 MB, the RAID 3 LUNs are unowned and are no longer accessible after a reboot.
To put the memory allocation changes into effect, click OK at the lower left of the Memory Partition window. In the confirmation window that appears, click Yes to confirm the new sizes. The reboot starts.
After the reboot, remount any filesystems or partitions you unmounted.
If you have allocated memory to the Read Cache or Write Cache partition, use the Array Configuration window to enable read caching for SP A or SP B or array write caching:
To enable array write caching for both SPs, click the write cache enable button (at the far left of the array toolbar), or choose Array > Write Cache State > Enable.
To enable read caching for SP A, click the SP A enable read cache button or choose Array > Read Cache State >SP A > Enable.
To enable read caching for SP B, click the SP B enable read cache button or choose Array > Read Cache State >SP B > Enable.
You have set up the array(s) to perform read caching, write caching, or both. The array uses the default values for caching parameters. To change these settings, see Chapter 3, “Reconfiguring and Fine-Tuning.” Otherwise, proceed to the next section, “Planning the Bind.”
**Note:** After you enable read or write caching, the software informs you that enabling was successful, regardless of whether the array has the necessary hardware for such caching.
You may have to restart the agent on the server connected to the array whose memory you just partitioned; see the next section, “Planning the Bind.”
You must bind disk modules into LUNs so that the server's operating system can recognize them. Consider the following before you bind LUNs:
number of disk modules and their capacities
parameters for the LUN, such as SP ownership, caching, and so on
possible effect of bind operation on operations in progress (whether rebooting is required)
Table 2-3 summarizes the number of disk modules you can have in LUNs, and other important factors for binding.
Table 2-3. Binding Disk Modules
LUN Type | Disk Modules | Notes |
---|---|---|
Any LUN | Unbound disk modules only; all must have the same capacity | For LUNs except RAID 3, you can deallocate RAID 3 memory (see “Setting Up Array Memory for Caching or RAID 3 LUNs”). Binding RAID 3 LUNs with other LUN types in the same array is not supported. |
RAID 5 | 3 minimum | You can bind one less module per LUN than you eventually use by selecting an empty slot icon. However, the LUN operates in a degraded mode until a disk module is installed in the empty slot and the array integrates it into the LUN. You can select the modules in any order. |
RAID 3 | 5 or 9 | You can bind one less module per LUN than you eventually use by selecting an empty slot icon. However, the LUN operates in a degraded mode until a disk module is installed in the empty slot and the array integrates it into the LUN. You can select the modules in any order. Check RAID 3 memory before binding, as explained in “Partitioning Array Memory”. If you do not allocate adequate memory to the RAID 3 partition, the LUN is unowned. Binding other LUN types in the same array is not supported. |
RAID 1 | 2 | |
RAID 1/0 | Even number of modules | Disk modules are paired into mirrored images in the order in which you select them; see Figure 2-14 on page 53. |
RAID 0 | 3 minimum, 16 maximum | You can select the modules in any order. |
Individual disk | 1 | |
Hot spare | 1 | You cannot bind disk modules 0:0 through 0:8 as hot spares. The capacity of a hot spare must be at least as great as the capacity of the largest disk module that it might replace. For a RAID 3 LUN, only one hot spare is used in case of disk failure; a second hot spare is not used in case of a second disk failure. |
This section contains the following topics:
**Note:** Binding takes a while; how long varies with the type of SP and size of the disk modules. A RAID 3 or RAID 5 LUN might take more than four hours.
Using a menu choice in the Array menu of the Array Configuration window, you must enable mixed mode for the LUNs in an array or leave it disabled (the default), depending on LUN type; see Table 2-4.
If Menu Item Is... | Mixed Mode State Is... | RAID 5, 1, 1/0, 0 | RAID 3 and Hot Spares |
---|---|---|---|
Disable Mixed Mode | Enabled | These LUNs can be bound and accessed. | These LUNs can be bound and accessed. |
Enable Mixed Mode | Disabled | These LUNs can be bound, but not accessed until mixed mode is enabled. | These LUNs can be bound and accessed; RAID 3 operation is optimized. |
Note the following:
Mixed mode is disabled by default, which optimizes performance for the array's RAID 3 transfers. With mixed mode disabled (the menu item reads Enable Mixed Mode), you can bind other RAID types, but you cannot access them.
If you are binding or accessing RAID 5, 1, 1/0, or 0 LUNs, enable mixed mode (the menu item must read Disable Mixed Mode).
If RAID 5, 1, 1/0, 0 LUNs are bound when mixed mode is disabled, they are unowned; if you then enable mixed mode, they become owned.
If mixed mode is enabled, disabling it reboots the array's SPs and restarts the LIC. Before beginning this process (clicking Disable Mixed Mode), make sure that no users are conducting I/O with any filesystems or partitions on the array.
RAID 3 and hot spares work whether mixed mode is enabled or disabled, but RAID 3 is optimized if it is disabled.
Steps for these processes are included in the instructions for binding LUNs later in this chapter.
Arrays set up according to instructions in “Setting Up Array Memory for Caching or RAID 3 LUNs” use default values for low and high watermarks and for read caching prefetch parameters. To change these settings, follow instructions in “Binding Disk Modules”.
When you set up a LUN, it uses certain bind parameters:
Not all of these parameters apply to all RAID types, as summarized in Table 2-5.
Table 2-5. RAID Types and Bind Parameters
RAID Type | Default SP | Rebuild Time | Stripe (Element) Size | Verify Time | Auto Assign | Caching |
---|---|---|---|---|---|---|
RAID 5 | Required | Required | Required | Yes | Disable | Read and write |
RAID 3 | Required | Required | Not required | Yes | Disable | Not required |
RAID 1 | Required | Required | Not required | Yes | Disable | Read and write |
RAID 1/0 | Required | Required | Required | Yes | Disable | Read and write |
RAID 0 | Required | Not required | Required | No | Disable | Read, write, or both |
Individual disk | Required | Not required | Not required | No | Disable | Read, write, or both |
Hot spare | Not required | Not required | Not required | No | Not required | Not required |
The LUN number must be specified for all RAID types in Table 2-5. RAID 3 also requires minimal latency reads.
Table 2-6 summarizes the values that ssmgui sets for standard LUN parameters.
Table 2-6. Default LUN Parameters
RAID Type | Rebuild Time | Verify Time | Stripe Element Size (Sectors) | Disk Modules | Read and Write Cache | Auto Assign | Minimal Latency Reads |
---|---|---|---|---|---|---|---|
RAID 5 | 4 hours | 4 hours | 128 | 5 | Enabled | Disabled | N/A |
RAID 3 | 4 hours | 4 hours | N/A | 5 | N/A | Disabled | Disabled |
RAID 1 | 4 hours | 4 hours | N/A | 2 | Enabled | Disabled | N/A |
RAID 1/0 | 4 hours | 4 hours | 128 | 6 | Enabled | Disabled | N/A |
RAID 0 | N/A | N/A | 128 | 5 | Enabled | Disabled | N/A |
Individual disk | N/A | N/A | N/A | 1 | Enabled | Disabled | N/A |
For all LUN types, the default SP is determined by load balancing; if only one SP is connected, that SP is the default.
The bind procedure varies depending on the type of LUN you are binding. This section contains the following topics:
**Note:** Using LUN buttons in the Storage System Manager toolbar (see Figure 2-5) to bind LUNs is not recommended. Use them only under the following conditions:
You are not changing the mixed mode setting; see “Enabling and Disabling Mixed Mode”.
You are not changing any bind parameters from the defaults; see Table 2-6.
There are no performance considerations; disk modules are bound in order of availability and cannot be selected or specified.
You are using the standard number of disk modules for the LUN; see Table 2-6.
To set bind parameters for RAID 5, 1, 1/0, or 0, follow these steps:
If you plan to use caching, allocate memory as necessary; see “Setting Up Array Memory for Caching or RAID 3 LUNs”. You can deallocate any memory allocated to RAID 3 memory (binding RAID 3 memory and other types of LUNs in the same array is not supported). This process may involve rebooting.
To speed up binding, turn off automatic polling if it is on (see the field at the lower right of the Storage System Manager window): click the button at the far right of the Storage System Manager toolbar (as shown in Figure 2-8).
To see status information during binding, poll for it manually by clicking the manual poll button next to the automatic poll button in the toolbar, or choose Poll from the Array menu. To see the polling information, double-click the LUN icon in the Array Configuration window to open the LUN Information window (see “LUN Configuration Information” in Chapter 4 for an explanation of fields in the window).
In the Storage System Manager window, select the array whose disk modules you want to bind. You can select more than one array if you want to bind all their disk modules into the same type of LUN.
Double-click on the array to open its Array Configuration window.
In the Array menu of the Array Configuration window, make sure mixed mode is enabled. If you see the menu choice Disable Mixed Mode, then mixed mode is currently enabled; proceed to step 6.
If you see the menu choice Enable Mixed Mode, then mixed mode is currently disabled, and you must enable it.
Open the Bind LUNs dialog for the array(s): click the button near the middle of the LUN (lower) toolbar, or choose Bind LUN from the Array menu. Figure 2-12 shows an example of the Bind LUNs dialog.
Select disk modules for the LUN. If the array has disk modules in more than one enclosure, select the enclosures containing the disk modules you want to bind:
To select disk modules from all enclosures in the array: If the Unbound Disks field contains All Chassis, continue to step 8. If it contains the name of an enclosure, pull down its selection list and choose All Chassis.
To select disk modules from one enclosure in the array: If the Unbound Disks field shows the name of the enclosure you want, proceed to step 8. Otherwise, choose the enclosure name from the pull-down menu.
In the Unbound Disks area, select the disk modules that you want to bind into a LUN, and click the right arrow button. (Alternatively, you can use both mouse buttons to drag and drop the disk modules from the Unbound Disk area to the Bind Disks area.) Figure 2-13 shows disk modules selected for binding.
See Table 2-3 for the number of disks in various types of LUNs.
**Tip:** To select multiple consecutive disk modules, click the first disk module icon and drag the cursor over the other disk modules. (Alternatively, you can hold down the Ctrl key and click disk modules.)
If you move a wrong disk module to the Bind Disks area, select it and click the left arrow button to move it back to the Unbound Disks area. (Or drag and drop it into the Unbound Disks area.)
If you are binding a RAID 1/0 LUN, the order in which you select modules is important; Figure 2-14 diagrams this order.
In the RAID Type field, select the RAID type. The list displays only RAID types that are available for the number of disk modules you selected.
Make sure Auto Assign in the Options section of the Bind LUNs dialog is unselected (the default), as shown in Figure 2-15. For more information on auto assign, see page 47.
To change other bind parameters in the Bind LUNs dialog:
LUN ID: Select another LUN hexadecimal identifier (ID) from the LUN ID field.
The default LUN ID is the next hex number available, starting with 0 and ending with 1f. The list displays only numbers that are available. The default number is 0 for the first LUN that you bind, regardless of the number of SPs or servers attached to the array. The default number for the second LUN you bind is 1; for the third LUN, it is 2; for the fourth LUN, it is 3, and so on. You can specify a nondefault number if desired. After you bind a LUN with a nondefault number, the default number for the next LUN is the lowest number you skipped. The maximum number of LUNs is 32.
For hot spares, assign LUN numbers starting with the highest number available and continue downwards.
Rebuild Time: The default time of 4 hours is adequate for most situations. Generally, the rebuild takes as long as is required. For more information on rebuild time, see page 46.
If the LUN is not a RAID 0 LUN, individual disk, or hot spare, and you want a different rebuild time from that in the Rebuild Time field, enter the number of hours in the field or click the field list button and select the desired number of hours from the list that opens.
Actual rebuild time can differ significantly from the time you specify, especially for a RAID 1/0 LUN. Because a RAID 1/0 LUN with n disk modules can continue functioning with as many as n/2 failed disk modules, and only one disk module at a time is rebuilt, the actual rebuild time for such a LUN is the time you specify multiplied by the number of failed disk modules.
**Note:** For a rebuild time of greater than 4 hours, you must change the rebuild time after the LUN is bound, as explained in “Changing LUN Bind Parameters That Do Not Require Unbinding” in Chapter 3.
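To illustrate the RAID 1/0 arithmetic described above (the numbers are hypothetical): with a specified rebuild time of 4 hours and three failed modules in the LUN, the rebuild could take on the order of 12 hours, because the modules are rebuilt one at a time.

```
# Hypothetical estimate of total RAID 1/0 rebuild time
specified_hours=4
failed_modules=3
expr $specified_hours \* $failed_modules    # prints 12 (hours)
```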
Verify Time: If the LUN is not a RAID 0 LUN, individual disk, or hot spare, and you want a different verify time from that in the Verify Time field, enter the number of hours in the field or click the field list button and select the desired number of hours from the list that opens. For more information on verify time, see page 46.
**Note:** For a verify time of greater than 4 hours, you must change the verify time after the LUN is bound, as explained in “Changing LUN Bind Parameters That Do Not Require Unbinding” in Chapter 3.
Element Size: For a RAID 0, RAID 1/0, or RAID 5 LUN, if you want the LUN to have a stripe element size with a different number of sectors from the number in the Element Size field, select the desired number of sectors from the field list.
Generally, use the smallest stripe element size that rarely forces access to another stripe. The default stripe element size for RAID 5 is 128 sectors. Any size you choose should be an even multiple of 16 sectors; supported values are 4, 8, 16, 32, 64, and 128. For more information on element size, see page 47.
Read Cache: For LUNs other than RAID 3 LUNs or hot spares, to set the state of the read cache for the LUN, click the SP's read cache button near the left end of the array toolbar. The display indicates whether the read cache for the default SP is enabled or disabled for the LUN, as shown in Figure 2-16.
Enabling read cache for any type of LUN (except a RAID 3 LUN or hot spare) is recommended.
If you enable the default SP read cache for a LUN, caching occurs only when the array read cache is enabled for the default SP. You enable the array read cache for SP A from the Array Configuration window using the appropriate button (such as the SP A enable read cache button) on the array toolbar or by choosing, for example, Array > Read Cache State > SP A > Enable.
The read cache memory that you allocated when you partitioned memory is shared by all LUNs for which the read cache is enabled.
Write Cache: For LUNs other than RAID 3 LUNs or hot spares, to set the state of the write cache for the LUN, click the SP's write cache button near the left end of the array toolbar. The display indicates whether the write cache for the default SP is enabled or disabled for the LUN.
Enabling write cache for a RAID 5 LUN is highly recommended; enabling it for other LUN types for which it is possible is also recommended. Write cache is always mirrored and thus requires two SPs.
If you enable the default SP write cache for a LUN, caching occurs only when the array write cache is enabled. You enable the array write cache from the Array Configuration window using the write cache enable button on the array toolbar or by choosing Array > Write Cache State > Enable.
The write cache memory that you allocated when you partitioned memory is shared by all LUNs for which the write cache is enabled.
Default SP: If the LUN is not a hot spare, click the button for the other SP to change the LUN's default owner to that SP. This option is available only for arrays with two SPs. For more information on default SP, see page 47.
When all bind parameters for the LUN are the way you want them, click Bind at the lower left in the Bind LUNs dialog.
In the confirmation window that opens, click Yes to start the bind operation. A window opens stating that the bind operation was successfully initiated; click OK.
A blue icon for the LUN appears in the Unowned LUNs area in the Array Configuration window. A small letter T in the icon indicates its transitional state.
Binding takes a while; how long varies with the type of SP and size of the disk modules. (A RAID 5 LUN might take more than four hours.) When polling determines that the bind operation is completed, the LUN icon moves to the selection area for its default SP and becomes gray.
**Note:** To stop a bind that is in progress, use the RAID CLI rebootsp subcommand; see “rebootSP” in Chapter 6. Alternatively, you can use the GUI to remove all drives in the bind.
Once the LUN is bound, change the mixed mode setting from Disabled to Enabled: choose Enable Mixed Mode from the Array menu. (This menu choice appears only when mixed mode is currently disabled.)
In the confirmation window that opens, click Yes to enable mixed mode.
To set bind parameters for RAID 3 and hot spares, follow these steps:
Make sure the memory allocated for RAID 3 is as you want it; see “Setting Up Array Memory for Caching or RAID 3 LUNs”. This process may involve rebooting.
To speed up binding, turn off automatic polling if it is on; see step 2 on page 50.
In the Storage System Manager window, select the array whose disk modules you want to bind. You can select more than one array if you want to bind all their disk modules into the same type of LUN.
Double-click on the array to open its Array Configuration window.
In the Array menu of the Array Configuration window, look for the mixed mode menu choice:
If you see Enable Mixed Mode, then mixed mode is currently disabled; proceed to step 6.
If you see Disable Mixed Mode, then mixed mode is currently enabled; you must disable it.
If you must disable mixed mode, follow these steps:
**Caution:** Disabling mixed mode reboots the array's SPs and restarts the LIC.
Make sure that no users are conducting I/O with any filesystems or partitions on the array.
Click Disable Mixed Mode in the Array menu of the Array Configuration window.
In the warning window that appears, click Yes.
Select all the disk modules for the LUN; for a RAID 3 LUN, select either 5 or 9 modules. See the instructions at step 6 on page 51. When binding hot spares, you can select as many disks in as many enclosures as you like, and bind them all at the same time.
**Note:** Binding other types of LUNs with RAID 3 LUNs in the same array is not supported (except hot spares).
In the RAID Type field, select RAID 3 or Hot Spare. Only RAID types that are available for the number of disk modules you selected appear in the list.
For RAID 3 LUNs, make sure Auto Assign in the Options section of the Bind LUNs dialog is unselected (the default), as shown in Figure 2-16. For more information on auto assign, see page 47.
Change other bind parameters:
LUN ID: see page 53.
For hot spares, assign LUN numbers starting with the highest number available and continue downwards.
Verify Time: For a RAID 3 LUN, if you want a different verify time from that in the Verify Time field, enter the number of hours in the field or select the desired number of hours from the list that opens. For more information on verify time, see page 46.
**Note:** For a verify time of greater than 4 hours, you must change the verify time after the LUN is bound, as explained in “Changing LUN Bind Parameters That Do Not Require Unbinding” in Chapter 3.
Read Cache, Write Cache: Enabling these is not supported for RAID 3 LUNs or hot spares.
Minimal Latency Reads: To change this option, which is available only for RAID 3 LUNs, click Minimal Latency Reads in the Options section. For more information on minimal latency reads, see page 48.
Default SP: If the LUN is not a hot spare, click the button for the other SP to change the LUN's default owner to that SP. This option is available only for arrays with two SPs. For more information on default SP, see page 47.
If the application writing to the RAID 3 LUN is single-threaded and performance is more important than data integrity, you can increase its performance by enabling RAID 3 write buffering:
Display the Array Configuration window for the array whose RAID 3 write buffering you want to enable.
Determine whether the array's RAID 3 write buffering is enabled: click the array information window button near the right end of the array toolbar in the Array Configuration window, or choose Array Information from the Array menu. A window opens; Figure 2-17 shows an example.
If the array's RAID 3 write buffering is not already enabled, choose Enable RAID3 Write Buffering from the Array menu.
In the confirmation window that opens, click Yes.
(To disable RAID 3 write buffering, choose Disable RAID3 Write Buffering from the Array menu, and click Yes in the confirmation window.)
When all bind parameters for the LUN are the way you want them, click Bind at the lower left in the Bind LUNs dialog.
In the confirmation window that opens, click Yes to start the bind operation. A window opens stating that the bind operation was successfully initiated; click OK.
A blue icon for the LUN appears in the Unowned LUNs area in the Array Configuration window. A small letter T in the icon indicates its transitional state.
Binding takes a while; how long varies with the type of SP and size of the disk modules. (A RAID 3 LUN might take more than four hours.) When polling has determined that the bind operation is completed, the LUN icon moves to the selection area for its default SP and becomes gray.
**Note:** To stop a bind that is in progress, use the RAID CLI rebootsp subcommand; see “rebootSP” in Chapter 6. Alternatively, you can use the GUI to remove all drives in the bind.
Once the LUN is bound, change the mixed mode setting from Enabled to Disabled:
**Caution:** Disabling mixed mode reboots the array's SPs and restarts the LIC.
Make sure that no users are conducting I/O with any filesystems or partitions on the array.
Choose Disable Mixed Mode from the Array menu of the Array Configuration window.
In the confirmation window that appears, click Yes.
**Note:** A RAID 3 LUN can use only one hot spare in case of disk failure. If a second hot spare is available and a second disk module in the RAID 3 LUN fails, the LUN does not use the second hot spare.
When all LUNs are assigned, make the LUNs available to the server's operating system:
Reboot the system.
or
Command-tagged queueing (CTQ) allows multiple outstanding commands to a single SCSI target (that is, a LUN in a storage system), resulting in increased I/O performance.
The SP supports SCSI-2 queuing of requests for its LUNs. Requests are queued first-come, first-served: all available queue space can be consumed by requests sent to one LUN from one initiator, which causes a Queue Full status (unexpected SCSI status byte 0x28) to be returned for all other I_T_L SCSI selections. This condition continues until one of the outstanding requests completes and thus frees queue space. The SP can handle up to 250 CTQs.
If Queue Full status is returned for a given I/O request, that request is retried. If the request cannot be sent to the SP after four retries, it is aborted, which, in the case of a write request, can have unfortunate consequences.
When CTQ is enabled (with fx) for a given LUN, the default CTQ depth for the LUN is 2; this value is stored in the LUN's volume header. You must use fx to change this value. Table 2-7 defines the maximum CTQ depth values per LUN for single-hosted SPs and dual-hosted SPs.
Table 2-7. Maximum CTQ Depths per LUN
Number of LUNs | Single-Hosted SPs | Dual-Hosted SPs |
---|---|---|
1 | 250 | 126 |
2 | 125 | 62 |
3 | 62 | 31 |
4 | 31 | 15 |
For optimum system performance, enable command-tagged queuing. Table 2-8 shows performance benefits of CTQ.
Table 2-8. CTQ Performance Benefits for 2 KB Random Read, 16 Threads
CTQ | avque | r+w/s | blks/s | w/s | wblks/s | avwait (μs) | avserv (ms) |
---|---|---|---|---|---|---|---|
Disabled | 16.0 | 69 | 137 | 0 | 0 | 217.6 | 14.5 |
Enabled | 16.0 | 304 | 607 | 0 | 0 | 48.8 | 3.3 |
The fx program syntax is as follows.
```
fx -x "controllertype(controller_number,drive_number,lun_number)"
```
Follow these steps:
Enter the fx command with appropriate parameters; for example:
```
fx -x "dksc(6,2,2)"
```
Output such as the following appears:
```
fx version 6.4, Aug 3, 1998
...opening dksc(6,2,2)
...controller test...OK
Scsi drive type == SGI RAID 5 0757

fx: Warning: bad sgilabel on disk
creating new sgilabel

----- please choose one (? for help, .. to quit this menu)-----
[exi]t               [d]ebug/             [l]abel/             [a]uto
[b]adblock/          [exe]rcise/          [r]epartition/       [f]ormat
```
Update parameters; at the fx> prompt, enter
```
fx> /label/set/param
```
Output such as the following appears:
```
fx/label/set/parameters: Error correction = (enabled)
fx/label/set/parameters: Data transfer on error = (enabled)
fx/label/set/parameters: Report recovered errors = (enabled)
fx/label/set/parameters: Delay for error recovery = (enabled)
fx/label/set/parameters: Err retry count = (0)
fx/label/set/parameters: Transfer of bad data blocks = (enabled)
fx/label/set/parameters: Auto bad block reallocation (write) = (enabled)
fx/label/set/parameters: Auto bad block reallocation (read) = (enabled)
fx/label/set/parameters: Read ahead caching = (disabled)
```
At the Enable CTQ prompt, enter enable:
```
fx/label/set/parameters: Enable CTQ = (disabled) enable
```
At the CTQ depth prompt, enter 10:
```
fx/label/set/parameters: CTQ depth = (2) 10
```
Output such as the following appears.
```
fx/label/set/parameters: Read buffer ratio = (0/256)
fx/label/set/parameters: Write buffer ratio = (0/256)

* * * * * W A R N I N G * * * * *
```
At the prompt that follows the warning, enter yes:
```
about to modify drive parameters on disk dksc(6,2,2)! ok? yes
```
The following output appears:
```
----- please choose one (? for help, .. to quit this menu)-----
[exi]t               [d]ebug/             [l]abel/             [a]uto
[b]adblock/          [exe]rcise/          [r]epartition/       [f]ormat
```
Type exit to exit fx. The following message appears:
```
label info has changed for disk dksc(6,2,2). write out changes? (yes)
```
Type y to write the changes to disk.
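If you want to confirm the new settings, one simple check (a sketch is shown below, using the same dksc(6,2,2) example device) is to run fx on the drive again and step back into the parameter prompts; each prompt displays the current value in parentheses, so Enable CTQ should now read (enabled) and CTQ depth (10). The exact prompts can vary slightly with the fx version, and the output below is abbreviated.

```
fx -x "dksc(6,2,2)"
...
fx> /label/set/param
...
fx/label/set/parameters: Enable CTQ = (enabled)
fx/label/set/parameters: CTQ depth = (10)
...
```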