This chapter describes how to bind disks into LUNs. It explains
moving disk modules
planning the bind
binding disks into RAID units
getting disk group (LUN) information
The chapter concludes with information on dual processors, load balancing, and device names.
Generally, disk modules should not be moved from one slot to another, but if moving one is absolutely necessary, the system operator or service person can move it with the following precautions:
The disk module must be unbound. Moving a module that is part of a LUN to another slot may destroy all information on the entire LUN.
The disk module must be removed and reinserted while the storage system is powered on.
All the disk modules that you use for a LUN must have the same capacity to use the modules' disk space fully.
For an individual unit or hot spare, use the next empty slot on an internal bus that is already used, provided it is not one of the slots for a database or a cache vault disk module (slots A0, B0, C0, D0, E0, or A3). For example, to add an individual unit when slots A0, B0, C0, D0, E0, A1, and B1 are used, use slot A2 or B2.
Follow instructions in “Replacing a Disk Module” in Chapter 5.
This section explains
filesystem capacity considerations for RAID
determining RAID levels
planning a RAID-3 bind
You can bind as many as 16 drives in a LUN; for 4 GB drives, this capacity totals 64 GB. However, filesystems have capacity limits that must be taken into account.
For IRIX 5.3 and earlier, an EFS filesystem is a maximum of 8 GB.
For IRIX 5.3 and earlier, an XFS filesystem can be over a terabyte; however, the device parameter portion of the volume header is limited. Because values cannot exceed the number of bits allocated for each of the fields in the volume header (sec/trk, trks/cyl, #cyls), decrease the #cyls to within the 16-bit value available and increase one of the other fields.
To determine allowed capacity limits for an IRIX release, view the header files dvh.h and dksc.h in /usr/include/sys.
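For example, a quick way to inspect the relevant definitions (a minimal sketch assuming the standard IRIX header locations named above; the exact field names vary by release):
grep -n -i cyl /usr/include/sys/dvh.h /usr/include/sys/dksc.h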
Before you create LUNs, plan the RAID levels you need. The LUNs are bound in a particular order, depending on the RAID level for each one:
first: RAID-1_0
next: RAID-0, RAID-3, or RAID-5
next: RAID-1
last: individual disk units and hot spares
Note the rules outlined in Table 4-1 for binding disk modules into LUNs.
On storage systems whose SPs have firmware revision level 9.0 or higher (SP model 7305) and RAID agent 1.55 or higher, RAID-3 has enhanced performance compared to previous firmware revision levels. This version of RAID-3 (“fast” RAID-3) has specific memory characteristics and requirements, which are explained in this section.
Note: For information on determining these revision levels, see “Getting Device Names With getagent” or “Viewing SP Status Information” in Chapter 3.
An SP's memory is divided into RAID–3 space, storage-system buffer space, write cache space, and read cache space. You allocate RAID-3 memory before binding LUNs; this chapter gives specific steps for each user interface.
Storage-system buffer space in a PowerPC-based SP is always 4 MB. You can allocate all or part of the remaining memory for RAID-3 and other uses. For example, if the SP in a system has 16 MB of memory, memory can be allocated for:
4 MB required for system buffers
read or write cache for non-RAID-3 LUNs, if any
memory assigned to any previously bound RAID-3 LUNs, if any
memory for RAID-3 LUNs
For RAID-3 LUNs, you allocate RAID-3 memory instead of read or write cache. For example, if a system with one 16-MB SP is to contain two RAID-3 LUNs only, you can allocate all 12 MB (after the 4 MB required for system buffers) to RAID-3 memory.
You then split this memory between the two LUNs, assigning 6 MB to each. Figure 4-1 diagrams this example.
In systems with two SPs, the amount of SP memory allocated for RAID-3 must be the same for each SP.
A RAID-3 LUN uses the RAID-3 memory on the SP that owns it. Therefore, if an SP fails and ownership of its LUNs is transferred, the surviving SP must have enough memory for its own LUNs and those of the failed SP. Figure 4-2 diagrams an example.
For failover systems, both SPs should have the same amount of physical memory. If the two SPs in a system have different amounts of physical memory, ownership of RAID-3 LUNs transfers from a failed SP to a working SP only if the working SP has
4 MB required for system buffers
memory allocated for read or write cache for non-RAID-3 LUNs, if any
total RAID-3 memory for its RAID-3 LUNs and the other SP's RAID-3 LUNs
a minimum of 4 MB for each RAID-3 LUN
Figure 4-3 diagrams an example.
To get the most out of a storage system with RAID-3, follow these guidelines:
Bind all LUNs in the system as RAID-3. In a storage system with only RAID-3 LUNs, no memory is allocated for storage-system read or write caching; all memory is allocated for RAID-3.
Allocate the maximum of 6 MB of storage-system memory for each RAID-3 LUN. For example, for two RAID-3 LUNs, allocate 12 MB of storage-system memory to the RAID-3 memory partition, and assign 6 MB to each of the two LUNs when you bind them. Since an SP requires 4 MB of storage-system memory for system buffers, each SP needs 16 MB of memory; failover requires 32 MB.
The physical disk unit number is also known as the logical unit number, or LUN. (The unit is a logical concept, but is recognized as a physical disk unit by the operating system; hence, the seemingly contradictory names.) The LUN is a hexadecimal number between 0 and 7.
Unlike standard disks, physical disk units (LUNs) lack a standard geometry, and disk capacity varies from one disk-array LUN to another. The effective geometry of a disk-array LUN depends on the type of physical disks in the array and the number of physical disks in the LUN.
This section explains
binding disks using the command-line interface
binding disks using RAID5GUI
Storage systems typically ship from the factory with disk modules bound in groups of five as RAID-5 units. Binding disks into different LUNs consists of
unbinding LUNs
allocating memory for RAID-3 LUNs
binding disks
verifying the LUNs
When you change a physical disk configuration, you change the bound configuration of a physical disk unit. The physical disk unit configuration changes when you add or remove a disk module, or physically move one or more disk modules to different slots in the chassis.
Unbinding a LUN divides it into its constituent disk modules, allowing you to rebind the modules in any valid configuration you choose.
Unlike binding, unbinding takes only a few moments. You can unbind a LUN only if it is broken or bound, that is, its modules must be in the Enabled or Broken state. You can unbind a hot spare only if it is on standby.
To unbind a disk, use the unbind parameter with the raidcli command. Follow these steps:
In an IRIX window, use raidcli getagent to get the device name (node number):
raidcli getagent
If necessary, use raidcli getdisk to verify the disk positions.
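For example, to check the disk positions on the device from step 1 (an illustrative invocation; it assumes getdisk takes the same -d device argument as the other raidcli commands in this chapter, and the output format may vary):
raidcli -d sc4d2l0 getdisk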
Use raidcli unbind to unbind the individual disk units:
raidcli -d device unbind lun-number [-o]
In this syntax, device is the device name as returned by getagent and lun-number is the number of the logical unit to unbind. The -o flag specifies that the user is not prompted for permission.
The unbind parameter has no output.
The following example destroys LUN 3 without prompting for permission:
raidcli -d sc4d2l0 unbind 3 -o
For more information on the unbind parameter, see “unbind” in Appendix B.
If the LUNs you are creating are RAID-3, you must allocate a special amount of memory dedicated to RAID-3 before starting the bind. This option is available only for SP model 7305 (firmware revision 9.0 and higher) and RAID agent 1.55 and higher. (Use raidcli -d device getsp to determine an SP's firmware revision level and model number.)
Follow these steps:
Note: The system cannot be used during allocation of RAID-3 memory.
Determine the amount of memory you can use for each SP. If necessary, use raidcli getcache to determine each SP's total memory. Look for Raid 3 memory size for spA/spB in the output.
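For example, on the device used elsewhere in this chapter, you can filter the output for the memory fields named above (an illustrative command; the exact field labels may vary):
raidcli -d sc4d2l0 getcache | grep "memory size"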
Note: For RAID-3, you must allocate at least 2 MB of RAID-3 memory per LUN for each SP. Failover requires a minimum of 4 MB per LUN for each SP. See the guidelines in “Planning a RAID-3 Bind,” earlier in this chapter.
Quiesce the bus: log users off and back up the system. The system cannot be used during RAID-3 memory allocation.
Make sure read and write caches are disabled and there are no unassigned dirty pages. For example:
raidcli -d sc4d2l0 getcache
In the output, look for these lines:
Cache State: Disabled
...
Unassigned Cache Pages: 0
...
Prct Dirty Cache Pages = 0
...
SPA Read Cache State enabled
SPB Read Cache State enabled
Use raidcli setcache to set the total amount of caching dedicated to RAID-3. (For full details of setcache, see “Setting Cache Parameters” in Chapter 7.)
Note these guidelines:
Storage-system buffer space in a PowerPC-based SP is always 4 MB. You can allocate all or part of the remaining memory for RAID-3.
Allocating write/read cache memory decreases the amount of memory available for RAID-3 memory. If the storage system uses RAID-3 LUNs only, no write and read caches are used; do not allocate memory for this purpose.
Each SP must have at least 2 MB allocated for RAID-3. For maximum RAID-3 performance, allocate 6 MB of RAID-3 memory for each RAID-3 LUN you want to bind.
For failover, each SP must have the same amount of memory allocated. Furthermore, the total memory you set for each SP must include memory for this SP plus that of the other SP. See “Planning a RAID-3 Bind,” earlier in this chapter.
Caution: Setting RAID-3 cache memory reboots the firmware. Make sure the system is not in use before proceeding.
The following example sets the RAID-3 cache to 6 MB:
raidcli -d sc4d2l0 setcache 1 -r3 6
For failover, the RAID-3 memory assigned to each SP must be the total of the RAID-3 memory used by both SPs in the failover system. This example sets the RAID-3 cache for a system in which each SP has 6 MB:
raidcli -d sc4d2l0 setcache 1 -r3 12
When you press <Enter> to complete the command, a message informs you that the SPs will be rebooted.
Wait two minutes; then stop and restart the RAID agent:
/etc/init.d/raid5 stop
/etc/init.d/raid5 start
Create LUNs for each SP using the -r3 option to allocate RAID-3 memory; for example:
raidcli -d sc4d2l0 bind r3 2 a2 b2 c2 d2 e2 -r3 4
raidcli -d sc2d2l0 bind r3 3 a3 b3 c3 d3 e3 -r3 4
In this example:
r3 sets the RAID level to RAID-3
2 sets the LUN number to 2; 3 sets the LUN number to 3
a2 b2 c2 d2 e2 binds disks to LUN 2 and a3 b3 c3 d3 e3 binds disks to LUN 3; five disks are required for each RAID-3 LUN
-r3 4 sets RAID-3 cache memory for each LUN to 4 MB
For failover, the total memory you set must cover this SP's LUNs plus those of the other SP, and each SP must have the same amount of memory allocated. That is, the total RAID-3 cache memory you set with raidcli bind -r3 for each SP's LUNs must be half the amount you set with raidcli setcache -r3 in step 4. See “Planning a RAID-3 Bind,” earlier in this chapter.
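As an illustration of this rule (hypothetical numbers, not taken from the examples above): if one SP is to own two RAID-3 LUNs of 4 MB each, the bind -r3 values for that SP total 8 MB, so each SP's setcache -r3 value must be 16 MB. The commands for that SP might look like this, with the SP reboot and agent restart described above still required between setcache and bind:
raidcli -d sc4d2l0 setcache 1 -r3 16
raidcli -d sc4d2l0 bind r3 4 a2 b2 c2 d2 e2 -r3 4
raidcli -d sc4d2l0 bind r3 5 a3 b3 c3 d3 e3 -r3 4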
To group physical disks into RAID-0, RAID-1, RAID-1_0, RAID-3, or RAID-5 units or to create a hot spare, become superuser and use
raidcli -d device bind raid-type lun-number disk-names [optional-args]
Variables in this syntax are as follows:
-d device | Target RAID device, as returned by raidcli getagent; see “Getting Device Names With getagent” in Chapter 3.
raid-type | RAID level to bind: r0 (RAID-0), r1 (RAID-1), r1_0 (RAID-1_0), r3 (RAID-3), r5 (RAID-5), or hs (hot spare); these values appear in the examples later in this section.
lun-number | Logical unit number to assign the unit (a hexadecimal number between 0 and 7).
disk-names | Indicates which physical disks to bind, in the format bd, where b is the physical bus name (a through e; be sure to use lowercase) and d is the device number on the bus (0 through 3). For example, a0 represents device 0 on bus A, and e2 represents device 2 on bus E. A RAID-0 bind requires a minimum of 3 and a maximum of 16 disks. A RAID-1 bind requires 2 disks on separate buses. A RAID-1_0 bind requires an even number of disks (minimum 2; maximum 16). For high availability, each member of an image pair must be on a different bus. Select the disks in this order: first disk on first bus, first disk on second bus, second disk on first bus, second disk on second bus, third disk on first bus, third disk on second bus, and so on. A RAID-3 bind requires 5 disks, each on a separate bus (for example, a1 b1 c1 d1 e1). A RAID-5 bind requires a minimum of 3 disks and a maximum of 15 disks; for high availability, use groups of 5 with one disk on each bus. A hot spare bind requires 1 disk, which cannot be in slot A0, B0, C0, D0, E0, or A3. The capacity of the hot spare must be at least as great as the capacity of the largest disk module it might replace. All disks in a bind should have the same capacity, so that disk space is used fully.
Note: If the LUN you are creating is RAID-3, you must allocate SP memory before starting the bind; see “Allocating Memory for RAID-3 LUNs” earlier in this chapter.
The optional arguments are as follows:
-r rebuild-time | Maximum time in hours to rebuild a replacement disk. Default is 4 hours; legal values are any number greater than or equal to 0. A rebuild time of 2 hours rebuilds the disk more quickly but degrades response time slightly. A rebuild time of zero hours rebuilds as quickly as possible but degrades performance significantly. If your site requires fast response time and you want to minimize degradation to normal I/O activity, you can extend the rebuilding process over a longer period of time, such as 24 hours. You can change the rebuild time later without damaging the information stored on the physical disk unit.
-s stripe-size | Number of blocks per physical disk in a RAID stripe. Default is 128; legal values are any number greater than zero. The smaller the stripe element size, the more efficient the distribution of data read or written. However, if the stripe size is too small for a single host I/O operation, the operation requires accessing two stripes, thus causing the hardware to read and/or write from two disk modules instead of one. Generally, it is best to use the smallest stripe element size that will rarely force access to another stripe. The size should be an even multiple of 16 sectors (8 KB). For RAID-3, the legal value is 1.
-c cache-flags | Type of caching to enable for the LUN; the examples later in this section use read to enable read caching. The default is none. Caching is not specified for RAID-3.
-z stripe-count | Sets the number of stripes in a LUN. For example, if you bind a RAID-5 LUN with a stripe count of 2, you partition the LUN into two stripes, thus preventing access to the remaining available space. This option is useful for short bind operations. Legal values are any number greater than or equal to 0. The default value is 0, which binds the maximum number of stripes available.
-r3 raid3-memory-size | Amount of RAID-3 cache memory, in MB, to assign to a RAID-3 LUN; see “Allocating Memory for RAID-3 LUNs,” earlier in this chapter.
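The -z option does not appear in the bind examples below. A hypothetical short bind that limits a RAID-5 LUN to two stripes might look like this (the device name, LUN number, and disk names are illustrative, reused from the other examples in this chapter):
raidcli -d sc4d2l0 bind r5 4 a2 b2 c2 d2 e2 -z 2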
Note: Although bind returns immediate status for a RAID device, the bind itself does not complete for 15 to 60 minutes, depending on system traffic. Use raidcli -d device getlun 0 | grep Bound to monitor the progress of the bind. When the bind is complete, each disk is noted as “Bound But Not Assigned” in the getlun output.
The following example binds disks A1, B1, C1, D1, and E1 into a RAID-5 logical unit with a logical unit number of 3, a four-hour maximum rebuild time, and a 128-block stripe size per physical disk, with read cache enabled:
raidcli -d sc4d2l0 bind r5 3 a1 b1 c1 d1 e1 -r 4 -s 128 -c read
The following example binds A2 and B2 into a RAID-1 logical unit with a LUN number of 2 and a four-hour maximum rebuild time, with read cache enabled:
raidcli -d sc4d2l0 bind r1 2 a2 b2 -r 4 -c read
The following example binds A1, B1, C1, D1, and E1 into a RAID-3 logical unit with a LUN number of 3, a four-hour maximum rebuild time, and a 128-block stripe size per physical disk, with read cache enabled:
raidcli -d sc4d2l0 bind r3 3 a1 b1 c1 d1 e1 -r 4 -c read
The following example binds disks A1, B1, C1, and D1 into a RAID-1_0 logical unit with a LUN number of 1, a four-hour maximum rebuild time, and a 128-block stripe size per physical disk, with read cache enabled:
raidcli -d sc4d2l0 bind r1_0 1 a1 b1 c1 d1 -r 4 -s 128 -c read
The following example binds A2, B2, C2, D2, and E2 into a RAID-0 logical unit with a LUN number of 3, and a 128-block stripe size per physical disk, with read cache enabled:
raidcli -d sc4d2l0 bind r0 3 a2 b2 c2 d2 e2 -s 128 -c read
The following example binds disk E3 as a hot spare with a LUN number of 7:
raidcli -d sc4d2l0 bind hs 7 e3
There is no output for raidcli with the bind parameter. Errors are printed to stderr.
Note: For complete messages, use the -v option.
Verify that the LUNs are the way you want them; see “Getting Disk Group (LUN) Information” later in this chapter.
Binding disks consists of these steps:
unbinding LUNs as necessary
for RAID-3 LUNs, allocating memory
creating LUNs
Storage systems typically ship from the factory with disks bound in groups of five as RAID-5 units. To change this configuration, you must unbind LUNs to create the LUNs you want.
Note: Creating a LUN usually takes 15 to 60 minutes, depending on system traffic.
To use RAID5GUI to unbind disks, follow these steps:
Open the Summary View window for the Challenge RAID storage system. Figure 4-4 shows an example.
Click the button of a LUN you want to unbind. The LUN information window appears, as shown in Figure 4-5.
Click Unbind. Verification dialog boxes appear; answer them appropriately to unbind the individual disk unit.
The SP unbinds the LUN. In the Bind window, the black outline disappears from around the disk module buttons, and select buttons appear next to the disk module buttons for the newly unbound disk modules. In the Summary View window, the state of the newly unbound disk modules changes from Enabled to Unbound, the LUN number disappears from next to the disk module buttons, and the button for the LUN that was unbound disappears.
If you are creating RAID-3 LUNs, you must allocate SP memory before binding them. Follow these steps:
Note: This option is available only for firmware revision 9.0 and higher (SP 7305) and RAID agent 1.55 and higher.
Log users off the system and back it up. The system cannot be used during allocation of RAID-3 memory.
Disable automatic polling. Choose “Poll Settings...” in the Options menu and set Manual Polling in the Poll Settings window.
Make sure read and write caches are disabled and no unassigned dirty pages exist:
In the Equipment View or Summary View, click the button for the SP for which you are binding LUNs.
In the SP Summary window that appears, click Cache. The SP Cache Summary window appears; Figure 4-6 shows an example.
As in the example, the Cache Status for both read and write caches should be Disabled, the percentage of dirty pages should be 0%, and the number of unassigned dirty pages should be 0.
Close the SP Cache Summary window and the SP Summary window.
Click Cache in the Equipment or Summary View.
In the SP Cache Settings window that appears, click Parameters.... The SP Cache Parameters window appears, as shown in Figure 4-7.
In the RAID3 text field for each SP used for RAID-3, set SP memory, noting these guidelines:
Storage-system buffer space in a PowerPC-based SP is always 4 MB. You can allocate all or part of the remaining memory for RAID-3.
Allocating write/read cache memory decreases the amount of memory available for RAID-3 memory. If the storage system uses RAID-3 LUNs only, no write and read caches are used; do not allocate memory for this purpose.
Each SP must have at least 2 MB allocated for each RAID-3 LUN. For maximum RAID-3 performance, allocate 6 MB of RAID-3 memory for each RAID-3 LUN you want to bind.
Each SP in a failover system must have the same amount of memory allocated for RAID-3 LUNs. For details, see “Planning a RAID-3 Bind,” earlier in this chapter.
(Other settings in this window are explained in “Setting Cache Parameters” in Chapter 7.)
When you allocate RAID-3 memory for one SP, the software automatically allocates the same amount of RAID-3 memory for the other SP.
When you are satisfied with the memory allocation, click Set in the Cache Parameters window. A message appears:
Current settings will cause FLARE to reboot, Continue?
Click Yes in this window. The firmware reboot starts. A message appears, as shown in Figure 4-8.
Click OK and exit RAID5GUI.
After two minutes have elapsed since the firmware reboot began, stop and restart the RAID agent and restart RAID5GUI:
/etc/init.d/raid5 stop
/etc/init.d/raid5 start
/usr/raid5/raid5gui
To use RAID5GUI to create LUNs, follow these steps:
To speed up the binding operation, turn off automatic polling; see “Using RAID5GUI Automatic Polling” in Chapter 3. To see status information during binding, poll for it manually by clicking Poll in the Equipment or Summary View toolbar.
Click Bind in the Equipment View or Summary View. The Bind window for that chassis appears; Figure 4-9 shows an example for RAID-5 LUNs.
You can also open this window by selecting “Bind” in the Equipment View or Summary View's Configure menu.
The disk module buttons in the Bind window resemble those in the Summary View window. You can click on a disk module button to display information about it.
A disk module button for an unbound disk has a select button next to it, as shown in Figure 4-9.
The parameters for the LUN that you can select in the Bind window vary, depending on the RAID level. Table 4-2 summarizes the LUN parameters that are available for the various RAID levels.
Table 4-2. RAID Levels and LUN Parameters
RAID Level | LUN Number | Rebuild Time | Element Size | Default SP | Auto-assignment | RAID-3 Memory | Write Caching | Read Caching
---|---|---|---|---|---|---|---|---
RAID-0 | x | | x | x | Disable recommended | N/A | Acceptable | Recommended
RAID-1 | x | x | | x | Disable recommended | N/A | Acceptable | Recommended
RAID-1_0 | x | x | x | x | Disable recommended | N/A | Acceptable | Recommended
RAID-3 | x | x | | x | Disable recommended | Required | N/A | N/A
RAID-5 | x | x | x | x | Disable recommended | N/A | Highly recommended | Recommended
Individual disk | x | | | x | Disable recommended | N/A | Acceptable | Recommended
Hot spare | x | | | | N/A | N/A | N/A | N/A
(An x indicates that the parameter is available for that RAID level.)
In the Raid Type text field, select the RAID level for the first LUN you want to create. If you plan to use different RAID levels for the LUNs you create, follow the order specified in “Planning the Bind,” earlier in this chapter.
If you select RAID-3, the Bind window changes; Figure 4-10 shows an example.
Notice that selecting RAID-3 disables the LUN caching and element size selections and displays a section for RAID-3 memory allocation.
In the LUN (Hex) text field, select the LUN number you want for the first LUN.
The default number is 0 for the first LUN that you bind, regardless of the number of SPs or hosts attached to the Challenge RAID storage system. The default number for the second LUN you bind is 1; for the third LUN, it is 2; for the fourth LUN, it is 3, and so on. You can select nondefault numbers if desired.
The Challenge RAID storage system allows a maximum of eight LUNs (0-7).
Note: For hot spares, assign LUN numbers starting with the highest number available and continue downward.
All disk modules in a bind should have the same capacity to use the modules' disk space fully.
If the LUN you are creating is RAID-1, -1_0, -3, or -5, use the Rebuild Time text field to select a different rebuild time from the default 4 hours, if desired.
The rebuild time is the amount of time that the storage system allots to reconstruct the data on either a hot spare or a new disk module that replaces a failed disk module in a LUN. It applies to all RAID LUNs except RAID-0. The time you specify determines the amount of resource the SP devotes to rebuilding instead of to normal I/O activity.
The default time of 4 hours is adequate for most situations. A rebuild time of 2 hours rebuilds the disk more quickly, but degrades response time slightly. A rebuild time of 0 hours rebuilds the disk module as quickly as possible, but degrades response time significantly. If your site requires fast response time and you want to minimize degradation to normal I/O activity, you can extend the rebuilding process over a longer period of time, such as 24 hours.
Note: The actual rebuild time can differ significantly from the time you specify, especially for a RAID-1_0 LUN or a LUN containing 4 GB disk modules. Since a RAID-1_0 with n disk modules can continue functioning with as many as n/2 failed drive modules and only one drive at a time is rebuilt, the actual rebuild time for such a LUN is the time you specify multiplied by the number of failed drives. For 4 GB disk modules, the minimum rebuild time should be 2 hours, because a rebuild time of less than 2 hours actually takes about 2 to 3 hours. If you specify any other time, the actual rebuild time for the disk module is about twice as long.
If the LUN you are creating is RAID-0, -1_0, or -5, use the Element Size text field to select a different number of sectors for the stripe element from the default (128 sectors), if desired.
The stripe element size is the number of disk sectors that the storage system can read or write to a single disk module without requiring access to another disk module (assuming that the transfer starts at the first sector in the stripe). The stripe element size can affect the performance of a RAID-5 or RAID-1_0 LUN. A RAID-3 LUN has a fixed stripe element size of one sector, a value that you cannot change.
The smaller the stripe element size, the more efficient the distribution of data read or written. However, if the stripe size is too small for a single I/O operation, the operation requires access to two stripes, which causes the hardware to read or write from two disk modules instead of one. Generally, you use the smallest stripe element size that rarely forces access to another stripe.
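For example, assuming the 512-byte sectors implied elsewhere in this chapter (16 sectors equal 8 KB), the default element size of 128 sectors corresponds to 64 KB per disk module; host I/O requests of 64 KB or less that start at an element boundary can normally be satisfied by a single disk module.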
For the LUN to have a different default SP owner than that shown at Default SP, select the diamond-shaped radio button for the other SP.
The default SP assumes ownership of the LUN after the storage system's power is cycled. If the storage system has two SPs, you can choose to bind some LUNs using one SP and the rest using the other SP. Splitting the LUNs can balance the load across the SPs and establish the primary server for the LUN in a dual-server configuration. The primary route to a LUN is the route through the default SP, and the secondary route is through the other SP.
Disable auto-assign by deselecting the Auto Assign select button in the Assignment field. Disabling auto-assign is recommended for most uses, and is required if failover software is present.
Auto-assign controls the ownership of the LUN when an SP fails in a storage system with two SPs. With auto-assign enabled, if the SP that owns a LUN fails and the server tries to access that LUN through the second SP, the second SP assumes ownership of the LUN so that access can occur. The second SP continues to own the LUN until the SP's power is cycled (turned off and on again). When the power is cycled, ownership of each LUN returns to its default SP. If auto-assign is disabled in the previous situation, the other SP does not assume ownership of the LUN, so the access to the LUN does not occur.
For a LUN to be used in a high-availability configuration, auto-assign must be disabled. By default, auto-assign is disabled in RAID5GUI.
For any LUN except RAID-3 with RAID-3 memory assigned, click the button at Cache Enabled in the LUN Caching field to enable or disable caching. If the caching is enabled, click the button at Write Cache or Read Cache in the LUN Caching field to enable or disable write or read caching.
Whether or not you should use caching for a specific LUN depends on the LUN's RAID type. These fields are not available for fast RAID-3. See Table 4-2 for recommendations for using caching for the different RAID levels.
If you are binding a RAID-3 LUN, change the default memory size if desired. Divide the amount of RAID-3 memory you allocated for the SP (as explained earlier in “Assigning Memory for RAID-3 LUNs”) evenly between this LUN and other RAID-3 LUNs for this SP. For a failover system, take half the amount of RAID-3 memory you allocated for this SP and divide it among this SP's RAID-3 LUNs.
Once you have selected all the parameters for the LUN, select each disk module for the LUN by clicking on the select button that appears next to each disk module button. Follow the guidelines in Table 4-1 in “Planning the Bind,” earlier in this chapter.
Clicking on a select button selects the disk module; a heavy black border appears around the disk module button to show that it is selected. To select more than one disk module, hold down a <Shift> key and click the disk module's select button. To deselect a disk module button, click the disk module's select button again.
Note: If you are creating a RAID-1_0 LUN, the order in which you select the modules is important. The first module you select is the first part of the mirror's primary image, the second module is the first part of the mirror's secondary image, the third module is the second part of the primary image, the fourth module is the second part of the secondary image, and so on for any other modules in the LUN. In other words, the first and second modules selected are a pair of peer image parts, the third and fourth modules are another pair of peer image parts, and so on. Figure 4-11 diagrams this scheme.
The RAID5GUI Bind window displays a different color border for each pair of disk modules you select. The first and second modules you select have a red border, the third and fourth a blue border, and so on. Deselecting one disk module in a pair causes the other module in the pair to be deselected, so that both are no longer paired and are available.
For high availability, the modules in a pair must be on different SCSI buses. For highest availability and performance in a RAID-1_0 LUN, select modules on consecutive SCSI buses. For example, for a six-module LUN, select these modules in order: A0, B0, C0, D0, E0, and A1. Modules A0 and B0 are peers, C0 and D0 are peers, and E0 and A1 are peers.
When you have selected all disk modules for the LUN and the LUN parameters specify the values you want, click Bind.
If you selected fewer than the minimum number of disk modules required for the LUN's RAID type, a popup appears stating the minimum number of modules required. Click OK in the popup and continue to select the additional modules needed. See Table 4-1 in “Planning the Bind,” earlier in this chapter.
If you selected at least the minimum number of disk modules required for the RAID level you are binding, verification dialog boxes appear; answer them appropriately to bind the disk modules into the LUN.
The bind operation begins, and the borders around the buttons of the disk modules in the LUN disappear, along with their select buttons. In the Summary View window, the state of each disk module in the LUN changes from Unbound to Binding. Generally, binding takes about 15 to 60 minutes for 2 GB disk modules, depending on system traffic. When the bind operation is completed, the state of each disk module in the LUN changes to Ready.
Repeat steps 11 and 12 for the remaining LUNs you want to create.
If you are binding RAID-3 LUNs in a failover system, take half the SP's RAID-3 memory and divide it evenly between this SP's LUNs. The remaining half of this SP's memory covers failover for the other SP. See “Planning a RAID-3 Bind,” earlier in this chapter for more detail.
When a LUN is assigned to an SP, its state becomes Assigned; the state of its disk modules becomes Enabled when you use the SP that owns the LUN as the communications path to the chassis.
Verify that the LUNs are the way you want them; see “Getting Disk Group (LUN) Information” later in this chapter.
This section explains
getting LUN information using the command-line interface
getting LUN information using RAID5GUI
To display information on a logical unit and the components in it, use the getlun parameter:
raidcli -d device getlun lun-number
The following example displays information about LUN 3:
raidcli -d sc4d2l0 getlun 3
For a sample output, see “getlun” in Appendix B.
Note: Information on individual disks is not displayed unless statistics logging is enabled with raidcli getcontrol. See “Getting Information About Other Components” in Chapter 3 of this guide.
This section explains
using the Summary View for LUN information
using the LUN information window
getting LUN read/write statistics
Use the Summary View to determine LUN information and verify that LUNs you created are bound as you want them. Figure 4-12 shows an example Summary View with LUNs.
A LUN button tells you the following information:
hexadecimal number
RAID level, as shown in Figure 4-13
status, as indicated by the color of the button:
gray: operating normally
blue: binding, rebuilding, or, in the case of a hot spare, being used to replace a failed disk module
amber: failed
white: disk module is not present
Make sure that the disk module buttons surrounded by black borders represent the disk modules that you want in the LUN. If any of these disk modules is a module that you do not want in the LUN or you want additional disk modules in the LUN, unbind the LUN (see “Unbinding LUNs in a New Storage System,” earlier in this chapter) and then rebind it with the correct disk modules.
Clicking a LUN button displays the LUN Information window; Figure 4-14 shows an example.
Verify that the fields in the LUN Information window are as you want them.
In this window, the Description field indicates the RAID type (level), as shown in Table 4-3.
Table 4-3. RAID Types in the LUN Information Window
RAID Type | Explanation |
---|---|
RAID-0 | Nonredundant individual access array |
RAID-1 | Mirrored pair |
RAID-1_0 | Mirrored RAID-0 group |
RAID-3 | Parallel access array |
RAID-5 | Individual access array |
DISK | Individual disk unit |
HOT SPARE | Hot spare |
The State field indicates the LUN's operational state. Table 4-4 lists possibilities.
Table 4-4. LUN States
LUN State | Meaning
---|---|
Assigned | Owned by an SP and all its disk modules are enabled |
Binding | Being bound |
Bound Not Assigned | Bound but not assigned to either SP |
Failed | Shut down by SP because a component is broken |
Rebuilding | One or more of its disk modules is rebuilding or equalizing |
Replacing XY | Hot spare replacing failed disk module with disk ID XY in another LUN |
Standby | Hot spare available for replacing a failed disk module in another LUN |
Other fields are as follows:
The Stripe Element Size field indicates the number of sectors that the Challenge RAID storage system can read or write to a single disk module without requiring access to another disk module in the LUN. The default size is 128 sectors. This size was specified when the LUN was bound. Stripe element size does not apply to a RAID-1 LUN, individual unit, or hot spare.
The Maximum Rebuild Time field shows the maximum number of hours the storage system can take to integrate a hot spare or new disk module. This time was specified when the LUN was bound.
The Cache Read Hit Ratio and Cache Write Hit Ratio fields show the percentage of cache read hits and cache write hits for the LUN. For more information on these parameters, see “Viewing Cache Statistics” in Chapter 7.
To change the Description (RAID type) or Element Size parameters, click Unbind to unbind the LUN (see instructions in “Unbinding LUNs in a New Storage System,” earlier in this chapter) and then rebind it with the correct parameters.
To change caching parameters or the maximum rebuild time, click Change in the LUN Information window, as explained in “Changing LUN Parameters” in Chapter 8.
Click Statistics in the LUN Information window to view the read and write statistics for the LUN compiled since the last time the statistics log for the SP that owns the LUN was turned on (see “Enabling and Disabling the Statistics Log” in Chapter 3 for more information). Figure 4-15 shows an example LUN Statistics window.
Table 4-5 explains the entries in this window.
Table 4-5. LUN Statistics Window Entries
Entry | Meaning |
---|---|
Average LUN Request Service Time | Average number of milliseconds that all the disk modules in the LUN required to execute an I/O request after the request reached the top of the queue |
Number of Reads / Number of Writes | Total read and write requests made to all the disk modules in the LUN
Number of Blocks Read / Number of Blocks Written | Total data blocks read from and written to all the disk modules in the LUN
Number of Read Retries / Number of Write Retries | Total times read and write requests to all the disk modules in the LUN were retried
Click Errors in the LUN Information window to view the number of remapped sectors and the number of hard and soft read and write errors. Figure 4-16 shows an example.
Table 4-6 explains the entries in this window.
Table 4-6. LUN Errors Window Entries
Entry | Meaning |
---|---|
Number of Hard Read Errors / Number of Hard Write Errors | Total read or write errors for all the disk modules in the LUN that persisted through all the retries. An increasing number of hard errors might mean that one or more of the LUN's disk modules is nearing the end of its useful life.
Number of Soft Read Errors / Number of Soft Write Errors | Total read or write errors for all the disk modules in the LUN that disappeared before all the retries. An increasing number of soft errors might indicate that one of the LUN's disk modules is nearing the end of its useful life.
Remapped Sectors | Total disk sectors on all the disk modules in the LUN that were faulty when written to, and thus were remapped to different parts of the disk modules. |
You can use other buttons in the LUN Information window to unbind the LUN, change the LUN's parameters, or display the information for the next LUN.
Note: To view settings for SPs and LUNs in the system, select View Setting in the Options menu, in either the Summary View or Equipment View, as explained in “Viewing Settings” in Chapter 3.
If your storage system has two SPs (split-bus, dual-bus/dual-initiator, or dual–interface/dual–processor), you can choose which disks to bind on each SP. This flexibility lets you balance the load on your SCSI–2 interfaces and SPs.
The SP on which you bind a physical disk unit is its default owner. The route through the SP that owns a physical disk unit is the primary route to the disk unit, and determines the device name of the disk unit. The route through the other SP is the secondary route to the disk unit.
When the storage system is running, you can change the primary route to a physical disk unit by transferring ownership of the physical disk unit from one SP to another. The change in ownership and the new route take effect at the next power-on. See “Transferring Control of a LUN” in Chapter 8 for information.
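As an illustration (an assumption based on standard IRIX SCSI device naming, not a statement from this guide), a device name such as sc4d2l0, used in the examples in this chapter, encodes the SCSI controller number (4), the SCSI target ID of the owning SP (2), and the LUN (0).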