Storage-system caching requires

- a fully charged storage-system backup battery (BBU)
- enabling the cache with the raidcli setcache command, as explained in this chapter
- disk modules in slots A0, B0, C0, D0, and E0, which serve as a fast repository for cached data
- for mirrored caching, two SPs, each with at least 8 MB of memory; both SPs must have the same amount of memory
- for nonmirrored caching, one SP with at least 8 MB of memory

Caching cannot occur unless all of these conditions are met.
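You can check most of these prerequisites from the command line before enabling caching. The following is a minimal sketch; it assumes that raidcli getcrus, which is used later in this chapter to confirm that both SPs are powered on, also reports the state of the BBU and the disk modules, and that device is replaced with the device name reported by getagent.

```
# List SP information, including each SP's memory size (see
# "Getting Device Names With getagent" in Chapter 3).
raidcli getagent

# Check the state of the SPs and, by assumption, the BBU and the
# disk modules in slots A0, B0, C0, D0, and E0.
raidcli -d device getcrus
```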
This chapter explains

- setting cache parameters
- viewing cache statistics
- upgrading Challenge RAID to support caching
- changing unit caching parameters
This section explains

- setting cache parameters using the command-line interface
- setting cache parameters using RAID5GUI
The cache parameters you specify for the entire storage system are the cache size (8 or 64 MB, depending on the amount of memory the SPs have) and the cache page size (2, 4, 8, or 16 KB).
To set up caching, use the raidcli setcache command:
raidcli -d device setcache enable|disable [-u usable] [-p page] [-l low] [-h high]
    [-sma system-memory-SPA] [-smb system-memory-SPB]
    [-sta read-cache-state-SPA] [-stb read-cache-state-SPB]
    [-rca read-cache-size-SPA] [-rcb read-cache-size-SPB]
    [-r3 raid3-memory-size]
In this syntax, the variables and their meanings are as follows:
Parameter | Meaning
---|---
enable \| disable | Enables or disables storage-system caching; in the examples in this chapter, 1 enables caching and 0 disables it.
-u usable | Size in megabytes to use for caching, not greater than the SP memory size as displayed by raidcli getagent (see “Getting Device Names With getagent” in Chapter 3). Valid values are 0 through 64 in increments of 1 MB. The command-line interface does not let you specify more memory than you have. If you specify less than you have, the remaining memory is unused.
-p page | Size in KB of the pages into which the cache is partitioned. Valid sizes are 2, 4, 8, and 16. The default is 2, regardless of whether caching is enabled or disabled. Generally, set the cache page size to 8 KB for IRIX file server applications and 2 KB or 4 KB for database applications.
-l low | Percentage of cache full that discontinues flushing. Valid values are 0 through 100; the default is 50, regardless of whether caching is enabled or disabled.
-h high | Percentage of cache full that initiates flushing. Valid values are 0 through 100; the setting must be greater than the low watermark. The default is 75, regardless of whether caching is enabled or disabled.
-sma system-memory-SPA, -smb system-memory-SPB | System memory size for SP A and SP B, respectively.
-sta read-cache-state-SPA, -stb read-cache-state-SPB | Read cache state for SP A and SP B, respectively.
-rca read-cache-size-SPA, -rcb read-cache-size-SPB | Read cache size for SP A and SP B, respectively.
-r3 raid3-memory-size | Size of the RAID-3 memory partition; see “Planning a RAID-3 Bind” in Chapter 4.
Note: The sum of an SP's write cache size, system memory size, and read cache size must be less than or equal to the SP's physical memory size.
This command has no output.
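For example (the values are illustrative, not recommendations), the following command enables storage-system caching with a 64 MB cache, 8 KB pages, and the default watermarks on a system whose SPs have 64 MB of memory:

```
# Enable caching: 64 MB cache, 8 KB pages, low/high watermarks 50%/75%.
# Substitute the device name reported by getagent for "device".
raidcli -d device setcache 1 -u 64 -p 8 -l 50 -h 75
```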
You can change the cache size, the cache page size, or the type of caching for any physical disk unit without affecting the information stored on it. Follow these steps:

1. Disable the cache:

   raidcli -d device setcache 0

2. Wait for the cache memory to be written to disk, which may take several minutes.

3. Reassign the cache size and reenable caching; for example, with a cache size of 64 MB:

   raidcli -d device setcache 1 -u 64 [parameters]
Note: Before changing a cache parameter, you must always disable the cache.
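For example, to change the cache page size to 8 KB on a system with a 64 MB cache (the sizes are illustrative), the complete sequence looks like this:

```
# 1. Disable storage-system caching.
raidcli -d device setcache 0

# 2. Confirm that the cache state has reached "Disabled" before
#    continuing (see "getcache" in Appendix B for the output format).
raidcli -d device getcache

# 3. Reenable caching with the new cache page size.
raidcli -d device setcache 1 -u 64 -p 8
```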
To set up storage-system caching, you specify the read/write cache parameters and the size of the memory partitions. This section describes
- when to disable or enable read and write caching
- disabling and enabling read and write caching
- selecting read and write cache parameters
- specifying memory partitions
When the Challenge RAID storage system is powered on, it enables read and write caching if the required hardware is working and the read and write cache size parameters have values other than zero.
You can disable or enable storage-system caching without affecting the information stored on the LUNs. You must disable certain caches before changing the memory partition parameters and most LUN caching parameters. You should also disable the storage-system write cache before replacing an SP.
In a storage system with two SPs, you can disable or enable an SP's read cache only if the server's agent configuration file contains a device entry with a communication channel to the SP (the device must exist in the /usr/raid5/raid5_agent.config file). If this entry is not present, the SP's Read Cache Enabled parameter is dimmed in the SP Cache Settings window.
To disable or enable storage-system caching, follow these steps:
1. Click Cache in the Equipment View or Summary View toolbar, or select “Caching” in the Equipment View or Summary View Options menu. The SP Cache Settings window appears, as shown in Figure 7-1.

2. To change the write cache status, click the Write Cache Enabled box. Answer the confirmation popup accordingly.

3. To change an SP's read cache status, click the Read Cache Enabled box for that SP. Answer the confirmation popup accordingly.
Disabling a cache may take several minutes. The parameter in the SP Cache Settings window for any cache you disabled remains dimmed until the cache is actually disabled.
If a popup appears saying that caching cannot be enabled, make sure that the hardware required for caching is operating. The SP(s) must be powered on, as indicated by the gray color of the SP buttons in the Equipment and Summary Views.
If you cannot enable write caching, make sure that

- the BBU is fully charged

  Note: The only sure indication that the BBU is not fully charged is the appearance of event log messages stating that the BBU is recharging.

- there are disk modules in slots A0, B0, C0, D0, and E0
- all disk modules are functioning normally; none are currently being rebuilt
- if an SP was just replaced, the replacement SP has the same amount of memory as the remaining SP
- the write cache size is not zero

If you cannot enable read caching, make sure that

- the read cache size is not zero
- if an SP was just replaced, the replacement SP has at least as much memory as the SP it replaced
Even though storage-system caching is enabled, the storage system does not use caching with a LUN unless you have enabled the read and/or write caching for the LUN itself. You can enable caching for a LUN when you bind it; you can also set caching in the Change LUN Parameters window, as explained in “Upgrading for Caching Using RAID5GUI,” later in this chapter.
Table 7-1 gives the choices and default values for read/write cache parameters.
Table 7-1. Default Read/Write Cache Parameters
Parameter | Choices | Default |
---|---|---|
Mirror write cache | Selected, Deselected | Selected |
Cache page size | 2, 4, 8, or 16 KB | 2 KB |
Write cache high watermark | 0 through 100% | 75% |
Write cache low watermark | 0 through 100% | 50% |
To set cache parameters for the SPs in the Challenge RAID storage system, follow these steps:
1. Click Cache in the Equipment View or Summary View toolbar, or select “Caching” in the Equipment View or Summary View Options menu. The SP Cache Settings window appears, as shown in Figure 7-1.

2. In the SP Cache Settings window, disable caching depending on the parameter you are changing. If you are changing

   - mirror write cache: disable Write Cache
   - cache page size: disable all caches

3. Click Parameters...; the Cache Parameters window appears. Figure 7-2 shows an example.

4. To change the cache page size, click the button beside the Cache Page Size scroll box, and select the size.

   Generally, set the cache page size to 8 KB for IRIX file server applications and 2 KB or 4 KB for database applications.

   Note: You can change the write cache size only if the write cache is disabled and no unassigned dirty pages exist.

5. To change the percentage for the write cache high or low watermark, enter it in the Write Cache High Watermark field or Write Cache Low Watermark field, respectively.

   The high watermark is the percentage of dirty pages at which the SP begins flushing the cache. The default is 75%. If you specify a lower value, the SP(s) start flushing the cache sooner.

   The low watermark is the percentage of cache dirty pages at which the SP stops flushing the cache during a flush operation. (A command-line equivalent for the watermark settings is sketched after this procedure.)

   Note: The high watermark cannot be less than the low watermark. To turn off watermark processing, set both watermark values to 100.

6. To set memory partitions, proceed to the next section. Otherwise, click Set when you are finished setting cache parameters.

   Note: To change prefetch (read-ahead caching) parameters for LUNs in a new Challenge RAID storage system, see “Changing LUN Parameters Using RAID5GUI” in Chapter 8.

To restore settings to the values they had when the Cache Parameters window was opened, click Revert.
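If you administer the storage system from the command line instead, the watermarks correspond to the -l and -h options of setcache described earlier in this chapter. The following is a sketch, based on that syntax, of turning off watermark processing as described in the note above; whether options you omit keep their current values is not covered here.

```
# Set both watermarks to 100 to turn off watermark processing.
raidcli -d device setcache 1 -l 100 -h 100
```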
An SP's memory provides space for system buffers in addition to cache memory. An SP running firmware (Licensed Internal Code, or LIC) revision 9 (PowerPC-based SPs only) requires 4 MB of system buffer space. Firmware revision 8 requires a system buffer space of 2 MB on an AMD-based SP and 4 MB on a PowerPC-based SP.
You can allocate part or all of the remaining memory to the read cache, to the write cache, or divide it between the two. If the storage system has two SPs, the write caches on both SPs are automatically allocated the same amount of memory because they mirror each other. For this reason, you can specify only one write cache size.
If you are binding RAID-3 LUNs (firmware revision 9.0 and higher, SP model 7305, RAID agent 1.55 and higher), you must allocate RAID-3 memory before binding the LUNs; see “Planning a RAID-3 Bind” and “Binding Disks Using RAID5GUI” in Chapter 4.
Table 7-2 summarizes the memory partition fields in the Cache Parameters window.
Table 7-2. Memory Partition Settings
Setting | Meaning | Default |
---|---|---|
Write | Specifies the size of the write cache, which is the same on both SPs in a storage system with two SPs. | 0 |
Read | Specifies the size of the SP's read cache. | 0 |
RAID3 | Specifies the size of the SP's RAID-3 memory; see “Planning a RAID-3 Bind” and “Binding Disks Using RAID5GUI” in Chapter 4. | 0 |
FLARE(E) | Leave set at 0; do not change this value. (Changing the value reboots the SPs.) | 0 |
FLARE(P) | Specifies the memory required by the SP for system buffers; you cannot change this value. | 4 MB (PowerPC-based SP) |
Free | Specifies the memory available for allocation. The free memory is equal to the total SP memory (Total) minus the memory allocated to the write cache (Write), read cache (Read), extra system buffer space (FLARE(E)), and the required system buffer space (FLARE(P)). | Total SP memory minus the sum of the other partition sizes (2 MB) |
Total | The full amount of the SP's memory: 8, 16, 32, or 64 MB. | N/A |
The memory map in the Cache Parameters window shows the allocation of the SP's memory into different partitions. If the SP is present in the system, this map shows all partitions with nonzero values proportionately. If the SP is not present in the system, the partitions are all the same size and no values are shown.
In the Write and Read fields, you can specify 0, 1, 2, 3 MB, and so on, through the maximum size, where the maximum size is the Free partition's size plus the current size of the Write or Read partition. For example, if the Free partition's size is 8 MB and the Write partition's current size is 4 MB, the Write partition can have a maximum size of 12 MB. You cannot change the size of the FLARE(P) partitions, and you should not change the size of the FLARE(E) partition.
Table 7-3 gives suggested sizes for Read and Write partitions for PowerPC-based SPs (firmware revision 8.50 and higher; SP 7305).
Table 7-3. Read and Write Partition Sizes in MB: Firmware Revision 8.50 and Higher (No RAID-3 Memory Allocated)
Total SP Memory | FLARE(P) | Available | Suggested | Suggested |
---|---|---|---|---|
8 | 4 | 4 | 2 | 2 |
16 | 4 | 12 | 8 | 4 |
32 | 4 | 28 | 18 | 10 |
64 | 4 | 60 | 40 | 20 |
In a system with RAID-3 LUNs (firmware revision 9.0 and higher), for best performance you should allocate at least 6 MB of storage-system memory for each RAID-3 LUN. For two RAID-3 LUNs, allocate 12 MB of storage-system memory for the RAID-3 memory partition, and allocate 6 MB to each of the two RAID-3 LUNs when you bind them. Because an SP requires 4 MB of storage-system memory for system buffers, each SP needs 16 MB of memory. Failover requires 32 MB.

To maximize RAID-3 performance, bind all LUNs in the system as RAID-3 only. In a storage system with only fast RAID-3 LUNs, no memory is allocated for Cache Memory, Read partitions, or Write partitions. For information on RAID-3 memory requirements, see “Planning a RAID-3 Bind” in Chapter 4.
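If you allocate RAID-3 memory from the command line rather than from RAID5GUI, the -r3 option of the setcache syntax shown at the beginning of this chapter appears to control this partition. The following is a sketch based on that assumption (and on the further assumption that the size is given in megabytes), allocating a 12 MB RAID-3 memory partition for two RAID-3 LUNs:

```
# Assumed: -r3 sets the RAID-3 memory partition size in MB.
raidcli -d device setcache 1 -r3 12
```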
Table 7-4 gives suggested sizes for Read and Write partitions for AMD-based SPs (model 7264).
Table 7-4. Read and Write Partition Sizes in MB: Firmware Revision 8.00-8.49
Total SP Memory | FLARE(P) | Available | Suggested | Suggested |
---|---|---|---|---|
8 | 2 | 6 | 3 | 3 |
16 | 2 | 14 | 8 | 6 |
32 | 2 | 30 | 18 | 12 |
64 | 2 | 62 | 40 | 22 |
To specify memory partition sizes, follow these steps:
1. Click Cache in the Equipment View or Summary View toolbar, or select “Caching” from the Equipment View or Summary View Options menu. The SP Cache Settings window appears, as shown in Figure 7-1.

2. In the SP Cache Settings window, disable caching depending on the partition size you are changing. If you are changing the

   - write partition: disable Write Cache
   - read partition: disable the SP's read cache

3. Click Parameters...; the Cache Parameters window appears, as shown in Figure 7-2 earlier in this chapter.

4. To change write or read cache size, click the scroll button beside the SP's Write or Read field, as shown in Figure 7-3; select the desired size.

   Note: If you set the SP's write or read cache size to 0, write or read caching is disabled until you change the size to a nonzero value.

5. When you are finished changing memory partition settings, click Set; answer the confirmation popup appropriately.

Note: You can change the write cache size only if the write cache is disabled and no unassigned dirty pages exist.
If you use storage-system caching, you can use the raidcli getcache command to get information on cache activity. The information in this command's output, particularly the percentage of cache hits, may help you decide on the most efficient cache page size and whether a physical disk unit really benefits from caching.
This section explains

- viewing cache statistics using the command-line interface
- viewing cache statistics using RAID5GUI
To display cache information, use
raidcli -d device getcache
For a sample output and an explanation thereof, see “getcache” in Appendix B.
To use RAID5GUI to view cache statistics, use the SP Cache Summary window. Follow the steps in this section.
1. In the Equipment View or Summary View, click the button for the SP for which you want information.

2. In the SP Summary window that appears, click Cache. The SP Cache Summary window appears; Figure 7-4 shows an example.
The statistics in the SP Cache Summary window show the state of caches when you open the window. To update the statistics, close the window and reopen it. Table 7-5 explains read cache entries in this window.
Table 7-5. SP Cache Summary Window Read Cache Entries
Entry | Meaning |
---|---|
Cache Status | Current state of the SP's read cache. The possible states are Enabled, Disabling, and Disabled. An SP's read cache is automatically enabled at power-on if the cache size is a valid number and the SP has at least 8 MB of memory. |
Hit Ratio | The percentage of read cache hits for the SP. The ratio is meaningful only if the SP's read cache is enabled. For more information about setting up the SP's read cache, including how to disable and enable it, see “Setting Cache Parameters,” earlier in this chapter. A read hit occurs when the SP finds a sought page in cache memory, and thus does not need to read the page from disk. High hit ratios are desirable because each hit indicates at least one disk access that was not needed. You can compare the read and write hit ratios for the LUN with the read and write hit ratio for the entire Challenge RAID storage system in the SP Cache Summary window (Figure 7-4). For a LUN to have the best performance, the hit ratios should be higher than those for the storage system. A low read or write hit rate for a busy LUN might mean that caching is not helping the LUN's performance. |
Table 7-6 explains write cache entries in this window.
Table 7-6. SP Cache Summary Window Write Cache Entries
Entry | Meaning |
---|---|
Cache Status | Current state of the SP's write cache. The possible states are Enabled or Disabled, and several transition states: Initializing, Enabling, Disabling, Dumping, and Frozen. An SP's write cache is automatically enabled at power-on if the cache size is a valid number and the Challenge RAID storage system meets the requirements for write caching given at the beginning of this chapter. |
Hit Ratio | The percentage of write cache hits for the SP's write cache. High hit ratios are desirable because each hit indicates at least one disk access that was not needed. A write hit occurs when the SP finds and modifies data in cache memory, which usually saves a write operation. For example, with a RAID-5 LUN, a write hit eliminates the need to read, modify, and write the data. You can compare the read and write hit ratios for the LUN with the read and write hit ratio for the entire Challenge RAID storage system in the SP Cache Summary window (Figure 7-4). For a LUN to have the best performance, the hit ratios should be higher than those for the storage system. A low read or write hit rate for a busy LUN might mean that caching is not helping the LUN's performance. |
Total Pages | Total number of pages in the SP; each page has the cache page size selected when storage-system caching was set up. This number equals the cache size divided by the cache page size, minus space for checksum tables. If the storage system has two working SPs, they divide the total number of pages between them. If an SP is idle for a long period or fails, the active SP may increase its share of pages. |
Dirty Pages | Percentage of pages that have been modified in the SP's write cache, but that have not yet been written to disk. A high percentage of dirty pages means the cache is handling many write requests. |
Unassigned Dirty Pages | Percentage of unassigned dirty pages in both SPs' write caches. Unassigned dirty pages are dirty pages belonging to a LUN that is not enabled, that is, not accessible from either SP. For example, unassigned pages result if an SP fails and its write cache contains dirty pages; these pages become unassigned pages. If the LUNs owned by the failed SP are transferred to the working SP, any unassigned pages for those LUNs transfer automatically to the working SP. The working SP writes these unassigned pages to the LUNs and the Unassigned Dirty Pages value returns to 0%. If the LUN that owns the dirty pages has an irrecoverable failure, you can clear the unassigned pages by unbinding the LUN as described in |
Note: To view settings for SPs and LUNs in the system, choose “View Settings” in the Options menu, in either the Summary View or Equipment View, as explained in “Viewing Settings” in Chapter 3.
Please note these points before you enable caching:

- Because disk modules are required in slots A0, B0, C0, D0, and E0 for caching, you might need to add disk modules to your storage system.
- If you add or move disk modules, you need to bind them into new physical disk units or change the existing physical disk units to include them; see “Binding Disks Into RAID Units” in Chapter 4.
- Do not move a disk module to another slot unless it is absolutely necessary. Never move disk modules from slots A0, A3, B0, C0, D0, and E0.
- Set up caching on one SP at a time.
Once the necessary hardware components have been installed, use either interface to set up caching.
To upgrade caching using raidcli, follow these steps (a complete example follows the steps):

1. Enable the read and write caches for the physical disk units that will use caching:

   /usr/raid5/raidcli -d device chglun -l lun-number read

   or

   /usr/raid5/raidcli -d device chglun -l lun-number write

   or

   /usr/raid5/raidcli -d device chglun -l lun-number rw

2. Specify the cache page size and cache size parameters, as explained earlier in this chapter.

3. Enable storage-system caching with raidcli -d device setcache, as explained in “Setting Cache Parameters,” earlier in this chapter.
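Putting these steps together, a complete raidcli sequence might look like the following; the LUN number (0) and the cache sizes are examples only:

```
# 1. Enable read and write caching for LUN 0.
/usr/raid5/raidcli -d device chglun -l 0 rw

# 2-3. Set the cache size and page size, and enable storage-system
#      caching in a single setcache call.
/usr/raid5/raidcli -d device setcache 1 -u 64 -p 8
```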
To upgrade caching using RAID5GUI, follow these steps:

1. In the Summary View, click the LUN for which you want to enable caching. The LUN Information window appears, as shown in Figure 7-5.

2. In the LUN Information window, click Change. The Change LUN Parameters window appears, as shown in Figure 7-6.

3. In the Change LUN Parameters window, click the Cache boxes as appropriate to enable caching for the LUN. Click Set and close the window.

4. Set cache parameters as explained in “Setting Cache Parameters Using RAID5GUI,” earlier in this chapter.

5. Enable storage-system caching: in the Equipment or Summary View, click Cache, or select “Caching...” from the Options menu. The SP Cache Settings window appears, as shown in Figure 7-7.

6. Enable caching by clicking the boxes appropriately; exit the SP Cache Settings window.
You can use the command-line interface or RAID5GUI to change the caching parameter for a physical disk unit.
Note: If caching is enabled, you must disable it before you can change any parameter. Disabling caching can affect storage-system performance; do it only when system activity is relatively low.
To change the caching parameter for a physical disk unit using the command-line interface, follow these steps:

1. Run raidcli getcache to determine whether caching is enabled. If it is, make sure both SPs are powered on; run raidcli getcrus to find out.

2. If caching is enabled, disable it with

   raidcli -d device setcache 0

3. Wait for the cache memory to be written to disk, which may take several minutes. Use raidcli getcache to check the cache state (“Disabled”); a simple polling sketch follows these steps.

4. Follow the instructions in “Setting Cache Parameters Using the Command-Line Interface,” earlier in this chapter.
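If you prefer to wait for the flush from a script rather than checking by hand, the following is a minimal sketch; it assumes that the getcache output contains the word Disabled once the cache state reaches Disabled (see “getcache” in Appendix B for the actual output format):

```
#!/bin/sh
# Poll until getcache reports that the cache state is Disabled.
until raidcli -d device getcache | grep Disabled > /dev/null
do
    sleep 60    # flushing the cache can take several minutes
done
echo "Cache is disabled; cache parameters can now be changed."
```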
To change the caching parameter for a physical disk unit using RAID5GUI, follow these steps:

1. To determine whether caching is enabled, click the button for the SP for which you want information in the Equipment View or Summary View.

2. In the SP Summary window that appears, click Cache. The SP Cache Summary window appears, as shown in Figure 7-4. Do not close this window.

3. If caching is enabled, check the Equipment View to make sure both SPs are powered on.

4. If caching is enabled, disable it in the SP Cache Settings window; see Figure 7-7 earlier in this chapter.

5. Wait for the cache memory to be written to disk; check the status in the SP Cache Summary window.

6. Follow the instructions in “Setting Cache Parameters Using RAID5GUI,” earlier in this chapter.