This guide provides an overview of the installation and configuration procedures for the following CXFS client-only nodes running SGI CXFS clustered filesystems:
Apple Computer Mac OS X
Red Hat Enterprise Linux
SGI ProPack 5 for Linux running SUSE Linux Enterprise Server 10 (SLES 10)
SUSE Linux Enterprise Server 10 (SLES 10)
Sun Microsystems Solaris
Microsoft Windows Server 2003, Microsoft Windows XP, and Microsoft Windows Vista
A CXFS client-only node has a minimal implementation of CXFS services that runs a single daemon, the CXFS client daemon (cxfs_client). A cluster running multiple operating systems is known as a multiOS cluster.
Nodes running SGI ProPack for Linux can be either CXFS server-capable administration nodes or client-only nodes. (Metadata is information that describes a file, such as the file's name, size, location, and permissions.)
For more information about CXFS terminology, concepts, and configuration, see the CXFS Administration Guide for SGI InfiniteStorage.
|Caution: CXFS is a complex product. To ensure that CXFS is installed and configured in an optimal manner, it is mandatory that you purchase SGI installation services developed for CXFS. Many of the procedures mentioned in this guide will be performed by SGI personnel or other qualified service personnel. Details for these procedures are provided in other documents. Contact your local SGI sales representative for details.|
This chapter discusses the following:
Also see Chapter 2, “Best Practices for Client-Only Nodes”.
You should use CXFS when you have multiple hosts running applications that require high-bandwidth access to common filesystems.
CXFS performs best under the following conditions:
All processes that perform read/write operations for a given file reside on the same host
Multiple processes on multiple hosts read the same file
Direct-access I/O is used for read/write operations for multiple processes on multiple hosts
Applications that perform well on a client typically do the following:
Issue large I/O requests, rather than several smaller requests
Use asynchronous or multithreaded I/O to have several I/O requests in flight at the same time
Minimize the number of metadata operations they perform
For most filesystem loads, the preceding scenarios represent the bulk of the file accesses. Thus, CXFS delivers fast local-file performance. CXFS is also useful when the amount of data I/O is larger than the amount of metadata I/O. CXFS is faster than NFS because the data does not go through the network.
This section contains the following:
Table 1-1 lists the commands installed on all client-only nodes.
Table 1-1. Client-only Commands

cxfs_client    Controls the CXFS client control daemon
cxfs_info      Provides status information
cxfsdump       Gathers configuration information in a CXFS cluster for diagnostic purposes
xvm            Invokes the XVM command line interface
Following is the order of installation and configuration steps for a CXFS client-only node. See the specific operating system (OS) chapter for details:
Read the CXFS release notes to learn about any late-breaking changes in the installation procedure.
Install the OS software according to the directions in the OS documentation (if not already done).
Install and verify the RAID. See the CXFS Administration Guide for SGI InfiniteStorage and the release notes.
Install and verify the switch. See the CXFS Administration Guide for SGI InfiniteStorage and the release notes.
Obtain the CXFS server-side license key. For more information about licensing, see “License Keys” and the CXFS Administration Guide for SGI InfiniteStorage.
If you want to access an XVM cluster mirror volume from client-only nodes in the cluster, you must have a valid XVM cluster mirror license installed on the server-capable administration nodes. No additional license key is needed on the client-only nodes. The client-only node will automatically acquire a mirror license key when the CXFS client service is started on the node.
Install and verify the host bus adapter (HBA) and driver.
Prepare the node, including adding a private network. See the preinstallation steps in the OS-specific chapter, for example “Preinstallation Steps for Windows” in Chapter 9.
Install the RPMs containing the CXFS client packages onto the server-capable administration node and transfer the appropriate client packages to the corresponding client-only nodes.
Perform any required post-installation configuration steps.
Configure the cluster to define the new client-only node, add it to the cluster, start CXFS services, and mount filesystems. See Chapter 10, “Cluster Configuration”.
Start CXFS services on the client-only node to see the mounted filesystems.
If you run into problems, see the OS-specific troubleshooting section, Chapter 11, “General Troubleshooting”, and the troubleshooting chapter in the CXFS Administration Guide for SGI InfiniteStorage.
There must be at least one server-capable administration node in the cluster that is responsible for updating that filesystem's metadata. This node is referred to as the CXFS metadata server. (Client-only nodes cannot be metadata servers.) Metadata servers store information in the CXFS cluster database. The CXFS cluster database is not stored on client-only nodes; only server-capable administration nodes contain the cluster database.
A server-capable administration node is required to perform administrative tasks, using the cxfs_admin command or the CXFS graphical user interface (GUI). For more information about using these tools, see the CXFS Administration Guide for SGI InfiniteStorage.
When CXFS is started on a client-only node, a user-space daemon/service is started that provides the required processes. This is a subset of the processes needed on a CXFS server-capable administration node.
The cxfs_client daemon controls CXFS services on a client-only node. It does the following:
Obtains the cluster configuration from a remote fs2d daemon and manages the local client-only node's CXFS kernel membership services and filesystems accordingly.
Obtains membership and filesystem status from the kernel.
The path to the cxfs_client command varies among the supported platforms. See Appendix A, “Operating System Path Differences”.
|Note: The cxfs_client daemon may still be running when CXFS services are disabled.|
A CXFS cluster requires a consistent user identification scheme across all hosts in the cluster so that one person using different cluster nodes has the same access to the files on the cluster. The following must be observed to achieve this consistency:
Users must have the same usernames on all nodes in the cluster. An individual user identifier (UID) should not be used by two different people anywhere in the cluster. Ideally, group names and group identifiers (GIDs) should also be consistent on all nodes in the cluster.
Each CXFS client and server node must have access to the same UID and GID information. The simplest way to achieve this is to maintain the same /etc/passwd and /etc/group files on all CXFS nodes, but other mechanisms may be supported.
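As an illustrative consistency check (not part of CXFS; the function name is hypothetical), the following shell sketch compares two passwd-format files, such as copies of /etc/passwd gathered from two cluster nodes, and reports any username whose UID differs:

```shell
#!/bin/sh
# report_uid_mismatches FILE1 FILE2
# Compare two passwd-format files (for example, copies of /etc/passwd
# gathered from two CXFS nodes) and print each username whose UID differs.
report_uid_mismatches() {
    awk -F: '
        NR == FNR { uid[$1] = $3; next }     # remember UIDs from FILE1
        ($1 in uid) && uid[$1] != $3 {       # mismatch found in FILE2
            printf "%s: UID %s vs %s\n", $1, uid[$1], $3
        }' "$1" "$2"
}
```

A similar check could be run for GIDs against field 3 of copies of /etc/group.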
Only Linux and IRIX nodes can view or edit user and group quotas. Quotas are effective on all nodes because they are enforced by the metadata server.
To view or edit quota information on a Linux node, use the xfs_quota command. This is provided by the xfsprogs RPM. On an IRIX node, use repquota and edquota. If you want to provide a viewing command on other nodes, you can construct a shell script similar to the following:
#!/bin/sh
#
# Where repquota lives on IRIX
repquota=/usr/etc/repquota

# The name of an IRIX node in the cluster
irixnode=cain

rsh $irixnode "$repquota $*"
exit
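On a Linux node, the equivalent report comes from xfs_quota in expert mode. The wrapper below simply constructs the command line for a given mount point (the wrapper name and the mount point are illustrative, not part of CXFS):

```shell
#!/bin/sh
# Build an xfs_quota expert-mode invocation that reports block (-b) and
# inode (-i) usage in human-readable form (-h) for the given mount point.
quota_report_cmd() {
    printf "xfs_quota -x -c 'report -bi -h' %s\n" "$1"
}

# On a node with the CXFS filesystem mounted (illustrative path), run:
#   eval "$(quota_report_cmd /mnt/cxfs)"
```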
CXFS mount scripts are provided for execution by the cxfs_client daemon prior to and after a CXFS filesystem is mounted or unmounted on the following platforms:
|Note: NFS and Samba exports of CXFS filesystems are only supported from metadata server nodes.|
The CXFS mount scripts are not supported on Mac OS X or Windows.
The CXFS mount scripts are installed in the following locations:
/var/cluster/cxfs_client-scripts/cxfs-pre-mount
/var/cluster/cxfs_client-scripts/cxfs-post-mount
/var/cluster/cxfs_client-scripts/cxfs-pre-umount
/var/cluster/cxfs_client-scripts/cxfs-post-umount
The cxfs-reprobe script is run when needed to reprobe the Fibre Channel controllers on client-only nodes.
The CXFS mount scripts are used by CXFS to ensure that LUN path failover works after fencing. You can customize these scripts to suit a particular environment. For example, an application could be started when a CXFS filesystem is mounted by extending the cxfs-post-mount script. The application could be terminated by changing the cxfs-pre-umount script.
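A minimal sketch of such an extension follows. The mount point and the action taken are hypothetical placeholders, and it assumes the mount point arrives as the script's first argument:

```shell
#!/bin/sh
# Sketch of decision logic for a customized cxfs-post-mount script.
# The mount point and actions are illustrative placeholders.
post_mount_action() {
    mountpoint="$1"
    case "$mountpoint" in
        /mnt/data)                            # filesystem our application uses
            echo "start-ingest $mountpoint"   # e.g. launch a daemon here
            ;;
        *)
            echo "no-action $mountpoint"
            ;;
    esac
}
```

In a real cxfs-post-mount, the echo lines would be replaced by the actual application start command; the matching shutdown logic would go in cxfs-pre-umount.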
For information about using these scripts, see the CXFS Administration Guide for SGI InfiniteStorage.
The cxfs-reprobe script is run by cxfs_client when it reprobes the Fibre Channel controllers upon joining or rejoining membership.
For Linux nodes, you must define a group of environment variables in the /etc/cluster/config/cxfs_client.options file in order for cxfs-reprobe to appropriately probe all of the targets on the SCSI bus. For more information, see “Using cxfs-reprobe with RHEL” in Chapter 5.
On Linux nodes, the following script enumerates the world wide names (WWNs) on the host that are known to CXFS. The following example is for a Linux node with two single-port HBAs:
linux# /var/cluster/cxfs_client-scripts/cxfs-enumerate-wwns
# cxfs-enumerate-wwns
# xscsi @ /dev/xscsi/pci01.01.0/bus
# xscsi @ /dev/xscsi/pci01.03.01/bus
# xscsi @ /dev/xscsi/pci01.03.02/bus
# xscsi @ /dev/xscsi/pci02.02.0/bus
210000e08b100df1
# xscsi @ /dev/xscsi/pci02.02.1/bus
210100e08b300df1
For more details about using these scripts, and for information about the mount scripts on server-capable administration nodes, see the CXFS Administration Guide for SGI InfiniteStorage.
Using a client-only node in a multiOS CXFS cluster requires the following:
A supported storage area network (SAN) hardware configuration.
|Note: For details about supported hardware, see the Entitlement Sheet that accompanies the base CXFS release materials. Using unsupported hardware constitutes a breach of the CXFS license. CXFS does not support the Silicon Graphics O2 workstation as a CXFS node nor does it support JBOD.|
A private 100baseT (or greater) TCP/IP network connected to each node, to be dedicated to the CXFS private heartbeat and control network. This network must not be a virtual local area network (VLAN) and the Ethernet switch must not connect to other networks. All nodes must be configured to use the same subnet.
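For example, each node's /etc/hosts might carry a dedicated entry set like the following for the private network (hostnames and addresses are purely illustrative; all nodes share one subnet):

```
# CXFS private heartbeat/control network (illustrative 10.0.199.0/24 subnet)
10.0.199.1   cxfs-mds1-priv      # server-capable administration node
10.0.199.2   cxfs-client1-priv   # client-only node
10.0.199.3   cxfs-client2-priv   # client-only node
```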
The appropriate license keys. See “License Keys”.
A switch, which is required to protect data integrity on nodes without system controllers. See the release notes for supported switches.
AIX, Linux, Solaris, Mac OS X, and Windows client-only nodes must use I/O fencing to protect the data integrity of the filesystems in the cluster. Server-capable administration nodes should use serial reset lines. See “Protect Data Integrity” in Chapter 2.
There must be at least one server-capable administration node to act as the metadata server and from which to perform cluster administration tasks. You should install CXFS software on the server-capable administration nodes first.
Nodes that are not potential metadata servers should be CXFS client-only nodes. A cluster may contain as many as 64 nodes, of which as many as 16 can be server-capable administration nodes; the rest must be client-only nodes. See “Make Most Nodes Client-Only Nodes” in Chapter 2.
Set the mtcp_nodelay system tunable parameter to 1 on server-capable administration nodes in order to provide adequate performance on file deletes.
Also see “Requirements for Solaris” in Chapter 8, “Requirements for Windows” in Chapter 9, and Chapter 2, “Best Practices for Client-Only Nodes”.
CXFS requires the following license keys:
CXFS license keys using server-side licensing. Server-side licensing is required on all nodes.
|Note: As of CXFS 4.2, all server-capable administration nodes running 4.2 and all client-only nodes running 4.2 require server-side licensing. If all existing client-only nodes are running a prior supported release, they may continue to use client-side licensing as part of the rolling upgrade policy until they are upgraded to 4.2. All client-only nodes in the cluster must use the same licensing type: if any client-only node in the cluster is upgraded to 4.2, or if a new 4.2 client-only node is added, then all nodes must use server-side licensing.|
To obtain server-side CXFS and XVM license keys, see information provided in your customer letter and the following web page:
The licensing used for SGI ProPack server-capable administration nodes is based on the SGI License Key (LK) software.
See the general release notes and the CXFS Administration Guide for SGI InfiniteStorage for more information.
XVM cluster mirroring requires a license key on the server-capable administration nodes in order for cluster nodes to access the cluster mirror. On a CXFS client-only node, the mirror feature is honored after the cxfs_client service is started, provided cluster mirroring is enabled on the server; no additional key is needed on the client-only node itself. However, a CXFS client node needs an appropriate local mirror license key in order to access local (non-cluster) mirrors.
Guaranteed rate I/O version 2 (GRIOv2) requires a license key on the server-capable administration nodes.
Fibre Channel switch license key. See the release notes.
AIX using XVM failover version 2 also requires a SANshare license for storage partitioning; see “Storage Partitioning and XVM Failover V2 for AIX” in Chapter 3.
CXFS supports guaranteed-rate I/O (GRIO) version 2 clients on all platforms, and GRIO servers on server-capable administration nodes. However, GRIO is disabled by default on Linux. See “GRIO on Linux” in Chapter 5 and “GRIO on SGI ProPack Client-Only Nodes” in Chapter 7.
Once GRIO is installed in a cluster, the superuser can run the following commands from any node in the cluster:
grioadmin, which provides stream and bandwidth management
grioqos, which is the comprehensive stream quality-of-service monitoring tool
Run the above tools with the -h (help) option for a full description of all available options. See Appendix A, “Operating System Path Differences”, for the platform-specific locations of these tools.
See the platform-specific chapters in this guide for GRIO limitations and considerations:
For details about GRIO installation, configuration, and use, see the Guaranteed-Rate I/O Version 2 Guide.
XVM failover version 2 (v2) requires that the RAID be configured in AVT mode. AIX also requires a SANshare license; see “Storage Partitioning and XVM Failover V2 for AIX” in Chapter 3.
To configure failover v2, you must create and edit the failover2.conf file. For more information, see the comments in the failover2.conf file on a CXFS server-capable administration node, CXFS Administration Guide for SGI InfiniteStorage, and the XVM Volume Manager Administrator's Guide.
This guide contains platform-specific examples of failover2.conf for the following:
“XVM Failover V2 on SGI ProPack Client-Only Nodes” in Chapter 7
To monitor CXFS, you can use the cxfs_info command on the client; on a CXFS server-capable administration node, you can use the view area of the CXFS GUI, the cxfs_admin command, or the clconf_info command. For more information, see “Verifying the Cluster Status” in Chapter 10.