The Network Queuing Environment (NQE) is a framework for distributing work across a network of heterogeneous systems. NQE consists of the following components:
Cray Network Queuing System (NQS)
NQE clients
Network Load Balancer (NLB)
NQE database and its scheduler
File Transfer Agent (FTA)
Note: Cray PVP systems that do not have an NQE license are limited to accessing and using only the NQE subset (NQS and FTA components).
After you have installed NQE, follow the instructions in Chapters 3 through 6 of this administrator's guide. For information about the content of this guide, see “Scope of This Manual”.
This manual provides information on how to configure and manage NQE. It contains the following chapters:
Chapter 1, “NQE Overview” (this chapter) describes the components of NQE and provides an overview of NQE.
Chapter 2, “Concepts and Terms” describes concepts and terms relevant to NQE. It is meant to act as an introduction for administrators unfamiliar with batch concepts; it also can act as a reference for more experienced administrators.
Chapter 3, “Configuring NQE Variables” describes how to use the nqeconfig(8) utility to configure the NQE configuration file (nqeinfo file).
Chapter 4, “Starting and Stopping NQE” describes how to start and stop NQE. It also describes the list of valid NQE components set in the NQE_DEFAULT_COMPLIST variable in the nqeinfo(5) file used by the nqeinit(8) and nqestop(8) scripts.
Chapter 5, “NQS Configuration” describes configuration and management of the NQS component of NQE. It describes the use of qmgr commands that are available only to NQS managers.
Chapter 6, “Operating NQS” describes the qmgr commands that are available to NQS operators.
Chapter 7, “NLB Administration” describes configuration and management of the NLB components of NQE.
Chapter 8, “Implementing NLB Policies” describes how to implement load-balancing policies with NQE.
Chapter 9, “NQE Database” provides an overview of the NQE database and its components, and it describes management and configuration of the NQE database and the NQE scheduler.
Chapter 10, “Writing an NQE Scheduler” describes how to modify the NQE scheduler to meet the needs of your site.
Chapter 11, “Using csuspend” describes how to use the csuspend(8) command to make use of unused cycles on a server.
Chapter 12, “Job Dependency Administration” describes how the job dependency feature affects NQE administration.
Chapter 13, “FTA Administration” describes configuration and management of the FTA component of NQE.
Chapter 14, “Configuring DCE/DFS” describes configuration of the Distributed Computing Environment (DCE) when using NQE.
Chapter 15, “Configuring ilb” describes configuration of the ilb(1) command.
Chapter 16, “Problem Solving” provides possible solutions to problems you may encounter as an NQE administrator.
Appendixes provide supplemental information to help you administer NQE.
NQE supports computing with a large number of nodes in a large network. It supports two basic models:
The NQE database model that supports up to 36 servers and hundreds of clients
The NQS model that supports an unlimited number of servers and hundreds of clients
The grouping of servers and clients is referred to as an NQE cluster. The servers provide reliable, unattended processing and management of the NQE cluster. Users who have long-running requests and a need for reliability can submit batch requests to an NQE cluster.
NQE clients support the submission, monitoring, and control of work from a workstation; the batch requests themselves execute on nodes in the NQE cluster. The client interface has minimal overhead and administrative cost; for example, no machine ID (mid) administration is needed for a client machine. NQE clients are intended to run on every node in the NQE cluster where users need an interactive interface to the NQE cluster. The NQE client provides the NQE GUI, which is accessed through the nqe command. The NQE client also provides a command-line interface. For a list of user-level commands, see Appendix A, “Man Page List”. For information about using the NQE GUI and the command-line interface, see the NQE User's Guide, publication SG-2148.
The Network Queuing System (NQS) initiates requests on NQS servers. An NQS server is a host on which NQS runs. As system administrator, you designate the default NQS server in the NQE configuration file (nqeinfo file); a user may submit a request to the default NQS server or submit it to a specific NQS server by using the NQE GUI Config window or by setting the NQS_SERVER environment variable. Cray NQS provides unattended execution of shell script files (known as batch requests) in batch mode. Users can monitor and control the progress of a batch request through NQE components in the NQE cluster. When the request has completed execution, standard output and standard error files are returned to the user in the default location or specified alternate location. Privileged users defined as qmgr managers can configure, monitor, and control NQS; users defined as qmgr operators can control NQS queues and requests with the qmgr utility.
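For example, a user can direct requests to a specific NQS server by setting NQS_SERVER before submitting work. The following is a minimal sketch; latte is the server name used in the example later in this chapter:

setenv NQS_SERVER latte                #C shell
NQS_SERVER=latte; export NQS_SERVER    #POSIX shell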
The NQE database provides a central repository for batch requests in the NQE cluster. When a request is submitted to the NQE database, it works with an administrator-defined NQE scheduler to analyze aspects of the request and to determine which NQS server will receive and process the request. When the scheduler has chosen a server for the request, the lightweight server (LWS) on the selected server obtains request information from the NQE database, performs validation, submits a copy of the request to NQS, and obtains the exit status of completed requests from NQS. By default, the copy of the request is submitted directly into a batch queue on the NQS server. Because the original request remains in the NQE database, if a problem occurs during execution and the server copy of the request is lost, a new copy can be resubmitted for processing if you have the clusterwide rerun feature enabled. For additional information about the NQE database and NQE scheduler, see Chapter 9, “NQE Database”, and Chapter 10, “Writing an NQE Scheduler”.
The Network Load Balancer (NLB) provides status and control of work scheduling within the group of components in the NQE cluster. Sites can use the NLB to provide policy-based scheduling of work in the cluster. NLB collectors periodically collect data about the current workload on the machine where they run. The data from the collectors is sent to one or more NLB servers, which store the data and make it accessible to the NQE GUI Status and Load functions. The NQE GUI Status and Load functions display the status of all requests in the NQE cluster and machine load data.
Cray FTA allows reliable (asynchronous and synchronous) unattended file transfer across the network using the ftp protocol. Network peer-to-peer authorization allows users to transfer files without specifying passwords. Transfers may be queued so that they are retried if a network link fails. Queued transfer requests may be monitored and controlled.
The csuspend(8) utility lets you suspend and restart batch activity on a server when interactive use occurs.
The NQE_DEFAULT_COMPLIST variable in the nqeinfo(5) file contains the list of NQE components to be started or stopped (see Chapter 4, “Starting and Stopping NQE”). You can set this list to one or more of any of the following valid NQE components:
NQS | Network Queuing System
NLB | Network Load Balancer
COLLECTOR | NLB collector
NQEDB | NQE database
MONITOR | NQE database monitor
SCHEDULER | NQE scheduler
LWS | Lightweight server
Beginning with the NQE 3.3 release, the default component list consists of the following components: NQS, NLB, and COLLECTOR.
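In the nqeinfo file, the corresponding entry might look like the following sketch (the delimiter shown is an assumption; verify the exact syntax with nqeconfig(8) or against your existing nqeinfo file):

NQE_DEFAULT_COMPLIST=NQS,NLB,COLLECTOR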
The following is a brief description of the valid NQE components:
The Network Load Balancer (NLB) server receives and stores information from the NLB collectors in the NLB database, which it manages.
For more information on the NLB, see “NLB” in Chapter 2.
The NQE database server serves connections from client, scheduler, monitor, and lightweight server (LWS) components in the cluster to add, modify, or remove data from the NQE database. Currently, NQE uses the mSQL database.
For more information on the NQE database server, see “NQE Database” in Chapter 2.
The NQE scheduler analyzes data in the NQE database and makes scheduling decisions.
For more information on the NQE scheduler, see “Scheduler” in Chapter 2.
The NQE database monitor tracks the state of the database and which NQE database components are connected.
For more information on the NQE database monitor, see “NQE Database Monitor” in Chapter 2.
NQE clients (running on numerous machines) provide software that lets users submit, monitor, and control requests by using either the NQE graphical user interface (GUI) or the command-line interface. From clients, users also may monitor request status, delete or signal requests, monitor machine load, and receive request output through FTA.
The machines in your network where you run NQS are usually machines where there is a large execution capacity. Job requests may be submitted from components in an NQE cluster, but they will only be initiated on an NQS server node.
FTA can be used from any NQS server to transfer data to and from any node in the network by using the ftpd daemon. It also can provide file transfer by communicating with ftad daemons that incorporate network peer-to-peer authorization, which is a more secure method than ftp.
On NQS servers, you need to run a collector process to gather machine load information and request status for the NQE GUI Status and Load windows. The collector forwards this data to the NLB server.
The NLB server runs on one or more NQE nodes in a cluster, but it is easiest to run it initially on the first node where you install NQE. Redundant NLB servers ensure greater availability of the NLB database if an NLB server is unreachable through the cluster.
Note: The NQE database must reside on only one NQE node; there is no redundancy.
You can start the csuspend(8) utility on any NQS server to monitor interactive terminal session (tty) activity on that server. If tty activity equals or exceeds the input or output thresholds you set, NQE suspends batch activity. When interactive activity drops below the thresholds you specify, batch work is resumed.
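For example, you might start the utility on an NQS server as shown in the following sketch (illustrative only; the options for setting the input and output thresholds are described in the csuspend(8) man page):

csuspend &    #monitor tty activity; suspend batch work at the configured thresholds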
Client nodes provide access to the following client commands: nqe(1) (which invokes the NQE GUI), cevent(1), cqsub(1), cqstatl(1), and cqdel(1). Client nodes also require access to the FTA ftad daemon (which services FTA requests issued by user requests and by NQE itself). In a typical configuration, there would be many more client nodes than any other type of NQE node. Also, you can configure the client nodes to use the ilb(1) utility so that a user may execute a load-balanced interactive command; for more information, see Chapter 15, “Configuring ilb”.
In addition to the client commands, server nodes provide access to the following commands: qalter(1), qchkpnt(1), qconfigchk(1), qdel(1), qlimit(1), qmgr(1), qmsg(1), qping(1), qstat(1), and qsub(1).
For a complete list of commands, see Appendix A, “Man Page List”.
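The following sketch shows a typical client-side session; the request ID 123.latte is hypothetical, because actual IDs are assigned at submission, and command options are described in the man pages:

cqsub jjob         #submit the batch request script jjob
cqstatl            #display the status of requests
cqdel 123.latte    #delete a request by its assigned ID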
The example given in this section shows how the NQE components work as an environment. For information about NQE user features and about setting NQE environment variables, see Introducing NQE, publication IN-2153, and the NQE User's Guide, publication SG-2148. Both documents, as well as this administration document, are available online (see Introducing NQE, publication IN-2153, for information about accessing NQE documentation online).
Figure 1-1 shows a possible NQE configuration. The user mary uses the client workstation snow, which has an NQE client interface to the NQS server latte (the environment variable NQS_SERVER is set to latte). User mary wants the output from her batch request to go to her research assistant (fred) at another NQE client workstation, gale.
User mary has several batch requests to run. One of the requests (named jjob) looks like the following example:
#QSUB -eo                    #merge stdout and stderr
#QSUB -J m                   #append NQS job log to stdout
#QSUB -o "%fred@gale/nppa_latte:/home/gale/fred/mary.jjob.output"
                             #returns stdout to fred@gale
#QSUB -me                    #sends mail to submitter at completion
#QSUB                        #optional qsub delimiter
date                         #prints date
rft -user mary -host snow -domain nppa_latte -nopassword -function get jan.data nqs.data
                             #use FTA to transfer jan.data from snow to the NQS server (latte)
cc loop.c -o prog.out        #compile loop.c
./prog.out                   #run the compiled program
rm -f loop.c prog.out jan.data nqs.data    #delete files
echo job complete
The following embedded qsub option uses FTA to return the standard output file from the request to fred at the workstation gale; nppa_latte is the FTA domain:
#QSUB -o "%[email protected]/nppa_latte:/home/gale/fred/mary.jjob.output" |
The request script uses FTA (the rft command) to transfer its files as shown in the following example:
rft -user mary -host snow -domain nppa_latte -nopassword -function get jan.data nqs.data
The FTA domain name nppa_latte and the option -nopassword indicate that peer-to-peer authorization is used, so mary does not need to specify a password; however, mary must have a .netrc file on latte to log into fred's account and mary must also have an account on gale with permission to read fred's file (see the #QSUB example, above).
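As an illustrative sketch (the entry below is hypothetical and the password is a placeholder; see your system's documentation for the exact .netrc format), the .netrc file on latte might contain an entry such as the following:

machine gale login fred password fredpasswd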
User mary submits the request by using the following command line (alternatively, mary could submit the request by using the NQE GUI):
cqsub jjob
The request is sent to the NQS server latte, since mary's environment variable NQS_SERVER is set to latte.
The load-balancing policy for mary's site allows work to be shared among NQS systems on latte, pendulum, telltale, and gevalia. Because of the workload on the machines, mary's first request is sent to gevalia, the second and third are sent to latte, the fourth and fifth are sent to pendulum, and the sixth is sent to telltale. User mary does not need to know where her requests are executing to find out their status. She can use the NQE GUI Status window to determine their status.
Because the NQE GUI Status window is refreshed periodically, user mary can monitor the progress of all her requests. Because she used the embedded #QSUB -me option, she receives mail when each request completes.
For more information about FTA and rft syntax, about using #QSUB directives, or about using the NQE GUI Status window to determine status of requests, see the NQE User's Guide, publication SG-2148.
Throughout this guide, the path /nqebase is used in place of the default NQE path name, which is /opt/craysoft/nqe on UNICOS, UNICOS/mk, and Solaris systems and is /usr/craysoft/nqe on all other supported platforms.
Figure 1-2 shows the NQE file structure.
The NQE release contains a World Wide Web (WWW) interface to NQE. You can access the interface through WWW clients such as Mosaic or Netscape. A single interface lets users submit requests (from a file or interactively), obtain status on their requests, delete requests, signal requests, view output, and save output. Online help is provided.
NQE administrators are encouraged to configure and customize this interface. It is provided so that administrators may supply users of nonsupported NQE platforms (such as personal computers) a tool that allows them to access NQE resources.
For information about managing the interface, read the /nqebase/www/README file. You can obtain the most current version of the NQE WWW interface at ftp.cray.com in the /pub/nqe/www file.
Note: The WWW interface is not available to UNICOS systems that run only the NQE subset (NQS and FTA components).