This revision of the NQE User's Guide, publication SG-2148, supports the 3.3 release of the Network Queuing Environment (NQE).
The NQE user documentation was revised to support the following NQE 3.3 features:
Miser integration on Origin systems is supported. NQE supports the submission of jobs that specify Miser resources.
On CRAY T3E systems, NQE now supports checkpointing and restarting of jobs. This feature was initially supported in the NQE 3.2.1 release.
On CRAY T3E systems, NQE now supports the political scheduling feature. This includes obtaining fair-share information by using the multilayered user fair-share scheduling environment (MUSE) and scheduling a job for immediate execution with preferential CPU priority (prime job). (This feature was initially supported in the NQE 3.2.1 release.)
Distributed Computing Environment (DCE) support was enhanced as follows:
Ticket forwarding and inheritance are now supported on selected platforms. This feature lets users submit jobs in a DCE environment without providing passwords. Ticket forwarding is supported on all NQE platforms except Digital UNIX systems. Ticket inheritance is supported only on UNICOS and IRIX systems.
IRIX systems now support access to DCE resources for jobs submitted to NQE.
Support for tasks that use a password for DCE authentication is available on all NQE 3.3 platforms.
The following NQE database enhancements were made:
The number of simultaneous connections that clients and execution nodes can make to the NQE database was increased.
The MAX_SCRIPT_SIZE variable was added to the nqeinfo file, allowing an administrator to limit the size of the script file submitted to the NQE database. If the MAX_SCRIPT_SIZE variable is set to 0 or is not set, a script file of unlimited size is allowed. The script file is stored in the NQE database; if the file is bigger than MAX_SCRIPT_SIZE, it can affect the performance of the NQE database and the nqedbmgr(8) command. The nqeinfo(5) man page includes a description of this new variable.
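For illustration only, the following sketch assumes that the nqeinfo file uses shell-style variable assignments and that the size value is given in bytes; the variable name comes from this feature description, but the value and syntax details are assumptions (see the nqeinfo(5) man page for the authoritative format):

    # Hypothetical nqeinfo entry: limit script files stored in the NQE database.
    # A value of 0, or leaving the variable unset, allows scripts of unlimited size.
    MAX_SCRIPT_SIZE=1000000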
The Network Queuing System (NQS) sets several environment variables that are passed to a login shell when NQS initiates a job. One of the environment variables set is LOGNAME, which is the name of the user under whose account the job will run. Some platforms, such as IRIX systems, use the USER environment variable rather than LOGNAME. On those platforms, csh writes an error message into the job's stderr file, noting that the USER variable is not defined. To accommodate this difference, NQS now sets both the LOGNAME and USER environment variables to the same value before initiating a job. The ilb(1) man page was revised to include this new variable.
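As a sketch of the effect (the script fragment is illustrative and not taken from the NQE documentation), a csh job script can now reference either variable without provoking an undefined-variable error on platforms such as IRIX:

    # Illustrative csh fragment from a batch request script.
    # NQS sets USER and LOGNAME to the same value before the job starts.
    echo "Request running under account $USER (LOGNAME = $LOGNAME)"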
The new nqeinfo(5) man page documents all NQE configuration variables; it is provided in online form only and is accessible by using the man(1) command or through the NQE configuration utility Help facility.
Array services support was added for UNICOS and UNICOS/mk systems. Array services let you manage related processes as a single unit, including processes running across multiple machines. Array services use array sessions to group these related processes; each array session is identified by a unique array session handle (ASH). A global ASH is needed when the processes within an array session are not all running on the local node. The NQE request node now requests a global ASH before initiating the job. NQE logs the global ASH associated with the job in a log message in the user's job log, and the value is shown in the Global ASH: field of an NQE job log display. A job log display can be requested by supplying the NQE job identifier with the qstat -j or cqstatl -j command, or the job log can be displayed through the NQE GUI by clicking on a specific job within the Status display and then selecting the Actions->Job Log menu. The global ASH for a job is also entered into the NQS log file.
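For example, assuming a hypothetical NQE job identifier of 1234, either of the following commands displays the job log, including the Global ASH: field; the cqstatl form is typically used from an NQE client workstation:

    qstat -j 1234
    cqstatl -j 1234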
The capabilities of the NQE database scheduler (LWS) have been extended.
The security enhancements to UNICOS/mk systems are supported with this NQE release.
Overall performance of the Network Load Balancer (NLB) collector was improved, and new information is provided.
NQS now supports per-request limits for CPU usage, memory usage, and the number of processors when running on IRIX platforms. The per-request usage of these resources is displayed by the NQE GUI and the cqstatl and qstat commands. Requests that exceed the limits will be terminated. The periodic checkpointing of requests based on accumulated CPU time is also supported.
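As an illustration (the option letters follow standard NQS qsub conventions and the values are hypothetical; the option for the number of processors is platform specific and is not shown), a request with per-request CPU time and memory limits might be submitted as follows; see the qsub(1) and cqsub(1) man pages for the exact options and units on your system:

    # Submit a request with an illustrative per-request CPU time limit (-lT,
    # in seconds) and per-request memory limit (-lM); a request that exceeds
    # either limit is terminated.
    qsub -lT 3600 -lM 200mb myscript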
The NQE_DEFAULT_COMPLIST configuration variable, which defines the list of NQE components to be started or stopped, has replaced the NQE_TYPE configuration variable in the nqeinfo file.
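For illustration only (the component names and comma-separated syntax shown here are assumptions; see the nqeinfo(5) man page for the valid component list), a node that should run only the NQS server and the NLB collector might contain an entry such as:

    # Hypothetical nqeinfo entry: start or stop only these NQE components.
    NQE_DEFAULT_COMPLIST=NQS,COLLECTOR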
The CPU and memory scheduling weighting factors were added for application PEs. The NQS scheduling weighting factors are used with the NQS priority formula to calculate the intraqueue job initiation priority for runnable NQS jobs. This feature also restores the user-specified priority scheduling functionality (specified by the cqsub -p and qsub -p commands).
The -f option was added to the qdel(1) command; it specifies that no request output will be returned to the user. It behaves like the -k option except that the user's standard error, standard output, and job log files are neither returned to the user nor stored at the execution node in the NQS failed directory.
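For example (the request identifier is hypothetical), the following command deletes a request and discards its standard output, standard error, and job log files rather than returning them:

    qdel -f 1234.hostname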
Year 2000 support for NQE has been completed.
The appendix that documents the NQE GUI was removed from this user's guide.
Man pages were revised; they are provided in online form only as part of the NQE release package.
For a complete list of new features for the NQE 3.3 release, see the NQE Release Overview, publication RO-5237.