Chapter 1. Introducing the Challenge Deskside Servers

The Challenge deskside systems, model CMN A011, are high-performance servers in a compact deskside enclosure. This guide contains information for end users about the POWER Challenge and Challenge deskside systems hardware.

Features and Options

Your Challenge deskside server comes with the following features:

  • POWER Challenge models come with an IP21 or IP25 CPU board using the R8000™ or R10000™ microprocessor on each board (see Table 1-1).

  • Challenge models come with an IP19 or IP25 CPU board using the R4400™ or R10000 microprocessor on each board (see Table 1-1).

    Note: A POWER Challenge or Challenge system with an IP25 (R10000) CPU board is called the POWER Challenge 10000 or the Challenge 10000.

  • Up to 2 GB of RAM on Challenge system memory boards and up to 6 GB of RAM in POWER Challenge systems.

  • An IO4 board (also known as the POWERchannel-2) that provides multiple I/O control functions and room for expansion.

  • Space for up to seven half-height SCSI peripherals in the chassis.

  • The Silicon Graphics® Ebus, which supports protocols for consistent data sharing and high-speed block data transfers between main memory and the I/O subsystem; the 256-bit Ebus (also known as the POWERpath-2 system bus) supports multiple processor operations.

  • A 40-bit address bus, which provides addressing for parity-checked, high-speed data transfers between the CPU(s) and memory board(s).

  • 64-bit operating system support (on POWER Challenge).

    Table 1-1. CPU Board Type and System Relationship

    CPU Board    Applicable System
    IP19         Challenge
    IP21         POWER Challenge
    IP25         Challenge 10000 or POWER Challenge 10000

  • Five VME™ expansion slots (only two slots are available when a system is ordered with the visualization console option).

  • One RS-422 and three RS-232 serial ports.

  • A 25-pin parallel port.

  • An independent system status monitor (System Controller) that records error information during any unplanned shutdown.

Available options include:

  • VMEbus I/O and controller boards.

  • Additional half-height and full-height SCSI devices.

  • External 1/2-inch and 8 mm SCSI-controlled tape backup systems.

  • A system console ASCII terminal.

  • CPU and memory upgrades.

  • Additional IO4 controller boards.

  • A visualization console option providing a basic color graphics interface to the POWER Challenge system.

  • Optional IO4 mezzanine daughter boards (also known as HIO modules) for expanded and varied functionality of the IO4.

Although the Challenge deskside servers are similar in size and external appearance to previous Silicon Graphics deskside systems, most internal features are different in design.

The internal drive rack supports up to seven half-height (or one half-height and three full-height) devices that are controlled by either one or two SCSI-2 buses.

The Challenge Board Set

The basic Challenge board set consists of

  • an IP19, IP21, or IP25 CPU board

  • an MC3 memory board (also known as a POWERpath-2™ interleaved memory board)

  • an IO4 controller board

The backplane supports the addition of two more boards selected from the three standard types listed. The Onyx deskside graphics workstation system supports a RealityEngine2™ (RE2) or VTX™ graphics board set that is not available with the Challenge deskside backplane. The POWER Challenge system does support an optional visualization console, providing a basic color graphics interface to the system.

Figure 1-1 shows a functional block diagram of the Challenge deskside subsystems. The IP25, IP21, or IP19 board is the heart of the Challenge deskside system.

The IP25 board in your Challenge 10000 or POWER Challenge 10000 deskside system can house one, two, or four MIPS R10000 64-bit microprocessors. Your system can house up to three IP25s with a potential system total of 12 microprocessors. The four-way superscalar R10000 microprocessor can fetch four instructions and issue up to five instructions per cycle. A superscalar processor is one that can fetch, execute, and complete more than one instruction in parallel.

The IP21 CPU board in your POWER Challenge deskside can house either one or two R8000 microprocessors. Your system can house up to three IP21s with a potential system total of six R8000 microprocessors. Each R8000 microprocessor assembly uses a customized cache controller, a separate floating point unit, and two tag RAM and two SRAM cache units in addition to the main integer unit. Board logic on the CPU is “sliced” to give each microprocessor its own dedicated support logic. This allows each microprocessor to run independently.

Note that all optional upgrade R8000 CPU boards ordered for the POWER Challenge come with two microprocessors on each board.

Each IP19 CPU board in your Challenge deskside can house up to four MIPS R4400 64-bit RISC microprocessors. Your system can house up to three IP19s with a potential system total of 12 microprocessors. Board logic on the IP19 is “sliced” to give each R4400 its own dedicated support logic. This allows each R4400 to run independently.

The MC3 system memory board can be populated with 16 MB or 64 MB SIMM modules. The MC3 has 32 SIMM sockets. Up to 2 GB of on-board memory is available for Challenge and up to 6 GB for POWER Challenge.
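The memory figures above follow directly from the SIMM arithmetic. The sketch below is an illustrative Python calculation, not vendor software; it assumes a board fully populated with identical SIMMs, and it treats the 6 GB POWER Challenge maximum as three fully populated MC3 boards (an inference from the three-board backplane limit described later in this chapter).

```python
def mc3_capacity_mb(simm_mb, sockets=32):
    """Capacity in MB of one MC3 board populated with identical SIMMs."""
    return simm_mb * sockets

assert mc3_capacity_mb(16) == 512    # 16 MB SIMMs: 512 MB per board
assert mc3_capacity_mb(64) == 2048   # 64 MB SIMMs: 2 GB per board
# Three fully populated boards match the 6 GB POWER Challenge maximum:
assert 3 * mc3_capacity_mb(64) == 6144
```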

Figure 1-1. Challenge Deskside System Functional Block Diagram

I/O Interfaces

The main Challenge I/O subsystem consists of one or more IO4 boards, which plug directly into the Ebus and use optional mezzanine cards. Mezzanine cards are daughter boards that plug into IO4 boards to allow expansion and customizing. See Appendix D for additional information on mezzanine boards.

Controllers for I/O devices connect to the 64-bit-wide, 320 MB per second Ibus. The Ibus connects to the Ebus through the IA and ID processors, which manage transfers between the 1.2 GB per second Ebus and the 320 MB per second Ibus. Up to two optional mezzanine cards plug into the Ibus on each IO4 board.

The IO4 is the fundamental component of the I/O subsystem. It contains all of the I/O controllers needed to implement a basic Challenge system:

  • an Ethernet controller

  • two fast and wide 16-bit SCSI-2 controllers

  • a VME interface port (using the VCAM or GCAM)

  • four serial ports

  • a parallel port

In addition, the IO4 board contains the logic for a flat cable interface (FCI), which is used to connect to the VMEbus and optional visualization console. The IO4 board also has connections for mezzanine cards, which are used to provide expansion I/O controllers.

Ibus Interface

The IA and ID application-specific integrated circuits (ASICs) act as bus adapters that connect the Ibus to the much faster Ebus. In addition to making the necessary conversions back and forth between the two buses, the IA and ID ASICs perform virtual address mapping for scatter/gather direct memory access (DMA) operations and maintain cache coherency between the Ebus and the I/O subsystem.

Flat Cable Interface

The IO4 contains two FCI interfaces that are proprietary to Silicon Graphics. FCIs are synchronous, point-to-point interfaces that allow communication between devices connected by a cable. The FCI is used to connect to the VME64 bus adapter. FCIs can operate at up to 200 MB per second for VMEbus adapters.

The FCI on the first IO4 in a system is connected to the VME Channel Adapter Module (VCAM) board, which contains a VME adapter subsystem in the backplane.

POWER Challenge systems using the visualization console option have a Graphics Channel Adapter Module (GCAM) board. The GCAM contains a VME adapter subsystem and interfaces to the optional visualization console graphics board in the fifth VME slot. An FCI interface is routed to a connector on the front of the optional GCAM.

Note that the optional visualization console graphics board uses VME slots 3, 4, and 5 when installed. This leaves two VME slots available for use.

VMEbus Interface

The VMEbus is supported through a VCAM interface (GCAM with the visualization console option) connected to an IO4 board. This bus is standard equipment and is located in the main backplane, next to the Ebus. The VCAM or optional GCAM plugs directly into the IO4 board without any cabling.

The VME interface supports all protocols defined in Revision C of the VME Specification, plus the A64 and D64 modes defined in Revision D. The D64 mode allows DMA bandwidths of up to 60 MB per second. The VME interface can operate as either a master or a slave. It supports DMA to memory on the Ebus and programmed I/O operations from the Ebus to addresses on the VMEbus.

In addition to interfacing with the VMEbus, the VCAM or optional GCAM provides scatter/gather virtual address translation capability and a DMA engine that can be used to increase the performance of non-DMA VME boards. See Appendix E for additional VME information.

SCSI-2 Interface

The IO4 contains two 16-bit SCSI-2 device controllers. Each controller can operate with a bandwidth of up to 20 MB per second and can be configured for either single-ended or differential operation.

To accommodate extra SCSI channels, optional SCSI mezzanine cards contain three 16-bit SCSI-2 controllers. Two of the controllers are differential only; the third is configurable as single-ended or differential. These controllers are identical to those used on the main IO4 board.

SCSI mezzanine cards can be plugged into either or both of the mezzanine card slots on an IO4 board, allowing up to eight SCSI-2 controllers per IO4 board. With the optional visualization console, the GCAM covers one of the available mezzanine connectors on the standard IO4, leaving room for a maximum of one optional SCSI mezzanine board on the first IO4 (three extra SCSI connectors).
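The controller counts in this section can be tallied as follows. This is a plain illustrative sketch of the arithmetic, not vendor software:

```python
def scsi_controllers_per_io4(mezzanine_cards):
    """SCSI-2 controllers on one IO4: two on the board itself plus
    three on each optional SCSI mezzanine card (at most two cards)."""
    if not 0 <= mezzanine_cards <= 2:
        raise ValueError("an IO4 board has only two mezzanine slots")
    return 2 + 3 * mezzanine_cards

assert scsi_controllers_per_io4(0) == 2  # bare IO4
assert scsi_controllers_per_io4(2) == 8  # fully populated IO4
assert scsi_controllers_per_io4(1) == 5  # first IO4 when the GCAM covers a slot
```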

Ethernet Interface

The IO4's Ethernet interface operates at the standard Ethernet rate of 10 Mb per second and supports an AUI (15-pin) physical connection. The controller is intelligent; it requires no direct CPU involvement when packets are transmitted or received.

Parallel Port

The IO4 contains a DMA-driven parallel port capable of operating printers or performing high-speed data transfer to or from external equipment at rates up to 300 KB per second.

Serial Ports

The IO4 contains one RS-422 and three RS-232 serial ports, all of which are capable of asynchronous operation at rates up to 19.2 Kbaud. The RS-422 port can be operated at 38.4 Kbaud, provided the RS-232 ports are not all in use.
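The serial rates above translate into byte throughput once framing overhead is counted. The sketch below is illustrative Python and assumes common 8N1 framing (8 data bits, no parity, 1 stop bit), which the guide does not specify:

```python
def async_bytes_per_second(baud, data_bits=8, parity_bits=0, stop_bits=1):
    """Peak payload rate of an asynchronous serial link: each byte
    costs one start bit plus the data, parity, and stop bits."""
    bits_per_byte = 1 + data_bits + parity_bits + stop_bits
    return baud / bits_per_byte

assert async_bytes_per_second(19200) == 1920.0   # an RS-232 port at 19.2 Kbaud
assert async_bytes_per_second(38400) == 3840.0   # the RS-422 port at 38.4 Kbaud
```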

System and SCSI Backplanes

The enclosure comes with an 11-slot cardcage and backplane that includes five VME expansion slots. Note that only two VME slots are available when a POWER Challenge system uses the optional visualization console.

To the right of the cardcage is room for seven half-height (or one half-height and three full-height) SCSI devices. Each drive sits in its own individual “sled” and slides into the drive rack. When fully inserted, the drive and sled assembly plugs into the SCSI backplane at the rear of the rack.

See “SCSI Drive Rack” in Chapter 2 for specific information about peripheral locations.

SCSI I/O Devices

SCSI drives are the only devices internally supported by the Challenge deskside system. The system's drive rack has space for seven half-height devices. All drives must be front loaded after being mounted on a special drive sled. Supported devices include hard disks, Digital Linear Tape (DLT) drives, 1/4-inch cartridge, 4-mm and 8-mm tape drives, and CD-ROM drives. Installing a full-height drive (such as the 8-mm or DLT) requires using two half-height slots. See Chapter 4 for installation instructions.

System Controller

Located just above the SCSI drive rack are an on/off key switch and the System Controller display panel. The System Controller is a microprocessor-controlled subsystem that is mounted directly to the system backplane. It monitors various system operations, including chassis temperature, system fan speed, backplane voltages, and the system clock. Battery backup supports the System Controller's NVRAM and time-of-day system clock.

When any operating parameter exceeds or drops below a specified limit, the System Controller can execute a controlled shutdown of the Challenge deskside system. During such a shutdown, the System Controller maintains a log of the last error message(s) received before the shutdown.

Chapter 2 shows the location of the System Controller's front panel on the chassis. Figure 3-4 in Chapter 3 identifies its related control buttons. To understand and use the System Controller, see “Using the System Controller” in Chapter 5.

Operating Considerations

This section covers the basic requirements for physical location to ensure proper chassis operation.

The Challenge deskside chassis is designed to fit into a typical work environment. Keep the system in good condition by maintaining the following operating conditions:

  • The chassis should ideally have a 6-inch (15-cm) minimum air clearance above the top. The first line of Table 1-2 shows the side clearances required. If the chassis is positioned under a desk or other equipment and the top air clearance is less than 6 inches (15 cm), make sure that the side air clearances are at least as great as those listed on the second line of Table 1-2.

  • The chassis should be kept in a clean, dust-free location to reduce maintenance problems.

  • The available power should be rated for computer operation.

  • The chassis should be protected from harsh environments that produce excessive vibration, heat, and similar conditions.

    Table 1-2. Required Air Clearances for the Deskside Chassis

    Top Clearance    Left Sidea     Right Sidea    Front         Back
    More than 6”     3” (8 cm)      6” (15 cm)     6” (15 cm)    6” (15 cm)
    Less than 6”     6” (15 cm)     10” (25 cm)    8” (20 cm)    8” (20 cm)

a. Side as viewed from the front of the chassis.

Additional specifications are provided in Appendix A, “Hardware Specifications.”

If you have any questions concerning physical location or site preparation, contact your Silicon Graphics system support engineer (SSE) or other authorized support organization representative before your system is installed.

Chapters 2 through 5 in this guide discuss hardware topics common to all Challenge deskside configurations.