A real-time program is defined by its close relationship to external hardware. This chapter reviews the facilities that IRIX provides for accessing and controlling external devices.
Note: This section contains an overview for readers who are not familiar with the details of the UNIX I/O system. All these points are covered in much greater detail in the IRIX Device Driver Programmer's Guide (see “Related Publications and Sites”).
It is a basic concept in UNIX that all I/O is done by reading or writing files. All I/O devices—disks, tapes, printers, terminals, and VME cards—are represented as files in the file system. Conventionally, every physical device is represented by an entry in the /dev file system hierarchy. The purpose of each device special file is to associate a device name with a device driver, a module of code that is loaded into the kernel either at boot time or dynamically and is responsible for operating that device at the kernel's request.
In IRIX 6.4 and later, the /dev filesystem still exists to support programs and shell scripts that depend on conventional names such as /dev/tty. However, the true representation of all devices is built in a different file system rooted at /hw (for hardware). You can explore the /hw filesystem using standard commands such as file, ls, and cd. You will find that the conventional names in /dev are implemented as links to device special files in /hw. The creation and use of /hw, and the definition of devices in it, is described in detail in the IRIX Device Driver Programmer's Guide.
To use a device, a process opens the device special file by passing the file pathname to open() (see the open(2) man page). For example, a generic SCSI device might be opened by a statement such as the following:
int scsi_fd = open("/dev/scsi/sc0d11l0", O_RDWR);
The returned integer is the file descriptor, a number that indexes an array of control blocks maintained by IRIX in the address space of each process. With a file descriptor, the process can call other system functions that give access to the device. Each of these system calls is implemented in the kernel by transferring control to an entry point in the device driver.
Each device driver supports one or more of the following kinds of operations: open and close, read and write, control (ioctl), and memory mapping (mmap).
Not every driver supports every entry point. For example, the generic SCSI driver (see “Generic SCSI Device Driver”) supports only the open, close, and control entries.
Device drivers in general are documented with the device special files they support, in volume 7 of the man pages. For a sample, review the following:
dsk(7m), documenting the standard IRIX SCSI disk device driver
smfd(7m), documenting the diskette and optical diskette driver
tps(7m), documenting the SCSI tape drive device driver
plp(7), documenting the parallel line printer device driver
klog(7), documenting a “device” driver that is not a device at all, but a special interface to the kernel
If you review a sample of entries in volume 7, as well as other man pages that are called out in the topics in this chapter, you will understand the wide variety of functions performed by device drivers.
When your program needs direct control of a device, you have the following choices:
If it is a device for which IRIX or the device manufacturer distributes a device driver, find the device driver man page in volume 7 to learn the device driver's support for read(), write(), mmap(), and ioctl(). Use these functions to control the device.
If it is a PCI device without Bus Master capability, you can control it directly from your program using programmed I/O (see the pciba(7M) man page). This option is discussed in the IRIX Device Driver Programmer's Guide.
If it is a VME device without Bus Master capability, you can control it directly from your program using programmed I/O or user-initiated DMA. Both options are discussed under “The VME Bus”.
If it is a PCI or VME device with Bus Master (on-board DMA) capability, you should receive an IRIX device driver from the OEM. Consult IRIX Admin: System Configuration and Operation to install the device and its driver. Read the OEM man page to learn the device driver's support for read(), write(), and ioctl().
If it is a SCSI device that does not have built-in IRIX support, you can control it from your own program using the generic SCSI device driver. See “Generic SCSI Device Driver”.
In the remaining case, you have a device with no driver and must create one yourself. This process is documented in the IRIX Device Driver Programmer's Guide, which contains extensive information and sample code (see “Related Publications and Sites”).
The SCSI interface is the principal way of attaching disk, cartridge tape, CD-ROM, and digital audio tape (DAT) devices to the system. It can be used for other kinds of devices, such as scanners and printers.
IRIX contains device drivers for supported disk and tape devices. Other SCSI devices are controlled through a generic device driver that must be extended with programming for a specific device.
The detailed, board-level programming of the host SCSI adapters is done by an IRIX-supplied host adapter driver. The services of this driver are available to the SCSI device drivers that manage the logical devices. If you write a SCSI driver, it controls the device indirectly, by calling a host adapter driver.
The host adapter drivers handle the low-level communication over the SCSI interface, such as programming the SCSI interface chip or board, negotiating synchronous or wide mode, and handling disconnect/reconnect. SCSI device drivers call on host adapter drivers using indirect calls through a table of adapter functions. The use of host adapter drivers is documented in the IRIX Device Driver Programmer's Guide.
The naming conventions for disk and tape device files are documented in the intro(7) man page. In general, devices in /dev/[r]dsk are disk drives, and devices in /dev/[r]mt are tape drives.
Disk devices in /dev/[r]dsk are operated by the SCSI disk controller, which is documented in the dsk(7) man page. It is possible for a program to open a disk device and read, write, or memory-map it, but this is almost never done. Instead, programs open, read, write, or map files; and the EFS or XFS file system interacts with the device driver.
Tape devices in /dev/[r]mt are operated by the magnetic tape device driver, which is documented in the tps(7) man page. Users normally control tapes using such commands as tar, dd, and mt (see the tar(1), dd(1M) and mt(1) man pages), but it is also common for programs to open a tape device and then use read(), write(), and ioctl() to interact with the device driver.
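For illustration, the following minimal sketch writes one block of data to a tape and then asks the driver to write a filemark through the MTIOCTOP control function. The conventional no-rewind link /dev/nrtape and the 16 KB block size are assumptions; check tps(7) and <sys/mtio.h> for the details of your drive.

/*
 * Hedged sketch: write one block to the no-rewind tape device, then
 * write a filemark through the tape driver's ioctl() interface.
 * The /dev/nrtape link and the block size are assumptions.
 */
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mtio.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char block[16384];              /* one tape block of data */
    struct mtop op;
    int fd;

    memset(block, 0, sizeof block);
    fd = open("/dev/nrtape", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nrtape");
        exit(1);
    }
    if (write(fd, block, sizeof block) < 0)
        perror("write");
    op.mt_op = MTWEOF;              /* ask the driver to write a filemark */
    op.mt_count = 1;
    if (ioctl(fd, MTIOCTOP, &op) < 0)
        perror("MTIOCTOP");
    close(fd);
    return 0;
}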
Since the tape device driver supports the read/write interface, you can schedule tape I/O through the asynchronous I/O interface (see “Asynchronous I/O Basics” in Chapter 5). Be careful to ensure that asynchronous operations to a tape are executed in the proper sequence.
Generally, non-disk, non-tape SCSI devices are installed in the /dev/scsi directory. Devices named there are controlled by the generic SCSI device driver, which is documented in the ds(7m) man page.
Unlike most kernel-level device drivers, the generic SCSI driver does not support interrupts, and does not support the read() and write() functions. Instead, it supports a wide variety of ioctl() functions that you can use to issue SCSI commands to a device. In order to invoke these operations you prepare a dsreq structure describing the operation and pass it to the device driver. Operations can include input and output as well as control and diagnostic commands.
The programming interface supported by the generic SCSI driver is quite primitive. A library of higher-level functions makes it easier to use. This library is documented in the dslib(3x) man page. It is also described in detail in the IRIX Device Driver Programmer's Guide. The most important functions in it are listed below:
dsopen(), which takes a device pathname, opens it for exclusive access, and returns a dsreq structure to be used with other functions.
fillg0cmd(), fillg1cmd(), and filldsreq(), which simplify the task of preparing the many fields of a dsreq structure for a particular command.
doscsireq(), which calls the device driver and checks status afterward.
The dsreq structure for some operations specifies a buffer in memory for data transfer. The generic SCSI driver handles the task of locking the buffer into memory (if necessary) and managing a DMA transfer of data.
When the ioctl() function is called (through doscsireq() or directly), it does not return until the SCSI command is complete. You should only request a SCSI operation from a process that can tolerate being blocked.
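To illustrate these calls, here is a hedged sketch that issues a SCSI INQUIRY command through dslib. The device path and the 36-byte allocation length are assumptions, and the macros and flags used should be verified against dslib(3x).

/*
 * Hypothetical sketch: issue a SCSI INQUIRY through dslib.
 * The device path and allocation length are assumptions; verify the
 * calling conventions, macros, and flags against dslib(3x).
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <dslib.h>

int main(void)
{
    unsigned char inq[36];
    struct dsreq *dsp;

    dsp = dsopen("/dev/scsi/sc0d11l0", O_RDWR);     /* exclusive open */
    if (dsp == NULL) {
        perror("dsopen");
        return 1;
    }
    /* Build the 6-byte INQUIRY CDB (opcode 0x12) in the command buffer. */
    fillg0cmd(dsp, (uchar_t *) CMDBUF(dsp), 0x12, 0, 0, 0, sizeof inq, 0);
    /* Describe the data buffer: a read, with automatic request sense. */
    filldsreq(dsp, inq, sizeof inq, DSRQ_READ | DSRQ_SENSE);
    if (doscsireq(getfd(dsp), dsp) < 0)             /* blocks until done */
        fprintf(stderr, "INQUIRY failed, status %d\n", STATUS(dsp));
    else
        printf("vendor: %.8s\n", (char *) &inq[8]);
    dsclose(dsp);
    return 0;
}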
Built upon the basic dslib functions are several functions that execute specific SCSI commands; for example, read08() performs a read. However, few SCSI commands are recognized by all devices. Even the read operation has many variations, and the read08() function as supplied is unlikely to work without modification. The dslib library is not complete in itself; you must alter its functions and extend them with functions tailored to a specific device.
For more on dslib, see the IRIX Device Driver Programmer's Guide.
A library of functions that enable you to read audio data from an audio CD in the CD-ROM drive is distributed with IRIX. This library was built upon the generic SCSI functions supplied in dslib. The CD audio library is documented in the CDintro(3dm) man page (installed with the dmedia_dev package).
A library of functions that enable you to read and write audio data from a digital audio tape is distributed with IRIX. This library was built upon the functions of the magnetic tape device driver. The DAT audio library is documented in the DTintro(3dm) man page (installed with the dmedia_dev package).
Beginning in IRIX 6.5, the PCI Bus Access driver (pciba) can be used on all Silicon Graphics platforms that support PCI for user-level access to the PCI bus and the devices that reside on it. The pciba interface provides a mechanism to access the PCI bus address spaces, handle PCI interrupts, and obtain PCI addresses for DMA from user programs. It provides a convenient mechanism for writing user-level PCI device drivers.
The pciba driver is a loadable device driver that is not loaded in the kernel by default. For information on loading the pciba driver see the pciba(7M) man page.
The pciba driver provides support for open(), close(), ioctl(), and mmap() functions. It does not support the read() and write() driver functions. Using pciba, memory-mapped I/O is performed to PCI address space without the overhead of a system call. PCI bus transactions are transparent to the user. Access to PCI devices is performed by knowing the location of the PCI bus in the hardware graph structure and the slot number where the PCI card resides. Specific information about using the pciba driver can be found in the pciba(7M) man page.
Example 6-1 shows how to use pciba to map into the memory space of a PCI card on an Origin 2000 or Onyx 2 system. The code performs an open to the address space found in base register 2 of a PCI device that resides in slot 1 of a PCI shoebox (pci_xio). Then it memory maps 1 MB of memory into the process address space. Lastly, it writes zeros to the first byte of the memory area.
Example 6-1. Memory Mapping With pciba
#define PCI40_PATH "/hw/module/1/slot/io2/pci_xio/pci/1/base/2"
#define PCI40_SIZE (1024*1024)

int fd;
volatile uchar_t *pci40_addr;

fd = open(PCI40_PATH, O_RDWR);
if (fd < 0) {
    perror("open");
    exit(1);
}
pci40_addr = (volatile uchar_t *) mmap(0, PCI40_SIZE,
                 PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
if (pci40_addr == (volatile uchar_t *) MAP_FAILED) {
    perror("mmap");
    exit(1);
}
*pci40_addr = 0x00;
More information about pciba and user access to the PCI bus on Silicon Graphics systems can be found in the IRIX Device Driver Programmer's Guide.
Each Challenge, Onyx, Power Challenge, and Power Onyx system includes full support for the VME interface, including all features of Revision C.2 of the VME specification, and the A64 and D64 modes as defined in Revision D. Each Origin 2000, Origin 200, and Onyx 2 system supports VME as an optional interface. VME devices can access system memory addresses, and devices on the system bus can access addresses in the VME address space.
The naming of VME devices in /dev/vme and /hw/vme for Origin 2000 systems, and other administrative issues, are covered in the usrvme(7) man page and the IRIX Device Driver Programmer's Guide.
For information about the physical description of the XIO-VME option for Origin and Onyx 2 systems, refer to the Origin 2000 and Onyx 2 VME Option Owner's Guide.
A number of special terms are used to describe the multiprocessor Challenge support for VME. The terms are described in the following list. Their relationship is shown graphically in Figure 6-1.
Powerpath-2 Bus | The primary system bus, connecting all CPUs and I/O channels to main memory. |
Power Channel-2 | The circuit card that interfaces one or more I/O devices to the Powerpath-2 bus. |
F-HIO card | Adapter card used for cabling a VME card cage to the Power Channel-2. |
VMECC | VME control chip, the circuit that interfaces the VME bus to the Power Channel. |
All multiprocessor Challenge systems contain a 9U VME bus in the main card cage. Systems configured for rack-mount can optionally include an auxiliary 9U VME card cage, which can be configured as 1, 2, or 4 VME busses. The possible configurations of VME cards are shown in Table 6-1.
Table 6-1. Multiprocessor Challenge VME Cages and Slots
Model | Main Cage Slots | Aux Cage Slots (1 Bus) | Aux Cage Slots (2 Busses) | Aux Cage Slots (4 Busses) |
---|---|---|---|---|
Challenge L | 5 | n.a. | n.a. | n.a. |
Onyx Deskside | 3 | n.a. | n.a. | n.a. |
Challenge XL | 5 | 20 | 10 and 9 | 5, 4, 4, and 4 |
Onyx Rack | 4 | 20 | 10 and 9 | 5, 4, 4, and 4 |
Each VME bus after the first requires an F cable connection from an F-HIO card on a Power Channel-2 board, as well as a Remote VCAM board in the auxiliary VME cage. Up to three VME busses (two in the auxiliary cage) can be supported by the first Power Channel-2 board in a system. A second Power Channel-2 board must be added to support four or more VME busses. The relationship among VME busses, F-HIO cards, and Power Channel-2 boards is detailed in Table 6-2.
Table 6-2. Power Channel-2 and VME bus Configurations
Number of VME Busses | PC-2 #1, F-HIO Slot 1 | PC-2 #1, F-HIO Slot 2 | PC-2 #2, F-HIO Slot 1 | PC-2 #2, F-HIO Slot 2 |
---|---|---|---|---|
1 | unused | unused | n.a. | n.a. |
2 | F-HIO short | unused | n.a. | n.a. |
3 (1 PC-2) | F-HIO short | F-HIO short | n.a. | n.a. |
3 (2 PC-2) | unused | unused | F-HIO | unused |
4 | unused | unused | F-HIO | F-HIO |
5 | unused | unused | F-HIO | F-HIO |
F-HIO short cards, which are used only on the first Power Channel-2 board, supply only one cable output. Regular F-HIO cards, used on the second Power Channel-2 board, supply two. This explains why, although two Power Channel-2 boards are needed with four or more VME busses, the F-HIO slots on the first Power Channel-2 board remain unused.
A device on the VME bus has access to an address space in which it can read or write. Depending on the device, it uses 16, 32, or 64 bits to define a bus address. The resulting numbers are called the A16, A32, and A64 address spaces.
There is no direct relationship between an address in the VME address space and the set of real addresses in the system main memory. An address in the VME address space must be translated twice:
The VME interface hardware establishes a translation from VME addresses into addresses in real memory.
The IRIX kernel assigns real memory space for this use, and establishes the translation from real memory to virtual memory in the address space of a process or the address space of the kernel.
Address space mapping is done differently for programmed I/O, in which slave VME devices respond to memory accesses by the program, and for DMA, in which master VME devices read and write directly to main memory.
Note: VME addressing issues are discussed in greater detail from the standpoint of the device driver, in the IRIX Device Driver Programmer's Guide.
To allow programmed I/O, the mmap() system function establishes a correspondence between a segment of a process's address space and a segment of the VME address space. The kernel and the VME device driver program registers in the VME bus interface chip so that it recognizes fetches and stores to specific main-memory real addresses and translates them into reads and writes on the VME bus. The devices on the VME bus must react to these reads and writes as slaves; DMA is not supported by this mechanism.
For Challenge and Onyx systems, one VME bus interface chip can map as many as 12 different segments of memory. Each segment can be as long as 8 MB. The segments can be used singly or in any combination. Thus one VME bus interface chip can support 12 unique mappings of at most 8 MB, or a single mapping of 96 MB, or combinations between.
For systems supporting the XIO-VME option, which uses a Tundra Universe VME interface chip, user-level PIO mapping is allocated as follows:
all A16 and A24 address space is mapped
seven additional mappings for a maximum of 512 MB in A32 address space
DMA mapping is based on the use of page tables stored in system main memory. This allows DMA devices to access the virtual addresses in the address spaces of user processes. The real pages of a DMA buffer can be scattered in main memory, but this is not visible to the DMA device. DMA transfers that span multiple, scattered pages can be performed in a single operation.
The kernel functions that establish the DMA address mapping are available only to device drivers. For information on these, refer to the IRIX Device Driver Programmer's Guide.
Your program accesses the devices on the VME bus in one of two ways, through programmed I/O (PIO) or through DMA. Normally, VME cards with Bus Master capabilities use DMA, while VME cards with slave capabilities are accessed using PIO.
The VME bus interface also contains a unique hardware feature, the DMA Engine, which can be used to move data directly between memory and a slave VME device.
Perform PIO to VME devices by mapping the devices into process memory with the mmap() function. (The use of PIO is covered in greater detail in the IRIX Device Driver Programmer's Guide. Memory mapping of I/O devices and other objects is covered in the book Topics in IRIX Programming.)
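As a sketch of this technique, the following hypothetical example maps a slave device's registers through a usrvme(7) device special file and performs loads and stores on them. The path, the VME bus address, and the register layout are placeholders for your own configuration.

/*
 * Hypothetical sketch: programmed I/O to a VME slave device.
 * The device special file name, the VME bus address, and the register
 * layout are placeholders; see usrvme(7) for the naming conventions.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#define VME_PATH   "/dev/vme/vme1a16n"  /* A16 non-privileged space, bus 1 */
#define BOARD_ADDR 0x6000               /* assumed base address of the board */
#define REG_SIZE   0x100                /* size of the register block */

int main(void)
{
    int fd;
    volatile uint32_t *regs;

    fd = open(VME_PATH, O_RDWR);
    if (fd < 0) {
        perror("open");
        exit(1);
    }
    /* The mmap() offset selects the VME bus address that is mapped. */
    regs = (volatile uint32_t *) mmap(0, REG_SIZE,
                PROT_READ|PROT_WRITE, MAP_SHARED, fd, BOARD_ADDR);
    if (regs == (volatile uint32_t *) MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    regs[0] = 0x1;                                   /* a store becomes one VME write */
    printf("status 0x%x\n", (unsigned) regs[1]);     /* a load becomes a VME read */
    munmap((void *) regs, REG_SIZE);
    close(fd);
    return 0;
}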
Each PIO read requires two transfers over the VME bus interface: one to send the address to be read, and one to retrieve the data. The latency of a single PIO input is approximately 4 microseconds on the Challenge or Onyx systems and 2.6 microseconds on the Origin or Onyx 2 systems. PIO write is somewhat faster, since the address and data are sent in one operation. Typical PIO performance is summarized in Table 6-3.
Table 6-3. VME Bus PIO Bandwidth
Data Unit Size | Reads for Origin/Onyx 2 Systems | Reads for Challenge/Onyx Systems | Writes for Origin/Onyx 2 Systems | Writes for Challenge/Onyx Systems |
---|---|---|---|---|
D8 | 0.35 MB/second | 0.2 MB/second | 1.5 MB/second | 0.75 MB/second |
D16 | 0.7 MB/second | 0.5 MB/second | 3.0 MB/second | 1.5 MB/second |
D32 | 1.4 MB/second | 1 MB/second | 6 MB/second | 3 MB/second |
When a system has multiple VME buses, you can program concurrent PIO operations from different CPUs to different buses, effectively multiplying the bandwidth by the number of buses. It does not improve performance to program concurrent PIO to a single VME bus.
Tip: When transferring more than 32 bytes of data, you can obtain higher rates using the DMA Engine. See “DMA Engine Access to Slave Devices”.
If a VME device that you control with PIO can generate interrupts, you can arrange to trap the interrupts in your own program. In this way, you can program the device for some lengthy operation using PIO output to its registers, and then wait until the device returns an interrupt to say the operation is complete.
The programming details on user-level interrupts are covered in Chapter 7, “Managing User-Level Interrupts”.
VME bus cards with Bus Master capabilities transfer data using DMA. These transfers are controlled and executed by the circuitry on the VME card. The DMA transfers are directed by the address mapping described under “DMA Mapping”.
DMA transfers from a Bus Master are always initiated by a kernel-level device driver. In order to exchange data with a VME Bus Master, you open the device and use read() and write() calls. The device driver sets up the address mapping and initiates the DMA transfers. The calling process is typically blocked until the transfer is complete and the device driver returns.
The typical performance of a single DMA transfer is summarized in Table 6-4. Many factors can affect the performance of DMA, including the characteristics of the device.
Table 6-4. VME Bus Bandwidth, VME Master Controlling DMA
Data Transfer Size | Reads for Origin/Onyx 2 Systems | Reads for Challenge/Onyx Systems | Writes for Origin/Onyx 2 Systems | Writes for Challenge/Onyx Systems |
---|---|---|---|---|
D8 | N/A | 0.4 MB/sec | N/A | 0.6 MB/sec |
D16 | N/A | 0.8 MB/sec | N/A | 1.3 MB/sec |
D32 | N/A | 1.6 MB/sec | N/A | 2.6 MB/sec |
D32 BLOCK | 20 MB/sec (256 byte block) | 22 MB/sec (256 byte block) | 24 MB/sec (256 byte block) | 24 MB/sec (256 byte block) |
D64 BLOCK | 40 MB/sec (2048 byte block) | 55 MB/sec (2048 byte block) | 48 MB/sec (2048 byte block) | 58 MB/sec (2048 byte block) |
A DMA engine is included as part of, and is unique to, each SGI VME bus interface. It performs efficient, block-mode DMA transfers between system memory and VME bus slave cards, which are normally capable of only PIO transfers.
The DMA engine greatly increases the rate of data transfer compared to PIO, provided that you transfer at least 32 contiguous bytes at a time. The DMA engine can perform D8, D16, D32, D32 Block, and D64 Block data transfers in the A16, A24, and A32 bus address spaces.
All DMA engine transfers are initiated by a special device driver. However, you do not access this driver through open/read/write system functions. Instead, you program it through a library of functions. The functions are documented in the udmalib(3x) (for Challenge/Onyx systems) and the vme_dma_engine(3x) (for Origin/Onyx 2 systems) man pages. For Challenge/Onyx systems, the functions are used in the following sequence:
Call dma_open() to initialize action to a particular VME card.
Call dma_allocbuf() to allocate storage to use for DMA buffers.
Call dma_mkparms() to create a descriptor for an operation, including the buffer, the length, and the direction of transfer.
Call dma_start() to execute a transfer. This function does not return until the transfer is complete.
Note: The Origin/Onyx 2 library also supports these functions, but they are not the preferred interface.
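As a sketch of the Challenge/Onyx sequence above, a transfer might be coded as follows. Treat the bus-type constant, the adapter number, the slave's VME address, and the flags value passed to dma_mkparms() as assumptions to verify against udmalib(3x).

/*
 * Hypothetical sketch of the udmalib sequence for Challenge/Onyx.
 * The bus-type constant, adapter number, slave address, and the
 * dma_mkparms() flags are placeholders; verify against udmalib(3x).
 */
#include <stdio.h>
#include <udmalib.h>

#define VME_ADAPTER   61            /* assumed VME bus adapter number */
#define SLAVE_ADDR    0x08000000    /* assumed slave address on the bus */
#define XFER_LEN      4096

int
engine_transfer(void)
{
    udmaid_t  *dp;
    udmaprm_t *parms;
    void      *buf;

    dp = dma_open(DMA_VMEBUS, VME_ADAPTER);     /* step 1: attach to one VME bus */
    if (dp == NULL)
        return -1;
    buf = dma_allocbuf(dp, XFER_LEN);           /* step 2: buffer usable for DMA */
    /* step 3: describe the transfer: buffer, device-specific flags, length */
    parms = dma_mkparms(dp, buf, 0 /* flags */, XFER_LEN);
    /* step 4: execute; dma_start() polls until the DMA is complete */
    if (dma_start(dp, (void *) SLAVE_ADDR, parms) != 0)
        fprintf(stderr, "dma_start failed\n");
    dma_rmparms(dp, parms);
    dma_freebuf(dp, buf);
    dma_close(dp);
    return 0;
}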
For the Origin and Onyx 2 XIO-VME interface, the VME DMA engine library is used in the following sequence:
Call vme_dma_engine_handle_alloc() to allocate a handle for the DMA engine by the given pathname.
Call vme_dma_engine_buffer_alloc() to allocate the host memory buffer according to the address and byte_count pair.
Call vme_dma_engine_transfer_alloc() to allocate a transfer entity by the given parameters. Some parameters must be specified, such as the buffer handle, the VME bus address, the number of bytes that are being transferred, the VME bus address space type, and the direction of the transfer. There are two advisory parameters: the throttle size and the release mode.
Call vme_dma_engine_schedule() to schedule a transfer for the actual DMA action. The call can be repeated to schedule several transfers that are then committed together.
Call vme_dma_engine_commit() to ask the library to commit all scheduled transfers. Two commitment modes are available: synchronous and asynchronous.
In synchronous mode, the library returns when the DMA is finished and an advisory parameter specifies the wait method: spin-waiting or sleep-waiting.
In asynchronous mode, the library returns immediately. Call vme_dma_engine_rendezvous() to wait until all scheduled transfers are complete; the same spin-waiting and sleep-waiting options apply.
For more details of user DMA, see the IRIX Device Driver Programmer's Guide.
The typical performance of the DMA engine for D32 transfers is summarized in Table 6-5 and Table 6-6. Performance with D64 Block transfers is somewhat less than twice the rate shown in Table 6-5 and Table 6-6. Transfers for larger sizes are faster because the setup time is amortized over a greater number of bytes.
Table 6-5. VME Bus Bandwidth, DMA Engine, D32 Transfer (Challenge/Onyx Systems)
Transfer Size (bytes) | Reads | Writes | Block Reads | Block Writes |
---|---|---|---|---|
32 | 2.8 MB/sec | 2.6 MB/sec | 2.7 MB/sec | 2.7 MB/sec |
64 | 3.8 MB/sec | 3.8 MB/sec | 4.0 MB/sec | 3.9 MB/sec |
128 | 5.0 MB/sec | 5.3 MB/sec | 5.6 MB/sec | 5.8 MB/sec |
256 | 6.0 MB/sec | 6.7 MB/sec | 6.4 MB/sec | 7.3 MB/sec |
512 | 6.4 MB/sec | 7.7 MB/sec | 7.0 MB/sec | 8.0 MB/sec |
1024 | 6.8 MB/sec | 8.0 MB/sec | 7.5 MB/sec | 8.8 MB/sec |
2048 | 7.0 MB/sec | 8.4 MB/sec | 7.8 MB/sec | 9.2 MB/sec |
4096 | 7.1 MB/sec | 8.7 MB/sec | 7.9 MB/sec | 9.4 MB/sec |
Table 6-6. VME Bus Bandwidth, DMA Engine, D32 Transfer (Origin/Onyx 2 Systems)
Transfer Size (bytes) | Reads | Writes | Block Reads | Block Writes |
---|---|---|---|---|
32 | 1.2 MB/sec | 1.1 MB/sec | 1.2 MB/sec | 1.2 MB/sec |
64 | 2.0 MB/sec | 1.9 MB/sec | 2.0 MB/sec | 2.0 MB/sec |
128 | 3.3 MB/sec | 3.5 MB/sec | 3.3 MB/sec | 3.9 MB/sec |
256 | 5.1 MB/sec | 5.6 MB/sec | 5.2 MB/sec | 6.3 MB/sec |
512 | 6.9 MB/sec | 8.2 MB/sec | 7.3 MB/sec | 9.0 MB/sec |
1024 | 8.0 MB/sec | 10.5 MB/sec | 8.8 MB/sec | 12.0 MB/sec |
2048 | 9.2 MB/sec | 12.2 MB/sec | 9.8 MB/sec | 14.0 MB/sec |
4096 | 9.6 MB/sec | 12.6 MB/sec | 11.3 MB/sec | 15.1 MB/sec |
Some of the factors that affect the performance of user DMA include the following:
The response time of the VME board to bus read and write requests
The size of the data block transferred (as shown in Table 6-5)
Overhead and delays in setting up each transfer
The numbers in Table 6-5 were achieved by a program that called dma_start() in a tight loop, in other words, with minimal overhead.
The dma_start() and vme_dma_engine_commit() functions operate in user space; they are not kernel-level device driver calls. This has two important effects. First, overhead is reduced, since there are no mode switches between user and kernel, as there are for read() and write(). This is important since the DMA engine is often used for frequent, small inputs and outputs.
Second, dma_start() does not block the calling process, in the sense of suspending it and possibly allowing another process to use the CPU. However, it waits in a test loop, polling the hardware until the operation is complete. As you can infer from Table 6-5, typical transfer times range from 50 to 250 microseconds. You can calculate the approximate duration of a call to dma_start() based on the amount of data and the operational mode; for example, a 512-byte block read at roughly 7.0 MB per second (Table 6-5) takes about 73 microseconds.
The vme_dma_engine_commit() call can be used either synchronously (as described for the dma_start() library call) or asynchronously. If the call is made asynchronously, the transfer completes (in parallel) while the process continues to execute. Because of this, the user process must coordinate with DMA completion using the vme_dma_engine_rendezvous() call.
You can use the udmalib functions to access a VME Bus Master device, if the device can respond in slave mode. However, this may be less efficient than using the Master device's own DMA circuitry.
While you can initiate only one DMA engine transfer per bus, it is possible to program a DMA engine transfer from each bus in the system, concurrently.
IRIX 6.5 adds support for the user mode serial library, or usio, which provides access to the system serial ports on Origin, O2, and OCTANE systems, without the overhead of system calls. On these systems, the device /dev/ttyus* is mapped into the user process's address space and is accessed directly by the library routines. The user mode library provides read, write, and error detection routines. In addition to the library routines, ioctl support is provided to perform functions that are not time critical, such as port configuration. The read() and write() system calls are not supported for this device type, as these functions are implemented in the user library. For complete information about usio, see the usio(7) man page.
On the Origin, O2, and OCTANE systems, support for a character-based interface on the serial ports is also provided as a low-cost alternative for applications needing bulk data transfer with no character interpretation, via the serial ports. For more information, see the cserialio(7) man page.
Systems that do not support usio or cserialio must rely on the serial device drivers and STREAMS modules when a real-time program takes input from a device attached through a serial port. This is not a recommended practice for several reasons: the serial device drivers and the STREAMS modules that process serial input are not optimized for deterministic, real-time performance, and at high data rates serial devices generate many interrupts.
When there is no alternative, a real-time program will typically open one of the files named /dev/tty*. The names, and some hardware details, for these devices are documented in the serial(7) man page. Information specific to two serial adapter boards is in the duart(7) man page and the cdsio(7) man page.
When a process opens a serial device, a line discipline STREAMS module is pushed on the stream by default. If the real-time device is not a terminal and does not support the usual line controls, this module can be removed. Use the I_POP ioctl (see the streamio(7) man page) until no modules are left on the stream. This minimizes the overhead of serial input, at the cost of receiving completely raw, unprocessed input.
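A minimal sketch of this cleanup, assuming a placeholder device path, might look like the following.

/*
 * Minimal sketch: open a serial port and pop every STREAMS module,
 * leaving a raw stream.  The device path is a placeholder; see
 * serial(7) and streamio(7).
 */
#include <fcntl.h>
#include <unistd.h>
#include <stropts.h>
#include <stdio.h>

int
open_raw_port(const char *path)     /* e.g. "/dev/ttyd2" (placeholder) */
{
    int fd;

    fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return -1;
    }
    /* I_POP removes the topmost module; repeat until none remain. */
    while (ioctl(fd, I_POP, 0) == 0)
        continue;
    return fd;      /* reads now deliver raw, unprocessed input */
}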
An important feature of current device drivers for serial ports is that they try to minimize the overhead of handling the many interrupts that result from high character data rates. The serial I/O boards interrupt at least every 4 bytes received, and in some cases on every character (at least 480 interrupts a second, and possibly 1920, at 19,200 bps). Rather than sending each input byte up the stream as it arrives, the drivers buffer a few characters and send multiple characters up the stream.
When the line discipline module is present on the stream, this behavior is controlled by the termio settings, as described in the termio(7) man page for non-canonical input. However, a real-time program will probably not use the line-discipline module. The hardware device drivers support the SIOC_ITIMER ioctl that is mentioned in the serial(7) man page, for the same purpose.
The SIOC_ITIMER function specifies the number of clock ticks (see “Tick Interrupts” in Chapter 3) over which it should accumulate input characters before sending a batch of characters up the input stream. A value of 0 requests that each character be sent as it arrives (do this only for devices with very low data rates, or when it is absolutely necessary to know the arrival time of each input byte). A value of 5 tells the driver to collect input for 5 ticks (50 milliseconds, or as many as 24 bytes at 19,200 bps) before passing the data along.
The Origin, Challenge, Onyx, and Onyx 2 systems include support for generating and receiving external interrupt signals. The electrical interface to the external interrupt lines is documented in the ei(7) man page.
Your program controls and receives external interrupts by interacting with the external interrupt device driver. This driver is associated with the special device file /dev/ei, and is documented in the ei(7) man page.
For programming details of the external interrupt lines, see the IRIX Device Driver Programmer's Guide. You can also trap external interrupts with a user-level interrupt handler (see “User-Level Interrupt Handling”); this is also covered in the IRIX Device Driver Programmer's Guide.