This chapter provides an overview of IRIS ATM and the protocols upon which IRIS ATM is based.
The synchronous optical network (SONET) protocol is a physical layer communication technology, supporting transmission speeds such as 51.84 Mbit/s, 155.52 Mbit/s, 622.08 Mbit/s, and 2488.32 Mbit/s. The asynchronous transfer mode (ATM) protocol is a data link and network layer switching protocol that supports almost any bit rate. SONET defines the manner in which data is encoded and transported over the line (that is, the fiber-optic connection). ATM defines the manner in which the data is grouped into packets and how it is routed from endpoint to endpoint. ATM transports data in very small, fixed-size cells in a manner that allows simultaneous transmission of multiple data streams at different rates. The streams can be different types of digitized data (for example, audio, video, and text).
Unlike today's popular network protocols (for example, Ethernet, FDDI, and Token Ring), ATM supports applications that require a steady, constant flow of data. Video applications (for example, teleconferencing, video on demand, and real-time long-distance imaging) are some of the main markets for this communication technology because the human eye and ear are highly sensitive to variations in time delays and in the synchronization of visual data and sound. Table 1-1 summarizes some of the major differences between the common network technologies currently in use and ATM technology.
Table 1-1. Comparison of ATM and Legacy Network Technologies
Today's common network technologies | ATM
Shared medium: Each station has exclusive access to the shared network medium for a short length of time, then releases the medium to allow other stations access. | Dedicated medium: Each station has exclusive access all the time to its communication medium (the physical link).
Single stream of data: During a station's access, only one data stream is transmitted. | Multiple streams of data: Multiple data streams (virtual channels) can be transmitted simultaneously over a single physical connection.
Variable spacing between packets: Access times for transmission on the network medium are not predictable. Variable spacing between PDU arrival times is inherent in the design.[a] | Guaranteed, fixed spacing between packets: Each data stream can be guaranteed predictable, reliably spaced access to the communication medium. That is, the medium supports constant bit rate service, in addition to more variable services.
Broadcast support: The design inherently supports broadcasting, since every station sees every PDU. | Limited broadcast support: Broadcasting is not easily supported, since each PDU is seen only by the two endpoints involved in the data stream and, in some cases, the switches between them.
Single transmission rate: The transmission rate is fixed at the network medium's built-in rate. | Multiple simultaneous transmission rates: Each data stream can specify its own transmission rate.
[a] PDU = protocol data unit, which is a frame, packet, or cell, depending on the technology's terminology.
ATM allows each user to describe the data flow characteristics (that is, the traffic contract) wanted from the ATM connection. Some of the currently defined types of data flows are as follows:
A steady, constant flow, called constant bit rate (CBR), sometimes referred to as circuit emulation. The flow is specified as occurring at an absolutely steady or peak rate.
A guaranteed, although fluctuating flow, called variable bit rate (VBR). The flow is controlled by three parameters: a peak cell rate (the maximum rate that can ever be used on the VC), a sustainable rate (the average rate over time), and a maximum burst size.
A flow that guarantees delivery, but does not conform to a timely delivery schedule, referred to as available bit rate (ABR). 
A flow that does not guarantee conformance to any specific performance parameters and, in fact, does not even guarantee delivery, referred to as unspecified bit rate (UBR) or best effort.
ATM defines the data link control and network layers, as illustrated in Figure 1-1, and is commonly implemented over a synchronous optical network (SONET) physical layer. Other network layers (such as IP) tunnel through the ATM network by encapsulation. ATM and SONET are each described briefly in the paragraphs that follow.
ATM is a connection-oriented, packet-based protocol that allows multiple logical data streams (for example, different videos) to be transmitted simultaneously over a single physical connection. The ATM driver passes data from multiple applications to the ATM hardware where the data streams are stored as separate queues. The data is segmented into ATM cells and multiplexed into a single physical stream (see Figure 1-2). Each ATM cell is 53 bytes long, of which 5 bytes are ATM overhead and 48 bytes are upper-layer data (also known as payload).
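The fixed 53-byte cell format makes the segmentation overhead easy to quantify. The following sketch (illustrative helper functions, not the IRIS ATM API) computes how many cells a given payload needs and the resulting number of octets on the wire:

```c
#include <assert.h>
#include <stddef.h>

#define ATM_CELL_SIZE    53  /* total octets per cell on the wire   */
#define ATM_HEADER_SIZE   5  /* ATM overhead per cell               */
#define ATM_PAYLOAD_SIZE 48  /* upper-layer data (payload) per cell */

/* Number of cells needed to carry a payload of `len` octets:
 * each cell carries at most 48 payload octets, and a partial
 * final cell still occupies a full cell. */
static size_t cells_for_payload(size_t len)
{
    return (len + ATM_PAYLOAD_SIZE - 1) / ATM_PAYLOAD_SIZE;
}

/* Octets actually transmitted, including the 5-octet header
 * added to every cell. */
static size_t wire_octets(size_t len)
{
    return cells_for_payload(len) * ATM_CELL_SIZE;
}
```

For example, a 96-octet upper-layer packet segments into two cells and occupies 106 octets on the wire.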
Each logical data stream is called a virtual channel (VC). All of the VCs from a transmitting endpoint can share one physical link to the ATM switch. However, at the switch, the VCs might be routed onto different outgoing physical links, depending on their final destinations. In this way, they can follow their virtual channel connection (VCC) to the destination endpoint. ATM requires that the endpoint-to-endpoint physical connection be established before transmission occurs for a VC's first bit of data.
When each VC is set up, the user specifies a transmission rate, an upper layer conversion protocol (referred to as the ATM adaptation layer (AAL)), and, for some implementations (for example, switched virtual circuits), performance objectives that are referred to as the traffic contract.
The manner in which the ATM cells are multiplexed guarantees correct ordering of the upper-layer data and supports simultaneous transmission of multiple VCs in a single stream, as shown in Figure 1-3. (The single stream is passed to the SONET hardware as a single path and is explained in the following SONET section.) When the stream of cells arrives at its destination ATM layer, each conversation must be demultiplexed and reassembled before passing the data to the receiving application.
Each VC within the single stream can be transmitted at a different rate. This is accomplished by taking cells from each VC's queue at a different rate. As the ATM hardware creates the single stream, it selects cells from the different VC streams in a manner that supports each channel's user-selected rate. For example, if the transmission rate for VC1 is twice the rate of VC2, the cells are selected and interleaved as shown in Figure 1-4 (instead of equally, as shown in Figure 1-2 and Figure 1-3). In the example shown in Figure 1-4, when two cells have been transmitted for VC2, four cells have been transmitted for VC1.
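The rate-proportional interleaving described above can be modeled as a weighted round-robin over the per-VC queues. This is a simplified software model, not the actual hardware scheduler:

```c
#include <assert.h>
#include <string.h>

/* Weighted round-robin over two VC queues: in each scheduling
 * round, `w1` cells are taken from VC1's queue and `w2` cells
 * from VC2's queue, modeling VC1 transmitting at (w1/w2) times
 * the rate of VC2. Each emitted cell is recorded as '1' or '2'
 * in the NUL-terminated string `out`. */
static void interleave(int w1, int w2, int rounds, char *out)
{
    int i, r;
    for (r = 0; r < rounds; r++) {
        for (i = 0; i < w1; i++) *out++ = '1';
        for (i = 0; i < w2; i++) *out++ = '2';
    }
    *out = '\0';
}
```

With weights 2 and 1, two rounds produce the cell pattern "112112", matching the example in which four VC1 cells are sent for every two VC2 cells.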
The ATM adaptation layers (AALs) provide mapping between upper-layer formats and the ATM cell format. In addition, the AAL module handles the ATM cells in a manner that supports the selected class of service. All AAL functionality occurs at the endpoints (not in the switches). Upper-layer applications select a class of service (one of the AALs) from those summarized in Figure 1-5. AAL5 is defined for high-speed data transfer and ATM signaling. AAL0 is an unofficial adaptation layer.
SONET defines the fiber-optic physical layer. It covers issues such as the specifications for the multimode fiber optic cable, loss characteristics on the connectors, clock recovery, the available formats for organizing data payloads, and the frame boundary delimitation. SONET provides a variety of data rates and supports numerous payload formats.
When discussing SONET rates, it is important to distinguish between the line or signal rate (that is, the rate on the fiber) and the rates of the various communication streams (referred to as embedded transport rates) being carried within the SONET stream. SONET supports signal rates that are multiples of the basic 51.84 Mbit/s synchronous transport signal (STS) rate, as summarized in Table 1-2. The embedded transport rates are always slower than (or equal to) the signal rate and include some of the more commonly used rates in the communications industry today: for example, 1.544 (DS1 and T1), 2.048 (CEPT), and 6.312 (DS2) Mbit/s.
Signal | Also known as | Transmission rate
STS-1 | OC1 | 51,840,000 bits per second
STS-3 | OC3[a] | 155,520,000 bits per second
STS-12 | OC12 | 622,080,000 bits per second
STS-48 | OC48 | 2,488,320,000 bits per second (2.48832 gigabits per second)
[a] IRIS ATM supports only OC3c (155.52).
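All of the rates in Table 1-2 are integer multiples of the basic 51.84 Mbit/s STS-1 rate, so they can be derived rather than looked up. A quick sanity-check helper (illustrative, not part of the product):

```c
#include <assert.h>

#define STS1_RATE 51840000LL  /* basic STS-1 signal rate, bits per second */

/* Line rate of a SONET STS-n / OC-n signal:
 * n times the basic STS-1 rate. */
static long long ocn_rate(int n)
{
    return n * STS1_RATE;
}
```

The result for n = 3 is the 155.52 Mbit/s OC3/OC3c rate supported by IRIS ATM.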
At the SONET level, the data stream logically consists of n separate paths (STS-1 streams), each carrying data of one type, encapsulated in STS-1 frames. The number of paths within a SONET stream is specified by the number in the SONET protocol's name. For example, SONET OC3 has three different paths (that is, three STS-1 streams) multiplexed within a signal rate of 155.52 Mbit/s. There is an exception to this: the concatenated formats of SONET (for example, OC3c and OC12c) have only one path and use an abbreviated form of the SONET frame. Within any SONET OCn stream, the multiple paths coexist through byte multiplexing; one byte from each path (each STS-1 frame) is transmitted, then another byte from each path is transmitted, as shown in Figure 1-6.
|Note: The overhead associated with the SONET stream is not shown in Figure 1-6.|
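The byte multiplexing shown in Figure 1-6 can be sketched as a simple interleave of n path buffers (an illustrative model; SONET overhead is again omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Byte-multiplex `n` STS-1 path buffers, each `len` octets long,
 * into a single output stream: one byte from each path in turn,
 * then another byte from each path, and so on. `out` receives
 * n * len octets plus a terminating NUL. */
static void byte_mux(const char *paths[], int n, size_t len, char *out)
{
    size_t i;
    int p;
    for (i = 0; i < len; i++)
        for (p = 0; p < n; p++)
            *out++ = paths[p][i];
    *out = '\0';
}
```

Three paths carrying "AAA", "BBB", and "CCC" multiplex into the stream "ABCABCABC", one byte from each path per turn.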
As explained in the preceding paragraphs, the basic physical building blocks for a SONET stream are bytes. Logically, however, the basic building blocks for a SONET data stream are SONET frames (also called STS-1 frames), as shown in Figure 1-7. Each STS-1 frame contains header and data for one path. The header contains protocol overhead data; the upper-layer data is carried in the synchronous payload envelope (SPE) portion of the frame. A SONET OCn stream uses larger frames constructed from n basic frames. For example, for SONET OC3 and OC3c, each frame carries three STS-1 SONET frames, as shown in Figure 1-8.
All of the data within any single SONET path must be of the same format. This is referred to as the mapping for the SPE. ATM is one of the available SPE mappings. Since each path within the SONET stream is a separate logical entity, each path can be mapped differently from the other paths carried in that SONET stream.
Each path is capable of carrying a number of embedded streams at lower transport rates. One STS-1 SONET stream (for example, Path 1 shown in the OC3 frame of Figure 1-8) could carry twenty-eight 1.728 Mbit/s channels. When the SPE mapping is ATM, the separate paths are collapsed into a single concatenated path; for example, for the 155.52 Mbit/s rate, the SONET protocol is OC3c. In OC3c, both the line rate and the path rate are 155.52 Mbit/s. Figure 1-8 shows the difference between the triple-path format of OC3 and the collapsed, single-path format of OC3c.
The IRIS ATM product consists of a data communications interface controller board (hardware) and a driver, protocol applications, and utilities (software). Together they provide data exchange through the Asynchronous Transfer Mode (ATM) protocol, using ATM adaptation layer 5 (AAL5), for permanent virtual circuits (PVCs) and switched virtual circuits (SVCs) over a Synchronous Optical Network (SONET) physical layer. The product complies with the ATM Forum's ATM User-Network Interface standard, versions 3.0 and 3.1, including signaling and the interim local management interface (ILMI).
The product supports constant bit rate (CBR), variable bit rate (VBR), and best-effort traffic. The driver supports all standard IP applications through SVCs and PVCs using best-effort traffic contracts, in compliance with RFC 1577, Classical IP Over ATM. For environments that require CBR/VBR traffic (IP and non-IP), the IRIS ATM character device application programming interface (API) is provided so that customers can develop applications. The API is described in the IRIS ATM API Programmer's Guide. The product includes a VC management program (ATMARP) for IP-over-PVC configurations.
The product provides ATM connectivity for the following platforms:
POWER CHALLENGE platforms
POWER Onyx platforms
The IRIS ATM hardware must be installed by a Silicon Graphics system support engineer (SSE) or other person trained by Silicon Graphics. The IRIS ATM-OC3c Board for Challenge or Onyx Installation Instructions or the IRIS ATM-OC3c 4Port XIO Board Installation Instructions contains complete details for hardware installation.
The software installation and configuration described in this document can be done by customers or SSEs. This document, IRIS ATM Configuration Guide (shipped with the IRIS ATM software), provides software configuration details. The online documents, IRIS ATM Release Notes and IRIX Admin: Software Installation and Licensing, provide software installation instructions.
Two types of addresses are relevant for ATM networking: virtual path identifier/virtual channel identifier (VPI/VCI) addresses and network addresses. Permanent virtual circuits (PVCs) require only VPI/VCI addresses at the ATM layer. Switched virtual circuits (SVCs) require both types of addresses (although the VPI/VCI address is transparent to the user). The globally unique ATM network address is used by the signaling protocol to route the connection setup request from the calling party through one or more switches to the called party, but a local VPI/VCI is used (during the data transmission) by each switch along the route for demultiplexing and mapping between the virtual channel and the hardware resources that are allocated to the connection.
The following sections describe these two types of addresses.
The VPI/VCI address is a 3-octet value contained in the header of the ATM cell (as shown in Figure 1-9). This value identifies a virtual channel. The value is locally assigned by each transmitting station and is unique (and valid) for only one physical link of the virtual channel (for example, from the host to its switch or between two switches). The VPI/VCI value is replaced at each switch along the virtual channel's span. This type of address is used for both PVCs and SVCs. For PVCs, it is the only ATM-level address required.
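At the UNI, the 8-bit VPI and 16-bit VCI straddle octet boundaries within the 5-octet cell header. The following sketch extracts them from a raw header, assuming the standard UNI cell header layout (GFC, VPI, VCI, PT, CLP, HEC), which Figure 1-9 illustrates:

```c
#include <assert.h>
#include <stdint.h>

/* Extract the VPI from a 5-octet UNI cell header:
 * the low nibble of octet 0 plus the high nibble of octet 1. */
static unsigned vpi_of(const uint8_t h[5])
{
    return (unsigned)((h[0] & 0x0F) << 4) | (h[1] >> 4);
}

/* Extract the VCI: the low nibble of octet 1, all of octet 2,
 * and the high nibble of octet 3. */
static unsigned vci_of(const uint8_t h[5])
{
    return (unsigned)((h[1] & 0x0F) << 12) | (unsigned)(h[2] << 4)
           | (h[3] >> 4);
}
```

A header beginning 0x00 0x00 0x00 0x50 decodes to VPI=0, VCI=5, the well-known default address of the signaling PVC.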
The ATM network address comes in two formats: a 20-octet value called ATM network service access point (NSAP) (shown in Figure 1-10) or an up-to-15-octet value called native E.164 (shown in Figure 1-11). The ATM network address is globally unique, meaning that it identifies one (and only one) endpoint within the entire world. This address, or a portion of it, is usually assigned to a port by its ATM switch. The NSAP format allows a system to support multiple endpoints using a single port by assigning local values to one portion of the address (as explained in more detail in the following paragraphs). The ATM network address is required for SVCs, but not for PVCs.
The ATM NSAP format (shown in Figure 1-10) can carry any one of the following three types of addresses:
A country assigned address, which is indicated by the AFI field set to 39 and the IDI field containing a data country code (DCC).
An E.164 address, which is indicated by the AFI field set to 45 and the IDI field containing a telephone-style E.164 number.
An internally assigned address, which is indicated by the AFI field set to 47 and the IDI field containing an international code designator (ICD).
The contents of the IDI field are represented in binary-coded decimal (BCD) notation. For example, the country code for the United States of America is 840 (decimal); this is represented in the IDI field by the binary sequence 1000 0100 0000. Each type of IDI value requires padding, and all IDI field padding is done with four 1s. For example, the DCC code requires only 12 of the 16 bits in the IDI field, and DCCs are padded on the right, resulting in a binary sequence of 1000 0100 0000 1111 for the United States of America.
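This encoding rule can be sketched as a small helper that turns a decimal digit string into the 16-bit IDI field, padding with 1111 nibbles (0xF) as described (an illustrative function, not part of the product):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Encode a decimal digit string (for example, a DCC such as
 * "840") into the 16-bit IDI field: one BCD nibble per digit,
 * padded on the right with 1111 nibbles (0xF). */
static uint16_t idi_encode(const char *digits)
{
    uint16_t idi = 0;
    size_t len = strlen(digits);
    int nibble;
    for (nibble = 0; nibble < 4; nibble++) {
        unsigned v = (nibble < (int)len)
                         ? (unsigned)(digits[nibble] - '0')
                         : 0xF;  /* pad with binary 1111 */
        idi = (uint16_t)((idi << 4) | v);
    }
    return idi;
}
```

Encoding "840" yields 0x840F, the bit pattern 1000 0100 0000 1111 described above.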
Table 1-3 indicates the organization that assigns and defines the values for the three different IDI fields of ATM NSAP addresses.
Table 1-3. Values for the IDI Field of ATM NSAP Addresses
IDI field content | Organization that assigns and defines values
A data country code (DCC) | International Organization for Standardization: ISO specification ISO 3166
An E.164 number | International Telephone and Telegraph Consultative Committee: CCITT specifications I.330 and I.331
An international code designator (ICD) | British Standards Institute
The ATM NSAP address can be logically divided into two sections: the portion assigned by the switch and the portion assigned at the endpoint. The part assigned by the switch is referred to as the network prefix. It includes all fields of the address except the end system identifier (ESI) and end system selector (SEL) fields, as shown in Figure 1-10. The endpoint's interim local management interface (ILMI) module communicates with the adjacent switch to retrieve its assigned network prefix and to register its values for the ESI field. Switches ignore the SEL field; however, endpoint software can assign values to this field to differentiate among multiple upper-layer endpoints.
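In the 20-octet NSAP format, the network prefix occupies the first 13 octets, the ESI the next 6 octets (often a MAC address), and the SEL the final octet (field widths per the ATM Forum UNI specification). A sketch of splitting an address into these parts:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NSAP_LEN   20  /* total ATM NSAP address length, octets */
#define PREFIX_LEN 13  /* network prefix assigned by the switch */
#define ESI_LEN     6  /* end system identifier (e.g., MAC)     */
                       /* the final octet is the SEL            */

struct nsap_parts {
    uint8_t prefix[PREFIX_LEN];
    uint8_t esi[ESI_LEN];
    uint8_t sel;
};

/* Split a 20-octet ATM NSAP address into its three parts. */
static void nsap_split(const uint8_t addr[NSAP_LEN], struct nsap_parts *p)
{
    memcpy(p->prefix, addr, PREFIX_LEN);
    memcpy(p->esi, addr + PREFIX_LEN, ESI_LEN);
    p->sel = addr[NSAP_LEN - 1];
}
```

This split mirrors the division of responsibility described above: the prefix comes from the switch via ILMI, the ESI is registered by the endpoint, and the SEL is free for local use.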
ATM data is carried logically within a virtual channel (VC) and physically by the virtual channel connection (VCC) which is a sequence of physical links stretching from the source endpoint to the destination endpoint, passing through one or more ATM switches. The physical links that make up the virtual channel connection are not known to any one instance of the ATM layer; however, the properties of the full-length connection are important to an ATM network administrator. The following sections describe the different methods for setting up VCs and the parameters that describe VC and VCC functionality.
Each VC has a traffic contract associated with it. The traffic contract determines the performance characteristics of the data transmission. The two parameters that are always included in a traffic contract are the transmission rate (expressed in ATM cells per second) and the quality of service (QoS). Other performance parameters can be included, for example, cell delay variation (CDV), cell transfer delay, and cell loss ratio.
Since transmission rates are expressed in ATM cells per second, it is useful to know that one ATM cell carries 48 bytes of upper-layer data. For example, in order for an upper-layer to transmit (or receive) 3.5 megabits of data per second, the traffic contract must specify about 9115 cells per second (9115 cells * 48 bytes in each cell * 8 bits in each byte = 3,500,160 bits). If the upper-layer data includes an encapsulated protocol, some of the 48 bytes may contain non-ATM overhead, like TCP/IP headers.
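The arithmetic above can be captured in a small planning helper (illustrative, not part of the IRIS ATM API); rounding up ensures the contract is never under-provisioned:

```c
#include <assert.h>

#define PAYLOAD_BITS (48 * 8)  /* upper-layer bits carried per cell */

/* Minimum cell rate (cells per second) needed to carry `bps`
 * upper-layer bits per second, rounded up to a whole cell. */
static long cells_per_second(long bps)
{
    return (bps + PAYLOAD_BITS - 1) / PAYLOAD_BITS;
}
```

For 3.5 Mbit/s of upper-layer data, the helper returns 9115 cells per second, which carries 3,500,160 bits per second of payload.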
The QoS classes are as follows:
Class 1 for constant bit rate traffic (CBR), like video and audio
Class 2 for variable bit rate (VBR), like compressed video and audio
Class 3 for connection-oriented data, like Frame Relay
Class 4 for connectionless data, like IP network traffic
The manner in which the traffic contract is negotiated depends on whether the endpoints are using a permanent virtual circuit (PVC) or a switched virtual circuit (SVC), as explained in “Permanent Virtual Circuits” and “Switched Virtual Circuits”.
A permanent virtual circuit (PVC) is a long-term (permanent) communication channel between two ATM endpoints. The channel can directly connect two ATM endpoints or can involve any number of intermediate switches between the two endpoints. PVCs are created during a relatively difficult installation and setup procedure. The traffic contract is negotiated person-to-person or as-advertised, but in all cases before the installation and setup take place. The traffic contract's performance parameters are either built into the equipment or are configured manually by a network administrator. Each node in the network must be configured to conform to the negotiated traffic contract, and the contract cannot easily be changed. The sequence of physical links for each PVC must be planned and manually configured. First, the existence of a complete physical connection must be verified. Then, at each port along the route, an address must be created to identify the resources being reserved for this PVC; this address usually consists of a VPI/VCI value and a port identification. Before a VPI/VCI value is selected for a link, the administrator must verify that the value is available at both ends of the link. For the example PVC shown in Figure 1-12, four address mapping tables (one at each node) must be configured manually with the following bidirectional mappings:
At Endpoint A: upper-layer network address and resource address for link 1
At Switch1: resource address for link 1 and resource address for link 2
At Switch2: resource address for link 2 and resource address for link 3
At Endpoint B: resource address for link 3 and upper-layer network address
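The four mapping tables above amount to a per-node cross-connect lookup. A minimal model of one direction of such a table (illustrative structure and names, not IRIS ATM configuration syntax):

```c
#include <assert.h>
#include <stddef.h>

/* One entry in a node's PVC address mapping table: a cell
 * arriving on the incoming (port, VPI/VCI) is cross-connected
 * to the outgoing (port, VPI/VCI). A real table is
 * bidirectional; one direction is shown for brevity. */
struct pvc_map {
    int in_port, in_vpi, in_vci;
    int out_port, out_vpi, out_vci;
};

/* Look up the outgoing link for a cell arriving on
 * (port, vpi, vci). Returns the matching entry, or NULL
 * if no PVC is configured for that address. */
static const struct pvc_map *
pvc_lookup(const struct pvc_map *tbl, int n, int port, int vpi, int vci)
{
    int i;
    for (i = 0; i < n; i++)
        if (tbl[i].in_port == port &&
            tbl[i].in_vpi == vpi && tbl[i].in_vci == vci)
            return &tbl[i];
    return NULL;
}
```

Each switch along a PVC performs exactly this kind of lookup, replacing the VPI/VCI value on every hop as described in "VPI/VCI Addresses".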
PVCs require significantly less software complexity and overhead than SVCs; however, they require significantly more administrative time for planning and configuring. PVCs are appropriate for environments in which the traffic contract and the point-to-point connections (PVCs) are well-defined and will remain stable for a reasonably long period of time. PVCs are required if any switch or host along the path does not support SVCs.
As an example of the manual configuration entries required for every PVC, Figure 1-13 shows sample address mapping tables for the single PVC shown in Figure 1-12.
A switched virtual circuit (SVC) is created and torn down dynamically (more or less in real time), as requested by an upper-layer application. The switches, the ATM signaling software, and, for IP traffic, the ATMARP software, automatically handle most of the negotiable parameters and operation management, including the following operations:
ATM address registration
Discovering a route between the two endpoints
VPI/VCI assignment at each link
Resource allocation along the entire route
Negotiation of the traffic parameters
Due to this automation, an SVC environment requires much less administrative time than a PVC configuration.
The two endpoints for an SVC are known as the calling party and the called party. The calling party is the endpoint that originates the setup request for the SVC. Each SVC is bidirectional and is, in fact, two virtual channels (VCs): a forward VC and a backward (or return) VC. The forward VC carries data from the calling endpoint to the called endpoint; the calling endpoint transmits on this channel and the called endpoint receives on it. The backward VC carries data in the opposite direction, so the calling endpoint receives on this channel while the called endpoint transmits on it. Three topics related to SVC operation are discussed in more detail in the sections that follow:
ATM signaling is the protocol that sets up and tears down an SVC. This protocol's full name is ATM user-to-network interface (UNI) signaling. A calling endpoint uses ATM signaling to request a channel and to specify the traffic contract for the SVC. The setup request is sent to an adjacent node, which can be either an adjacent ATM switch (either public or private) or the called endpoint. If the recipient of the setup request is an ATM switch (the network side of the UNI interface), it uses one of the following switch-to-switch protocols to discover a route to the called party and to forward the message through the network to the destination: (1) the network node interface (NNI) protocol with dynamically maintained route information, or (2) the interim interswitch signaling protocol (IISP) with manually configured route lookup tables. (Usage of UNI and NNI protocols between different types of nodes is shown in Figure 1-14.) For some parameters of the traffic contract, the called endpoint can modify the contract by selecting among a list of possible values before the connection is completely set up. The traffic contract can be different for the forward and backward channels of an SVC.
The basic signaling messages used for managing SVCs are described in Table 1-4.
Table 1-4. Mandatory ATM UNI Signaling Messages
ATM signaling message
Who originates message
Who receives and processes message
SETUP: Requests that a bidirectional SVC be created and specifies the traffic contract. Some contract parameters allow the called party to select among a list.
Any node implementing a UNI. Sent on forward channel.
The adjacent node (switch or called endpoint). When the recipient is a switch, the request is forwarded to the next hop on the route to the called party, and can be converted into the NNI format. The final switch gives the SETUP message to the called party. Each recipient of this message allocates resources and sets up the SVC, if it can.
CALL PROCEEDING: Indicates that the SETUP message was received. This message is optional.
Each node that receives a SETUP message. Sent on back channel.
The adjacent node (switch or endpoint).
CONNECT: Indicates that the SVC has been set up completely between the two endpoints. This message contains any traffic parameters that were negotiated.
The called party after receiving a SETUP message and after setting up the SVC. Sent on back channel.
The adjacent node (switch or calling endpoint). When the recipient is a switch, the message is forwarded to the next hop along the route going back to the calling party and can be converted into the NNI format. The final switch gives the CONNECT message to the calling party.
CONNECT ACKNOWLEDGEMENT: Indicates that the SVC is set up and functional in both directions.
The calling party after receiving the CONNECT message. Sent on forward channel.
The adjacent node (switch or called endpoint). When the recipient is a switch, the message is forwarded to the next hop on the route to the called party, and may be converted into the NNI format. The final switch gives the message to the called party.
RELEASE: Requests that the SVC be torn down.
Either endpoint of an SVC. For a calling party, sent on forward channel. For a called party, sent on back channel.
The adjacent node (switch or endpoint). When the recipient is a switch, the message is forwarded to the next hop on the route to the other endpoint, and can be converted into the NNI format. Upon receipt of this message, each node tears down its local resources for this SVC.
RELEASE COMPLETE: Indicates that the SVC has been torn down. As soon as this message is transmitted, all references to the SVC are erased.
Each node that received a RELEASE message. Sent on the opposite channel from the RELEASE message.
The adjacent node (switch or endpoint).
STATUS ENQUIRY: Requests status information about the SVC.
An endpoint or its adjacent switch. Sent on either channel.
The adjacent node (switch or endpoint).
STATUS: Indicates the status of the SVC at this node.
An endpoint or switch that received a STATUS ENQUIRY message. Sent on the opposite channel from the STATUS ENQUIRY message.
The node that generated the STATUS ENQUIRY message.
Four messages are involved in creating an SVC: SETUP, CALL PROCEEDING, CONNECT, and CONNECT ACKNOWLEDGEMENT. Figure 1-15 shows the order in which these messages are processed by the different nodes.
SETUP is the first message in the creation of an SVC. This message can be originated by either the source or the destination endpoint of a data transaction. As each switch along the path between the two endpoints receives the SETUP request, it may respond with a CALL PROCEEDING acknowledgment message, as shown in Figure 1-15. Each switch sets up its links for the SVC connection by allocating resources for a bidirectional connection under the specified traffic contract. Once the resources are allocated, the switch forwards the SETUP request to the next node en route to the other endpoint. When the SETUP request has been received and successfully processed by the called endpoint, the endpoint sends a CONNECT message, which is propagated along the return (backward) VC to the endpoint that originated the SETUP request. If a switch cannot set up the requested SVC, it does not forward the SETUP message and instead follows its CALL PROCEEDING message with a RELEASE message that causes all the links for that SVC to be torn down. If the called endpoint cannot match the requested traffic contract or does not want to accept the connection, it sends a RELEASE instead of a CONNECT message. Figure 1-16 shows the bidirectional SVC (two VCs) that is the result of a successful SETUP request.
|Note: The term forward is always used to describe the channel that carries data from the calling party to the called party. The term backward always refers to the channel that carries data from the called party to the calling party.|
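The calling party's view of a successful or failed setup can be modeled as a small state machine (a simplified sketch of UNI signaling behavior, not the atmsigd implementation):

```c
#include <assert.h>

enum svc_msg   { SETUP, CALL_PROCEEDING, CONNECT, CONNECT_ACK, RELEASE };
enum svc_state { IDLE, CALL_SENT, PROCEEDING, ACTIVE, FAILED };

/* Advance the calling party's state by one signaling event.
 * Only the happy path and release-on-failure are modeled;
 * once ACTIVE, a CONNECT ACKNOWLEDGEMENT leaves the state
 * unchanged (handled by the default case). */
static enum svc_state step(enum svc_state s, enum svc_msg m)
{
    switch (s) {
    case IDLE:
        return (m == SETUP) ? CALL_SENT : FAILED;
    case CALL_SENT:
        if (m == CALL_PROCEEDING)
            return PROCEEDING;
        /* CALL PROCEEDING is optional, so a CONNECT (or   */
        /* RELEASE) may arrive directly. FALLTHROUGH.      */
    case PROCEEDING:
        if (m == CONNECT)
            return ACTIVE;
        return (m == RELEASE) ? FAILED : s;
    default:
        return s;
    }
}
```

The model accepts both the full SETUP, CALL PROCEEDING, CONNECT sequence and the shorter SETUP, CONNECT sequence, and a RELEASE at any point before CONNECT aborts the call.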
When either endpoint wishes to terminate the connection, it generates a RELEASE message that causes each node along the connection to tear down the two VCs and pass the message on to the next node. As each node completes its teardown and frees its resources, it sends a RELEASE COMPLETE message to the adjacent node from which the RELEASE message came. Figure 1-17 shows the use of these messages for two cases: (A) the case in which the calling endpoint initiates the release, and (B) the case in which the called endpoint initiates the release.
For IRIS ATM, the ATM user-network interface (UNI) refers to the complete set of functionality that controls how an ATM endpoint (the user) interfaces with a switch (the network). Each ATM port requires one ATM UNI. The IRIS ATM signaling software (atmsigd) consists of a number of modules that run as an IRIX daemon and handle the various functions and protocols related to the UNIs on a system.
For every physical ATM port, the ATM standard requires one instance of an ATM UNI. For IRIS ATM, atmsigd performs this function. For each UNI, atmsigd creates a software stack made of the following four modules (shown in Figure 1-18). Each UNI has its own set of the lower three modules; all the UNIs within a system share a single instance of the overall control module.
Service-specific connection-oriented protocol (SSCOP, also known as QSAAL)
ATM adaptation layer 5 (AAL5)
For each UNI, atmsigd creates a PVC to the switch using the following default address: VPI=0 and VCI=5. The signaling module, as shown in Figure 1-18, uses this PVC for its own communications with the switch (for example, setup and teardown requests for SVCs). The VPI/VCI address of this PVC is configurable. Overall, the traffic on this PVC is sporadic and takes up only a small portion of any port's bandwidth; however, the higher the PVC's configured rate, the larger the percentage of the port's total bandwidth that can be occupied by this overhead at a specific point in time (that is, when there is UNI signaling occurring).
ATM network administration and management is based on the existence of an ATM management information base (MIB) that is managed by ILMI software via a PVC to the switch. For each UNI, there is one MIB and one ILMI PVC.
The ATM standard specifies an interim local management interface (ILMI) to handle address registration and assignment, as well as status reporting (shown in Figure 1-18 and Figure 1-19). To exchange status information, ILMI implementations use the simple network management protocol (SNMP, RFC 1157). ILMI implementations store some of the information they collect in management information bases (MIBs) that users can view. There is one MIB for each UNI (that is, each physical connection). The ATM MIBs contain objects and tables that are specified by the ATM User-Network Interface Specification standard. The module in IRIS ATM that performs these duties is atmilmid. Like snmpd, atmilmid is an administrative process (IRIX daemon) that acts as an SNMP agent, managing MIBs and exchanging information with other ILMI agents. The atmilmid daemon also functions as a subagent to the main SNMP agent (snmpd) so that the ATM MIBs can be viewed with standard SNMP MIB browsers.
The atmilmid daemon responds to requests from other ILMI modules (for example, those that reside on adjacent switches) as well as requests from the local main SNMP agent, as shown in Figure 1-19. To communicate with the local SNMP agent, atmilmid listens on a user datagram protocol (UDP) socket. To communicate with each adjacent ILMI agent, the atmilmid uses a permanent virtual circuit (PVC) on each port with the following default address: VPI=0 and VCI=16. (The VPI/VCI address for the PVC and the socket address are both configurable.) Overall, the traffic on this PVC is sporadic and takes up only a small portion of any port's bandwidth; however, the higher the PVC's configured rate, the larger the percentage of the port's total bandwidth that can be occupied by this overhead at a specific point in time (that is, when there is ILMI overhead communication occurring).
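The two well-known management PVCs described in this section and the preceding one can be summarized as default address constants (these defaults are configurable, as noted above):

```c
#include <assert.h>

/* Default well-known PVC addresses used on each UNI:
 * UNI signaling traffic rides on VPI=0/VCI=5, and ILMI
 * (SNMP-based management) traffic on VPI=0/VCI=16.
 * Both defaults are configurable in IRIS ATM. */
struct pvc_addr { int vpi, vci; };

static const struct pvc_addr SIGNALING_PVC = { 0, 5 };
static const struct pvc_addr ILMI_PVC      = { 0, 16 };
```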
During startup, atmilmid opens a PVC to each adjacent switch and contacts each ILMI agent, as shown in Figure 1-19. From each ILMI agent, atmilmid obtains the switch-assigned portion of its ATM address and registers its locally assigned portion (its MAC address). If the ILMI agent on the switch is not available, atmilmid completes this task as soon as that agent comes online. If the request times out, atmilmid looks for a locally configured ATM address. If no ATM address is available, atmilmid uses a null address.
After this initialization procedure, the atmilmid daemon uses its PVC during normal operation to exchange status information with adjacent ILMI agents. The atmilmid daemon maintains (in memory) one MIB for each UNI. Each UNI MIB contains the adjacent switch's table of supported network prefixes and UNI status objects, as specified by the ATM Forum standards. You can view the ATM MIBs by using any application developed for viewing SNMP MIBs (for example, the IRIXpro Browser).
The documents that specify the ATM standards to which IRIS ATM complies are as follows:
ATM User-Network Interface Specification, Version 3.0, released by The ATM Forum Technical Committee, September 1993.
ATM User-Network Interface Specification, Version 3.1, released by The ATM Forum Technical Committee, September 1994.
IP and ATM are both protocols that include network layer processing that supports routing. In the ATM environments of the future, ATM will function as the main network layer and IP will be layered on top of it, as shown in Figure 1-1. In these future, large, globally connected environments, routing will be done by ATM. In the meantime, while ATM routing is not fully standardized and implemented, and ATM networks are not globally connected, the current standards for IP routing and address resolution can be used. The method for implementing this “classical” functionality is defined by RFC 1577. IRIS ATM logical network interfaces conform to RFC 1577 rules and guidelines, whether using SVCs or PVCs, as long as the onsite configuration is set up according to the RFC 1577 guidelines. If a site wants to create a nonconforming configuration, IRIS ATM also supports this, as explained in more detail in “IP-over-PVC Configurations That Do Not Comply with RFC 1577”.
RFC 1577 specifies a set of rules for implementing IP routing and address resolution over ATM. Implementations that are compliant with RFC 1577 are termed classical IP because the design treats IP networks in ATM environments as if they were still local area networks (that is, as if the systems sharing a subnetwork address were a collection of systems physically connected to a shared communication medium). The design specified by RFC 1577 differs slightly for permanent virtual circuits (PVCs) and switched virtual circuits (SVCs), and for VCs that do or do not use LLC/SNAP encapsulation, as discussed in the following paragraphs.
One important difference between configuring IP in legacy LAN environments and in classical IP-over-ATM environments is that each ATM physical port can support multiple subnetworks, whereas in legacy LANs each physical connection supports one logical network interface. In IP implementations over shared broadcast media, such as Ethernet or FDDI, the grouping of stations into a subnetwork is a physical event (connection to a LAN). In IP-over-ATM, the grouping of stations into a subnetwork is not a physical event, but an administrative one (a simple assignment of addresses). This difference exists because ATM allows the total bandwidth of any single connection (port) to be subdivided and separately addressed into simultaneously functioning virtual channels; each channel can carry data to and from a different subnetwork.
In IP networks, it is common to think of each host (that is, each system on a network) as identified by two unique addresses: a logical IP address and a hardware MAC (or Ethernet) address. This view is, however, inaccurate. It is more accurate to think of the MAC hardware address as identifying only the physical connection to the network and the IP address as identifying one upper-layer software endpoint (for example, a logical IP network interface, such as et0 or atm0). When described this way, it is conceivable to have multiple upper-layer endpoints in a system, all sharing the same physical connection. And this is exactly what ATM makes possible: numerous IP interfaces all sharing one ATM port. The following paragraphs describe exactly how this is done.
In ATM switched virtual circuit environments, each upper-layer endpoint is identified by an ATM address, as described in “ATM Addresses”. In concept, an ATM address is a network layer address, not a hardware address. However, in many ATM implementations (including IRIS ATM), an ATM address is partially a hardware address because it includes the hardware address (MAC address) of the ATM port. All of the fields of the ATM NSAP except the end system selector (SEL) field constitute the ATM port's address; the entire ATM NSAP, including the SEL field, is the endpoint's address. The SEL field is used to distinguish between the upper-layer endpoints that share that port.
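The address split described above can be sketched as a byte layout. The following illustrative C (not the IRIS ATM driver's actual types) treats the 20-byte ATM NSAP as a 19-byte port address plus the final selector (SEL) byte:

```c
#include <assert.h>
#include <string.h>

/* Illustrative layout of a 20-byte ATM NSAP address: the first 19 bytes
 * (network prefix plus end-system/MAC identifier) identify the ATM port;
 * the final SEL byte distinguishes the upper-layer endpoints that share
 * that port. */
#define NSAP_LEN   20
#define SEL_OFFSET 19   /* SEL is the last byte of the NSAP */

/* Build an endpoint address by appending a SEL value to a port address. */
static void make_endpoint_addr(unsigned char out[NSAP_LEN],
                               const unsigned char port_addr[SEL_OFFSET],
                               unsigned char sel)
{
    memcpy(out, port_addr, SEL_OFFSET);
    out[SEL_OFFSET] = sel;
}
```

All endpoints sharing a port therefore have NSAPs that are identical in the first 19 bytes and differ only in SEL.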
To do IP-over-SVCs, it is necessary to map between IP and ATM addresses. RFC 1577 specifies a method for doing this. The method uses the ATM address in place of the standard Ethernet or MAC hardware address, and describes a protocol, ATMARP, for registering and discovering the mappings, and for maintaining the IP-to-ATM address resolution table. The RFC 1577 design is shown in Figure 1-20. The ATMARP protocol uses standard IP address resolution protocol (ARP) (RFC 826) to discover ATM addresses when only the IP address is known, and inverse IP ARP (RFC 1293) to discover IP addresses when only the ATM address is known.
In the RFC 1577 design, all IP hosts that use the same network address and subnet mask (that is, the same subnet address) are considered members of a single logical IP subnetwork (LIS). Each LIS can be thought of as logically similar to an Ethernet local area network, even though the members of the LIS do not have any special physical relationship to each other and can even be separated by multiple ATM switches. Each LIS member must be known to other members of that LIS by a single ATM address. For IRIS ATM, this rule means that all traffic between an IRIS ATM endpoint and other members of a specific LIS must travel over the same physical port, as shown in Figure 1-21. To contact an IP host that is not a member of the same LIS, a router must be used, even when it is physically possible for the IP host to be contacted directly through the ATM switch, as illustrated by network 255.100.8, which is shown in Figure 1-21.
Within each LIS, one (and only one) host acts as the address resolution server and maintains the IP-to-ATM address resolution table. Upon request, this server provides the ATM address for any other member of that LIS. Each LIS member (IP endpoint) registers its IP address and ATM address with the ATMARP server. When the ATM address is in the ATM NSAP format, a different ATM address can be registered for each of the local IP addresses. This is accomplished by using the SEL field (which is not interpreted by ATM switches) to distinguish among the IP upper-layer endpoints. For example, the ATM address for atm2 using port 0 might consist of the ATM address for port 0 (network prefix and MAC address) plus the 8-bit SEL field set to 00000010 binary (which is 2 in decimal format).
Because the logical network interfaces are upper-layer, nonphysical entities and the ATM port can simultaneously handle multiple communication channels, a single physical ATM port can service more than one logical network interface, as shown in Figure 1-21. (Each logical network interface identifies one member of an LIS.) In this figure, port 1 (board unit 1) services both atm1 (a member of LIS 255.100.3) and atm2 (a member of LIS 255.100.4). IRIS ATM can simultaneously support up to 64 logical network interfaces. These can all share a single physical port, or they can be distributed among a number of ports. During the SVC configuration process, the network administrator assigns each logical network interface to one (and only one) port. Figure 1-21 illustrates a system with three logical network interfaces (each one is a member of a different LIS) and two ATM physical ports.
The IRIS ATM subsystem handles IP-over-SVC address resolution with the following mechanisms:
The IRIS ATM ILMI (atmilmid) module, at startup, automatically communicates with the adjacent switch on each ATM hardware connection (port) to create an ATM address for itself. On each ATM port that is using an ATM NSAP address, the atmilmid obtains the network-prefix portion of the ATM NSAP from the switch and registers its local portion (a MAC address). If the address request on any port times out without having received an address from the adjacent system, IRIS ATM uses the address configured in the /var/atm/atmilmid.conf file.
At startup, the IRIS ATM initialization script (init.d.atm) looks in the /var/atm/ifatm.conf file for the address resolution server and the port assignment for each logical network interface. (Each logical network interface is a local endpoint for one LIS.) The software then opens up an SVC to each ATMARP server and registers its IP-to-ATM address mapping (using ATMARP and LLC/SNAP encapsulation). This SVC is kept open for ATMARP communications. The software reopens the SVC if it goes away at any time.
When a port's ATM address is in the ATM NSAP format, a unique ATM address is mapped to each of the local IP addresses that use that port. The registered ATM address consists of the port's ATM address (network prefix and MAC address) plus the SEL field set to the logical network interface number. For example, the ATM address for atm2 using port 0 consists of the port's network prefix and MAC address plus the 8-bit SEL field set to 00000010 (binary).
For each IP-over-SVC transmission request, the IRIS ATM software looks first in its local cache of IP-to-ATM address mappings. If the address is not there, the software uses its SVC to the address resolution (ATMARP) server to discover the ATM address that corresponds to the IP address. Once the ATM address is known, a bidirectional SVC is created to the endpoint and data can be exchanged. If the SVC becomes idle (that is, has not carried any data for a configurable timeout period), it is torn down.
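The lookup order described above (local cache first, then the ATMARP server over the open SVC) can be sketched as follows. The types and the fixed-size cache are hypothetical stand-ins, not the driver's real structures:

```c
#include <assert.h>
#include <string.h>

/* Illustrative types: an IPv4 address and a 20-byte ATM NSAP address. */
typedef unsigned int ip_addr_t;
typedef struct { unsigned char bytes[20]; int valid; } atm_addr_t;

/* Tiny fixed-size cache standing in for the local IP-to-ATM mapping table. */
struct arp_cache { ip_addr_t ip[8]; atm_addr_t atm[8]; int n; };

/* Returns 1 and fills *out on a cache hit; returns 0 on a miss, at which
 * point the software would query the ATMARP server over its SVC. */
static int cache_lookup(const struct arp_cache *c, ip_addr_t ip, atm_addr_t *out)
{
    for (int i = 0; i < c->n; i++)
        if (c->ip[i] == ip) { *out = c->atm[i]; return 1; }
    return 0;
}
```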
If the IRIS ATM software has been configured to function as an ATMARP server, the software maintains the IP-to-ATM address resolution table, and responds to client requests for ATM address resolution, including inverse ARP requests.
No two of the IP-over-ATM network interfaces on a system can be members of the same logical IP subnetwork (LIS); that is, no two members of the same LIS can reside within the same Origin processor module. This restriction is present only to enable automatic default and correct operation of the IRIX routing daemon (for example, routed). Nothing in the ATM protocol, the IRIS ATM software or hardware, or RFC 1577 requires this restriction. Under RFC 1577 alone, each IRIS ATM port can support one member of each LIS; for example, a module with 64 IRIS ATM ports could support 64 hosts from a single LIS. For ATM compliance, there are no restrictions regarding the number of logical network interfaces.
In environments requiring IP-over-PVCs in compliance with RFC 1577 (using or not using LLC/SNAP encapsulation and ATMARP), IRIS ATM can interoperate (that is, participate in each LIS) with or without ATM addresses and with or without ILMI.
Before IP traffic can be exchanged over PVCs, a network manager application must create the VCs and associate them with IP addresses. IRIS ATM ships the atmarp utility for this purpose. (Alternatively, a site can develop its own manager using the IRIS ATM application programming interface.) Once the PVC management application has created the PVCs, applications send and receive using the standard IRIX socket interface, just as with any other network subsystem that supports IP traffic.
During system startup, the /etc/init.d/network.atm script starts the atmarp PVC management application if the /var/atm/pvc.conf IP-to-ATM address resolution file exists. The user-configurable pvc.conf file maps IP addresses to VC addresses. (A VC address consists of a local port identifier and the VPI/VCI values from the ATM cell.) For each entry in the table, the atmarp daemon establishes a best-effort PVC and associates it with an IP address. The atmarp utility then goes to sleep, leaving the VCs open and ready for use. IP applications can then transmit and receive over the associated PVCs. If atmarp is interrupted with a SIGHUP signal (for example, killall -HUP atmarp), it wakes up, reloads the lookup table from the pvc.conf file, makes any changes necessary, then goes back to sleep.
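The sleep/reload cycle described above is an instance of a common daemon pattern. The following generic C sketch (not atmarp source) shows the usual shape, with a stand-in for rereading pvc.conf:

```c
#include <signal.h>
#include <assert.h>

/* Generic SIGHUP-reload pattern: the signal handler only sets a flag; the
 * daemon's main loop notices the flag, reloads its configuration, and then
 * goes back to sleep. */
static volatile sig_atomic_t reload_requested = 0;

static void on_sighup(int sig)
{
    (void)sig;
    reload_requested = 1;   /* async-signal-safe: just set the flag */
}

static int reloads_done = 0;
static void reload_config(void)
{
    reloads_done++;         /* stand-in for rereading the pvc.conf table */
}

static void service_pending_signals(void)
{
    if (reload_requested) {
        reload_requested = 0;
        reload_config();
    }
}
```

A real daemon would install the handler with sigaction() and call service_pending_signals() each time it wakes.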
IP-over-PVCs, by default, operates with LLC/SNAP encapsulation and responds to inverse ATMARP requests. If a site wants to use ATM addresses, the ILMI module (atmilmid) must be configured; otherwise, zero-length (null) ATM addresses are used. In a PVC environment, the ATM address is superfluous because the hardware connection is static: that is, (1) the PVCs are defined by the network manager as part of the system configuration; (2) they are created by the software (atmarp daemon) at startup time; and (3) they remain active until the network manager tears them down. The only endpoint addresses really needed are the IP address and the local VC address (the VPI, VCI, and port tuple) that are associated with each logical network interface.
The IRIS ATM subsystem handles PVC address resolution by using LLC/SNAP encapsulation with the following mechanisms:
The IRIS ATM ILMI (atmilmid) module, at startup, automatically communicates with the adjacent switch on each ATM hardware connection (port) to create an ATM address for itself. On each ATM port that is using an ATM NSAP address, the atmilmid obtains the network-prefix portion of the ATM NSAP from the switch and registers its local portion (a MAC address). If the address request on any port times out without having received an address from the adjacent system, atmilmid uses the address configured in the /var/atm/atmilmid.conf file. If no ATM address is obtained from either source, a null source address is used.
At startup, the IRIS ATM initialization script (init.d.atm) starts atmarp, which loads the contents of the /var/atm/pvc.conf file. This is the IP-to-VC address mapping table for PVCs. Each entry identifies one remote IP address, and maps it to a local hardware address consisting of a VPI/VCI address and a port identification number. The network portion of each IP address in this file must match the network portion of one of the logical network interfaces configured on this system; the software verifies that each entry has a local endpoint belonging to the same LIS.
For each entry in the table, the atmarp daemon sets up both a transmit and a receive PVC. By default, each PVC is set up to use LLC/SNAP encapsulation, so that it supports IP inverse ARP protocol. Each PVC connects two members of an LIS, as shown in Figure 1-22.
For each transmission request to one of these IP addresses, the IRIS ATM software looks in its IP-to-PVC address mapping table to discover the VCI/VPI and port (hardware) address on which to transmit. If no match is located, the transmission does not occur.
For each received packet on any of these PVCs, the IRIS ATM software looks in its IP-to-PVC address mapping table to discover the local endpoint (IP address), then places that packet on the logical network interface's input queue.
|Note: In other words, each PVC to another IP address must be already set up and waiting before an application tries to send to that address and before packets start arriving at the switch from that address.|
If the IRIS ATM software receives an inverse ATMARP request, it responds with either the known ATM address or, if none is known, a zero-length address.
IRIS ATM allows IP to operate over PVCs without the overhead of IP ARP or LLC/SNAP encapsulation (in compliance with RFC 1577). This configuration functions exactly like IP-over-PVC with LLC/SNAP encapsulation (described in the previous section), except that the encapsulation is not used and inverse ARP replies are not generated. This functionality can be configured on a per-PVC basis in the /var/atm/pvc.conf file, as explained in “Address Resolution for PVCs” in Chapter 3.
The IRIS ATM subsystem handles PVC address resolution without LLC/SNAP encapsulation with the following mechanisms:
At startup, the IRIS ATM initialization script (network.atm) calls the atmarp utility that loads the contents of the /var/atm/pvc.conf file into memory. This is the IP-to-PVC address mapping table. Each entry identifies one remote IP address, and maps it to a local hardware address consisting of a VPI/VCI address and a port identification number. The network portion of each IP address in this file must match the network portion of one of the logical network interfaces on this system in order to ensure that both endpoints belong to the same LIS.
For each entry in the table, atmarp sets up both a transmit and a receive PVC. For non-LLC/SNAP usage, each entry must be marked in order to turn off this encapsulation. Each PVC connects two members of an LIS, as shown in Figure 1-22.
For each transmission request to one of these IP addresses, the IRIS ATM software looks in its IP-to-PVC address mapping table to discover the VCI/VPI and port address on which to transmit. If no match is located, the transmission does not occur.
For each received packet on any of these PVCs, the IRIS ATM software looks in its IP-to-PVC address mapping table to discover the local endpoint (IP address), then places that packet on the logical network interface's input queue.
|Note: In other words, each PVC to another IP address must be already set up and waiting before an application tries to send to that address and before packets start arriving at the switch from that address.|
IRIS ATM allows IP to operate over PVCs in configurations that do not comply with RFC 1577 guidelines. This type of configuration functions exactly like the conformant IP-over-PVC configurations; however, the address resolution and PVC management daemon, atmarp, cannot be used because it enforces RFC 1577 rules. The following scenarios (shown in Figure 1-23) are examples of nonconformant configurations:
Endpoints having IP addresses with the same network portion and subnet mask that cannot all contact each other directly. In example A, endpoints 255.100.2.2 and 255.100.2.3 cannot directly contact each other since they are physically connected to different ATM networks.
An IP address is associated with more than one ATM address. In example A, endpoint 255.100.2.1 is known by two different ATM addresses (due to its use of two ports).
IP addresses with the same network portion and subnet mask share a single virtual channel. In example B, multiple members of an LIS are sharing a single PVC.
The following sections describe some of the implementation details for IRIS ATM products.
This section describes the manner in which the IRIX operating system assigns an IP network interface (for example, atm0 and IP address 220.127.116.11) to a particular ATM port.
On a CHALLENGE or Onyx system, with each restart (for example, after a reboot, shutdown, halt, or init command, or a power off), the startup routine probes for hardware installed in the HIO mezzanine adapter slots, and makes a list of all the boards located. The slots are probed in the following order:
Main IO4 board: I/O adapter slot 5, then 6
Second IO4 board (if present): I/O adapter slot 2 (only when the FMezz board is long), slot 5, slot 3 (only when the FMezz board is long), and slot 6
Third IO4 board (if present): I/O adapter slot 2 (only when the FMezz board is long), slot 5, slot 3 (only when the FMezz board is long), and slot 6
Fourth IO4 board (if present): I/O adapter slot 2 (only when the FMezz board is long), slot 5, slot 3 (only when the FMezz board is long), and slot 6
The list and order of IRIS ATM boards that were located by this process can be displayed by using the /sbin/hinv command, as shown in the following example.
% /sbin/hinv -d atm
ATM-OC3c: atm0, slot 5 adap 6, firmware version ####
ATM-OC3c: atm1, slot 3 adap 5, firmware version ####
If the hardware unit numbers are assigned by software (the default behavior), the text atm# indicates the order, as follows: atm0 (also known as unit0) is the first board located and atm1 is the second. If the unit numbers are set by jumpers, the text atm# represents the unit number read from the setting on the board. In the preceding example, the startup routine located two IRIS ATM boards attached to two different IO4 boards.
As the startup process begins to initialize ATM network interfaces, it does the following:
If the IRIS ATM driver is configured to support the IP protocol stack, the driver creates the number of IP-over-ATM network interfaces specified in the /var/sysgen/master.d/if_atm file.
The /etc/init.d/atm script uses the contents of the /var/atm/ifatm.conf file to associate each network interface with a board.
The ifconfig command (which is invoked automatically during startup) searches the netif.options file for IP-over-ATM interface names (for example, atm0, atm1, atm2) and configures and enables each interface that exists (that is, each interface that was created by the driver).
On an Origin2000, OriginServer, or Onyx2 system, with each restart (for example, after a reboot, shutdown, halt, or init command, or a power off), the startup routine probes for hardware on all the systems connected into the CrayLink interconnection fabric. All the slots and links in all the modules within the fabric are probed. The routine then creates a hierarchical file system, called the hardware graph, that lists all the hardware that is located. The top of the hardware graph is visible at /hw. (For complete details, see the man page for hwgraph.)

After the hardware graph is completed, the ioconfig program assigns a unit number to each located device that needs one. Other programs (for example, hinv and the device's driver) read the assigned number and use it. On an initial startup, ioconfig assigns numbers sequentially; for example, if two IRIS ATM XIO boards are found, they are numbered unit0 (with ports 0 to 3) and unit1 (with ports 4 to 7). The port numbers for each IRIS ATM XIO board are derived from the board's assigned unit number: first_port# = brd_unit# * 4, and the other three ports are sequential from the first.

On subsequent startups, ioconfig distinguishes between hardware that it has seen before and new items. To previously seen items, it assigns the same unit and port numbers that were assigned on the initial startup. To new hardware, it assigns new sequential numbers. It never reassigns a number, even if the device that had the number is removed and leaves a gap in the numbering.
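The port-numbering rule above (first_port# = brd_unit# * 4, with four sequential ports per board) can be written out directly; this tiny C sketch simply restates the arithmetic:

```c
#include <assert.h>

/* Each IRIS ATM XIO board has four ports; a board's first port number is
 * derived from its assigned unit number, and the remaining three ports are
 * sequential from the first. */
static int first_port(int board_unit) { return board_unit * 4; }
static int last_port(int board_unit)  { return board_unit * 4 + 3; }
```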
|Note: New items are differentiated from previously seen items based solely on the hardware graph listing (that is, the path within /hw). The database of previously seen devices is kept in the file /etc/ioconfig.conf. For example, a replacement board that is installed into the location of an old board will be assigned the old board's numbers, while a board that is moved from one location to another will be assigned a new unit number and new port numbers. For more information about the hardware graph and ioconfig, see the man pages for hwgraph and ioconfig.|
The IRIS ATM boards that are located can be displayed with the /sbin/hinv or find commands, as shown in the following examples. In these examples, the startup routine located two IRIS ATM boards on two different modules (that is, inside two different chassis).
% find /hw/module -name atm
/hw/module/1/slot/io3/quad_atm/pci/0/atm
/hw/module/2/slot/io12/quad_atm/pci/0/atm
% /sbin/hinv -d atm
ATM XIO 4 port OC-3c: module 1, slot io3, unit 0 (ports: 0-3)
ATM XIO 4 port OC-3c: module 2, slot io12, unit 1 (ports: 4-7)
As the startup process continues, it calls the network hardware drivers so that they can create their network and programmatic interfaces. For ATM, this step works in the following manner:
For each IRIS ATM port, the startup process creates short (/hw/atm/#) and long (/hw/module/#/slot/.../atm) entries in the hardware graph. Then, the installation scripts create a symbolic link in /dev that points to the port's entry in the hardware graph. The /dev/atm# links are for use by the IRIS ATM application programming interface (API).
If the IRIS ATM driver is configured to support the IP protocol stack, the driver creates the number of IP-over-ATM network interfaces specified in the /var/sysgen/master.d/if_atm file.
The /etc/init.d/atm script uses the contents of the /var/atm/ifatm.conf file to associate each IP-over-ATM network interface with a port.
The ifconfig command searches the netif.options file for IP-over-ATM interface names (for example, atm0, atm1, atm2) and configures and enables each interface that exists (that is, each interface that was created by the driver).
The following sections describe how transmission rates are created and managed by different IRIS ATM hardware products.
The IRIS ATM-OC3c HIO board for CHALLENGE and Onyx platforms manages transmission rates with rate queues and divisors. The board has 8 rate queues organized as 2 banks: a0-a3 and b0-b3. Each queue can support one peak rate and 63 different sustainable rates. The “a” bank consists of 4 high-priority queues that are designed for constant bit rate traffic (CBR and VBR channels). The “b” bank contains 4 low-priority queues, which are used only for best-effort traffic.
High-priority queues are serviced before low-priority ones. As long as there is data awaiting transfer on any high-priority queue, low-priority data is not transmitted. This means that if applications keep a constant flow of data on the high-priority queues, only queues a0-a3 ever transmit.
During startup, the IRIS ATM HIO board driver configures each rate queue, as follows:
Queues that are mentioned in the /var/atm/atmhw.conf file are configured to a fixed rate, as specified in the file. The IRIS ATM driver never changes the rates for these queues; this ensures that site-specified rates are always available, even when the queues are not actively being used. Appendix B, “Supported Transmission Rates for IRIS ATM Board on CHALLENGE and Onyx Platforms”, lists the supported rates, which range from 0 to 135,991,460 bits per second.
Queues that are not mentioned (or are commented out) in the file are left unconfigured. The driver configures these during operation, as follows.
During operation, as VCs are created, the driver associates each newly created VC with the queue whose transmission rate best matches the peak rate requested for that VC. For each ATMIOC_CREATEPVC or ATMIOC_SETUP command, the driver looks for a queue whose transmission rate best matches the rate requested in the API call, following these guidelines:
For VCs carrying best-effort traffic, the driver uses the low-priority queue whose rate is closest to, but slower than, the requested peak rate.
For VCs carrying CBR and VBR traffic, the driver uses the high-priority queue whose configured rate exactly matches the requested peak rate. If the requested rate does not exist, the driver searches for a high-priority queue with the following characteristics and reconfigures it to the requested peak rate:
A queue that does not currently have a VC associated with it
A queue that was not configured from the atmhw.conf file during startup
|Note: There can be dozens of CBR and VBR virtual channels active on a board, but the peak rate for each one must be one of the four rates that are configured on the high-priority queues.|
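The best-effort matching guideline above (“closest to, but slower than, the requested peak rate”) can be sketched in C. The queue rates and structure here are illustrative, not the driver's actual tables:

```c
#include <assert.h>

/* Sketch of best-effort queue matching for a bank of four low-priority
 * queues: pick the queue whose configured rate is closest to, but not
 * above, the requested peak rate.  Rates are in bits per second. */
#define NQUEUES 4

static int pick_best_effort_queue(const long rates[NQUEUES], long requested)
{
    int best = -1;
    for (int i = 0; i < NQUEUES; i++)
        if (rates[i] <= requested && (best < 0 || rates[i] > rates[best]))
            best = i;
    return best;   /* -1 if every queue is faster than the requested rate */
}
```

For CBR/VBR traffic the rule differs, as described above: the driver needs an exact peak-rate match on a high-priority queue, or a free, non-fixed queue it can reconfigure.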
To set the sustainable transmission rate for a particular VC, one of the board's configured rates is divided by a divisor (ranging between 1 and 64). The IRIS ATM driver sets all divisors. Peak rates for CBR, VBR, or best-effort traffic use divisors of 1. Sustainable (average) rates for VBR traffic use divisors between 2 and 64 (inclusive).
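The divisor arithmetic above can be illustrated with a small helper that picks a divisor from 1 to 64. The selection policy shown (largest resulting rate not exceeding the wanted rate) is an assumption for illustration, not necessarily the driver's actual policy:

```c
#include <assert.h>

/* A sustainable rate is a configured queue rate divided by an integer
 * divisor from 1 to 64.  This illustrative helper returns the smallest
 * divisor whose resulting rate does not exceed the wanted rate. */
static int pick_divisor(long queue_rate, long wanted_rate)
{
    for (int d = 1; d <= 64; d++)
        if (queue_rate / d <= wanted_rate)
            return d;
    return 64;   /* slowest rate this queue can produce */
}
```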
|Note: The IRIS ATM-OC3c board for CHALLENGE and Onyx platforms implements sustainable (average) rates by filling some cell slots with idle cells instead of data. This results in some loss of bandwidth which, under heavy load conditions, can be noticeable.|
To summarize, the IRIS ATM-OC3c board simultaneously makes available for selection up to 8 different peak rates and up to 504 (8*63) sustainable rates. Not all of these available selections can be actively used simultaneously, since this would exceed the board's bandwidth.
Table 1-5 summarizes the default settings configured for the IRIS ATM-OC3c board's rates.
Table 1-5. Default Transmission Rates on ATM-OC3c Queues

Queue   Default cell rate            Default bit rate                     Priority / Use
        (in ATM cells per second)    (in user payload bits per second)
a0      ...                          ...                                  High / CBR, VBR[a]
a1      ...                          ...                                  High / CBR, VBR
a2      ...                          ...                                  High / CBR, VBR
a3      ...                          ...                                  High / CBR, VBR
b0      ...                          ...                                  Low / BE
b1      ...                          ...                                  Low / BE
b2      ...                          ...                                  Low / BE
b3      ...                          ...                                  Low / BE

[a] CBR = constant bit rate; VBR = variable bit rate; BE = best effort
A board is oversubscribed when the sum of the average rates of all the open VCs is greater than the board's total payload bandwidth. The IRIS ATM software contains a number of features that prevent performance degradation due to oversubscription. Whenever there is even one VC open for a CBR traffic contract, the IRIS ATM software refuses to create new VCs once the board's total bandwidth is allocated to open VCs (including the best-effort ones). If all the VCs on a board are best-effort (regardless of which queues they are using), the IRIS ATM software allows the board to become oversubscribed and handles the transmissions in the best manner possible.
|Note: With this board, the default TCP/IP configuration uses the maximum bandwidth possible. Therefore, a single TCP/IP connection can oversubscribe the port it uses and prevent CBR traffic. To prevent this, there are two options: (1) reduce the default TCP/IP bandwidth (for example, by editing the /var/atm/ifatm.conf file) or (2) use ifconfig to disable the TCP/IP logical network interfaces.|
The IRIS ATM-OC3c 4Port XIO board for Origin2000 and Onyx2 platforms handles transmission rates with a table that is managed by firmware. Each table entry represents one ATM cell slot on the output stream, as shown in Figure 1-25. For each VC, the board's firmware calculates how many table entries are needed to generate the VC's rate, then associates that number of table entries with the VC's data stream, as shown in Figure 1-25. The cell feeder logic sequentially moves through the table, entry by entry. Each entry triggers a 48-byte read (384 bits) from one data stream. An ATM cell is created and placed on the stream of data that is on its way to the SONET logic. If there is no VC associated with a table entry, the cell feeder creates an idle cell to fill that cell slot in the stream. As shown in Figure 1-25, a VC is associated with multiple, evenly spaced table entries in order to transmit the VC's data steadily at its transmission rate. For this board, a VC's sustainable rate is exactly the same as its peak rate.
Theoretically, for a table with 353,207 entries, each entry would be processed once every second, while the entries in a table with half that number of entries would be processed twice every second. To understand why this is true, see the following calculation and facts:
The OC3c SONET line rate is 155,520,000 bits per second, of which approximately 87.2% (135,631,698 bits) is upper-layer payload; the rest is SONET and ATM overhead.
The upper-level data in each ATM cell (and hence, each entry in the table) places 384 bits onto the transmission medium.
When the total number of upper-layer bits transmitted every second is divided by the number of upper-layer bits in each cell, the result is the number of ATM cells per second. This is the number of table entries that are processed once every second: 135,631,698 bits transmitted per second / 384 bits per cell = 353,207 cells per second.
Expressed another way, the number of table entries, multiplied by the number of times each table entry is processed every second, multiplied by the upper-layer bits in each cell, equals the total number of upper-layer bits transmitted in a second: (353,207 entries * 1 pass) * 384 bits per cell = 135,631,488 maximum payload bits per second, and (176,603 entries * 2 passes) * 384 bits per cell = 135,631,104 maximum payload bits per second.
A more realistic example for this product's hardware implementation (in which the maximum number of table entries is limited) is the following:
(7,064 entries * 50 table passes) * 384 bits per cell = 135,628,800 payload bits per second
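The payload arithmetic above can be checked directly with the figures given in the text:

```python
# Check of the OC3c payload arithmetic given above.

LINE_RATE = 155_520_000      # OC3c SONET line rate, bits per second
PAYLOAD_BPS = 135_631_698    # ~87.2% of the line rate is payload
BITS_PER_CELL = 384          # 48-byte upper-layer payload per ATM cell

# Cells (table entries) processed per second:
cells_per_second = PAYLOAD_BPS // BITS_PER_CELL
print(cells_per_second)          # → 353207

# Entries * passes per second * bits per cell = payload bits per second:
print(353_207 * 1 * 384)         # full table, one pass per second
print(176_603 * 2 * 384)         # half-size table, two passes per second
print(7_064 * 50 * 384)          # realistic hardware table, fifty passes
```

Note that the half-size table carries one cell slot less per second (353,206 versus 353,207), which is why its payload figure is 384 bits lower.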
The driver for the IRIS ATM-OC3c 4Port XIO board does not allow oversubscription of any port. Whenever a requested rate cannot be provided, the request is denied. A denial can be due to either of these reasons:
Not enough table entries are free to create the requested transmission rate. Said another way, the request is denied when filling it would oversubscribe the line rate.
The spacing of the currently available table entries is not sufficiently even to create a steady data flow for the VC. For example, if the table entries for a requested rate must be spaced at intervals of 50 (table entries 8, 58, 108, 158, and so on, or table entries 3, 53, 103, 153, and so on), the request is denied when one or more of the needed table entries is already filled. This denial can occur even though the requested rate would not oversubscribe the connection's line rate.
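The second denial condition can be sketched as a search for an offset at which all the evenly spaced entries are still free. This is illustrative only; the actual driver algorithm is not documented here.

```python
# Sketch of the table-spacing check that can deny a VC even when
# enough total bandwidth remains. Purely illustrative.

def find_slots(table, n_entries):
    """Return evenly spaced free slot indices for a VC needing
    n_entries slots, or None if no offset leaves every needed
    slot free (the request would be denied)."""
    spacing = len(table) // n_entries
    for offset in range(spacing):
        slots = [offset + i * spacing for i in range(n_entries)]
        if all(table[s] is None for s in slots):
            return slots
    return None  # denied: no sufficiently even placement exists
```

For a 200-entry table and a VC needing 4 slots (spacing 50), the search tries offsets 0, 1, 2, and so on; if slots 0 and 51 are already taken, the first workable placement is 2, 52, 102, 152.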
IRIS ATM supports the following upper layer applications:
Standard IRIX TCP/IP applications:
For Internet (IP) networking, IRIS ATM provides its services to the IRIX IP protocol stack. IP applications can use the IP-over-ATM logical network interfaces (atm#), just as they would use IP over Ethernet or FDDI. This support provides RFC 1577-compliant address resolution and packet encapsulation. IP traffic can be exchanged over dynamically created switched virtual circuit (SVC) connections or permanent virtual circuit (PVC) connections.
With SVCs, IP-to-ATM address resolution is handled by an ATMARP server (as specified by RFC1577). With PVCs, IP-to-ATM address resolution is handled by the IRIS ATM atmarp daemon.
With SVCs, the creation of channels is handled by the IRIS ATM atmsigd module which provides a private user-to-network interface (UNI) as specified in the official standard: ATM User-Network Interface Specification, Versions 3.0 and 3.1 (ATM UNI). With PVCs, the creation of channels is handled by the IRIS ATM atmarp daemon.
For both SVCs and PVCs, the atmilmid module provides interim local management interface (ILMI) support and address assignment and registration, as specified in the ATM UNI standard.
IRIS ATM utilities:
IRIS ATM includes utilities (for example, atmconfig, ifatmconfig, atmstat, sigtest, and atmtest) for configuring, monitoring, and testing the IRIS ATM subsystem.
IRIS ATM provides an application programming interface (API) that customers can use to develop their own upper-layer applications that use PVCs or SVCs for IP or non-IP traffic. See the IRIS ATM API Programmer's Guide (shipped with each IRIS ATM board) for details.
 IRIS ATM does not currently support ABR.
 IRIS ATM currently supports only AAL5.
 The IRIS ATM board supports only ATM payloads.
 The product does not include a VC management program for non-IP traffic. An application programming interface is provided so that customers who require non-IP traffic can develop applications.
 For IRIS ATM, the ESI field is always a MAC address read from the IRIS ATM board.
 For IP-over-ATM, IRIS ATM sets the SEL field to a value that matches the logical IP network interface identification. For example, the ATM NSAP for logical network interface atm0 uses SEL=0x00 but that for atm4 uses SEL=0x04.
 The term public describes services that are offered to the general public by equipment supplied by a public service company (such as a telephone company). Private indicates services that are offered by a private entity to a restricted set of users (for example, to employees of a company).
 There are a number of proposals for designs and protocols that handle routing, address resolution services, and other network services over ATM. RFC 1577 is the one that defines the standard for “classical IP.”
 Two systems share the same subnetwork address when the network portions of their IP addresses match and they are using the same subnet mask.
 IRIS ATM does this automatically by setting the SEL field to the same value as the logical network interface number. For example, for atm0, SEL is 0; for atm2, SEL is 2.
 Total OC3c bandwidth is 155.52 megabits per second; however, of this, about 87.2% (135,631,698 bits) is available for upper-layer data. This is referred to as the payload bandwidth.
 When a VC specifies a sustainable rate, this is the average rate. When the VC does not specify a sustainable rate, the peak rate is used as the average rate.