This chapter provides an introduction to OpenGL Performer, including a survey of visual simulation techniques, descriptions of features and libraries, and discussion of some of the specific details of OpenGL Performer structure and use.
OpenGL Performer can be used in various ways. You can use it as a complete database processing and rendering system for applications such as flight simulation, driver training, or virtual reality. You can also use it in conjunction with layered application-development tools to perform the low-level portion of visual simulation development projects. In short, applications can use part or all of the features provided by OpenGL Performer.
For example, consider a driver training application that has already been developed. This application consists of a database, simulation code, and rendering code. The application can be ported to OpenGL Performer in several ways. If time is short and the bottleneck is in the rendering code, OpenGL Performer's libpr rapid-rendering layer can take over the rendering task with minimal effort. Alternatively, it may be better to write an importer that converts the existing database into OpenGL Performer's run-time format and gain the extra features that the full library, libpf, provides.
OpenGL Performer is an extensible software toolkit for creating real-time 3D graphics. On IRIX and Linux systems, the main components of the toolkit are six libraries, typically used in their dynamic shared object (DSO) form with the .so suffix, as shown in Table 2-1; support files for those libraries (such as the header files); and source code for sample applications. On Windows systems, the DSO equivalent is a dynamic link library (DLL) with a corresponding file suffix of .dll.
Main OpenGL Performer library. Contains libpf, which handles multiprocessed database traversal and rendering, and libpr, which performs the optimized rendering, state control, and other functions fundamental to real-time graphics.
Library of scene and geometry building tools that greatly facilitate the construction of database loaders and converters. Tools include a sophisticated triangle mesher and state sharing for high-performance databases.
Utility functions library. Note that libpfdu-util.dll is a combination of libpfdu and libpfutil.
User interface library.
A graphical viewer library that provides for the easy construction of applications.
Collection of libraries containing the load, convert, and store routines for numerous file formats.
Note: Throughout this guide, a reference to DSO files pertains to both DSO and DLL files unless otherwise noted.
Note that while this document refers often to the libpr library or libpr “objects,” the library itself does not exist in isolation—it has been placed within the libpf library to improve instruction-space layout, procedure call performance, and caching behavior. However, libpr still provides an implementation and portability abstraction layer that simplifies the following discussions.
In addition to the core libraries, OpenGL Performer provides a suite of database loaders in the form of dynamic shared objects. Each loader reads data files or streams structured in a particular format and converts them into an OpenGL Performer scene graph. Loader libraries are named after their corresponding file extension; for example, the Wavefront “obj” format loader is found in libpfobj.so. Any number of file loaders may be accessed through the single pfdLoadFile() function, which uses special dynamic shared object features to locate and use the proper loader corresponding to the extension of the file being loaded.
Figure 2-1 illustrates the relationships between the OpenGL Performer libraries and the operating system software.
All OpenGL Performer features are provided as a layer above the operating system and the graphics library. However, OpenGL Performer does not isolate application programs from the operating system or the graphics library. Even when using OpenGL Performer to its fullest extent, applications have direct and free access to all system layers—including not only libpf, libpr, libpfdu, libpfutil, libpfui, libpfv, libpfmpk, and the libpfdb loaders, but also the OpenGL graphics library and the operating system. You are free to choose which of the libraries best suits your needs. You may want to build your own toolkits on top of libpr (you still link with libpf; you just do not use any libpf features), or you can take advantage of the visual simulation development environment that libpf provides.
OpenGL Performer defines a run-time-only database through its programming interface; it does not define an archival database or file format. Applications import their databases into OpenGL Performer run-time structures. You can either write your own routines to do this or use one of the many database loaders provided as sample source code. These examples show how to import more than 30 popular database formats and how to export scene graphs in the open Designer's Workbench and Medit formats (see OpenGL Performer Programmer's Guide for more information).
This section lists the features of the OpenGL Performer libraries. An application can use all or just part of the features. You can use these features in conjunction with other application development tools, or extend them.
High-speed geometry rendering functions
Efficient graphics state management
Comprehensive lighting and texturing
Simplified window creation and management
Immediate mode graphics
Display list graphics
Integrated 2D and 3D text display functions
A comprehensive set of math routines
Intersection detection and reporting
Color table utilities
Windowing and video channel management utilities
Asynchronous filesystem I/O
Shared memory allocation facilities
High-resolution clocks and video-interval counters
Multiple windows per graphics pipeline
Multiple display channels and video channels per window
Hierarchical scene graph construction and real-time editing
Multiprocessing (parallel simulation, intersection, cull, draw processes, and asynchronous database management)
System stress and load management
Asynchronous database paging
Level-of-detail model switching, with fading or morphing
Rapid culling to the viewing frustum
Intersections and database queries
Dynamic and static coordinate systems
Shadows and spotlights
Visual simulation features
Light points, both raster and calligraphic
Sophisticated fog and haze control
Landing light capabilities
Produces optimized OpenGL Performer data structures.
Tessellates input polygons including concave polygons and recombines triangles into high-performance meshes.
Automatically shares state structures between geometry when possible.
Produces a scene graph containing optimized pfGeoSets and pfGeoStates.
Converts pfGeoSets to more efficient pfGeoArrays.
Reading and writing XML files
Specifying complex display configuration (pipes, windows, and channels) from a file or through API calls
Tracking mouse and keyboard input
Setting up user interaction with 3D scene elements
Managing multiple scene graphs (worlds)
Managing multiple camera positions (views)
Extending program functionality using program modules
Imports display-configuration information from files using the OpenGL Multipipe SDK configuration file format.
Configures OpenGL Performer pipes, windows, and channels according to the configuration file specifications.
libpf provides a pipelined multiprocessing model for implementing visual simulation applications. The critical path pipeline stages are:
The application (APP) stage updates and queries the scene. The CULL stage traverses the scene and adds all potentially visible geometry to a special libpr display list, which is then rendered by the DRAW stage. Rendering pipelines can be split into separate processes to tailor the application to the number of available CPUs, as shown in Figure 2-2.
An application might have multiple rendering pipelines drawing to multiple graphics pipelines with separate processes. The CULL task of the rendering pipeline can itself be multithreaded.
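The pipelining just described can be sketched as a toy software model (the names FramePipeline and pipeline_tick are invented for illustration and are not OpenGL Performer API): at each frame interval, every stage hands its result downstream, so with three stages a frame entered by the APP stage is drawn two intervals later. Throughput stays at one frame per interval while latency grows with the number of stages.

```c
#include <assert.h>

#define STAGES 3   /* APP, CULL, DRAW */

/* Toy frame pipeline: slot[i] holds the frame number currently being
 * processed by stage i, or -1 while that stage is still empty. */
typedef struct {
    int slot[STAGES];
} FramePipeline;

void pipeline_init(FramePipeline *p) {
    for (int i = 0; i < STAGES; i++)
        p->slot[i] = -1;
}

/* Advance one frame interval: the APP stage accepts new_frame, every
 * stage passes its work downstream, and the function returns the frame
 * number the DRAW stage renders this interval (-1 while filling). */
int pipeline_tick(FramePipeline *p, int new_frame) {
    for (int i = STAGES - 1; i > 0; i--)
        p->slot[i] = p->slot[i - 1];
    p->slot[0] = new_frame;
    return p->slot[STAGES - 1];
}
```

Splitting the stages into processes, as libpf does, follows the same pattern, except the handoff happens through shared memory rather than an array shift.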
OpenGL Performer provides additional, asynchronous stages for various computations:
INTERSECTION—intersects line segments with the database for things like collision detection and line-of-sight determination, and may be multithreaded.
COMPUTE—for general, asynchronous computations.
DATABASE—for asynchronously loading files and adding to or deleting files from the scene graph.
Multiprocess operation is largely transparent because OpenGL Performer manages the difficult multiprocessing issues for you, such as process timing, synchronization, and data exclusion and coherence.
For more information about multiprocessing stages, see Chapter 11, “Multiprocessing”.
libpf provides software constructs to facilitate visual database rendering. A pfPipe is a rendering pipeline that renders one or more pfChannels into one or more pfPipeWindows. A pfChannel is a view into a visual database, equivalent to a viewport, within a pfPipeWindow.
OpenGL Performer is designed to run at a fixed frame rate specified by the application. OpenGL Performer measures graphics load and uses that information to compute a stress value. Stress is applied to the model's level of detail to reduce scene complexity when nearing graphics overload conditions.
OpenGL Performer supports multiple pfChannels on a single pfPipeWindow, multiple pfPipeWindows on a single pfPipe, and multiple pfPipes per machine for multichannel, multiwindow, and multipipe operation. Frame synchronization between channels and between the host application and the graphics subsystem is provided. This also supports simulations that display multiple simultaneous views on different hardware displays.
A visual database is a graph of nodes with a pfScene node as its root. A pfScene is viewed by a pfChannel, which in turn is culled and drawn by a pfPipe. Scenes are typically, but not necessarily, constructed by the application at database loading time. OpenGL Performer supplies sample source code that shows how to construct a scene from several popular database formats; see OpenGL Performer Programmer's Guide for more information.
pfScene—Root node of a visual database
pfGroup—Branch node, which may have children
pfSCS—Static coordinate system
pfDCS—Dynamic coordinate system
pfLayer—Coplanar geometry node
pfLOD—Level-of-detail selection node
pfSwitch—Select among children
pfSequence—Sequenced animation node
pfPartition—Collection of geometry organized for efficiency
pfBillboard—Geometry that rotates to face the viewpoint
pfText—Geometry based upon pfFont and pfString
pfASD—Active Surface Definition for morphing geometry and continuous level of detail (LOD) measurement
pfLightSource—User-manipulatable lights that support high-quality spotlights and shadows
Culling the scene to the visible portion in the viewing frustum.
Comprehensive, user-directed database intersections.
Flattening modeling transformations for improved CULL, intersection, and rendering performance.
Cloning a database subgraph to obtain model instancing, which shares geometry but not articulations.
Deletion of scene-graph components.
Printing for debugging purposes.
The application can direct and customize traversals on a per-node basis through the use of identification masks and function callbacks.
libpf provides an environmental model called a pfEarthSky, consisting of ground and sky polygons, which efficiently clears the viewport before rendering the scene. Atmospheric effects such as ground fog, haze, and clouds are included.
Sequenced animations, using pfSequence nodes, allow the application to efficiently render complex geometry sequences that are not easily modeled otherwise. You can think of animation sequences as a series of "flip cards," where the application controls which card is shown, and for how long.
Active Surface Definition (pfASD) is a library that handles real-time surface meshing and blending in a multiprocessing and multichannel environment. The pfASD approach models terrain as a single, connected surface rather than a collection of patches.
A pfASD surface contains several hierarchical level of detail (LOD) meshes where one level encapsulates a coarser level of detail than the next. When rendering a pfASD surface, an evaluation function selects polygons from the appropriate LODs, and constructs a valid meshing to best approximate a real terrain. An evaluation function, for example, might be based on distance.
Unlike existing LOD schemes, pfASD selects triangles from many different LODs and combines them into a final surface that transitions smoothly between LODs without cracks. This feature lets a fly-through over a surface use polygons from higher LODs for drawing nearby portions of the surface in combination with polygons from low LODs that represent distant portions of the surface.
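One way to picture the smooth transition is per-vertex morphing between a coarse and a fine LOD. This sketch (with invented names; it is not the pfASD API) blends a vertex by an evaluation parameter t in [0,1], which might, for example, be derived from range to the eyepoint:

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* Blend a vertex between its position in a coarse LOD (t = 0) and a
 * finer LOD (t = 1).  A continuous blend like this is what avoids the
 * visible "pop" of a hard LOD switch. */
Vec3 morph_vertex(Vec3 coarse, Vec3 fine, float t) {
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    Vec3 v;
    v.x = coarse.x + t * (fine.x - coarse.x);
    v.y = coarse.y + t * (fine.y - coarse.y);
    v.z = coarse.z + t * (fine.z - coarse.z);
    return v;
}
```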
Many graphics applications are limited by CPU overhead in sending graphics commands to the Geometry Pipeline. A pfGeoSet (or pfGeoArray) is a collection of like primitives, such as points, lines, triangles, or triangle strips. pfGeoSets use tuned rendering loops to eliminate this CPU bottleneck.
OpenGL Performer optimizes graphics library performance by managing state changes, and provides functions to control aspects of the graphics library state such as lighting, texture, and transparency. These functions operate in both immediate and libpr display-list mode for direct mode changes, as well as for mode caching.
Other state functions such as push, pop, and override allow extensive control of graphics state.
A pfState is an encapsulation of graphics state, such as lighting, texturing, and fog—the state settings for a graphics context. Loading a pfState ensures that the graphics pipeline is configured appropriately, regardless of previous graphics state. pfGeoStates describe the state of the geometry in pfGeoSets or pfGeoArrays, and are used for simplifying and accelerating graphics state management.
OpenGL Performer supports special libpr display lists. They do not use graphics library objects, but rather a simple token/data mechanism that does not cache geometry data. These display lists cache only libpr state and rendering commands. They also support function callbacks to allow applications to perform special processing during display list rendering. Display lists can be reused and are therefore useful for multiprocessing producer/consumer situations in which one process generates a display list of the visible scene, while another one renders it. Note that you can also use OpenGL display lists in OpenGL Performer applications.
Functions are provided to perform intersections of segments with cylinders, spheres, boxes, planes, and geometry. Intersection functions for spheres, cylinders, and frustums are also provided.
OpenGL Performer supports global color tables that can define the colors used by pfGeoSets and pfGeoArrays. You can use color tables for special effects such as infrared lighting, and you can switch them in real time.
Light points, defined by the pfLPointState state object, can simulate highly emissive objects such as runway lights, approach lights, strobes, beacons, and street lights. The size, direction, shape, color, and intensity of these lights can be controlled.
Calligraphic extensions to pfLPointState provide a means of displaying exceptionally bright light points on non-raster display systems.
For more information about pfLPointState, see Chapter 16, “Light Points,” in the OpenGL Performer Programmer's Guide.
OpenGL Performer is an object-oriented API. Basic object functions, such as creation, deletion, and printing, are inherited from pfObject. Basic memory management is done through pfMemory.
A simple nonblocking file access method is provided to allow applications to retrieve file data during real-time operation.
OpenGL Performer includes routines to allocate memory from the application process heap or from shared memory arenas. Shared memory arenas must be used when multiple processes need to share data. The application can create its own shared memory arenas or use pfDataPools. pfDataPools are shared arenas that can be shared by multiple processes. Applications can allocate blocks of memory within pfDataPools, which can be individually locked and unlocked to provide mutual exclusion between unrelated processes.
OpenGL Performer includes high-resolution clock and video interval counter routines. pfGetTime() returns the current time at the highest resolution that the hardware supports. Processes can either share synchronized time values with other processes, or have their own individual clocks.
OpenGL Performer provides window-system-independent window routines to allow greater portability of applications. For information about these window routines, see Chapter 11, “Windows,” in the OpenGL Performer Programmer's Guide.
For sample programs involving windows and input handling on IRIX and Linux systems, see the following directories:
On Windows systems, see these directories:
Although OpenGL Performer does not define a file format, it does provide sample source code for importing numerous other database formats into OpenGL Performer's run-time structures. Figure 2-3 shows how databases are imported into OpenGL Performer: first, a user creates a database with a modeling program, and then an OpenGL Performer-based application imports that database using one of the many importing routines.
OpenGL Performer routines then manipulate and draw the database in real time.
Scene graphs can also be generated automatically by loaders with built-in scene-graph generation algorithms. The “sponge” loader is an example of such automatic generation; it builds a model of the Menger (Sierpinski) Sponge, without requiring an input file.
libpfdu is a database utilities library that provides helpful functions for constructing optimized OpenGL Performer data structures and scene graphs. It is mainly used by database loaders, which read an external file format containing 3D geometry and graphics state and load it into OpenGL Performer-optimized, run-time-only structures. Such utilities often prove very useful; most modeling tools and file formats represent their data in structures that correspond to the way users model data. However, these data structures are often at odds with efficient OpenGL Performer run-time structures.
libpfdu contains many utilities, including DSO support for database loaders and their modes, and file path support. The heart of libpfdu is the OpenGL Performer database builder, a tool that accepts collections of geometry and graphics state specified in immediate mode and returns optimized OpenGL Performer data structures.
Geometric primitives with their corresponding graphics state are sent one at a time to the builder. When the builder has received all the data, the builder can return optimized OpenGL Performer data structures, which can be used as a part of a scene graph. The builder hashes geometry into different bins, based on the attribute binding types and associated graphics state of the geometry. The builder also keeps track of graphics state elements, such as textures, materials, light models, and fog, and shares state elements whenever possible.
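The state-sharing idea can be sketched as a pool of unique state records (GfxState and share_state are invented for illustration; the real builder hashes far richer state, including textures, materials, light models, and fog): submitting an identical state a second time returns the existing record, so many geometries end up pointing at one shared state.

```c
#include <assert.h>
#include <string.h>

/* A deliberately tiny stand-in for a graphics state record. */
typedef struct { int texture_id; int material_id; int fog_on; } GfxState;

#define MAX_STATES 64
static GfxState pool[MAX_STATES];
static int pool_count = 0;

/* Return the pool index for this state, adding it only if unseen.
 * Identical states therefore share one record. */
int share_state(const GfxState *s) {
    for (int i = 0; i < pool_count; i++)
        if (memcmp(&pool[i], s, sizeof *s) == 0)
            return i;               /* identical state: share it */
    pool[pool_count] = *s;
    return pool_count++;
}
```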
For each pfGeoSet, the builder creates a pfGeoState (OpenGL Performer's encapsulated state primitive), which has been optimized to share as many attributes as possible with other pfGeoStates being built (and possibly with the default pfGeoState that can be attached to a channel with pfChanGState()).
Having created all of these primitives (pfGeoSets and pfGeoStates), the builder places them in a leaf node (pfGeode), and optionally creates a spatial hierarchy (for increased culling efficiency) by running the new database through a spatial breakup utility function, which is also contained in libpfdu.
Note: The builder allows the user to extend the notion of a graphics state by registering callback functionality through the builder API, and then treating this state or functionality like any other OpenGL Performer state or mode (although such uses of the builder are slightly more complicated).
The library libpfv supports the following features:
Reading and writing XML files
Specifying complex display configuration (pipes, windows, and channels) from a file or through API calls
Tracking mouse and keyboard input
Setting up user interaction with 3D scene elements
Managing multiple scene graphs (worlds)
Managing multiple camera positions (views)
Extending program functionality using program modules
The principal class in libpfv is the pfvViewer. It allows complex multiworld and multiview applications to be implemented in a modular fashion, enabling individual features to be encapsulated in configurable, reusable modules.
Loading geometry into a pfvViewer world
Picking geometry under the mouse pointer
Manipulating geometry (rotating, translating, scaling, deleting)
Navigating through a world using mouse and keyboard controls
Controlling the render style of models
Setting up colorful earth and sky backgrounds
Displaying 2D images in overlay
Saving snapshots of the rendered images
Smoothly transitioning from one world to another
Collecting and displaying statistics
A typical OpenGL Performer program starts with defining how many pipes, windows, and channels it requires. This code has to change every time you target the application at a new hardware configuration. The software product OpenGL Multipipe SDK solves this problem by providing a file format for specifying different display configurations. Loading such a configuration file determines the pipe/window/channel configuration that the program uses.
The library libpfmpk facilitates importing configuration files that use the OpenGL Multipipe SDK configuration file format. This library configures the OpenGL Performer application using the configuration file specifications. As a result, display configuration becomes easier and quicker to change.
The X Window System is a network-based, hardware-independent window system for use with bitmapped graphics displays. In the X client/server model, an X server running in the background handles input and output, and informs client applications when various events occur. A special client, the window manager, places windows on the screen, handles icons, and manages the titles and borders of windows.
With the pfWindow functions that OpenGL Performer provides, you do not need to know X or IRIS IM to use windows. However, you might want to integrate pfWindows with a Motif application or have a pfWindow use a designated Motif window.
If you have an IRIS Performer application that uses IRIS GL, you can port it to use OpenGL with minimal work. Most of what you need to do is port the window- and event-handling to use X. OpenGL does not have window or event routines. The OpenGL Porting Guide provides more information on porting from IRIS GL to OpenGL, and the sample applications distributed with OpenGL Performer provide many examples of programs that compile and run with either IRIS GL or OpenGL.
Most of the differences between IRIS GL and OpenGL are transparent to a developer using OpenGL Performer. The most significant difference is in window and input handling. These differences are hidden by pfWindows, which provide a GL-independent windowing layer. Graphics rendering and state calls made through the OpenGL Performer API are also GL-independent.
You will notice differences between IRIS GL and OpenGL when direct GL calls are used outside of the OpenGL Performer interface. There are relatively few circumstances in which your OpenGL Performer-based program needs to call graphics library routines directly. Making outside calls usually happens only in DRAW callbacks. For more information, see “Customizing OpenGL Performer Traversals” in Chapter 6.
For information on compiling and linking OpenGL Performer applications, see the OpenGL Performer Programmer's Guide.
Computers have generated interactive simulated virtual environments—usually for training or entertainment—since the 1960s. Computer image generation (CIG) has not always been a readily available technique, and many special-purpose approaches to visual simulation have been tried. For example, the NASA Kennedy Space Center newspaper Spaceport News described the Apollo 7 astronaut training visual simulator this way on March 28, 1968:
Each simulator consists of an instructor's station, crew station, computer complex, and projectors to simulate the stages of a flight. Engineers serve as instructors, instruments keeping them informed at all times of what the pilot is doing. Through the windows, infinity optics equipment duplicates the scenery of space. The main components of a typical visual display for each window includes a 71-centimeter fiber-plastic celestial sphere embedded with 966 ball bearings of various sizes to represent the stars from the first through fifth magnitudes, a mission-effects projector to provide earth and lunar scenes, and a rendezvous and docking projector which functions as a realistic target during maneuvers.
Visual simulation systems have advanced significantly due to advances in hardware and software, and to a greater understanding of human perception. For example, researchers simulated the Mars Sojourner rover, the land rover that explored Mars, with an OpenGL Performer application.
This section outlines the major requirements of current visual simulation systems. These requirements fall into six major groups, each covering several related topics:
Low latency image generation
Reducing perceived latency (the time between input and response) requires reducing actual latency and increasing the frame rate. You cannot avoid latency, but you can minimize its effects by attention to hardware design and software structure.
A fixed frame rate is essential to realistic visual simulation. Achieving this goal, however, is very difficult because it requires using a fixed graphics resource to view images of varying complexity. To design for constant frame rates you must understand the required compromises in hardware, database, and application design.
Rich scene content
Customers nearly always want complex, detailed, and realistic images, without sacrificing high update rates and low system cost. Thus, providing interesting and natural scenes is usually a matter of tricks and halfway measures; a naive implementation would be prohibitively expensive in terms of machine resources.
Texture processing is arguably the most important incremental capability of real-time image generation systems. Sophisticated texture processing is the factor that most clearly separates the “major league” from the “minor league” in visual simulation technology.
Real-time character animation in entertainment systems is based on features and capabilities originally developed for high-end flight simulators. Creation of compelling entertainment experiences hinges on the ability to provide engaging synthetic characters.
One of the key notions of real-time image generation systems is the fact that they are often programmed largely by their databases. This programming includes the design and specification of several autonomous actions for later playback by the visual system.
The issue of latency is critical to comfortable perception of moving images under interactive control. In the real world, the images that reach our brains move smoothly and instantly in reaction to our own motion. In simulated visual environments, such motion is usually depicted as a discrete series of images generated at fixed time intervals. Furthermore, the image resulting from a motion often is not presented until several frame intervals have elapsed, creating a very unnatural latency. A typical human reaction to such delayed images is nausea, commonly known as simulator sickness.
In visual simulation the terms “latency” and “transport delay” refer to the time elapsed between stimulus and response. Confusion can enter the picture because there are several important latencies.
The most general measure is the total latency, which measures the time between user input (such as a pilot moving a control) and the display of a new image computed using that input. For example, if the pilot of a flight simulator initiates a sudden roll after a smooth level flight, how long does it take for a tilted horizon to appear?
The total time required is the sum of latencies of components within the processing path of the simulation system. The basic component latencies include the time required for each of these tasks:
Input device measurement and reporting
Vehicle dynamics computation
Image generation computation
Video display system scan-out
The latency that matters to the user of the system is the total time delay. This overall latency limits the sense of realism the system can provide.
Another measure combines the latencies due to image generation and video display into the visual latency. Questions of latency in visual simulation applications usually refer to either total latency or visual latency. The application developer selects the scope of the application, and then the latency is decided by the choice of image generation mode, frame rate, and video output format.
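The component latencies listed above can be tallied in a small budget structure. The names are invented and the millisecond figures in the test are purely illustrative, not measurements of any real system; the point is only that visual latency covers image generation plus video scan-out, while total latency adds input sampling and vehicle dynamics on top.

```c
#include <assert.h>

typedef struct {
    double input_ms;     /* input device measurement and reporting */
    double dynamics_ms;  /* vehicle dynamics computation */
    double image_ms;     /* image generation computation */
    double video_ms;     /* video display system scan-out */
} LatencyBudget;

/* Visual latency: image generation plus video display scan-out. */
double visual_latency(const LatencyBudget *b) {
    return b->image_ms + b->video_ms;
}

/* Total latency: the sum of all component latencies in the path. */
double total_latency(const LatencyBudget *b) {
    return b->input_ms + b->dynamics_ms + visual_latency(b);
}
```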
In many situations the perceived latency can be much less than the actual latency. This is because the human perception of latency can be reduced by anticipating user input. This means that reducing perceived latency is largely a matter of accurate prediction.
Interactive graphics applications—and immersive virtual environments in particular—depend on a consistent frame rate to be acceptable to human observers. Human perceptions are attuned to the continuous update of natural scenes but seem tolerant of discrete images presented at rates above 15 frames per second—as long as the frame rate is consistent. When latency grows large or frame rates waver, headaches and nausea often result.
Attaining a constant frame rate for a constant scene is easy. However, it is difficult to maintain a constant frame rate through wildly varying scene content and complexity. Designers of image generation systems use several approaches to achieve a constant, programmer-selected frame rate.
The first and most basic method is to draw all scenes in such a simple way that they can be viewed from any location without altering the chosen frame rate. This conservative approach is much like always driving in low gear just in case a hill might be encountered. Implementing it simply means identifying and planning for the worst case situation of graphics load. Although this may be reasonable in some cases, in general it is wasteful of system resources.
A second approach is to discard (cull) database objects that are positioned completely outside the viewing frustum. This requires a pass through the visual database to compare scene geometry with the current frame's viewing volume. Any objects completely outside the frustum can be safely discarded. Testing and culling a complex object requires less time than drawing it.
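View-volume culling of a bounding sphere against frustum planes can be sketched like this (illustrative code, not the libpf CULL implementation): each plane is stored with its normal pointing into the frustum, and an object whose bounding sphere lies entirely behind any plane can be safely discarded.

```c
#include <assert.h>

/* Plane n.p + d >= 0 holds for points inside the frustum. */
typedef struct { double nx, ny, nz, d; } Plane;

/* Return 1 if a bounding sphere is at least partly inside every plane
 * (potentially visible), or 0 if it is entirely behind some plane. */
int sphere_visible(const Plane *planes, int nplanes,
                   double cx, double cy, double cz, double radius) {
    for (int i = 0; i < nplanes; i++) {
        double dist = planes[i].nx * cx + planes[i].ny * cy +
                      planes[i].nz * cz + planes[i].d;
        if (dist < -radius)
            return 0;   /* completely outside this plane: cull */
    }
    return 1;
}
```

A real culler tests hierarchical bounding volumes so that one rejection discards an entire subtree, which is why testing is so much cheaper than drawing.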
When simple view-volume culling is insufficient to keep scene complexity constant, it may be necessary to compute the potential visibility of each object during the culling process by considering other objects within the scene that may occlude the test object. High-performance image generation systems use comparable occlusion culling tests to reduce the polygon filling complexity of real-time scenes.
Several tricks and techniques can give the impression of rich scene content without actually requiring large quantities of complex geometry.
Graphics systems can display only a finite number of geometric primitives per frame at a specified frame rate. Because of these limitations, the fundamental problem of database construction for real-time simulation is to maximize visual cues and minimize scene complexity. With level of detail selection, one of several similar models of varying complexity is displayed based on how visible the object is from the eyepoint. Level of detail selection is one of the best tools available for improving display performance by reducing database complexity. For more detailed information, see Chapter 15, “Optimizing Performance”.
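Range-based LOD selection, including a stress scale of the kind libpf applies, might be sketched as follows (select_lod and the ranges convention are invented for illustration, not the pfLOD API): the model index is chosen from a table of switch ranges, and a stress value above 1.0 pushes the selection toward coarser models.

```c
#include <assert.h>

/* ranges holds nlods + 1 boundaries: model i is drawn while the
 * (stress-scaled) distance is below ranges[i + 1].  Returns -1 when
 * the object is beyond the last range and should not be drawn. */
int select_lod(const float *ranges, int nlods, float dist, float stress) {
    float d = dist * stress;          /* stress > 1 switches LODs sooner */
    for (int i = 0; i < nlods; i++)
        if (d < ranges[i + 1])
            return i;
    return -1;
}
```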
Many of the objects in databases can be considered to have one or more axes of symmetry. Trees, for example, tend to look nearly the same from all horizontal directions of view. An effective approach to drawing such objects with less graphic complexity is to place a texture image of the object onto a single polygon and then rotate the polygon during simulation to face the observer. These self-orienting objects are commonly called billboards. For information on billboards, see Chapter 15, “Optimizing Performance”.
Animated events in simulation environments often have a sequence of stages that follow each other without variation. Where this is the case, you can often define this behavior in the database during database construction. The behavior can be implemented by the real-time visual system without intervention by the application process.
An example of this would be illuminated traffic signals in a driving simulator database. There are three mutually exclusive states of the signal, one with a green lamp, one with the amber, and one with the red. The duration of each state is known and can be recorded in the database. With these intervals built into the database, simulations can be performed without requiring the simulation application to cycle the traffic signal from one state to the next.
The simplest type of animation sequence is known as a geometry movie: a sequence of exclusive objects that are selected for display based on elapsed time from a trigger event. Alternatively, advancement can be tied to frames rather than time, or triggered by specific events within the database.
For further information on animation, see the section, “pfSequence Nodes” in the OpenGL Performer Programmer's Guide.
Antialiased image generation can have a significant effect on image quality in visual simulation. The difference, though subtle in some cases, strongly affects the sense of reality and the suitability of simulators for training. Military simulators often center on the goal of detecting and recognizing small objects on the horizon. Aliased graphics systems produce a “sparkle” or “twinkle” effect when drawing small objects. This artifact is unacceptable in these training applications because the student will come to subconsciously expect such effects to announce the arrival of an opponent, and this unfulfilled expectation can prove fatal.
The idea of antialiasing is for image pixels to represent an average or other convolution of the image fragments within the area of a pixel rather than simply be a sample taken at the center of the pixel. This idea is easily stated but difficult to implement while maintaining high performance.
InfiniteReality continues the RealityEngine antialiasing approach known as multisampling. In this system, each pixel is considered to be composed of multiple subpixels. Multisampling stores a complete set of pixel information for each of the several subpixels. This includes such information as color, transparency, and (most importantly) a Z-buffer value.
Providing multiple independent Z-buffered subpixels (the so-called subpixel Z-buffer) per image pixel allows opaque polygons to be drawn in an arbitrary order because the subpixel Z-comparison will implement proper visibility testing. Converting the multiple color values that exist within a pixel into a single result can either be done as each fragment is rendered into the multisampling buffer or after all polygons have been rendered. For the best visual result, transparent polygons are rendered after all opaque polygons have been drawn.
The most powerful incremental feature of image generation systems beyond the basic ability to draw geometry is texture mapping, the ability to apply textures to surfaces. Textures are synthetic or photographic images that are displayed in place of the surfaces of geometric primitives, modifying their surface appearance, reflectance, or shading properties. For each point on a texture-mapped surface, a corresponding pixel from the texture map is chosen for display, giving the appearance of warping the texture into the shape of the object's surface. With the InfiniteReality graphics subsystem, you can use very large textures, called cliptextures (up to 8Mx8M texels).
For more information about texture mapping and cliptextures, see Chapter 8, “Geometry,” and Chapter 10, “ClipTextures,” in the OpenGL Performer Programmer's Guide.
The most obvious use of texture mapping is to generate the appearance of surface details on geometric objects, without making those details into actual geometry. One valuable and widely used addition to these texture processing features is the concept of partly transparent textures. An example of this is the use of billboards (see “Rendering Slices of Shapes” in Chapter 15). For example, to display a tree using textures and billboards, you would create a texture map of a tree (from a photograph, perhaps), marking the background (any part of the texture that does not show part of the tree) as transparent. Then, using a flat rectangle for the billboard, map the texture to the billboard; the transparent regions in the texture become transparent regions of the billboard, allowing other geometry to show through.
You can use textures to simulate reflections (usually in a curved surface) of a 3D environment such as a room by using the viewing vector and the surface normal of the geometry to compute the index of each screen pixel into the texture image. The texture used for this process, the environment map, must contain images of the environment to be reflected.
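The lookup direction for an environment map comes from reflecting the view vector about the surface normal, R = V - 2(V·N)N. The sketch below shows just that vector arithmetic; in practice the graphics hardware evaluates it per pixel:

```c
/* Reflect view vector v about unit surface normal n: r = v - 2(v.n)n.
 * The resulting direction r is used to index the environment map. */
void reflect_vector(const float v[3], const float n[3], float r[3])
{
    float d = 2.0f * (v[0]*n[0] + v[1]*n[1] + v[2]*n[2]);
    r[0] = v[0] - d * n[0];
    r[1] = v[1] - d * n[1];
    r[2] = v[2] - d * n[2];
}
```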
You can use the environment mapping technique to implement lighting equations by noting that the environment map image represents the image seen in each direction from a chosen point. Interpreting this image as the illumination reflected from an incident light source as a function of angle, the intensities rather than the colors of the environment map can be used to scale the colors of objects in the database in order to implement complex lighting models (such as Phong shading) with high performance. You can use this method to provide elaborate lighting environments with systems in which per-pixel shading calculations would not otherwise be available.
You can also use texture mapping to project images, such as aircraft landing lights and vehicle headlights, into a scene. These projective texture techniques, when combined with the ability to use Z-buffer contents to texture images, allow the generation of real-time images with true 3D cast shadows.
The shared-memory, multiprocessed system architecture and high-bandwidth graphics subsystems of the SGI product line make these systems ideal for real-time, high-quality character animation. Vertex positions, colors, normal vectors, and texture coordinates can all be interpolated between two versions of a model, a process known as morphing, using the OpenGL Performer pfEngine and pfFlux objects. You can also apply more complex functions between multiple versions of a model. You can use morphing to fill in motion between a start position and an end position for an object or—in its fully generalized form—parts of an animated character (such as facial expressions).
For more information about morphing, see Chapter 14, “Dynamic Data,” in the OpenGL Performer Programmer's Guide.
Simple pair-wise morphing is not sufficient to give animated characters life-like emotional expressions and behavior. You need the ability to model multiple expressions in an orthogonal manner and then combine them with arbitrary weighting factors during real-time simulation.
One current approach to human facial animation is to build a geometric model of an expressionless face, and then to distort this neutral model into an independent target for each desired expression. Examples include faces with frowns and smiles, faces with eye gestures, and faces with eyebrow movement. Subtracting the neutral face from the smile face gives a set of smile displacement vectors and increases efficiency by allowing removal of null displacements. Completing this process for each of the other gestures yields the input needed by a real-time system: a base or neutral model and a collection of displacement vector sets.
In actual use, you would process the data in a straightforward manner. You would specify the weights of each source model (or corresponding displacement vector set) before each frame is begun. For example, a particular setting might be “62% grin and 87% arched eyebrows” for a clownish physiognomy. The algorithmic implication is simply a weighted linear combination of the indicated vectors with the base model.
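The weighted combination itself is a small loop over the vertex data. This sketch uses hypothetical names; in OpenGL Performer such per-frame evaluation would typically be driven through pfEngine and pfFlux objects rather than hand-written code:

```c
/* out = base + sum_i weight[i] * disp[i], applied over nverts xyz triples.
 * base and out each hold 3 * nverts floats; disp holds nsets displacement
 * vector sets of the same size, one per modeled expression. */
void blend_expressions(const float *base, int nverts,
                       const float *const *disp, const float *weight,
                       int nsets, float *out)
{
    int v, i;
    for (v = 0; v < nverts * 3; ++v) {
        float p = base[v];
        for (i = 0; i < nsets; ++i)
            p += weight[i] * disp[i][v];   /* add each weighted gesture */
        out[v] = p;
    }
}
```

With null displacements removed as described above, each displacement set would instead be a sparse list of (vertex index, vector) pairs, reducing the per-frame work.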
These processing steps are made more complicated in practice by the performance-inspired need to execute the operations in a multiprocessing environment. Parallel processing is needed because users of this technology:
Need to perform hundreds to thousands of interpolations per character.
Desire several characters in animation simultaneously.
Prefer animation update rates of 30 or 60 Hertz.
Generate multiple independent displays from a single system.
Together, these demands can require significant resources, even when only vertex coordinates are interpolated. When colors, normals, and texture coordinates are also interpolated, and especially when shared vertex normals are recomputed, the computational complexity is correspondingly increased.
The computational demands can be reduced when the rate of morphing is less than the image update rate. You can often improve the quality of the interpolated result by applying a non-linear interpolation operation, such as the eased cosine curves and splines found useful in other applications of computer animation.
A successful concept in computer-assisted 2D animation systems is the notion of skeleton animation. With this method you interpolate a defining skeleton and then position artwork relative to the interpolated skeleton. In essence, the skeleton defines a deformation of the original 2D plane, and the original image is transformed by this mapping to create the interpolated image. This process can be extended directly into the 3D domain of real-time computer image generation systems and used for character animation in entertainment applications.
The techniques of generalized morphing and skeleton animation can be used together to create advanced entertainment applications with life-like animated characters. One application of the two methods is to first perform a generalized "betweening" operation that builds a character with the desired pre-planned animation aspects, such as eye or mouth motion, and then to set the matrices or other transformation operators of the skeleton transformation operation to represent hierarchical motions such as those of arms or legs. The result of these animation operations is a freshly posed character ready for display.
Several companies produce database modeling tools and example databases that are well integrated with OpenGL Performer. A selection of these products is included and described in the Friends of Performer distribution. The Friends of Performer gift software is located in the /usr/share/Performer/friends directory. These tools have been built to address many aspects of the database construction process. Popular systems include tools that allow interactive design of geometry, easy editing and placement of texture images, flexible file-based instancing, and many other operations. Special-purpose tools also exist to aid in the design of roadways, instrument panels, and terrain surfaces.
The reward of building complex databases that accurately and efficiently represent the desired virtual environment is great, however: real-time image generation systems are only as good as the environments they explore.
In recognition of the ingenuity of this system, OpenGL Performer includes a star database with the locations and magnitudes of the 3010 brightest stars as seen from Earth. View the file /usr/share/Performer/data/3010.star with perfly while contemplating the engineering effort required to accurately embed those 966 ball bearings.