A pfEarthSky is a special set of functions that clears a pfChannel's viewport efficiently and implements various atmospheric effects. A pfEarthSky is attached to a pfChannel with pfChanESky(). Several pfEarthSky definitions can be created, but only one can be in effect for any given channel at a time.
A pfEarthSky can be used to draw a sky and horizon, to draw sky, horizon, and ground, or just to clear the entire screen to a specific color and depth. The colors of the sky, horizon, and ground can be changed in real time to simulate a specific time of day. At the horizon boundary, the ground and sky share a common color, so that there is a smooth transition from sky to horizon color. The width of the horizon band can be defined in degrees.
A pfChannel's earth-sky model is automatically drawn by OpenGL Performer before the scene is drawn, unless the pfChannel has a draw callback set with pfChanTravFunc(). In that case it is the application's responsibility to clear the viewport; within the callback, calling pfClearChan() draws the channel's pfEarthSky.
Example 6-1 shows how to set up a pfEarthSky.
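Example 6-1 itself is not reproduced in this excerpt; the following minimal sketch shows the typical setup sequence. The color values and ground height are illustrative only, and `chan` is assumed to be an existing pfChannel.

```c
/* Create an earth-sky and draw sky, horizon, and ground */
pfEarthSky *esky = pfNewESky();
pfESkyMode(esky, PFES_BUFFER_CLEAR, PFES_SKY_GRND);

/* Illustrative colors (see Table 6-2 for the defaults) */
pfESkyColor(esky, PFES_SKY_TOP, 0.0f, 0.0f, 0.44f, 1.0f);
pfESkyColor(esky, PFES_SKY_BOT, 0.0f, 0.4f, 0.7f, 1.0f);
pfESkyColor(esky, PFES_GRND_NEAR, 0.5f, 0.3f, 0.0f, 1.0f);

/* Ground height in world coordinates */
pfESkyAttr(esky, PFES_GRND_HT, 0.0f);

/* Make this the channel's earth-sky model */
pfChanESky(chan, esky);
```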
The complexities of atmospheric effects on visibility are approximated within OpenGL Performer using a multiple-layer sky model, set up as part of the pfEarthSky function. In this design, individual layers are used to represent the effects of ground fog, clear sky, and clouds. Figure 6-1 shows the identity and arrangement of these layers.
The lowest layer consists of ground fog, extending from the ground up to a user-selected altitude. The fog thins out with increasing altitude, disappearing entirely at the bottom of the general visibility layer. This layer extends from the top of the ground fog layer to the bottom of the cloud layer's lower transition zone, if such a zone exists. The transition zone provides a smooth transition between general visibility and the cloud layer. (If there is no cloud layer, then general visibility extends upward forever.) The cloud layer is defined as an opaque region of near-zero visibility; you can set its upper and lower boundaries. You can also place another transition zone above the cloud layer to make the clouds gradually thin out into clear air.
Set up the atmospheric simulation with the commands listed in Table 6-1.

pfNewESky()      Create a pfEarthSky.
pfESkyMode()     Set the render mode.
pfESkyAttr()     Set the attributes of the earth and sky models.
pfESkyColor()    Set the colors for earth, sky, and clear.
pfESkyFog()      Set the fog functions.
You can set any pfEarthSky attribute, mode, or color in real time. Selecting the active pfFog definition can also be done in real time. However, changing the parameters of a pfFog once they are set is not advised when in multiprocessing mode.
The default characteristics of a pfEarthSky are listed in Table 6-2.
Clear color              0.0 0.0 0.0
Sky top color            0.0 0.0 0.44
Sky bottom color         0.0 0.4 0.7
Ground near color        0.5 0.3 0.0
Ground far color         0.4 0.2 0.0
Horizon color            0.8 0.8 1.0
General fog              NULL (no fog)
Ground fog               NULL (no fog)
Cloud bottom color       0.8 0.8 0.8
Cloud top color          0.8 0.8 0.8
Transition zone bottom
Transition zone top
By default, an earth-sky model is not drawn. Instead, the channel is simply cleared to black and the Z-buffer is set to its maximum value. This default action also disables all other atmospheric attributes. To enable atmospheric effects, select PFES_SKY, PFES_SKY_GRND, or PFES_SKY_CLEAR when turning on the earth-sky model.
Clouds are disabled when the cloud top is less than or equal to the cloud bottom. Cloud transition zones are disabled when clouds are disabled.
Fog is enabled when either the general or ground fog is set to a valid pfFog. If ground fog is not enabled, no ground fog layer will be present and fog will be used to support general visibility. Setting a fog attribute to NULL disables it. See “Atmospheric Effects” for further information on fog parameters and operation.
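A hedged sketch of attaching a general-visibility fog to an earth-sky follows. The fog type, range values, and the PFES_GENERAL selector are illustrative assumptions; `esky` and `arena` are assumed to exist from earlier examples, and the pfESkyFog() man page gives the exact tokens.

```c
/* Create a pfFog and attach it as the earth-sky's general fog */
pfFog *fog = pfNewFog(arena);
pfFogType(fog, PFFOG_PIX_EXP2);    /* exponential per-pixel fog */
pfFogRange(fog, 0.0f, 20000.0f);   /* onset and opaque distances */

pfESkyFog(esky, PFES_GENERAL, fog);
```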
The earth-sky model is an attribute of the channel and thus accesses information about the viewer's position, current field of view, and other pertinent information directly from pfChannel. To set the pfEarthSky in a channel, use pfChanESky().
A pfVolFog is a class that uses a multi-pass algorithm to draw the scene with a fog that has different densities at different locations. It extends the basic layered fog provided by pfEarthSky and introduces a new type of fog: a patchy fog. A patchy fog has a constant density in a given area. The boundaries of this area can be defined by an arbitrary three-dimensional object or by a set of objects.
A layered fog changes only with elevation; its density and color are uniform at a given height. It is defined by a set of elevation points, each specifying a fog density and, optionally, a fog color at the point's elevation. The density and color between two neighboring points are linearly interpolated.
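The interpolation described above can be sketched in plain C. This illustrates the lookup, not OpenGL Performer's actual implementation: given elevation points sorted by increasing height, density at an arbitrary elevation is found by linear interpolation between the two neighboring points.

```c
typedef struct { float elev, density; } FogPoint;

/* Linearly interpolate fog density at a given elevation from a
 * list of points sorted by increasing elevation.  Elevations
 * outside the range clamp to the first or last point. */
float fog_density(const FogPoint *pts, int n, float elev)
{
    int i;
    if (elev <= pts[0].elev)
        return pts[0].density;
    if (elev >= pts[n - 1].elev)
        return pts[n - 1].density;
    for (i = 1; i < n; i++) {
        if (elev <= pts[i].elev) {
            float t = (elev - pts[i - 1].elev) /
                      (pts[i].elev - pts[i - 1].elev);
            return pts[i - 1].density +
                   t * (pts[i].density - pts[i - 1].density);
        }
    }
    return pts[n - 1].density; /* not reached */
}
```

A colored layered fog interpolates the per-point colors the same way.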
Figure 6-2 illustrates the basic difference between patchy fog and layered fog.
Compared to a layered fog in pfEarthSky, a layered fog in pfVolFog has distinct advantages:
It can be specified by an arbitrary number of elevation points.
Each elevation point can have a different color associated with it.
A layered fog in pfVolFog does not depend on an InfiniteReality-specific texgen. It can also be drawn using only 2D textures to simulate the 3D texture. Thus, a layered fog in pfVolFog can be used on virtually any machine.
A pfVolFog is not part of the scene graph; it is created separately by the application process. Once created, elevation points of a layered fog can be specified by calling pfVolFogAddPoint() or pfVolFogAddColoredPoint() repeatedly. The fog initialization is completed by calling pfApplyVolFog().
```c
pfVolFog *lfog;
lfog = pfNewVolFog(arena);
pfVolFogAddPoint(lfog, elev1, density1);
pfVolFogAddPoint(lfog, elev2, density2);
pfVolFogAddPoint(lfog, elev3, density3);
pfApplyVolFog(lfog);
```
The boundary of a patchy fog is specified by pfVolFogAddNode(pfog, node), where node contains the surfaces enclosing the foggy areas. It is possible to define several disjoint areas in the same tree or by adding several different nodes. Note that each area has to be completely enclosed, and the vertices of the surfaces have to be ordered so that the front face of each surface faces outside the foggy area. The node has to be part of the scene graph for the rendering to work properly.
```c
pfVolFog *pfog;
pfNode *fogNode;
pfog = pfNewVolFog(arena);
fogNode = pfdLoadFile(filename);
pfVolFogAddNode(pfog, fogNode);
pfAddChild(scene, fogNode);
pfApplyVolFog(pfog);
```
Patchy and layered fog can be combined but only if layered fog has a uniform color; that is, it is specified using pfVolFogAddPoint() only.
The function pfApplyVolFog() initializes a pfVolFog. If at least two elevation points were defined, it initializes the data structures necessary for rendering a layered fog, including a 3D texture. Any elevation points defined afterward are ignored. If a node containing patchy fog boundaries was added prior to calling pfApplyVolFog(), a patchy fog is initialized. Since pfVolFogAddNode() only marks the parts of the scene graph that specify the fog boundaries, it is possible to add more patchy fog nodes even after pfApplyVolFog() has been called.
Table 6-3 summarizes routines for initialization and drawing of a pfVolFog.
pfNewVolFog()              Create a pfVolFog.
pfVolFogAddChannel()       Add a channel on which pfVolFog is used.
pfVolFogAddPoint()         Add a point specifying fog density at a certain elevation.
pfVolFogAddColoredPoint()  Add a point specifying fog density and color at a certain elevation.
pfVolFogAddNode()          Add a node defining the boundary of a patchy fog.
pfVolFogSetColor()         Set color of a layered fog or patchy fog.
pfVolFogSetDensity()       Set density of a patchy fog.
pfVolFogSetFlags()         Set binary flags.
pfVolFogSetVal()           Set a single attribute.
pfVolFogSetAttr()          Set an array of attributes.
pfApplyVolFog()            Initialize data structures necessary for rendering fog.
pfVolFogUpdateView()       Update the current view for all stored channels.
pfDrawVolFog()             Draw the scene with fog.
pfGetVolFogTexture()       Return the texture used by layered fog.
The attributes of a pfVolFog are listed in Table 6-4.
Fog color                 0.9, 0.9, 1
Layered fog mode          PFVFOG_LINEAR
3D texture size           64 x 64 x 64
Patchy fog mode           PFVFOG_LINEAR
Layered patchy fog mode   PFVFOG_LINEAR
The flags of a pfVolFog are listed in Table 6-5.
PFVFOG_FLAG_FORCE_2D_TEXTURE        Use 2D texture
PFVFOG_FLAG_FORCE_PATCHY_PASS       Force patchy fog passes
PFVFOG_FASTER_PATCHY_FOG            Faster patchy fog
                                    No object in fog
PFVFOG_FLAG_PATCHY_FOG_1DTEXTURE    1D texture on surface
PFVFOG_FLAG_SEPARATE_NODE_BINS      Separate node bins
PFVFOG_FLAG_DRAW_NODES_SEPARATELY   Draw nodes separately
PFVFOG_FLAG_USE_CULL_PROGRAM        Use cull programs
PFVFOG_FLAG_LAYERED_PATCHY_FOG      Use layered patchy fog
A pfVolFog needs information about the current eye position and view direction. Since this information is not directly accessible in a draw process, it is necessary to call pfVolFogAddChannel() for each channel at the beginning of the application. Whenever the view changes, the application process has to call pfVolFogUpdateView(). See programs in /usr/share/Performer/src/sample/apps/C/fogfly or /usr/share/Performer/src/sample/apps/C++/volfog on IRIX and Linux or %PFROOT%\Src\sample\apps\C\fogfly or %PFROOT%\Src\sample\apps\C++\volfog on Microsoft Windows for an example. If you do not update the view, the fog will not be rendered.
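In outline, the channel and view bookkeeping looks like the following sketch. The argument orders are assumptions based on the descriptions above; `fog` and `chan` are assumed to exist already.

```c
/* Once, at startup, in the application process */
pfVolFogAddChannel(fog, chan);

/* Every time the view changes, in the application process */
pfVolFogUpdateView(fog);
```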
If the application changes the position of the patchy fog boundaries (for example, by inserting a pfSCS, pfDCS, or pfFCS node above the fog node) or the orientation of the whole scene with respect to the up vector (for example, the use of a trackball in Perfly), the fog may not be drawn correctly.
To draw the scene with a fog, the draw process has to call pfDrawVolFog() instead of pfDraw(). This function takes care of drawing the whole scene graph with the specified fog. Expect the draw time to increase because the scene is drawn twice (three times if both patchy and layered fog are specified). In case of a patchy fog there may also be several full-screen polygons being drawn. You can easily disable the fog by not calling pfDrawVolFog().
Since boundaries of patchy fog are in the scene graph, do not use pfDraw() to draw the scene without fog; instead, use pfDrawBin() with PFSORT_DEFAULT_BIN, PFSORT_OPAQUE_BIN, and PFSORT_TRANSP_BIN.
A patchy fog needs as deep a color buffer as possible (optimally 12 bits per color component) and a stencil buffer. Use at least a 4-bit stencil buffer (1-bit is sufficient only for very simple fog objects). It may be necessary to modify your application so that it asks for such a visual.
A pfVolFog can be deleted using pfDelete(). In case of a layered fog it is necessary to delete the texture handle in a draw process. The texture is returned by pfGetVolFogTexture(). See the example in /usr/share/Performer/src/sample/apps/C/fogfly on IRIX and Linux and in %PFROOT%\Src\sample\apps\C\fogfly on Microsoft Windows.
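The cleanup described above might look as follows. This is a sketch; the return type of pfGetVolFogTexture() is assumed to be a pfTexture pointer.

```c
/* In the draw process: delete the layered-fog texture first */
pfTexture *tex = pfGetVolFogTexture(fog);
if (tex != NULL)
    pfDelete(tex);

/* Then delete the pfVolFog itself */
pfDelete(fog);
```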
This section describes how to manage the various parameters for both layered and patchy fog.
As mentioned earlier, a layered fog of a uniform color is specified by function pfVolFogAddPoint(), which sets the fog density at a given elevation. The density is scaled so that, with a fog density of 1, an object inside the fog takes on the full fog color at a distance equal to 1/10 of the diagonal of the scene bounding box. The layered fog color is set by function pfVolFogSetColor() or by calling pfVolFogSetAttr() with parameter PFVFOG_COLOR and a pointer to an array of three floats.
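As a plain-C illustration of this scaling (an interpretation of the rule above for a linear fog mode, not Performer's code): a density of 1.0 reaches the full fog color at one tenth of the scene's bounding-box diagonal, and higher densities reach it proportionally sooner.

```c
/* Fraction of fog color applied to a point at distance `dist`,
 * for a linear fog of the given density (> 0), in a scene whose
 * bounding-box diagonal is `diag`.  Clamped to [0, 1]. */
float fog_factor(float dist, float density, float diag)
{
    float full = 0.1f * diag / density; /* distance of full fog color */
    float f = dist / full;
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f;
}
```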
A layered fog of nonuniform color is specified by function pfVolFogAddColoredPoint(), which sets the fog density and the fog color at a given elevation. The color set by pfVolFogSetColor() is then ignored.
It is also possible to set the mode both for a layered and patchy fog at once by using parameter PFVFOG_MODE. The default mode is PFVFOG_LINEAR. The function of the mode parameter is equivalent to the function of the fog mode parameter of the OpenGL function glFog().
The size of the 3D texture used by a layered fog can be modified by calling pfVolFogSetAttr() with parameter PFVFOG_3D_TEX_SIZE and an array of three integer values. The default texture size is 64x64x64, but reasonable results can be achieved with even smaller sizes. The sizes are automatically rounded up to the closest power of 2. The second value should be equal to or greater than the third value. If 3D textures are not supported, a set of 2D textures is used instead (the number of 2D textures is equal to the third dimension of the 3D texture). Every time the r coordinate changes by more than 0.1, a new texture is computed by interpolating between two neighboring slices, and the texture is reloaded. The use of 2D textures can be forced by calling pfVolFogSetFlags() with the flag PFVFOG_FLAG_FORCE_2D_TEXTURE set to 1.
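The rounding rule can be illustrated in plain C (a sketch of the stated behavior, not Performer's internal code):

```c
/* Round a texture dimension up to the nearest power of 2, as
 * pfVolFog is documented to do with PFVFOG_3D_TEX_SIZE values. */
int round_up_pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```

For example, a requested size of 48 x 48 x 40 would become 64 x 64 x 64.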
Note: Once a layered fog is initialized by calling pfApplyVolFog(), changing any of the parameters described here will not affect rendering of the layered fog.
The density of a patchy fog is controlled by function pfVolFogSetDensity() or by using pfVolFogSetVal() with parameter PFVFOG_FOG_DENSITY. As in the case of a layered fog, the density of a patchy fog is scaled by 1/10 of the diagonal of the scene bounding box.
You can specify an additional density value that is added to every pixel inside or behind a patchy fog boundary using the function pfVolFogSetVal() with parameter PFVFOG_FOG_DENSITY_BIAS. This value makes a patchy fog appear denser but it may create unrealistically sharp boundaries.
The patchy fog color is set by function pfVolFogSetColor() or by calling pfVolFogSetAttr() with parameter PFVFOG_COLOR and a pointer to an array of three floats. If the blend_color extension is not available, patchy fog will be white.
It is also possible to set the mode both for a patchy and layered fog at once by using parameter PFVFOG_MODE. The default mode is PFVFOG_LINEAR.
Note: The parameters of a patchy fog can be modified at any time and they will affect the rendering of the subsequent frame.
This section describes the following topics:
The example in /usr/share/Performer/src/sample/C++/volfog on IRIX and Linux and in %PFROOT%\Src\sample\C++\volfog on Microsoft Windows illustrates the use of all these advanced features.
A layered fog can be self-shadowed—that is, the lower parts of a dense fog appear darker. Self-shadowing is enabled by setting the flag PFVFOG_FLAG_SELF_SHADOWING to 1. The fog mode should be set to PFVFOG_EXP.
When the fog has different colors at different elevations and the flag PFVFOG_FLAG_FOG_FILTER is set to 1, a secondary scattering is approximated. In this case, the color of a higher layer may affect the color of a lower layer.
If the flag PFVFOG_FLAG_DARKEN_OBJECTS is set, even the objects below a dense fog become darker. The light is assumed to come from the top.
A patchy fog can be animated by modifying the geometry of the fog nodes. When changing the content of geosets specifying the fog boundary, make sure that the geosets are fluxed and that the bounding box of each geoset is updated. In addition, function pfVolFogAddNode() has to be called every time the fog bounding box changes.
It is possible to use a different algorithm for rendering patchy fog that can handle semi-transparent surfaces better. To use this algorithm, set the flag PFVFOG_FASTER_PATCHY_FOG to 1. Some advanced features of patchy fog described in the following subsections are supported only in one of the two algorithms. In such cases, this limitation is noted.
If the flag PFVFOG_FASTER_PATCHY_FOG is set to 1, the algorithm also allows the color of the patchy fog boundary to be modified using a texture. Either a built-in 1D texture expressing the attenuation between two elevations is used or you can provide a 1D or a 3D texture for each volume object. This can be used to simulate self-shadowing of dense gases, such as clouds.
The built-in 1D texture is enabled by setting the flag PFVFOG_FLAG_PATCHY_FOG_1DTEXTURE. The texture is mapped to the range of elevations between the bottom and top of the fog bounding box. The texture value at the bottom (default of 0.3) can be modified by calling pfVolFogSetVal() with parameter PFVFOG_PATCHY_TEXTURE_BOTTOM and the value at the top (default of 1.5) using parameter PFVFOG_PATCHY_TEXTURE_TOP.
To use a different scale for objects of different sizes, you must specify the fog objects separately. When the flag PFVFOG_FLAG_SEPARATE_NODE_BINS is set, all calls to pfVolFogAddNode() define fog nodes that are drawn separately, and the predefined texture is scaled according to the bounding box of each node.
If both the flag PFVFOG_FLAG_PATCHY_FOG_1DTEXTURE and the flag PFVFOG_FLAG_USER_PATCHY_FOG_TEXTURE are set, textures associated with the fog nodes are used to modify the surface color of a patchy fog.
To avoid artifacts on overlapping colored patchy fog objects, the flag PFVFOG_FLAG_DRAW_NODES_SEPARATELY forces the algorithm to be applied to each node separately in back-to-front order with respect to the viewpoint. Currently, this mode does not work well when scene objects intersect fog objects.
If the flag PFVFOG_FLAG_LAYERED_PATCHY_FOG is set, the layered fog is used to define the density of a patchy fog. The layered fog is then present only in areas enclosed by the patchy fog boundaries. Since layered fog is computed for the whole scene, it is important to set fog parameter PFVFOG_MAX_DISTANCE to a value that corresponds to the size of the patchy fog area (for example, a diameter of its bounding sphere). Use function pfVolFogSetVal() to modify the maximum distance parameter.
Layered patchy fog nodes can be moved and rotated by specifying a matrix for each fog node, identified by its index (the order in which nodes were specified). The function pfVolFogSetAttr() with three parameters specified can be used for this purpose. The first parameter is PFVFOG_ROTATE_NODE, the second parameter specifies the node index, and the last one is a pointer to a pfMatrix.
Light shafts are a special application of a layered patchy fog. The fog boundary specifies a cone of light with decreasing intensity (density) along the cone axis. Additional rendering passes darken the objects outside the cone of light and lighten the objects inside the light shaft based on their distance from the light. To enable these additional passes, set flag PFVFOG_FLAG_LIGHT_SHAFT to 1. To ensure that these passes are applied even if the light shaft is not in the field of view, you must also set flag PFVFOG_FLAG_FORCE_PATCHY_PASS to 1.
To control the additional passes, the parameter PFVFOG_LIGHT_SHAFT_DARKEN_FACTOR (set using pfVolFogSetAttr()) can change the factor by which all objects outside the light shaft are darkened. The default value is 0.3.
Parameters PFVFOG_LIGHT_SHAFT_ATTEN_SCALE and PFVFOG_LIGHT_SHAFT_ATTEN_TRANSLATE set the scaling and translation of a built-in, one-dimensional texture that is used to reduce the color of objects lit by the light. Set the translation to a small value, for example, 10 to 20% of the shaft length, and the scale to the inverse of the shaft length.
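Following that guidance, the two values can be derived from the shaft length; the 15% figure below is simply a value in the suggested 10 to 20% range.

```c
/* Derive suggested attenuation-texture parameters for a light
 * shaft of the given length: translate at ~15% of the length,
 * scale equal to the inverse of the length. */
void shaft_atten(float shaft_len, float *translate, float *scale)
{
    *translate = 0.15f * shaft_len;
    *scale     = 1.0f / shaft_len;
}
```

For a shaft 50 units long this gives a translation of 7.5 and a scale of 0.02, which would then be set with pfVolFogSetVal().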
The quality and speed of patchy fog rendering can be controlled by calling pfVolFogSetVal() with the parameter PFVFOG_RESOLUTION. The resolution is a value between 0 and 1. Higher values will reduce banding and speed up the drawing. On the other hand, high values may cause corruption in areas of many overlapping fog surfaces. The default value is 0.2, but you may use values higher than that if your fog boundaries do not overlap much.
The following are other performance considerations:
The multipass algorithms used for rendering layered and patchy fog may produce incorrect results if the scene graph contains polygons that have equal depth values. To avoid such problems, a stencil buffer is used during rendering of the second pass. You can disable this function by setting the flag PFVFOG_FLAG_CLOSE_SURFACES to 0.
By default, the multipass algorithm is applied only when boundaries of a patchy fog are visible. This may cause undesirable changes of semi-transparent edges of scene objects when fog objects move into or away from the view. To force the use of the multipass algorithm, set the flag PFVFOG_FLAG_FORCE_PATCHY_PASS to 1.
Cull programs (see “Cull Programs” in Chapter 4) can speed up rendering of patchy fog because in some draw passes only the part of the scene intersecting the fog boundary is rendered. To enable cull programs, set the flag PFVFOG_FLAG_USE_CULL_PROGRAM to 1.
A layered fog is faster to render than a patchy fog; use a layered fog instead of a patchy fog whenever possible. Rendering both types of fog together is even slower, so try to define only one type.
Changing the fog mode does not affect the rendering speed in the case of a layered fog but rendering of a patchy fog is slower for fog modes PFVFOG_EXP and PFVFOG_EXP2. If you prefer using non-linear modes, try to use them only for layered fog and not for patchy fog.
You can speed up drawing of a patchy fog by reducing the size of the fog boundaries. In case of several disjoint fog areas, the size of a bounding box containing all boundaries will affect the draw time and quality. Try to avoid defining a patchy fog in two opposite parts of your scene. Try also to increase the value of resolution (if there are not too many overlapping fog boundaries) or reduce the patchy fog density.
If there is a lot of banding visible in the fog, try to choose a visual with as many bits per color component as possible. Keep in mind that a patchy fog needs a stencil buffer. You can also try to apply all techniques mentioned in the previous item—reducing the size of patchy fog boundaries, increasing resolution, or decreasing density.
If a patchy fog looks incorrect (the fog appears outside the specified boundaries) make sure that the vertices of the fog boundaries are specified in the correct order so that front faces always face outside the foggy area.
If you see a darker band in a layered fog at eye level, make sure the texture size is set so that the second value is equal to or greater than the third value.
Since light shafts use a combination of layered and patchy fog, and the density decreases to 0 at the end of the light cone, the quality of the results is very sensitive to the depth of the color buffers. 12-bit visuals are required and the light shaft should not be too large. Also, ensure that PFVFOG_MAX_DISTANCE is set as small as possible.
OpenGL Performer has the following limitations in regards to fog management:
The values of a layered fog are determined at each vertex and interpolated across a polygon. Consequently, an object located on top of a large ground polygon may be fogged a bit more or less than the part of the polygon just under the object.
A layered fog works fast with a 3D texture. Reloading of 2D textures during the animation can be slow.
The method does not work well for semitransparent surfaces. If your scene contains objects that are semitransparent or that have semitransparent edges (for example, tree billboards or mountains in Performer Town), these objects or edges may be cut or may be fogged more than the neighboring pixels. Even if a semitransparent edge of a billboard is outside the fog, it will not be smooth.
A layered patchy fog is extremely sensitive to the size of the fog area and the density of the layered fog. Specifically, the fog values accumulated along an arbitrary line crossing the bounding box of the fog area should not reach 1.
A patchy fog needs a stencil buffer and the deepest color buffers possible. The rendering quality on a visual with less than 12 bits per color component is low unless the fogged area is very small compared to the size of the whole scene.
If the blend_color extension is not available, the patchy fog color will be white.
You can create real-time shadows using the class pfShadow. You specify a set of light sources and a set of objects that cast shadows on all other objects in the scene. The class manages the drawing and renders shadows for each combination of a shadow caster and a light source. Shadows are rendered by projecting the objects as seen from the light source into a texture and projecting the texture onto a scene. To avoid computing the texture for each frame, a set of textures is precomputed at the first frame, then for each frame the best representative is chosen and warped to approximate the correct shadow.
The following sections further describe real-time shadows:
A pfShadow is not part of the scene graph; it is created separately by the application process. Once the pfShadow is created, you can specify the number of shadow casters by calling function pfShadowNumCasters() and then set each caster using the function pfShadowShadowCaster(). Each shadow caster is specified by a scene graph node and a matrix that contains the transformation of the node with respect to the scene graph root. Shadow casters are indexed from 0 to the number of casters minus 1.
Similarly, the number of light sources is set by function pfShadowNumSources(). A light source is defined by its position or direction, set by pfShadowSourcePos() or pfShadowLight().
A pfShadow needs information about the current eye position and view direction. Since this information is not directly accessible in a draw process, it is necessary to call pfShadowAddChannel() for each channel at the beginning of the application. Whenever the view changes, the application process has to call pfShadowUpdateView(). Even if the view does not change, this function must be called at least once in single-process mode or as many times as the number of buffers in a pfFlux in multiprocess mode. Without updating the view, the shadow is not rendered correctly.
The class initialization is completed by calling the function pfShadowApply() as shown in the following creation example:
```c
pfShadow *shd = pfNewShadow();
pfShadowNumCasters(shd, 2);
pfShadowShadowCaster(shd, 0, node1, matrix1);
pfShadowShadowCaster(shd, 1, node2, matrix2);
pfShadowNumSources(shd, 1);
pfShadowSourcePos(shd, 0, x1, y1, z1, w1);
pfShadowAddChannel(shd, channel);
pfShadowApply(shd);
```
Table 6-6 summarizes the functions for the initialization and drawing of a pfShadow.
pfNewShadow()                  Create a pfShadow.
pfShadowNumCasters()           Set number of shadow casters.
pfShadowShadowCaster()         Set a shadow caster and its rotation matrix.
pfShadowAdjustCasterCenter()   Specify the translation of caster's center.
pfShadowNumSources()           Set number of light sources.
pfShadowSourcePos()            Specify light source position.
pfShadowLight()                Specify light source.
pfShadowAmbientFactor()        Set ambient factor.
                               Set a user-defined shadow texture for a given caster and light source.
pfShadowTextureBlendFunc()     Set a function used when blending closest shadows.
pfShadowAddChannel()           Add a channel on which pfShadow is used.
pfShadowUpdateView()           Update the current view for all stored channels.
pfShadowUpdateCaster()         Update rotation matrix of a caster.
pfShadowFlags()                Set binary flags.
pfShadowVal()                  Set a single attribute.
pfGetShadowDirData()           Get a pfDirData associated with the pfShadow.
pfShadowApply()                Initialize a pfShadow.
pfShadowDraw()                 Draw the scene and shadows.
The attributes of a pfShadow are listed in Table 6-7.
Size of shadow texture       512 x 512
Number of shadow textures    1
There is only one pfShadow flag, PFSHD_BLEND_TEXTURES. This blend-textures flag has a default of 0.
To draw a scene with real-time shadows, the draw process has to call the draw function provided by the pfShadow class: pfShadowDraw(). Before the first frame is rendered, all required shadow textures are precomputed. A warning is printed if the window size is smaller than the texture dimensions. Ensure that the window is not obscured; otherwise, the textures will not be correct.
By default, only the closest shadow texture is selected for any direction and it is skewed so that it approximates the correct shadow. Optionally, the flag PFSHD_BLEND_TEXTURES can be set using the function pfShadowFlags(). In this case, the two closest textures are selected and blended together, resulting in smoother transitions. Also, instead of a linear blend between the textures, you can define a blend function, mapping values 0–1 to the interval 0–1. The blend function can be set using the function pfShadowTextureBlendFunc().
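A blend function simply remaps the linear blend weight. For instance, a smoothstep-style curve gives softer transitions than a linear blend; the sketch below is a plain C illustration, and the exact signature expected by pfShadowTextureBlendFunc() is not shown here.

```c
/* Smoothstep-style blend curve: maps [0,1] to [0,1] with zero
 * slope at both ends, softening the switch between the two
 * closest shadow textures. */
float smooth_blend(float t)
{
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);
}
```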
Every time the caster changes its position or orientation with respect to the light source, it is necessary to update its matrix using pfShadowUpdateCaster() (the caster is identified by its index). When the caster's matrix changes, the shadow of the caster changes as well. In this case, the set of precomputed shadow textures is searched to find the one or two closest representatives.
The shadow texture is used to darken the scene pixels when the texture texel is set to 1. The amount by which the scene pixel is darkened can be set by the function pfShadowAmbientFactor(). The default value is 0.6.
As the caster is projected into a shadow texture, the center of the projection corresponds with the center of the bounding box of the caster's node. When the shadow texture is skewed to approximate shadows from a slightly different direction, it is best if the center of the projection corresponds with the center of the object. The bounding box center may not coincide with the center of the object (in the case of some long protruding parts) and you can use the function pfShadowAdjustCasterCenter() to shift the bounding box center toward the center of the object.
For each combination of a shadow caster and a light source, it is possible to specify the number of shadow textures used, their sizes, and a set of directions for which the textures are precomputed. The number of textures and their sizes can be set by the function pfShadowVal(), where the first parameter is PFSHD_PARAM_TEXTURE_SIZE or PFSHD_PARAM_NUM_TEXTURES.
The set of directions can be controlled by using the function pfGetShadowDirData() to get the pointer to the corresponding pfDirData, a class that stores data associated with a set of directions. Then you can either select the default mode or specify the directions directly. See the following section "Assigning Data with Directions" for more details. By default, there is one texture of size 512 x 512 and the direction corresponds to the light direction (or a vector from a point light source to the object's center). If there are more textures, the original light direction is rotated around a horizontal direction, assuming that the object will primarily keep its horizontal position (for example, a helicopter or a plane).
A sample implementation of shadows is in the file perf/samples/pguide/libpf/C++/shadowsNew.
The pfDirData class is used to store directional data—that is, data that depend on direction. A pfDirData stores an array of directions and an array of (void *) pointers representing the data associated with each direction.
The directions and data can be set using the function pfDirDataData(). Optionally, you can set only the directions using the function pfDirDataDirections() in the case that the associated data are defined later or generated internally by another OpenGL Performer class (such as pfShadow).
You can also generate directions automatically using the function pfDirDataGenerateDirections(). The first parameter defines one of the default sets of directions and the second parameter is used to specify additional values. At present only type PFDD_2D_ROTATE_AROUND_UP is supported, in which case the second parameter points to a 3D vector that is rotated around the up vector, creating a number of directions.
The data can be queried using the pfDirDataFindData() or pfDirDataFindData2() function. In the first case, the function finds the closest direction to the direction specified as the first parameter, copies it to the second parameter, and returns the pointer to the data associated with it. The input direction has to be normalized. The second function finds the two closest directions to the specified direction. It copies the two directions to the second parameter (which should point to an array of two vectors). The two pointers to the data associated with the two directions are copied to the array of two (void *) pointers specified as the third parameter. In addition, two weights associated with each direction are copied to the array of two floats. These weights are determined based on the distance of the end point of the input direction and each of the two closest directions.
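The nearest-direction lookup that pfDirDataFindData() performs can be illustrated generically. This is a sketch of the idea only: for a normalized query direction, the closest stored unit direction is the one with the largest dot product.

```c
/* Return the index of the stored unit direction closest to the
 * normalized query direction `q` (largest dot product wins). */
int closest_dir(const float dirs[][3], int n, const float q[3])
{
    int best = 0;
    float bestDot = -2.0f; /* below any possible dot product */
    for (int i = 0; i < n; i++) {
        float d = dirs[i][0] * q[0] +
                  dirs[i][1] * q[1] +
                  dirs[i][2] * q[2];
        if (d > bestDot) {
            bestDot = d;
            best = i;
        }
    }
    return best;
}
```

pfDirDataFindData2() extends this idea to the two best matches, weighting each by the distance from the query direction's end point.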
The following are limitations of real-time shadows in OpenGL Performer:
When projecting a caster into a shadow texture, pfSwitch children are selected according to switch value. In the case of pfLOD, the finest level is chosen. Also, pfSequences are ignored—which can be useful in the case of helicopter rotors, for example.
The pfShadow class uses cull programs to cull out geometry that is not affected by the shadow to make the multipass drawing more efficient. At present, though, the cull program used by the pfShadow class overwrites any other cull program you specify.
|Note: Ensure that you do not overwrite TravMode in your application by setting it to PFCULL_ALL. The mode is set by pfShadow when pfShadowApply() is called.|
The image-based rendering approach is used for very complex objects. Such an object is represented by a set of images taken from many directions around it. When the object is rendered, the several views closest to the current view direction are blended together.
In OpenGL Performer, you can use the pfIBRnode class to represent complex objects. Unlike in a pfBillboard, the parent class of pfIBRnode, the texture on the pfGeoSets of a pfIBRnode is not static; it changes based on the view direction for each pfGeoSet.
The following sections further describe image-based rendering:
A pfIBRnode is a child class of pfBillboard. You create a pfIBRnode in a fashion similar to that of a pfBillboard. Compared to a pfBillboard, a pfIBRnode has two additional parameters: a pfIBRtexture and an array of angles defining the initial rotation of the objects.
Each pfIBRnode has associated with it a single pfIBRtexture, which stores a set of images of the complex object as viewed from different directions. Each pfGeoSet is then rendered with a texture representing the view of the object from the given direction. A pfIBRtexture is specified using the function pfIBRnodeIBRtexture().
Using the function pfIBRnodeAngles(), you control the initial orientation of the complex object by specifying the rotation from the horizontal and vertical planes for each pfGeoSet. These angles are very useful in the case of trees, for example, because you can use a different vertical angle for each instance of the tree. The trees then appear different, although they all use the same pfIBRtexture. The first value is ignored when only one ring of views around the object is used.
You must set up a pfIBRnode so that the pfIBRtexture applied to it can modify properly the image at each frame. You do so in the following manner:
If the pfIBRtexture has the flag PFIBR_USE_REG_COMBINERS set, enable multitexturing and specify texture coordinates for additional texture units.
If the pfIBRtexture has the flag PFIBR_3D_VIEWS enabled, set the billboard rotation (PFBB_ROT) to PFBB_AXIAL_ROT.
On IRIX and Linux, see the example in the following file:
On Microsoft Windows, see the example in the following file:
By default, it is assumed that the geosets of the pfIBRnode specify rectangles that are always facing the viewer (like billboards). This approach is very fast but it requires a large number of views to limit the artifacts due to the differences between the neighboring views.
To reduce the number of views required to obtain a reasonable image of the complex object from any direction, you can map the views onto a shape that approximates the surface of the complex object instead of a billboard. This shape is called a proxy. The closer the proxy is to the original surface, the fewer views of the object are required. Optimally, you create a proxy that contains a relatively small number of primitives and that is very close to the original surface. You can create a proxy using the Simplify tool; see section “The Simplify Application” for details.
Compared to the default mapping of views onto a billboard, only minor changes are required. Instead of a billboard, the node's geosets contain the proxy geometry. The pfIBRtexture associated with the node has the flag PFIBR_USE_PROXY set. There is an array of texture coordinates indexed by the view index and the geoset index. These texture coordinates can be defined and queried with pfIBRnodeProxyTexCoords() and pfGetIBRnodeProxyTexCoords(). Note that it is more efficient to store the proxy in one geoset.
Optionally, it is possible to specify different geosets for each view (if the PFIBR_NEAREST flag is set in the pfIBRtexture assigned to the pfIBRnode) or for each group of views if the views are blended. In this case, you must set the flag PFIBRN_VARY_PROXY_GEOSETS using pfIBRnodeFlags(). This can be useful for removing the invisible parts of the proxy (invisible from the range of views in the group) or for sorting the proxy triangles to avoid artifacts when edges of the proxy textures are transparent. The array of texture coordinates is then organized as follows:
The first index is the view index or the group index (if the views are blended).
The second index is the geoset index multiplied by the number of views in a group (1 for the nearest view).
The coordinates are grouped by geosets.
Thus, there are texture coordinates for the geoset 0 for all views in the group, then for geoset 1, and so on.
The geosets are organized as follows: if the proxy has n geosets and there are v views or groups of views, the pfIBRnode has n*v geosets, and each group of n geosets belongs to one view.
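The layout above amounts to simple index arithmetic. A sketch in plain C (the helper names are hypothetical, not Performer functions) of how the right geoset and texture-coordinate entries are located:

```c
/* With n proxy geosets and v views (or groups of views), the pfIBRnode
 * holds n*v geosets: all n geosets for view 0 first, then all n for
 * view 1, and so on. */
int node_geoset_index(int view, int geoset, int n_geosets)
{
    return view * n_geosets + geoset;
}

/* Second index into the texture-coordinate array: coordinates are
 * grouped by geoset, with one entry per view in the group
 * (views_per_group is 1 when only the nearest view is used). */
int texcoord_index(int geoset, int view_in_group, int views_per_group)
{
    return geoset * views_per_group + view_in_group;
}
```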
To create views of a complex object from various directions and to compute the texture coordinates of its proxy, you can use the makeProxyImages tool described in section “Creating Images of an Object with makeProxyImages”.
A pfIBRtexture stores a set of images of a complex object as viewed from different directions. The directions are specified using pfIBRtextureIBRdirections(). Internally, pfIBRtexture uses a pfDirData to store the views. The pfDirData determines the type of view distribution: a single ring of views around the object, all perpendicular to the vertical axis; a set of rings, each containing evenly spaced views at the same angle from the horizontal plane; or views distributed uniformly or randomly over the sphere of directions. You must specify the directions before the images are set.
Once you specify the directions, you set the images using pfIBRtextureIBRtextures(). The parameters are an array of pointers to the textures containing the views and the number of the textures in this array.
If views are organized in rings, you can load the images directly from a set of files using pfIBRtextureLoadIBRtexture() without specifying the directions first. The parameter format specifies the path where the images are stored as well as how they are indexed—for example, images/view%03d.rgb. The other two parameters specify the number of images and the increment between two loaded images. The increment is useful when texture memory is limited; for instance, specifying step=2 causes every second image to be skipped.
If the views are organized in rings, the textures, by default, represent views around the object, all perpendicular to the vertical axis. In this case, specified textures form a single ring of views that are evenly spaced. If the flag PFIBR_3D_VIEWS is specified by the function pfIBRtextureFlags(), the textures form a set of rings. Each ring contains an array of evenly spaced views that have the same angle from the horizontal plane.
If the flag PFIBR_3D_VIEWS is not set, both functions pfIBRtextureLoadIBRtexture() and pfIBRtextureIBRtextures() set up one ring with the specified number of textures and a horizontal angle of 0. If the flag PFIBR_3D_VIEWS is set, the class checks whether a file named info is present in the image directory. If it is, the ring information is loaded from that file; each line of the file contains two values, the horizontal angle of a ring and the number of textures in it. If the file is not present, you must specify the rings before the images are loaded by calling the functions pfIBRtextureNumRings() and pfIBRtextureRing(). Rings are indexed from 0 and should be ordered by horizontal angle, with the lowest angle at index 0. Each ring can have a different number of textures associated with it.
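The info file format is simple enough that its parsing can be sketched in a few lines of C. This is illustrative only (the struct name is made up, and Performer's own loader may differ); it reads the per-ring horizontal angle and texture count:

```c
#include <stdio.h>

/* Hypothetical record for one ring read from an "info" file. */
typedef struct { float angle; int count; } RingInfo;

/* Read up to max_rings lines of "angle count" from an info file.
 * Returns the number of rings read. Rings are expected to be ordered
 * by horizontal angle, lowest first. */
int read_ring_info(FILE *fp, RingInfo *rings, int max_rings)
{
    int n = 0;
    while (n < max_rings &&
           fscanf(fp, "%f %d", &rings[n].angle, &rings[n].count) == 2)
        n++;
    return n;
}
```

For example, an info file containing the lines "0 16", "30 12", and "60 8" describes three rings: 16 views at the horizon, 12 at 30 degrees, and 8 at 60 degrees.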
When 3D views are used, the image files read by function pfIBRtextureLoadIBRtexture() should be indexed by the ring index and the index of the image in a given ring. Specify the format string in the manner shown in the following example:
If you specify the textures using the function pfIBRtextureIBRtextures(), the texture pointers are all stored in a single array, starting with textures of the first ring, followed by textures of the second ring, and so on.
It is assumed that the views in each ring are uniformly spaced and they are ordered clockwise with respect to the vertical axis. If the views are ordered in the opposite direction, use the function pfIBRtextureDirection() to set the direction to –1.
When using pfIBRnodes and pfIBRtextures in Perfly, you need an alpha buffer. If the pfIBRnode is rendered as an opaque rectangle, try the command-line parameter –9, in which case Perfly requests a visual with an alpha buffer.
For more details about associating a pfIBRtexture with a pfIBRnode, see the pfIBRnode man page and the following program:
/usr/share/Performer/src/sample/pguide/C++/IBRnode (IRIX and Linux) %PFROOT%\Src\sample\pguide\C++\IBRnode (Microsoft Windows)
At present, the pfIBRtexture class is used only by the pfIBRnode class. The pfIBRtexture class provides a draw function for pfGeoSets that belong to the pfIBRnode, but the draw process is transparent to you. You can control the drawing by setting flags using the function pfIBRtextureFlags(). If the flag PFIBR_NEAREST is set, the closest view from the closest ring is selected and applied as a texture of the pfGeoSet. This approach is fast on all platforms, but it results in visible jumps when the texture is changed. Thus, by default, the flag PFIBR_NEAREST is not set and the two or, in the case of 3D views, four closest views are blended together. If the graphics hardware supports register combiners, the flags PFIBR_USE_REG_COMBINERS and PFIBR_USE_2D_TEXTURES are automatically set by the class constructor and blending of textures can be done in one pass.
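With 3D views, the four-way blend can be pictured as bilinear interpolation between the two nearest rings and the two nearest views within each ring. A conceptual sketch in plain C (not the class's internal implementation):

```c
/* Bilinear blend weights for four views. t_ring is the fractional
 * position between the two nearest rings (0..1); t_view is the
 * fractional position between the two nearest views within a ring
 * (0..1). Output order: (ring0,view0), (ring0,view1), (ring1,view0),
 * (ring1,view1). The four weights always sum to 1. */
void four_view_weights(float t_ring, float t_view, float w[4])
{
    w[0] = (1.0f - t_ring) * (1.0f - t_view);
    w[1] = (1.0f - t_ring) * t_view;
    w[2] = t_ring * (1.0f - t_view);
    w[3] = t_ring * t_view;
}
```

When the view direction coincides with a stored view, one weight is 1 and the others are 0, so the blend degenerates to the nearest-view case.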
The flag PFIBR_USE_PROXY is used when the views are mapped on an approximation of the complex object (a proxy) and a different draw function is applied. You can read more about proxies in section “Creating a pfIBRnode Using a Proxy”.
By default on IRIX, the flag PFIBR_USE_2D_TEXTURES is not set and a 3D texture is used for fast blending between the two closest views. To avoid flickering when the object is viewed from a distance, additional 3D textures are used to store additional mipmap levels. This feature is available only on machines with multisampling (InfiniteReality systems). To disable the mipmapping, clear the flag PFIBR_MIPMAP_3DTEXTURES. In the case of several rings, the nearest ring is selected and the views inside this ring are blended using the 3D texture. The 3D texture is not compatible with other distributions of the views; in those cases, ensure that you set the flag PFIBR_USE_2D_TEXTURES.
Create a regular simplification of an object
Create a proxy of an object
In a regular simplification of an object, the resulting geometry does not cross the inner and outer boundaries of the original object. The distance of these boundaries from the original object controls the coarseness of the resulting geometry. All vertex parameters, such as the normal or texture coordinates, are preserved. A simplified version of the object can be used to create a pfLOD node (see section “pfLOD Nodes” in Chapter 3).
A proxy is a simplified version of the object where the original object is fully inside the proxy. This property is important because the proxy is used in image-based rendering where the images of a complex object from various directions are projected onto the proxy. In this way, it is possible to render a very complex object using a simplified version (a proxy) and store the surface detail, including the associated lighting, in multiple textures. See section “Creating Images of an Object with makeProxyImages” for the process of making the textures that are projected on the proxy.
The Simplify application is based on the Perfly application and they share many command-line parameters and key commands (see the man page for perfly). The syntax for the command-line invocation is as follows:
simplify [ perfly-options ] infile outfile [ simplification-options ]
You can get the list of the simplification options by running simplify with no option or with only the option –h.
When you start the Simplify application, the menu is similar to that of Perfly. There is an additional pane of buttons and sliders, called the Simplify pane, which can be enabled and disabled using the Simplify pane button. Figure 6-3 shows the Simplify pane, which is enabled by default. Most of the buttons and sliders on the Simplify pane have command-line equivalents.
Computing a proxy with Simplify requires two basic decisions:
Where to position the initial proxy and an outer boundary for the original object
What algorithm to use for creating the initial proxy and the outer boundary
Since these decisions may be difficult to make in an analytical fashion initially, the Simplify GUI allows you to make some guesses and refine them in an iterative fashion. The following procedure for making a proxy assumes that you have invoked the Simplify application using the default simplification options.
Ensure that the Simplify into proxy button is selected (the default).
Specify the initial distance of the proxy from the object and an outer boundary.
Use the sliders Initial distance and Outer boundary to do this. Distances are specified as a percentage of the object diameter (more precisely, the diameter of the object's bounding sphere). Initially, you might want to use the defaults, 2% for Initial distance and 5% for Outer boundary.
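The percentage-to-distance conversion is straightforward. A minimal sketch (the function name is illustrative, not part of Simplify):

```c
/* Simplify distances are given as a percentage of the diameter of the
 * object's bounding sphere. Convert a slider percentage to an absolute
 * model-space distance. */
float percent_to_distance(float percent, float bsphere_radius)
{
    return percent / 100.0f * (2.0f * bsphere_radius);
}
```

For an object whose bounding sphere has radius 50 (diameter 100), the default 2% Initial distance corresponds to an absolute distance of 2 units.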
Select the algorithm for creating the initial proxy.
Simplify provides two algorithms: the marching cubes algorithm and the displace-along-normals algorithm. The first button on the Simplify pane is the Do marching cubes button, which is selected by default. If the Do marching cubes button is not selected, Simplify uses the displace-along-normals algorithm.
The marching cubes algorithm creates an isosurface at a certain distance (slider Iso distance) from the original object. The isosurface is later moved to the distance of the outer boundary (slider Outer boundary) and a copy of the isosurface is moved to the distance of the initial proxy (slider Initial distance).
The marching cubes algorithm has the following additional controls:
Grid Size X slider
Grid Size Y slider
Grid Size Z slider
Iso distance slider
With these controls, you can set the grid size at each axis and the distance of the isosurface from the object (using the slider Iso distance). The finer the grid, the longer the algorithm takes and the more complex the initial proxy. On the other hand, if the grid is too coarse, many details may be missed.
In general, the algorithm does not work very well if the desired isosurface distance is too small compared to the size of a grid voxel. For this reason, it is possible to specify the isosurface distance separately from the outer boundary distance and the initial proxy distance. Often it is possible to specify an isosurface distance large enough that the isosurface does not miss any part of the object, and then move it closer as needed. It is also possible to preview the isosurface by clicking the button Get isosurface while the button Show boundary is selected.
If you select the displace-along-normals algorithm, the outer boundary and the initial proxy are created by displacing the original surface along its normals. This approach works better when the distances are very small. Unfortunately, some areas of the object may not be simplified. For example, if two parts of the object are touching, displacing along the normals creates a self-intersecting boundary that leaves no room for simplification in the area of intersection.
With the displace-along-normals algorithm, the grid is used to accelerate the intersection test of the simplified proxy with the boundary surfaces. Thus, do not reduce the grid resolution too much.
Click the Run simplify proxy button to start the simplification.
The simplification algorithm starts by moving the isosurface or the original surface to create the outer boundary and the initial proxy. The initial proxy is simplified by removing vertices and edges as long as the surface is within the surfaces defined by the object and the outer boundary. At the end, the vertices of the proxy are moved as close to the original object as possible.
After completing the computation, the proxy is saved in the file specified on the command line.
The simplification algorithm can be stopped or paused by clicking the Stop simplify or Pause button, respectively. When the algorithm is paused, it is possible to save the current proxy by clicking the Save mesh button. The file name contains the index of the current step so that several meshes can be output during the simplification.
The procedure for a regular simplification is very similar to the procedure for making a proxy, as described in the preceding section. In contrast to making a proxy, however, Simplify uses two boundary surfaces, an outer boundary (set by the slider Outer boundary) and an inner boundary (set by the slider Inner boundary), to create a regular simplification.
To simplify an object requires two basic decisions:
Where to place an outer boundary and inner boundary
What algorithm to use for creating the boundaries
The following procedure for a regular simplification assumes that you have invoked the Simplify application using the default simplification options.
Ensure that the Simplify into proxy button is not selected.
This is not the default. Figure 6-4 shows the resulting Simplify pane.
Specify inner and outer boundaries.
Use the sliders Inner boundary and Outer boundary to do this. Distances are specified as a percentage of the object diameter (more precisely, the diameter of the object's bounding sphere). Initially, you might want to use the defaults, 2.5% for Inner boundary and 5% for Outer boundary.
Select the algorithm for creating the boundaries.
Simplify provides two algorithms: the marching cubes algorithm and the displace-along-normals algorithm. The first button on the Simplify pane is the Do marching cubes button, which is selected by default. If the Do marching cubes button is not selected, Simplify uses the displace-along-normals algorithm. See the preceding section for a description of the algorithms.
If you select the marching cubes algorithm, the distance of both boundaries from the original surface is the same (in absolute value) and it is controlled by the slider Iso distance. As in the case of making a proxy, the isosurface can be previewed by clicking the Get isosurface button.
If you select the displace-along-normals algorithm, the boundaries are created by moving the original surface along its normals to the distances specified by the sliders Outer boundary and Inner boundary. Note that the distance for the inner boundary is specified as a negative number.
Click the Run simplify button to start the simplification.
The computation can be paused or stopped by clicking the Pause or Stop simplify button, respectively. When the algorithm is paused, it is possible to save the intermediate result by clicking the Save mesh button or to toggle the visibility of the boundary by clicking the Show boundary button. After the simplification is finished you can display the original object by clicking the Restore object button. You can restart the algorithm without restoring the original object.
As you may realize, this procedure could be used to create an object proxy if you select the displace-along-normals algorithm and the inner boundary is set to zero. The result may be different, though, because the algorithm is trying to preserve seams between pfGeoSets with different pfGeoStates; the seam preservation is not necessary for the proxy.
You can use the program makeProxyImages to create images (views) of the specified object from a set of directions. Since the images are being projected on a proxy, a simplification of the original object, additional processing may be required to add views of parts of the proxy that are partially or fully obstructed by other parts. These additional texture pieces are important because as the proxy is rotated away from the view at which the texture was computed, some parts of the proxy that were not directly visible from the view may become visible. Thus, each image consists of the view of the object and a collection of texture pieces for obstructed parts of the proxy.
It is necessary to store texture coordinates for each proxy triangle so that the texture pieces are correctly mapped. Consequently, the program makeProxyImages outputs not only textures storing the views but also a pfIBRnode that contains the texture coordinates and the proxy geometry. You can create the proxy of an object using the program Simplify.
The input to the makeProxyImages program is the file containing the original complex object. Table 6-8 and the following sections describe other key command-line options:
Specifies the file containing the proxy.
Specifies the files where the images are stored. A view number and the extension rgb are added automatically.
Specifies the file where the resulting pfIBRnode is stored.
Specifies the size of the texture (–W xsize ysize). It is important to specify the size.
Specifies the oversampling factor. Specify this option when the hardware does not support antialiasing.
Specifies the text file with ring information to determine the view directions. Each line of the ring file contains two values: the angle from the horizontal plane and how many views are created for that angle.
Specifies that only views around the object are used.
Specifies that uniformly distributed 3D views are used.
Enables skipping a certain number of views in each ring.
Scales up the object. By default, the program uses orthographic projection. The center of the projection is the center of the bounding sphere around the object and the object is scaled so that the bounding sphere fits the window. If the bounding sphere is too large you may try to upscale the object using the –s option to make better use of the texture. You can use perspective projection by defining the distance of the camera from the center of the bounding sphere. Unless there are reasons for doing otherwise, use the orthographic projection.
Specifies non-default lighting. In image-based rendering, the lighting is captured in the textures, so it is important to specify the lights in the same way as in your scene. By default, the default Perfly lighting is selected. You can specify your own lights using the –l option:
To obtain the full set of options, run the program makeProxyImages without any parameters.
By default, the program makeProxyImages renders only the view of the object without the extra texture pieces for obstructed triangles of the proxy. To enable this feature you have to add the option –ev. The process has two steps. First, the number and size of texture pieces is determined and a packing algorithm determines their position around the primary view. Second, for each view the texture pieces are rendered in place.
The packing algorithm operates on the pixel level, and several options affect its speed and the quality of the results. To speed up the packing algorithm, you can downsample the textures before packing using the option –evd; the drawback is that there may be more wasted space between texture pieces. You can also reduce the number of neighboring pixels the packing algorithm checks when finding the optimal place for a texture piece by using the option –evp. In general, the texture pieces are not aligned with their neighboring pieces, so when the view texture is mipmapped, the gaps between the pieces may become visible. To avoid this, you can add the option –evmp to set the number of mipmap levels that will be free of cracks. Each edge of a texture piece that is not a silhouette edge is extended to contain more pixels from neighboring triangles. Setting the value too high may cause the packing algorithm to fail.
If the packing algorithm fails to place the texture pieces around the primary view, the object is scaled down a little (for the given view) and the algorithm is restarted. This process repeats until all the texture pieces fit.
Similarly, as obstructed triangles may come into full view, backfacing triangles may become visible as the proxy is rotated away from the view. Thus, it is possible to add texture pieces for backfacing triangles into the view texture using the option –bf. Not all backfacing triangles are added but only those that may be visible from neighboring views. Since additional texture pieces that are used for backfacing triangles of the proxy can be found in neighboring views, it is advantageous to combine several views into a single texture. This reduces the number of texture pieces packed into a texture for one view. You can use the option –tm to control this. Do not exceed 2Kx2K when combining several views into one texture.
When rendering additional texture pieces, you can control how far before and after the proxy triangle the clip planes are being set. This option affects triangles around the silhouette of the object. This is view-dependent: for each view, there are different triangles that contain the silhouette of the object. Since the proxy fully contains the original object, parts of the silhouette triangles may be transparent. This may cause visible cracks when the object is rotated. Moving the clip planes reduces some of the cracks.
If you move the plane that is behind the triangle farther away (using option –evlb), some of the geometry that is behind the silhouette is included in the texture. When you move the front clip plane closer to the cameras (using option –evlf), some of the geometry that is in front of the silhouette is included in the texture.
Because some proxy triangles may have a texture with transparent edges, it may be desirable to sort the proxy triangles. Because the proxy can be viewed from any direction, it is necessary to determine how the triangles are sorted. If the proxy is rendered with only the nearest views selected, the triangles are ordered for each view differently. You must set that mode using the option –nr. By default, three or four of the nearest views are blended together. In that case, the proxy triangles are sorted for each group of views. Sometimes it may be possible to see changes in transparency as the view moves from one group of views to another. If this becomes too obvious, you can disable the sorting using the option –evns.
Generally, you can drastically reduce problems if you place the proxy very close to the original object (especially around visible sharp edges) or if you increase the number of views. The following are some potential problems you might encounter:
The images are missing an alpha channel.
If your machine does not support a single-buffered visual with at least 8 bits per red, green, blue, and alpha component, the images may be missing an alpha channel. Note the number of alpha bits printed at the beginning of the makeProxyImages output. On some SGI systems with multisampling, you may try to use the option –nms to request a visual without multisampling to improve the probability of getting a visual with an alpha channel.
Your textures are not antialiased.
Do not forget to oversample the textures on machines with no antialiasing (using option –o).
The processing time is very long.
The process may take a very long time if the proxy is very fine and many texture pieces have to be added to each view. Since the rendering is done into a window, ensure that no other window overlaps it during the process and that the screen saver does not start. If some of the textures are corrupt, you can restart the program with the same parameters and add the option –sfr, which skips the rendering of the specified number of textures. It is also a good idea to increase the shared arena size (using the environment variable PFSHAREDSIZE) to avoid memory overflow when the pfIBRnode is saved at the end.
Texture pieces intersect the image of the object.
Inspecting the view textures, you may notice that sometimes the additional texture pieces may intersect the image of the object. This is fine because those triangles that are overlapped are assigned one of the additional texture pieces packed around the object.
You can use the program makeIBRimages from the directory /usr/share/Performer/src/conv/ on IRIX and Linux and %PFROOT%\Src\conv on Microsoft Windows to create images (views) of a specified object from a set of directions. The input is a file that can be read by OpenGL Performer and the output is a set of images of that object that can be directly used as an input for a pfIBRtexture. The images are stored in a directory specified using the option -f.
If a text file info is present in the output directory, a set of 3D views is rendered. The file has the same syntax as described in section “Creating a pfIBRtexture”. Each line of the file info contains two values: the angle from the horizontal plane and how many views are created for that angle. The images are then indexed by two integer values that are appended to the name specified by the option -f. The first value is the ring index of the views and the second one indexes the views within the ring.
If the file info is not present, a set of N views (set by the option -n) is computed around the object using the horizontal angle of 0. In this case, only one index is appended to the image name.
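The two naming schemes can be sketched as follows. This is plain C and purely illustrative; the exact separator and zero-padding used by makeIBRimages may differ, so treat the format strings as assumptions:

```c
#include <stdio.h>

/* Build an image file name for a view. With an info file present,
 * views are indexed by ring and by position within the ring; otherwise
 * a single view index is used. The separators and lack of zero-padding
 * here are illustrative assumptions, not the tool's exact output. */
void view_filename(char *buf, size_t bufsize, const char *base,
                   int ring, int view, int use_rings)
{
    if (use_rings)
        snprintf(buf, bufsize, "%s_%d_%d.rgb", base, ring, view);
    else
        snprintf(buf, bufsize, "%s%d.rgb", base, view);
}
```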
If you specify the option -pfb, the program outputs a pfb file in the specified directory. The file contains a single pfIBRnode that uses the created images.
Before running perfly, ensure that PFPATH is set to the directory that contains the images.
If your machine does not support a single-buffered visual with at least 8 bits per red, green, blue, and alpha component, the images may be missing the alpha channel. Note the number of alpha bits printed when makeIBRimages begins.
When using pfIBRnodes and pfIBRtextures in perfly, you also need an alpha buffer. If the pfIBRnode is rendered as a full rectangle, try the command-line parameter –9, in which case perfly requests a visual with alpha.
To obtain the full set of command-line options, run the program makeIBRimages without any parameters.
The following are current limitations of image-based rendering in OpenGL Performer:
A pfIBRtexture applied to a pfIBRnode is not properly rotated when the pfIBRnode is viewed from the top. This may result in visible rotation of the texture with respect to the ground.
When the flag PFIBR_3D_VIEWS is set in a pfIBRtexture, do not use 3D textures. This mode is not implemented.