Chapter 10. Imaging Extensions

This chapter describes imaging extensions. After some introductory information about the imaging pipeline, the individual extensions are described.

Introduction to Imaging Extensions

This section describes platform dependencies, where extensions are in the OpenGL imaging pipeline, and the functions that may be affected by one of the imaging extensions.

Platform Dependencies

Currently, the majority of the imaging extensions are only supported on Fuel, InfinitePerformance, and InfiniteReality systems. The imaging extensions supported on Onyx4 and Silicon Graphics Prism systems include only the following:

  • EXT_abgr

  • EXT_packed_pixels

  • SGI_color_matrix

The EXT_packed_pixels extension was promoted to a standard part of OpenGL 1.2 and is available in that form.

Applications on Onyx4 and Silicon Graphics Prism systems can achieve similar functionality to the SGI_color_table and SGIX_pixel_texture extensions by writing fragment programs using one-dimensional textures as lookup tables.

Where Extensions Are in the Imaging Pipeline

The OpenGL imaging pipeline is shown in the OpenGL Programming Guide, Second Edition in the illustration “Drawing Pixels with glDrawPixels*()” in Chapter 8, “Drawing Pixels, Bitmaps, Fonts, and Images.” The OpenGL Reference Manual, Second Edition also includes two overview illustrations and a detailed fold-out illustration in the back of the book.

Figure 10-1 is a high-level illustration of pixel paths.

Figure 10-1. OpenGL Pixel Paths


The OpenGL pixel paths show the movement of rectangles of pixels among host memory, textures, and the framebuffer. Pixel store operations are applied to pixels as they move in and out of host memory. Operations defined by the glPixelTransfer() function and other operations in the pixel transfer pipeline apply to all paths among host memory, textures, and the framebuffer.

Pixel Transfer Paths

Certain pipeline elements, such as convolution filters and color tables, are used during pixel transfer to modify pixels on their way to and from user memory, the framebuffer, and textures. The set of pixel paths used to initialize these pipeline elements is diagrammed in Figure 10-2. The pixel transfer pipeline is not applied to any of these paths.

Figure 10-2. Extensions that Modify Pixels During Transfer


Convolution, Histogram, and Color Table in the Pipeline

Figure 10-3 shows the same path with an emphasis on the position of each extension in the imaging pipeline itself. After the scale and bias operations and after the shift and offset operations, color conversion (LUT in Figure 10-3 below) takes place with a lookup table. After that, the extension modules may be applied. Note how the color table extension can be applied at different locations in the pipeline. Unless the histogram or minmax extensions were called to collect information only, pixel processing continues, as shown in the OpenGL Programming Guide.

Figure 10-3. Convolution, Histogram, and Color Table in the Pipeline


Interlacing and Pixel Texture in the Pipeline

Figure 10-4 shows where interlacing (see “SGIX_interlace—The Interlace Extension”) and pixel texture (see “SGIX_pixel_texture—The Pixel Texture Extension”) are applied in the pixel pipeline. The steps after interlacing are shown in more detail than the ones before to allow the diagram to include pixel texture.

Figure 10-4. Interlacing and Pixel Texture in the Pixel Pipeline


Merging the Geometry and Pixel Pipeline

The convert-to-fragment stage of geometry rasterization and of the pixel pipeline each produce fragments. The fragments are processed by a shared per-fragment pipeline that begins with applying the texture to the fragment color.

Because the pixel pipeline shares the per-fragment processing with the geometry pipeline, the fragments it produces must be identical to the ones produced by the geometry pipeline. The parts of the fragment that are not derived from pixel groups are filled with the associated values in the current raster position.

Pixel Pipeline Conversion to Fragments

A fragment consists of x and y window coordinates and its associated color value, depth value, and texture coordinates. The pixel groups processed by the pixel pipeline do not produce all the fragment's associated data; so, the parts that are not produced from the pixel group are taken from the raster position. This combination of information allows the pixel pipeline to pass a complete fragment into the per-fragment operations shared with the geometry pipeline, as shown in Figure 10-5.

Figure 10-5. Conversion to Fragments


For example, if the pixel group is producing the color part of the fragment, the texture coordinates and depth value come from the current raster position. If the pixel group is producing the depth part of the fragment, the texture coordinates and color come from the current raster position.
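The way a fragment is assembled from a pixel group plus the current raster position can be sketched in plain C. The Fragment struct and helper below are hypothetical illustrations, not part of OpenGL:

```c
/* Hypothetical fragment record: window position plus associated data. */
typedef struct {
    float x, y;          /* window coordinates             */
    float rgba[4];       /* color                          */
    float depth;         /* depth value                    */
    float texcoord[4];   /* s, t, r, q texture coordinates */
} Fragment;

/* Assemble a fragment for a pixel group that carries only color:
 * depth and texture coordinates are taken from the raster position. */
static Fragment assemble_color_fragment(const float pixel_rgba[4],
                                        const Fragment *raster_pos)
{
    Fragment f = *raster_pos;      /* start from raster-position data  */
    for (int i = 0; i < 4; i++)
        f.rgba[i] = pixel_rgba[i]; /* only color comes from the pixel  */
    return f;
}
```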

The pixel texture extension (see “SGIX_pixel_texture—The Pixel Texture Extension”) introduces the switch, highlighted in blue (lighter-colored balls), which provides a way to retrieve the fragment's texture coordinates from the pixel group. The pixel texture extension also allows you to specify whether the color should come from the pixel group or the current raster position.

Functions Affected by Imaging Extensions

Imaging extensions affect all functions that are associated with the pixel transfer modes (see Chapter 8, “Drawing Pixels, Bitmaps, Fonts, and Images,” of the OpenGL Programming Guide). In general, the following operations are affected:

  • All functions that draw and copy pixels or define texture images

  • All functions that read pixels or textures back to host memory

EXT_abgr—The ABGR Extension

The ABGR extension, EXT_abgr, extends the list of host-memory color formats by an alternative to the RGBA format that uses reverse component order. This is the most convenient way to use an ABGR source image with OpenGL.

To use this extension, call glDrawPixels(), glGetTexImage(), glReadPixels(), and glTexImage*() with GL_ABGR_EXT as the value of the format parameter.

The following code fragment illustrates the use of the extension:

 /*
  *  draw a 32x32 pixel image at location 10, 10 using an ABGR source
  *  image. "image" should point to a 32x32 ABGR UNSIGNED BYTE image
  */

    unsigned char *image;

    glRasterPos2f(10, 10);
    glDrawPixels(32, 32, GL_ABGR_EXT, GL_UNSIGNED_BYTE, image);
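On systems where the extension is not available, the same source data can be handled by reordering components in software and drawing with the standard GL_RGBA format. A minimal sketch (the helper name is hypothetical):

```c
#include <stddef.h>

/* Convert a 4-byte-per-pixel ABGR image to RGBA in place so it can be
 * drawn with GL_RGBA when EXT_abgr is not supported. */
static void abgr_to_rgba(unsigned char *pixels, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        unsigned char *p = pixels + 4 * i;
        unsigned char a = p[0], b = p[1], g = p[2], r = p[3];
        p[0] = r; p[1] = g; p[2] = b; p[3] = a;
    }
}
```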

EXT_convolution—The Convolution Extension

The convolution extension, EXT_convolution, allows you to filter images (for example, to sharpen or blur them) by convolving the pixel values in a one- or two-dimensional image with a convolution kernel.

The convolution kernels are themselves treated as one- and two-dimensional images. They can be loaded from application memory or from the framebuffer.

Convolution is performed only for RGBA pixel groups, although these groups may have been specified as color indexes and converted to RGBA by index table lookup.

Figure 10-6 shows the equations for general convolution at the top and for separable convolution at the bottom.

Figure 10-6. Convolution Equations

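The general (reduce-mode) convolution sum can be sketched in plain C for a single-channel image. This illustrates the arithmetic only, not the extension's API:

```c
/* Reduce-mode 2D convolution of a single-channel image: each output
 * pixel is the weighted sum of a kernel-sized window of the input.
 * The output shrinks to (w - kw + 1) x (h - kh + 1). */
static void convolve2d_reduce(const float *img, int w, int h,
                              const float *kern, int kw, int kh,
                              float *out)
{
    int ow = w - kw + 1, oh = h - kh + 1;
    for (int y = 0; y < oh; y++)
        for (int x = 0; x < ow; x++) {
            float sum = 0.0f;
            for (int j = 0; j < kh; j++)
                for (int i = 0; i < kw; i++)
                    sum += img[(y + j) * w + (x + i)] * kern[j * kw + i];
            out[y * ow + x] = sum;
        }
}
```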

Performing Convolution

Performing convolution consists of the following steps:

  1. If desired, specify filter scale, filter bias, and convolution parameters for the convolution kernel. For example:

       glConvolutionParameteriEXT(GL_CONVOLUTION_2D_EXT,
                                  GL_CONVOLUTION_BORDER_MODE_EXT,
                                  GL_REDUCE_EXT /* nothing else supported at present */);

  2. Define the image to be used for the convolution kernel.

    Use a 2D array for 2D convolution and a 1D array for 1D convolution. Separable 2D filters consist of two 1D images for the row and the column.

    To specify a convolution kernel, call glConvolutionFilter2DEXT(), glConvolutionFilter1DEXT(), or glSeparableFilter2DEXT().

    The following example defines a 7 x 7 convolution kernel that is in RGB format and is based on a 7 x 7 RGB pixel array previously defined as rgbBlurImage7x7:

    glConvolutionFilter2DEXT(
        GL_CONVOLUTION_2D_EXT,    /* has to be this value */
        GL_RGB,                   /* filter kernel internal format */
        7, 7,                     /* width & height of image pixel array */
        GL_RGB,                   /* image internal format */
        GL_FLOAT,                 /* type of image pixel data */
        (const void*)rgbBlurImage7x7      /* image itself */
    );

    For more information about the different parameters, see the reference page for the relevant function.

  3. Enable convolution, as shown in the following example:

       glEnable(GL_CONVOLUTION_2D_EXT);

  4. Perform pixel operations (for example, pixel drawing or texture image definition).

    Convolution happens as the pixel operations are executed.

Retrieving Convolution State Parameters

If necessary, you can use glGetConvolutionParameter*EXT() to retrieve the following convolution state parameters:

GL_CONVOLUTION_BORDER_MODE_EXT
Convolution border mode. For a list of border modes, see the man page for glConvolutionParameterEXT().

GL_CONVOLUTION_FORMAT_EXT
Current internal format. For lists of allowable formats, see the man pages for glConvolutionFilter*EXT() and glSeparableFilter2DEXT().

GL_CONVOLUTION_FILTER_SCALE_EXT, GL_CONVOLUTION_FILTER_BIAS_EXT
Current filter scale and filter bias factors. The value params must be a pointer to an array of four elements, which receive the red, green, blue, and alpha terms in that order.

GL_CONVOLUTION_WIDTH_EXT, GL_CONVOLUTION_HEIGHT_EXT
Current filter image width and height.

GL_MAX_CONVOLUTION_WIDTH_EXT, GL_MAX_CONVOLUTION_HEIGHT_EXT
Maximum acceptable filter image width and filter image height.

Separable and General Convolution Filters

A convolution that uses separable filters typically operates faster than one that uses general filters.

Special facilities are provided for the definition of two-dimensional separable filters. For separable filters, the image is represented as the product of two one-dimensional images, not as a full two-dimensional image.

To specify a two-dimensional separable filter, call glSeparableFilter2DEXT(), which has the following format:

void glSeparableFilter2DEXT( GLenum target, GLenum internalformat, GLsizei width,
                             GLsizei height, GLenum format, GLenum type,
                             const GLvoid *row, const GLvoid *column )

The parameters are defined as follows:

target
Must be GL_SEPARABLE_2D_EXT.

internalformat
Specifies the formats of the two one-dimensional images that are retained; it must be one of GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_INTENSITY, GL_RGB, or GL_RGBA.

width, height
Specify the sizes, in pixels, of the row and column images, respectively.

format, type
Specify the format and data type of the pixel data in the row and column images.

row
Points to the one-dimensional row image in memory; it is defined by format and type and is width pixels wide.

column
Points to the one-dimensional column image in memory; it is defined by format and type and is height pixels wide.

The two images are extracted from memory and processed just as if glConvolutionFilter1DEXT() were called separately for each, with the resulting retained images replacing the current 2D separable filter images, except that the scale and bias are applied to each image using the 2D separable scale and bias vectors.
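The equivalence between a separable filter and its full outer-product kernel can be sketched in plain C for one output pixel of a 3 x 3 window (the helper names are hypothetical):

```c
/* Filter one 3x3 window with the full outer-product kernel
 * kernel[j][i] = col[j] * row[i]. */
static float apply_full(const float win[3][3],
                        const float row[3], const float col[3])
{
    float sum = 0.0f;
    for (int j = 0; j < 3; j++)
        for (int i = 0; i < 3; i++)
            sum += win[j][i] * col[j] * row[i];
    return sum;
}

/* Same result computed separably: a row pass followed by a column pass. */
static float apply_separable(const float win[3][3],
                             const float row[3], const float col[3])
{
    float tmp[3], sum = 0.0f;
    for (int j = 0; j < 3; j++) {              /* row pass */
        tmp[j] = 0.0f;
        for (int i = 0; i < 3; i++)
            tmp[j] += win[j][i] * row[i];
    }
    for (int j = 0; j < 3; j++)                /* column pass */
        sum += tmp[j] * col[j];
    return sum;
}
```

The separable version is the source of the speed advantage: it performs two small 1D sums instead of one full 2D sum per pixel.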

If you are using convolution on a texture image, keep in mind that the result of the convolution must still have power-of-2 dimensions. If you use the reduce-border convolution mode, the image shrinks by the filter width minus 1, so you may have to account for that ahead of time.

New Functions

The EXT_convolution extension introduces the following functions:

  • glConvolutionFilter1DEXT()

  • glConvolutionFilter2DEXT()

  • glCopyConvolutionFilter1DEXT()

  • glCopyConvolutionFilter2DEXT()

  • glGetConvolutionFilterEXT()

  • glSeparableFilter2DEXT()

  • glGetSeparableFilterEXT()

  • glConvolutionParameterEXT()

EXT_histogram—The Histogram and Minmax Extensions

The histogram extension, EXT_histogram, defines operations that count occurrences of specific color component values and that track the minimum and maximum color component values in images that pass through the image pipeline. You can use the results of these operations to create a more balanced, better-quality image.

Figure 10-7 illustrates how the histogram extension collects information for one of the color components. The histogram has the number of bins specified at creation, and information is then collected about the number of times the color component falls within each bin. Assuming that the example below is for the red component of an image, you can see that R values between 95 and 127 occurred least often and those between 127 and 159 most often.

Figure 10-7. How the Histogram Extension Collects Information


Histogram and minmax operations are performed only for RGBA pixel groups, though these groups may have been specified as color indexes and converted to RGBA by color index table lookup.

Using the Histogram Extension

To collect histogram information, follow these steps:

  1. Call glHistogramEXT() to define the histogram, as shown in the following example:

       glHistogramEXT(GL_HISTOGRAM_EXT   /* target */,
                      256                /* width (number of bins) */,
                      GL_LUMINANCE       /* internalformat */,
                      GL_TRUE            /* sink */);

    The parameters are defined as follows:

    target
    Must be GL_HISTOGRAM_EXT (or GL_PROXY_HISTOGRAM_EXT; see “Using Proxy Histograms”).

    width
    Specifies the number of histogram entries. Must be a power of 2.

    internalformat
    Specifies the format of each table entry.

    sink
    Specifies whether pixel groups are consumed by the histogram operation (GL_TRUE) or passed further down the image pipeline (GL_FALSE).

  2. Enable histogramming by calling the following function:

       glEnable(GL_HISTOGRAM_EXT);

  3. Perform the pixel operations for which you want to collect information (drawing, reading, copying pixels, or loading a texture). Only one operation is sufficient.

    For each component represented in the histogram internal format, let the corresponding component of the incoming pixel (luminance corresponds to red) have value c after clamping to [0, 1]. The corresponding component of bin number round((width-1)*c) is incremented by 1.

  4. Call glGetHistogramEXT(), whose format follows, to query the current contents of the histogram:

    void glGetHistogramEXT( GLenum target, GLboolean reset, GLenum format,
                           GLenum type, GLvoid *values )

    The parameters are defined as follows:

    target
    Must be GL_HISTOGRAM_EXT.

    reset
    Must be GL_TRUE or GL_FALSE. If GL_TRUE, each component counter that is actually returned is reset to zero. Counters that are not returned are not modified; for example, GL_GREEN or GL_BLUE counters may not be returned if format is GL_RED and internal format is GL_RGB.

    format
    Specifies the pixel format of the returned histogram data.

    type
    Specifies the data type of the returned histogram data.

    values
    Used to return a 1D image with the same width as the histogram. No pixel transfer operations are performed on this image, but pixel storage modes that apply for glReadPixels() are performed. Color components that are requested in the specified format—but are not included in the internal format of the histogram—are returned as zero. The assignments of internal color components to the components requested by format are as follows:

    Internal Component        Resulting Component
    red                       red
    green                     green
    blue                      blue
    alpha                     alpha
    luminance                 red
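The binning rule described in step 3 can be sketched in plain C. This illustrates the arithmetic only; the counting itself is done by the pipeline:

```c
/* Bin selection used by the histogram operation: clamp the component
 * to [0, 1], then select bin round((width - 1) * c). */
static int histogram_bin(float c, int width)
{
    if (c < 0.0f) c = 0.0f;
    if (c > 1.0f) c = 1.0f;
    return (int)((width - 1) * c + 0.5f);   /* round to nearest */
}
```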

Using the Minmax Part of the Histogram Extension

The minmax part of the histogram extension lets you find out the minimum and maximum color component values present in an image. Using the minmax part of the histogram extension is similar to using the histogram part.

To determine minimum and maximum color values used in an image, follow these steps:

  1. Specify a minmax table by calling glMinmaxEXT(), whose format follows:

    void glMinmaxEXT( GLenum target, GLenum internalformat, GLboolean sink)

    The parameters are defined as follows:

    target
    Specifies the table in which the information about the image is to be stored. The value for target must be GL_MINMAX_EXT.

    internalformat
    Specifies the format of the table entries. It must be an allowed internal format. See the man page for glMinmaxEXT().

    sink
    Determines whether processing continues. GL_TRUE and GL_FALSE are the valid values. If set to GL_TRUE, no further processing happens and pixels or texels are discarded.

    The resulting minmax table always has two entries. Entry 0 is the minimum and entry 1 is the maximum.

  2. Enable minmax by calling the following function:

       glEnable(GL_MINMAX_EXT);

  3. Perform the pixel operation—for example, glCopyPixels().

    Each component of the internal format of the minmax table is compared to the corresponding component of the incoming RGBA pixel (luminance components are compared to red).

    • If a component is greater than the corresponding component in the maximum element, then the maximum element is updated with the pixel component value.

    • If a component is smaller than the corresponding component in the minimum element, then the minimum element is updated with the pixel component value.

  4. Query the current contents of the minmax table by calling glGetMinmaxEXT(), whose format follows:

    void glGetMinmaxEXT( GLenum target, GLboolean reset, GLenum format,
                         GLenum type, GLvoid *values )

You can also call glGetMinmaxParameterEXT() to retrieve minmax state information, setting target to GL_MINMAX_EXT and pname to one of the following values:

GL_MINMAX_FORMAT_EXT
Internal format of the minmax table

GL_MINMAX_SINK_EXT
Value of the sink parameter
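The per-component comparison described in step 3 can be sketched in plain C (an illustration only; the actual updates are performed by the pipeline):

```c
/* Per-component minmax update over a stream of RGBA pixels,
 * assuming components are clamped to [0, 1]. */
static void minmax_update(const float *pixels, int npixels,
                          float minv[4], float maxv[4])
{
    for (int c = 0; c < 4; c++) {
        minv[c] = 1.0f;   /* start at the extremes of [0, 1] */
        maxv[c] = 0.0f;
    }
    for (int i = 0; i < npixels; i++)
        for (int c = 0; c < 4; c++) {
            float v = pixels[4 * i + c];
            if (v < minv[c]) minv[c] = v;   /* new minimum */
            if (v > maxv[c]) maxv[c] = v;   /* new maximum */
        }
}
```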

Using Proxy Histograms

Histograms can get quite large and require more memory than is available to the graphics subsystem. You can call glHistogramEXT() with target set to GL_PROXY_HISTOGRAM_EXT to find out whether a histogram fits into memory. The process is similar to the one explained in the section “Texture Proxy” on page 330 of the OpenGL Programming Guide, Second Edition.

To query histogram state values, call glGetHistogramParameter*EXT(). Histogram calls with the proxy target (like texture and color table calls with the proxy target) have no effect on the histogram itself.

New Functions

The EXT_histogram extension introduces the following functions:

  • glGetHistogramEXT()

  • glGetHistogramParameterEXT()

  • glGetMinmaxEXT()

  • glGetMinmaxParameterEXT()

  • glHistogramEXT()

  • glMinmaxEXT()

  • glResetHistogramEXT()

  • glResetMinmaxEXT()

EXT_packed_pixels—The Packed Pixels Extension

The packed pixels extension, EXT_packed_pixels, provides support for packed pixels in host memory. A packed pixel is represented entirely by one unsigned byte, unsigned short, or unsigned integer. The fields within the packed pixel are not proper machine types, but the pixel as a whole is. Thus, the pixel storage modes, such as GL_PACK_SKIP_PIXELS, GL_PACK_ROW_LENGTH, and so on, and their unpacking counterparts all work correctly with packed pixels.

Why Use the Packed Pixels Extension?

The packed pixels extension lets you store images more efficiently by providing additional pixel types you can use when reading and drawing pixels or loading textures. Packed pixels have two potential benefits:

  • Save bandwidth.

    Packed pixels may require less bandwidth than unpacked pixels when transferred to and from the graphics hardware because the packed pixel types use fewer bytes per pixel.

  • Save processing time.

    If the packed pixel type matches the destination (texture or framebuffer) type, packed pixels save processing time.

In addition, some of the types defined by this extension match the internal texture formats; so, less processing is required to transfer texture images to texture memory.

Using Packed Pixels

To use packed pixels, provide one of the types listed in Table 10-1 as the type parameter to glDrawPixels(), glReadPixels(), and so on.

Table 10-1. Types That Use Packed Pixels

Parameter Token Value                  GL Data Type

GL_UNSIGNED_BYTE_3_3_2_EXT             GLubyte
GL_UNSIGNED_SHORT_4_4_4_4_EXT          GLushort
GL_UNSIGNED_SHORT_5_5_5_1_EXT          GLushort
GL_UNSIGNED_INT_8_8_8_8_EXT            GLuint
GL_UNSIGNED_INT_10_10_10_2_EXT         GLuint


The already available types for glReadPixels(), glDrawPixels(), and so on are listed in Table 8-2 “Data Types for glReadPixels or glDrawPixels,” on page 293 of the OpenGL Programming Guide.

Pixel Type Descriptions

Each packed pixel type includes a base type (for example, GL_UNSIGNED_BYTE) and a field width (for example, 3_3_2):

  • The base type (GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, or GL_UNSIGNED_INT) determines the type of “container” into which each pixel's color components are packed.

  • The field widths (3_3_2, 4_4_4_4, 5_5_5_1, 8_8_8_8, or 10_10_10_2) determine the sizes (in bits) of the fields that contain a pixel's color components. The field widths are matched to the components in the pixel format in left-to-right order.

    For example, if a pixel has the type GL_UNSIGNED_BYTE_3_3_2_EXT and the format GL_RGB, the pixel is contained in an unsigned byte, the red component occupies three bits, the green component occupies three bits, and the blue component occupies two bits.

    The fields are packed tightly into their container with the leftmost field occupying the most-significant bits and the rightmost field occupying the least-significant bits.

Because of this ordering scheme, integer constants (particularly hexadecimal constants) can be used to specify pixel values in a readable and system-independent way. For example, a packed pixel with type GL_UNSIGNED_SHORT_4_4_4_4_EXT, format GL_RGBA, and color components red == 1, green == 2, blue == 3, alpha == 4 has the value 0x1234.
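This packing can be sketched in C for the 4_4_4_4 case (the helper name is hypothetical):

```c
/* Pack four 4-bit RGBA components into a GL_UNSIGNED_SHORT_4_4_4_4_EXT
 * pixel: the leftmost field (red) lands in the most-significant bits. */
static unsigned short pack4444(unsigned r, unsigned g, unsigned b, unsigned a)
{
    return (unsigned short)((r << 12) | (g << 8) | (b << 4) | a);
}
```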

The ordering scheme also allows packed pixel values to be computed with system-independent code. For example, if there are four variables (red, green, blue, alpha) containing the pixel's color component values, a packed pixel of type GL_UNSIGNED_INT_10_10_10_2_EXT and format GL_RGBA can be computed with the following C code:

GLuint pixel, red, green, blue, alpha; 
pixel = (red << 22) | (green << 12) | (blue << 2) | alpha;

While the source code that manipulates packed pixels is identical on both big-endian and little-endian systems, you still need to enable byte swapping when drawing packed pixels that have been written in binary form by a system with different endianness.

SGI_color_matrix—The Color Matrix Extension

The color matrix extension, SGI_color_matrix, lets you transform the colors in the imaging pipeline with a 4 x 4 matrix. You can use the color matrix to reassign and duplicate color components and to implement simple color-space conversions.

This extension adds a 4 x 4 matrix stack to the pixel transfer path. The matrix operates only on RGBA pixel groups; the extension multiplies the 4 x 4 color matrix on top of the stack with the components of each pixel. The stack is manipulated using the OpenGL matrix manipulation functions: glPushMatrix(), glPopMatrix(), glLoadIdentity(), glLoadMatrix(), and so on. All standard transformations (for example, glRotate() and glTranslate()) also apply to the color matrix.

The color matrix is always applied to all pixel transfers. To disable it, load the identity matrix.

The following is an example of a color matrix that swaps BGR pixels to form RGB pixels:

GLfloat colorMat[16] = {0.0, 0.0, 1.0, 0.0,
                        0.0, 1.0, 0.0, 0.0,
                        1.0, 0.0, 0.0, 0.0,
                        0.0, 0.0, 0.0, 1.0 };

After the matrix multiplication, each resulting color component is scaled and biased by the appropriate user-defined scale and bias values. Color matrix multiplication follows convolution; convolution follows scale and bias.
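The per-pixel effect of the color matrix can be sketched in plain C. The matrix is stored column-major, the same layout glLoadMatrixf() expects; the helper name is hypothetical:

```c
/* Apply a 4x4 color matrix (column-major) to one RGBA pixel:
 * out = M * in, treating the pixel as a column vector. */
static void color_matrix_apply(const float m[16], const float in[4],
                               float out[4])
{
    for (int row = 0; row < 4; row++)
        out[row] = m[row]     * in[0] + m[row + 4]  * in[1] +
                   m[row + 8] * in[2] + m[row + 12] * in[3];
}
```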

To set scale and bias values to be applied after the color matrix, call glPixelTransfer*() with the following values for pname:

GL_POST_COLOR_MATRIX_RED_SCALE_SGI, GL_POST_COLOR_MATRIX_GREEN_SCALE_SGI, GL_POST_COLOR_MATRIX_BLUE_SCALE_SGI, GL_POST_COLOR_MATRIX_ALPHA_SCALE_SGI

GL_POST_COLOR_MATRIX_RED_BIAS_SGI, GL_POST_COLOR_MATRIX_GREEN_BIAS_SGI, GL_POST_COLOR_MATRIX_BLUE_BIAS_SGI, GL_POST_COLOR_MATRIX_ALPHA_BIAS_SGI

SGI_color_table—The Color Table Extension

The color table extension, SGI_color_table, defines a new RGBA-format color lookup mechanism. It does not replace the color lookup tables provided by the color maps described in the OpenGL Programming Guide but provides the following additional lookup capabilities:

  • Unlike pixel maps, the color table extension's download operations go through the glPixelStore() unpack operations in the same way glDrawPixels() does.

  • When a color table is applied to pixels, OpenGL maps the pixel format to the color table format.

If the copy texture extension is implemented, this extension also defines methods to initialize the color lookup tables from the framebuffer.

Why Use the Color Table Extension?

The color tables provided by the color table extension allow you to adjust image contrast and brightness after each stage of the pixel processing pipeline.

Because you can use several color lookup tables at different stages of the pipeline (see Figure 10-3), you have greater control over the changes you want to make. In addition, the extension's color lookup tables are more efficient than OpenGL's pixel maps because you may apply them to a subset of components (for example, alpha only).

Specifying a Color Table

To specify a color lookup table, call glColorTableSGI(), whose format follows:

void glColorTableSGI( GLenum target, GLenum internalformat, GLsizei width,
                      GLenum format, GLenum type, const GLvoid *table )

The parameters are defined as follows:

target
Specifies the color table to define; it must be GL_COLOR_TABLE_SGI, GL_POST_CONVOLUTION_COLOR_TABLE_SGI, GL_POST_COLOR_MATRIX_COLOR_TABLE_SGI, or one of the corresponding proxy targets.

internalformat
Specifies the internal format of the color table.

width
Specifies the number of entries in the color lookup table. It must be zero or a power of two.

format
Specifies the format of the pixel data in the table.

type
Specifies the type of the pixel data in the table.

table
Specifies a pointer to a 1D array of pixel data that is processed to build the table.

If no error results from the execution of glColorTableSGI(), the following events occur:

  1. The specified color lookup table is defined to have width entries, each with the specified internal format. The entries are indexed as zero through N–1, where N is the width of the table. The values in the previous color lookup table, if any, are lost. The new values are specified by the contents of the 1D image to which table points with format as the memory format and type as the data type.

  2. The specified image is extracted from memory and processed as if glDrawPixels() were called, stopping just before the application of pixel transfer modes (see the illustration “Drawing Pixels with glDrawPixels*()” on page 310 of the OpenGL Programming Guide).

  3. The R, G, B, and A components of each pixel are scaled by the four GL_COLOR_TABLE_SCALE_SGI parameters, then biased by the four GL_COLOR_TABLE_BIAS_SGI parameters and clamped to [0,1].

    The scale and bias parameters are themselves specified by calling glColorTableParameterivSGI() or glColorTableParameterfvSGI() with the following parameters:

    target
    Specifies the table whose scale or bias is being set.

    pname
    GL_COLOR_TABLE_SCALE_SGI or GL_COLOR_TABLE_BIAS_SGI.

    params
    Points to a vector of four values: red, green, blue, and alpha, in that order.

  4. Each pixel is then converted to have the specified internal format. This conversion maps the component values of the pixel (R, G, B, and A) to the values included in the internal format (red, green, blue, alpha, luminance, and intensity).

The new lookup tables are treated as 1D images with internal formats like texture images and convolution filter images. As a result, the new tables can operate on a subset of the components of passing pixel groups. For example, a table with internal format GL_ALPHA modifies only the A component of each pixel group and leaves the R, G, and B components unmodified.
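The effect of an alpha-only table can be sketched in plain C with a nearest-entry lookup. This is an illustration of the subset-of-components behavior, not the extension's exact filtering rules:

```c
/* Apply a GL_ALPHA-format lookup table to RGBA pixels: only the A
 * component is replaced; R, G, and B pass through unchanged. */
static void apply_alpha_table(float *pixels, int npixels,
                              const float *table, int width)
{
    for (int i = 0; i < npixels; i++) {
        float a = pixels[4 * i + 3];
        if (a < 0.0f) a = 0.0f;                    /* clamp to [0, 1]  */
        if (a > 1.0f) a = 1.0f;
        int idx = (int)((width - 1) * a + 0.5f);   /* nearest entry    */
        pixels[4 * i + 3] = table[idx];
    }
}
```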

Using Framebuffer Image Data for Color Tables

If the copy texture extension is supported, you can define a color table using image data in the framebuffer. Call glCopyColorTableSGI(), which accepts image data from a color buffer region (width-pixel wide by one-pixel high) whose left pixel has window coordinates (x,y). If any pixels within this region are outside the window that is associated with the OpenGL context, the values obtained for those pixels are undefined.

The pixel values are processed exactly as if glCopyPixels() had been called until just before the application of pixel transfer modes. See the illustration “Drawing Pixels with glDrawPixels*()” on page 310 of the OpenGL Programming Guide.

At this point, all pixel component values are treated exactly as if glColorTableSGI() had been called, beginning with the scaling of the color components by GL_COLOR_TABLE_SCALE_SGI. The semantics and accepted values of the target and internalformat parameters are exactly equivalent to their glColorTableSGI() counterparts.

Lookup Tables in the Image Pipeline

The following lookup tables exist at different points in the image pipeline (see Figure 10-3):

GL_COLOR_TABLE_SGI
Located immediately after index lookup or RGBA-to-RGBA mapping and immediately before the convolution operation.

GL_POST_CONVOLUTION_COLOR_TABLE_SGI
Located immediately after the convolution operation (including its scale and bias operations) and immediately before the color matrix operation.

GL_POST_COLOR_MATRIX_COLOR_TABLE_SGI
Located immediately after the color matrix operation (including its scale and bias operations) and immediately before the histogram operation.

To enable and disable color tables, call glEnable() and glDisable() with the color table name passed as the cap parameter. Color table lookup is performed only for RGBA groups, though these groups may have been specified as color indexes and converted to RGBA by an index-to-RGBA pixel map table.

When enabled, a color lookup table is applied to all RGBA pixel groups, regardless of its associated function.

New Functions

The SGI_color_table extension introduces the following functions:

  • glColorTableSGI()

  • glColorTableParameterivSGI()

  • glColorTableParameterfvSGI()

  • glCopyColorTableSGI()

  • glGetColorTableSGI()

  • glGetColorTableParameterivSGI()

  • glGetColorTableParameterfvSGI()

SGIX_interlace—The Interlace Extension

The interlace extension, SGIX_interlace, provides a way to interlace rows of pixels when rasterizing pixel rectangles or loading texture images. Figure 10-4 illustrates how the extension fits into the imaging pipeline.

In this context, interlacing means skipping over rows of pixels or texels in the destination. This is useful for dealing with interlaced video data, since single frames of video are typically composed of two fields: one field specifies the data for even rows of the frame, and the other specifies the data for odd rows, as shown in the following illustration:

Figure 10-8. Interlaced Video (NTSC, Component 525)


When interlacing is enabled, all the groups that belong to a row m are treated as if they belonged to row 2×m. If the source image has a height of h rows, this effectively expands the height of the image to 2×h rows.

Applications that use the extension usually first copy the first set of rows and then the second set of rows, as explained in the following sections.

In cases where errors can result from the specification of invalid image dimensions, the resulting dimensions—not the dimensions of the source image—are tested. For example, when you use glTexImage2D() with GL_INTERLACE_SGIX enabled, the source image you provide must be of height (texture_height + texture_border)/2.
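The row mapping can be sketched in plain C: two fields written with interlacing enabled reassemble a full frame, with field row m landing on destination row 2×m (the helper name is hypothetical):

```c
/* De-interlace two fields into one frame: row m of the even field goes
 * to frame row 2*m, row m of the odd field to frame row 2*m + 1. */
static void deinterlace(const int *even_field, const int *odd_field,
                        int width, int field_rows, int *frame)
{
    for (int m = 0; m < field_rows; m++)
        for (int x = 0; x < width; x++) {
            frame[(2 * m) * width + x]     = even_field[m * width + x];
            frame[(2 * m + 1) * width + x] = odd_field[m * width + x];
        }
}
```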

Using the Interlace Extension

One application of the interlace extension is to use it together with the copy texture extension. You can use glCopyTexSubImage2D() to copy the contents of the video field to texture memory and end up with de-interlaced video. You can interlace pixels from two images as follows:

  1. Enable interlacing by calling glEnable() with cap set to GL_INTERLACE_SGIX. (Call glDisable() with the same cap to turn interlacing off.)

  2. Set the current raster position to (xr, yr) and draw the first field, as follows:

    /* set current raster position to (xr, yr) */
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, I0);

  3. Copy pixels into texture memory (usually field 1 is first), as follows:

    glCopyTexSubImage2D(GL_TEXTURE_2D, level, xoffset, yoffset, x, y,
                        width, height);

  4. Set the raster position to (xr, yr+zoomy) and draw the second field, as follows:

    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, I1);

  5. Copy the pixels from the second field (usually F1 is next). For this call, set the following:

    yoffset += zoomy
    y += height    /* to get to the next field */

This process is equivalent to forming an image I2 that takes its pixel rows (0,2,4,...) from image I0 and its rows (1,3,5,...) from image I1, and then drawing I2 as follows:

glDisable(GL_INTERLACE_SGIX);
/* set current raster position to (xr,yr) */
glDrawPixels(width, 2*height, GL_RGBA, GL_UNSIGNED_BYTE, I2);

SGIX_pixel_texture—The Pixel Texture Extension

The pixel texture extension, SGIX_pixel_texture, allows applications to use the color components of a pixel group as texture coordinates, effectively converting a color image into a texture coordinate image. Applications can use the system's texture-mapping capability as a multidimensional lookup table for images. Using larger textures will give you higher resolution, and the system will interpolate whenever the precision of the color values (texture coordinates) exceeds the size of the texture.

In effect, the extension supports multidimensional color lookups that can be used to implement accurate and fast color-space conversions for images. Figure 10-4 illustrates how the extension fits into the imaging pipeline.

Note: This extension is experimental and will change.

Texture mapping is usually used to map images onto geometry; each pixel fragment generated by the rasterization of a triangle or line primitive derives its texture coordinates by interpolating the coordinates at the primitive's vertices. Thus, you have little direct control over the texture coordinates that go into a pixel fragment.

By contrast, the pixel texture extension gives applications direct control of texture coordinates on a per-pixel basis, instead of per-vertex as in regular texturing. If the extension is enabled, glDrawPixels() and glCopyPixels() work differently. For each pixel in the transfer, the color components are copied into the texture coordinates, as follows:

  • Red becomes the s coordinate.

  • Green becomes the t coordinate.

  • Blue becomes the r coordinate.

  • Alpha becomes the q coordinate (fourth dimension).

To use the pixel texture extension, an application has to go through these steps:

  1. Define and enable the texture you want to use as the lookup table, as follows:

    glTexImage3DEXT(GL_TEXTURE_3D_EXT, args);

    This texture does not have to be a 3D texture.

  2. Enable pixel texture and begin processing images, as follows:

    glEnable(GL_PIXEL_TEX_GEN_SGIX);

Each subsequent call to glDrawPixels() uses the predefined texture as a lookup table and uses those colors when rendering to the screen. Figure 10-5 illustrates how colors are introduced by the extension.

As in regular texture mapping, the texel found by mapping the texture coordinates and filtering the texture is blended with a pixel fragment; the type of blend is controlled with the glTexEnv*() function. In the case of pixel texture, the fragment color is derived from the pixel group; thus, using the GL_MODULATE blend mode, you could blend the texture lookup values (colors) with the original image colors. Alternatively, you could blend the texture values with a constant color set with the glColor*() functions. To control where the fragment color comes from, use the following function:

glPixelTexGenSGIX(GLenum mode);

The valid values of mode, listed below, determine whether the fragment color components are derived from the pixel group or from the current raster color, which is the color associated with the current raster position:

  • If mode is GL_RGB, the fragment red, green, and blue are derived from the current raster color, set by the glColor*() functions; fragment alpha is derived from the pixel group.

  • If mode is GL_RGBA, the fragment red, green, blue, and alpha are derived from the current raster color.

  • If mode is GL_ALPHA, the fragment alpha is derived from the current raster color; red, green, and blue are derived from the pixel group.

  • If mode is GL_NONE, the fragment red, green, blue, and alpha are derived from the pixel group.

Note: See the following section “Platform Issues” for currently supported modes.

When using pixel texture, the format and type of the image do not have to match the internal format of the texture. This is a powerful feature; it means, for example, that an RGB image can look up a luminance result. Another interesting use is to have an RGB image look up an RGBA result, in effect, adding alpha to the image in a complex way.

Platform Issues

Pixel texture is supported only on Fuel and InfinitePerformance systems. For further restrictions on the implementation, see your platform release notes and the man page for glPixelTexGenSGIX(). For new applications targeting Onyx4 and Silicon Graphics Prism systems, you can achieve similar functionality by writing fragment programs using the fragment color components as texture coordinates.

When you use 4D textures with an RGBA image, the alpha value is used to derive q, the fourth texture coordinate. Currently, q interpolation is limited to the default GL_NEAREST mode, regardless of the minification and magnification filter settings.

Note: When working with mipmapped textures, the effective LOD value computed for each fragment is 0. The texture LOD and texture LOD bias extensions apply to pixel textures as well.

New Functions

The SGIX_pixel_texture extension introduces the function glPixelTexGenSGIX().