The processing of a video input/output path is described by two sets of parameters:
Video parameters describe how to interpret the signal as it arrives and how to generate it as it leaves, as discussed in this chapter
Image parameters describe how to write/read the resulting bits to/from the device (see Chapter 7, “Image Buffer Parameters”)
Not all parameters may be supported on a particular video jack or path. Some parameters may be adjusted on both a path and a jack, or may be adjusted on just one or the other. Use mlGetCapabilities to obtain a list of parameters supported by a jack or path. In addition, not all values may be supported on a particular parameter. Use mlPvGetCapabilities to obtain a list of the values supported by the parameter.
This chapter contains the following sections:
Note: This chapter assumes a working knowledge of digital video concepts. Readers unfamiliar with terms such as video timing, 422, or CbYCr should consult a text devoted to this subject. A good resource is A Technical Introduction to Digital Video by Charles Poynton, published by John Wiley & Sons, 1996 (ISBN 0-471-12253-X, hardcover).
There are two kinds of video sampling, spatial and temporal. Our concern here is with temporal sampling, of which there are two techniques:
Progressive sampling is frame-based (for example, from film)
Interlaced sampling is field-based
In progressive, frame-based sampling, a picture at a specified resolution is sampled at a constant rate. Film is a progressive sampling source for video.
Imagine an automatic film advance camera that can take 60 pictures per second, with which you take a series of pictures of a moving ball. Figure 6-1 shows 10 pictures from that sequence (different colors emphasize the different positions of the ball in time). The time delay between each picture is a 60th of a second, so this sequence lasts 1/6th of a second.
In interlaced sampling, the picture is sampled as two alternating fields, F1 and F2, each containing half of the display lines of the picture.
Pairs of sample fields are superimposed on each other (interlaced) to create the video frame. In the video frame, the two fields appear coincident to the eye even though they are consecutive in time. This effect is aided by the persistence of the phosphors on the display screen, which hold the impression of the first set of scanned lines while the second set displays. (For example, this sequence becomes visible if you videotape a computer monitor display.)
Most video signals in use today, including several high-definition video formats, are field-based (interlaced) rather than frame-based (progressive). In ML, the value of the video timing parameter ML_VIDEO_TIMING_INT32 defines the specific video standard, and each standard is defined as progressive or interlaced.
For example, suppose you shoot the moving ball with an NTSC video camera. NTSC video has 60 fields per second, so you might expect the video camera to record the same series of pictures as shown in Figure 6-1, but it does not. The video camera does record 60 images per second, but rather than a filmstrip of 10 complete pictures, each image consists of only half of the scanned lines of the complete picture at a given time, as shown in Figure 6-2.
Note how the image lines alternate between odd- and even-numbered images.
This section describes the video parameters.
Sets the colorspace at the video jack. For input paths, this is the colorspace you expect to receive at the jack. For output paths, it is the colorspace you desire at the jack.
The following colorspace values are supported:
ML_COLORSPACE_RGB_601_FULL |
ML_COLORSPACE_RGB_601_HEAD |
ML_COLORSPACE_CbYCr_601_FULL |
ML_COLORSPACE_CbYCr_601_HEAD |
ML_COLORSPACE_RGB_240M_FULL |
ML_COLORSPACE_RGB_240M_HEAD |
ML_COLORSPACE_CbYCr_240M_FULL |
ML_COLORSPACE_CbYCr_240M_HEAD |
ML_COLORSPACE_RGB_709_FULL |
ML_COLORSPACE_RGB_709_HEAD |
ML_COLORSPACE_CbYCr_709_FULL |
ML_COLORSPACE_CbYCr_709_HEAD |
See “ML_IMAGE_COLORSPACE_INT32” in Chapter 7 for a detailed description of colorspace values.
Describes the alpha value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum (fully transparent), 1.0 is the maximum (fully opaque). Default is 1.0.
Describes the blue value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum legal value, 1.0 is the maximum legal value. Default is 0.
Describes the Cb value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum legal value, 1.0 is the maximum legal value. Default is 0.
Describes the Cr value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum legal value, 1.0 is the maximum legal value. Default is 0.
Describes the green value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum legal value, 1.0 is the maximum legal value. Default is 0.
Describes the red value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum legal value (black), 1.0 is the maximum legal value. Default is 0.
Describes the luminance value for any pixel outside the clipping region. This is a real number: a value of 0.0 is the minimum legal value (black), 1.0 is the maximum legal value. Default is 0.
Queries the incoming genlock signal for an output path. Not all devices may be able to sense genlock timing, but those that do will support this parameter. Common values match those for ML_VIDEO_TIMING listed in “Video Parameter Descriptions”, plus the following:
ML_TIMING_NONE | No signal is present |
ML_TIMING_UNKNOWN | The timing of the genlock signal cannot be determined |
Describes the genlock source timing. Accepted only on output paths. The genlock source timing is specified as an output timing on the path, using the same timing values as ML_VIDEO_TIMING_INT32.
Describes the genlock signal type. Only accepted on output paths. Each genlock type is specified as either a 32-bit resource ID or ML_VIDEO_GENLOCK_TYPE_INTERNAL.
Sets the vertical height for each F1 field of the video signal. For progressive signals, it specifies the height of every frame.
Sets the vertical height for each F2 field of the video signal. For progressive signals, it always has value 0.
Sets the default signal at the video jack when there is no active output. The following values are supported:
Determines the device behavior if the application is doing output and fails to provide buffers fast enough (that is, the queue to the device underflows). Allowable options are:
ML_VIDEO_REPEAT_FIELD | The device repeats the last field. For progressive signals or interleaved formats, this is the same as ML_VIDEO_REPEAT_FRAME. |
ML_VIDEO_REPEAT_FRAME | The device repeats the last two fields. This output capability is device dependent; query the allowable settings via the capabilities of the ML_VIDEO_OUTPUT_REPEAT_INT32 parameter. |
ML_VIDEO_REPEAT_NONE | The device does nothing, usually resulting in black output. |
Sets the precision (number of bits of resolution) in the signal at the jack. This is an integer. Example values are as follows:
8 | 8-bit signal |
10 | 10-bit signal |
Sets the sampling at the video jack. (See “ML_IMAGE_SAMPLING_INT32” in Chapter 7 for a detailed description of sampling values.)
The following values are supported:
ML_SAMPLING_422 |
ML_SAMPLING_4224 |
ML_SAMPLING_444 |
ML_SAMPLING_4444 |
Used to query the incoming signal on an input path. Not all devices may be able to sense timing, but those that do will support this parameter. Common values match those for ML_VIDEO_TIMING listed in “Video Parameter Descriptions”, plus the following:
ML_TIMING_NONE | No signal is present |
ML_TIMING_UNKNOWN | The timing of the input signal cannot be determined |
Sets the start vertical location on F1 fields of the video signal. For progressive signals, it specifies the start of every frame.
Sets the start vertical location on F2 fields of the video signal. Ignored for progressive timing signals.
Sets the timing on an input or output video path. Not all timings may be supported on all devices. On devices that can auto-detect, the timing may be read-only on input. (Details of supported timings may be obtained by calling mlPvGetCapabilities on this parameter.) Figure B-1 and Figure B-2 illustrate details of the 601 standard.
The format is as follows:
ML_TIMING_xxxx_yyyyxzzzz_nnn[i|p|PsF] |
where:
xxxx | Total number of lines. |
yyyyxzzzz | Width by height of the active video region (high-definition timings only). |
nnn | The frame rate (5994, 2997, and 2398 denote 59.94, 29.97, and 23.98, respectively). |
i | Interlaced. |
p | Progressive. |
PsF | Progressive segmented frame. |
The following SD timings are supported:
ML_TIMING_525 (NTSC) |
ML_TIMING_525_SQ_PIX |
ML_TIMING_625 (PAL) |
ML_TIMING_625_SQ_PIX |
The following HD timings are supported:
ML_TIMING_1125_1920x1080_60p |
ML_TIMING_1125_1920x1080_5994p |
ML_TIMING_1125_1920x1080_50p |
ML_TIMING_1125_1920x1080_60i |
ML_TIMING_1125_1920x1080_5994i |
ML_TIMING_1125_1920x1080_50i |
ML_TIMING_1125_1920x1080_30p |
ML_TIMING_1125_1920x1080_2997p |
ML_TIMING_1125_1920x1080_25p |
ML_TIMING_1125_1920x1080_24p |
ML_TIMING_1125_1920x1080_2398p |
ML_TIMING_1125_1920x1080_24PsF |
ML_TIMING_1125_1920x1080_2398PsF |
ML_TIMING_1125_1920x1080_30PsF |
ML_TIMING_1125_1920x1080_2997PsF |
ML_TIMING_1125_1920x1080_25PsF |
ML_TIMING_1250_1920x1080_50p |
ML_TIMING_1250_1920x1080_50i |
ML_TIMING_1125_1920x1035_60i |
ML_TIMING_1125_1920x1035_5994i |
ML_TIMING_750_1280x720_60p |
ML_TIMING_750_1280x720_5994p |
Following is an example that sets the video timing and colorspace for an HDTV signal:
MLpv message[3];
message[0].param = ML_VIDEO_TIMING_INT32;
message[0].value.int32 = ML_TIMING_1125_1920x1080_5994p;
message[1].param = ML_VIDEO_COLORSPACE_INT32;
message[1].value.int32 = ML_COLORSPACE_CbYCr_709_HEAD;
message[2].param = ML_END;
mlSetControls(device, message);