Chapter 4. Realistic Audio Output Program

The procedure in Chapter 3, “Simple Audio Output Program,” handled only a single audio buffer. The example in this chapter processes millions of audio samples, using the following procedure:

Step 1: Include the ml.h and mlu.h Files

See “Step 1: Include the ml.h and mlu.h Files” in Chapter 3.

Step 2: Locate a Device

See “Step 2: Locate a Device” in Chapter 3.

Step 3: Open the Device Output Path

Open the device output path just as in the previous example in “Step 3: Open the Device Output Path” in Chapter 3:

mlOpen( pathId, NULL, &openPath );

Opening the path also allocates memory for the message queues used to communicate with the device. One of those queues holds messages sent from your application to the device, and the other holds replies sent from the device back to your application.

Step 4: Allocate Buffers

If you were processing only a short sound, you could preallocate space for the entire sound and perform the operation straight from memory. For a more general and memory-efficient solution, however, allocate space for a small number of buffers and reuse each buffer many times to complete the whole transfer.

Assume that memory has been allocated for 12 audio buffers and that those buffers have been filled with the first few seconds of audio data to be output.
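Such an allocation might be sketched as follows. This is a hypothetical fragment, not part of the ML API: plain char stands in for MLbyte so the sketch is self-contained, NUM_BUFFERS and the bufferSize parameter are assumptions matching the example above, and a real program would fill each buffer from its audio source rather than with silence:

```c
#include <stdlib.h>
#include <string.h>

#define NUM_BUFFERS 12

/* Sketch: allocate NUM_BUFFERS audio buffers of bufferSize bytes each.
   Plain char stands in for MLbyte.  Buffers are zeroed (silence) here;
   the application would fill them with audio data before enqueueing. */
static char** allocateAudioBuffers(size_t bufferSize)
{
    char** buffers = (char**)calloc(NUM_BUFFERS, sizeof(char*));
    int i;
    if (buffers == NULL)
        return NULL;
    for (i = 0; i < NUM_BUFFERS; ++i) {
        buffers[i] = (char*)malloc(bufferSize);
        if (buffers[i] == NULL) {       /* undo partial allocation */
            while (--i >= 0)
                free(buffers[i]);
            free(buffers);
            return NULL;
        }
        memset(buffers[i], 0, bufferSize);
    }
    return buffers;
}
```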

Step 5: Send Buffers to the Open Path

Send each of the 12 buffers to the open path. Here the queue of messages between application and device becomes more interesting. The following code segment enqueues all the audio buffers to the device:

int i;
for ( i = 0; i < 12; ++i )
{
    MLpv msg[3];
    msg[0].param = ML_AUDIO_BUFFER_POINTER;
    msg[0].value.pByte = (MLbyte*)buffers[i];
    msg[0].length = bufferSize;
    msg[1].param = ML_AUDIO_UST_INT64;
    msg[2].param = ML_END;
    mlSendBuffers( openPath, msg );
}

Notice that each audio buffer is sent in its own message. This is because each message is processed atomically, and therefore refers to a single instant in time. In addition to the audio buffer, this message also contains space for an audio unadjusted system time (UST) time stamp. That time stamp will be filled in as the device processes each message. It will indicate the time at which the first audio sample in each buffer passed out of the machine.

Step 6: Begin the Transfer

Tell the device to begin the transfer. It reads messages from its input queue, interprets the buffer parameters within them, and processes those buffers with the following:

mlBeginTransfer( openPath );

At this point, you could tell the program to sleep while the device processes the buffers, as was done in Chapter 3, “Simple Audio Output Program”. A more efficient approach, however, is to wait on the file descriptor for the queue of messages sent from the device back to your application. In ML terminology, that file descriptor is called a wait handle on the receive queue:

MLwaitable pathWaitHandle;
mlGetReceiveWaitHandle( openPath, &pathWaitHandle );

Having obtained the wait handle, you can wait for it to fire by using select on IRIX or Linux, or WaitForSingleObject on Windows, as follows:

On IRIX or Linux:

fd_set fdset;
FD_ZERO( &fdset );
FD_SET( pathWaitHandle, &fdset );

select( pathWaitHandle+1, &fdset, NULL, NULL, NULL );

On Windows:

WaitForSingleObject( pathWaitHandle, INFINITE );
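The select pattern above can be exercised without ML hardware by substituting an ordinary pipe for the receive-queue wait handle. In this hypothetical sketch, writing a byte to the pipe plays the role of the device posting a reply; a zero timeout (rather than the blocking NULL timeout used above) keeps the sketch testable:

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Sketch of the select() pattern, with any readable fd standing in for
   the ML receive-queue wait handle.  Returns 1 if data is pending on
   the fd, 0 otherwise.  A real program would pass a NULL timeout to
   block until the device posts a reply. */
static int waitForReadable(int fd)
{
    fd_set fdset;
    struct timeval timeout = { 0, 0 };  /* poll instead of block */
    FD_ZERO(&fdset);
    FD_SET(fd, &fdset);
    return select(fd + 1, &fdset, NULL, NULL, &timeout) == 1
           && FD_ISSET(fd, &fdset);
}
```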

Step 7: Receive Replies from the Device

Once the wait completes (the select call fires on IRIX or Linux, or WaitForSingleObject returns on Windows), a reply will be waiting. Retrieve the reply from the receive queue as follows:

MLint32 messageType;
MLpv* replyMessage;

mlReceiveMessage( openPath, &messageType, &replyMessage );

if ( messageType == ML_BUFFERS_COMPLETE )
    printf( "Buffer received!\n" );

This reply has the same format and content as the buffers message that was originally enqueued, except that any blanks in the original message will now have been filled in. In this case, the reply message includes the location of the audio buffer that was transferred, as well as a UST time stamp indicating when its contents started to flow out of the machine:

MLbyte* audioBuffer = replyMessage[0].value.pByte;
MLint64 audioUST    = replyMessage[1].value.int64;

Note: The UST time stamp is useful to synchronize several different media streams (for example, to make sure the sounds and pictures of a movie match up).
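For example, comparing the USTs at which corresponding audio and video data left the machine gives the audio/video skew directly. This hypothetical fragment assumes, as in ML, that UST values are 64-bit nanosecond time stamps:

```c
#include <stdint.h>

/* Sketch: given the USTs (nanoseconds) at which corresponding audio
   and video data left the machine, return the skew in milliseconds.
   A positive result means the audio appeared later than the video. */
static int64_t avSkewMilliseconds(int64_t audioUST, int64_t videoUST)
{
    return (audioUST - videoUST) / 1000000;  /* ns -> ms */
}
```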

Step 8: Refill the Buffer for Further Processing

You can refill the buffer with more audio data and send it back to the device to be processed again with the following:

mlSendBuffers(openPath, replyMessage);

In this case, you are making a small optimization. Rather than construct a whole new buffers message, simply reuse the reply to your original message.

At this point, you have processed the reply to one buffer. If you wish, you can now go back to the select call and wait for another reply from the device. This can be repeated indefinitely.
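Steps 6 through 8 together form the steady-state loop of the program. In C-flavored pseudocode (reusing openPath, pathWaitHandle, and bufferSize from the fragments above, with a hypothetical waitForReply() standing in for the select/WaitForSingleObject wait and a hypothetical refillAudioBuffer() standing in for your audio source), the loop might look like this:

```c
/* Steady-state output loop (sketch): wait for a reply, refill the
   completed buffer, and requeue it.  Error handling omitted. */
for (;;)
{
    MLint32 messageType;
    MLpv* replyMessage;

    /* Step 6: wait until the device posts a reply. */
    waitForReply( pathWaitHandle );

    /* Step 7: retrieve the reply from the receive queue. */
    mlReceiveMessage( openPath, &messageType, &replyMessage );
    if ( messageType != ML_BUFFERS_COMPLETE )
        continue;

    /* Step 8: refill the completed buffer and requeue the same message. */
    if ( !refillAudioBuffer( replyMessage[0].value.pByte, bufferSize ) )
        break;  /* no more audio data to send */
    mlSendBuffers( openPath, replyMessage );
}
```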

Step 9: End the Transfer

Once enough buffers have been transferred, you can end the transfer as follows:

mlEndTransfer( openPath );

In addition to ending the transfer, this call performs the following:

  • Flushes the queue to the device

  • Aborts any remaining unprocessed messages

  • Returns any replies on the receive queue to the application

The mlEndTransfer call is a blocking call. When it returns, the queue to the device will be empty, the device will be idle, and the queue from the device to your application will contain any remaining replies.

If you wish, you can send more buffers to the path (see “Step 5: Send Buffers to the Open Path”).

Step 10: Close the Path

Use the following to close the path:

mlClose( openPath );

Note: This chapter has provided only a quick introduction to an audio output device. Through a similar interface, ML also supports audio input, video input, video output, and memory-to-memory transcoding operations.