Need help understanding the render callback in PlayThroughChannel

edited June 2013

Hi All,

Currently my pipeline is: get input data -> encode -> transfer over network (TCP) -> decode -> put into circular buffer -> render.
There are a few pieces of code I don't fully understand, so I'm posting them here in the hope of learning them more clearly.

First of all, the audio patterns:
I read the guide from apple here:

According to the document, to develop an audio app we have these patterns:

a. I/O Pass Through
b. I/O Without a Render Callback Function
c. I/O with a Render Callback Function
d. Output-Only with a Render Callback Function

Could someone tell me whether PlayThroughChannel in TAAE is pattern (a) or pattern (c)?

Second, the input callback:

static void inputCallback(id receiver,
                          AEAudioController *audioController,
                          void *source,
                          const AudioTimeStamp *time,
                          UInt32 frames,
                          AudioBufferList *audio) {

The frames count is always 128, which means the data is 256 bytes. Is it possible for me to change the frame count to 160?
I would expect there to be a setting for this somewhere.

Here is my AudioStreamBasicDescription when I init AEAudioController:

AudioStreamBasicDescription audioDescription;
memset(&audioDescription, 0, sizeof(audioDescription));
audioDescription.mFormatID          = kAudioFormatLinearPCM;
audioDescription.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
audioDescription.mChannelsPerFrame  = 1;
audioDescription.mBytesPerPacket    = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mFramesPerPacket   = 1;
audioDescription.mBytesPerFrame     = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mBitsPerChannel    = 8 * sizeof(SInt16);
audioDescription.mSampleRate        = 8000.0;

audioController = [[AEAudioController alloc] initWithAudioDescription:audioDescription inputEnabled:true];

Third, and most important: the render callback

static OSStatus renderCallback(id channel,
                               AEAudioController *audioController,
                               const AudioTimeStamp *time,
                               UInt32 frames,
                               AudioBufferList *audio) {

Let me explain how I understand the code:

AudioBufferList *nextBuffer = TPCircularBufferNextBufferList(&THIS->_buffer, NULL);
if ( !nextBuffer ) break;
if ( nextBuffer->mNumberBuffers == audio->mNumberBuffers ) break;

=> Discard any buffers in the list whose format is incompatible with the output.

AudioStreamBasicDescription audioDescription = [THIS audioDescription];
UInt32 fillCount = TPCircularBufferPeek(&THIS->_buffer, NULL, &audioDescription);
if ( fillCount >= frames + THIS->_bufferMaxLatencyInFrames ) {
    UInt32 skip = fillCount - frames;

    TPCircularBufferDequeueBufferListFrames(&THIS->_buffer, &skip, NULL, NULL, &audioDescription);

==> Peek at how many frames are buffered; if the backlog exceeds frames + _bufferMaxLatencyInFrames, dequeue and discard the excess, so only the most recent frames remain to be played.


==> Finally, dequeue "frames" frames from the circular buffer into audio for playback.

OK, that is my whole understanding. Please correct me if I have misunderstood anything, so that I (and anyone else reading) can understand it better.

Back to my question:

  1. In case nextBuffer == NULL, how can it dequeue anything to fill audio, and what gets played if the buffer is empty?
  2. If I leave _bufferMaxLatencyInFrames = 0 (the default), my voice "lags", but if I make it large enough (currently 6000) the voice is somewhat smoother.
  3. If I send my voice over a slow network (e.g. 3G), the audio doesn't play (or plays with noise). Is there a way to delay playback a little before starting, to reduce the noise?
  4. Last but not least: without headphones I hear nothing but noise, but with headphones plugged in it is fine, whether I talk into the headset mic or the phone mic. I have checked on several iPhones and the problem persists. Could the reason be that I use only 1 channel?

Any help is really appreciated.

Thanks & Best Regards.
