Changing Pitch and Tempo in Real Time with ZTX (formerly Dirac)

Hi everyone,

I'm completely stuck on a problem trying to implement the ZTX (formerly Dirac) DSP with TAAE. I've got it filtering an AEAudioFilePlayer channel with an AEBlockFilter. Here's a sample of the code:

[AEBlockFilter filterWithBlock:^(AEAudioControllerFilterProducer producer, void *producerToken, const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
        // Worst-case number of output frames ZTX may produce for 'frames' input frames
        UInt32 framesNeeded = (UInt32)ZtxFxMaxOutputBufferFramesRequired(mTimeFactor, mPitchFactor, frames);

        // Pull 'frames' frames of source audio from upstream
        OSStatus status = producer(producerToken, audio, &frames);
        if ( status != noErr ) return;

        // Scratch buffers for ZTX (mono, 16-bit)
        short **audioIn = AllocateAudioBufferSInt16(1, frames);
        short **audioOut = AllocateAudioBufferSInt16(1, framesNeeded);

        // Copy one channel into the ZTX input buffer
        for (UInt32 v = 0; v < frames; v++) {
            audioIn[0][v] = ((SInt16 *)audio->mBuffers[1].mData)[v];
        }

        // Process; returns the number of frames written to audioOut
        long framesOut = ZtxFxProcess(mTimeFactor, mPitchFactor, audioIn, audioOut, frames, mZtx);

        // Copy the processed audio back into both channels -- framesOut frames,
        // even though downstream asked for 'frames'
        for (long i = 0; i < framesOut; i++) {
            ((SInt16 *)audio->mBuffers[0].mData)[i] =
            ((SInt16 *)audio->mBuffers[1].mData)[i] = audioOut[0][i];
        }
}];

Both mTimeFactor and mPitchFactor are initialized to 1. ZtxFxProcess() returns the number of frames written to 'audioOut'. When mTimeFactor is 1, 'framesOut' always equals 'frames' (the count passed into the block from the callback) and everything sounds fine. When mTimeFactor is changed, 'framesOut' no longer equals 'frames', because the audio in 'audioOut' has been stretched or shortened to match the tempo change. The result is distortion in the output and no tempo change.

I added an AEAudioUnitFilter with kAudioUnitSubType_NewTimePitch after the AEBlockFilter and changed the tempo with it (the sound quality isn't as good). It changes the number of frames sent to the AEBlockFilter to reflect the tempo change: normally 512 frames are passed, but after changing the tempo with the Audio Unit it's fewer or more frames depending on the tempo. This altered frame count is passed through the AUGraph downstream of the AEAudioUnitFilter, through the mixer unit, and on to the output.
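
For context, this is roughly how that filter was set up (from memory, so treat it as a sketch: the exact AEAudioUnitFilter initializer varies between TAAE versions, and _audioController/_player are my own instance variables):

    AudioComponentDescription timePitchDescription = AEAudioComponentDescriptionMake(kAudioUnitManufacturer_Apple,
                                                                                      kAudioUnitType_FormatConverter,
                                                                                      kAudioUnitSubType_NewTimePitch);
    NSError *error = nil;
    AEAudioUnitFilter *timePitchFilter = [[AEAudioUnitFilter alloc] initWithComponentDescription:timePitchDescription
                                                                                  audioController:_audioController
                                                                                            error:&error];
    if ( timePitchFilter ) {
        // Rate > 1.0 speeds playback up, < 1.0 slows it down
        AudioUnitSetParameter(timePitchFilter.audioUnit, kNewTimePitchParam_Rate,
                              kAudioUnitScope_Global, 0, 1.5, 0);
        [_audioController addFilter:timePitchFilter toChannel:_player];
    }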

My question is, is there a way to change the frames passed downstream from the AEBlockFilter? Thanks in advance for any help!

Comments

  • So sorry about the delay, @jsonfellin. Yes indeed, there is: use the third argument to 'producer' to pull as many frames as you need. So, in this case, rather than passing in "frames", pass in "framesNeeded".
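
    In code terms, roughly this change at the top of the block (an untested sketch reusing the variables from the snippet above):

        UInt32 framesNeeded = (UInt32)ZtxFxMaxOutputBufferFramesRequired(mTimeFactor, mPitchFactor, frames);

        // Ask upstream for the larger, stretched amount rather than the amount
        // requested downstream
        UInt32 framesToPull = framesNeeded;
        OSStatus status = producer(producerToken, audio, &framesToPull);
        if ( status != noErr ) return;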

  • Thanks so much for responding @Michael.

    I should have been clearer in my question. It's 'framesOut' that I need to pass downstream, because the time stretching happens when ZtxFxProcess(mTimeFactor, mPitchFactor, audioIn, audioOut, frames, mZtx); gets called.

    I have to request 'frames' in the 'producer' call, process those frames with the method above, and then 'framesOut' will be more or fewer frames depending on 'mTimeFactor'. If I use 'framesNeeded' in the 'producer' call, that only changes the pre-processed frame count, not the number of frames that come out after processing.

    I'm guessing that once the audio buffer has been filled by the call to 'producer', there's no way to change the number of frames sent downstream from within this AEBlockFilter. Any ideas on how to make this work?

    Thanks again!

  • Actually, you need to go in the other direction: Core Audio is a "pull" model, which means at each node in the chain, that node needs to provide exactly the number of frames requested. So, a time stretching node still needs to provide the requested number of frames, but it can vary the number of frames it requests upstream.
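
    In very rough terms, something like this (an untested sketch only: mFifo/mFifoFill are hypothetical instance variables sized generously in advance, audioIn/audioOut are the same ZTX scratch buffers as in the original snippet but allocated outside the render path, and whether producer() may safely be called more than once per render pass is worth verifying against the TAAE source):

        [AEBlockFilter filterWithBlock:^(AEAudioControllerFilterProducer producer,
                                         void *producerToken,
                                         const AudioTimeStamp *time,
                                         UInt32 frames,
                                         AudioBufferList *audio) {
            // Keep pulling and stretching until enough output is buffered to satisfy
            // the downstream request
            while ( mFifoFill < frames ) {
                UInt32 pull = frames;
                if ( producer(producerToken, audio, &pull) != noErr ) break;

                // Copy the pulled audio into the ZTX input buffer (mono, 16-bit, as above)
                for ( UInt32 i = 0; i < pull; i++ ) {
                    audioIn[0][i] = ((SInt16 *)audio->mBuffers[0].mData)[i];
                }

                // framesOut differs from 'pull' whenever mTimeFactor != 1
                long framesOut = ZtxFxProcess(mTimeFactor, mPitchFactor,
                                              audioIn, audioOut, pull, mZtx);
                if ( framesOut <= 0 ) break;

                // Append the stretched audio to a FIFO that persists between render calls
                for ( long i = 0; i < framesOut; i++ ) {
                    mFifo[mFifoFill + i] = audioOut[0][i];
                }
                mFifoFill += (UInt32)framesOut;
            }

            // Hand downstream exactly the number of frames it asked for, keeping any
            // remainder for the next callback (underrun handling kept minimal here)
            UInt32 available = mFifoFill < frames ? mFifoFill : frames;
            for ( UInt32 i = 0; i < available; i++ ) {
                ((SInt16 *)audio->mBuffers[0].mData)[i] =
                ((SInt16 *)audio->mBuffers[1].mData)[i] = mFifo[i];
            }
            mFifoFill -= available;
            memmove(mFifo, mFifo + available, mFifoFill * sizeof(short));
        }];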

  • Thanks @Michael.

    I think I understand the "pull" model: this AEBlockFilter pulls audio from the channel or channel group it's attached to via producer(), and then something else pulls the audio from the filter.

    To follow the logic through, then: I need to change the number of frames requested, or pulled, from the AEBlockFilter time-stretching node. I can't just change the number of frames the time-stretching node requests upstream, because the audio is processed within the AEBlockFilter, resulting in a different number of frames ('framesOut' above), and that is the count the next node in the chain would then need to request from this AEBlockFilter, correct?

    If so, what is that node? And is this even possible?

  • Besides, if I pass a frame count different from the AEBlockFilter's 'frames' parameter to the producer() call, I get an OSStatus error from line 383 in AEAudioController.

    When I use an AEAudioUnitFilter with kAudioUnitSubType_NewTimePitch and apply a time-stretching scalar, the number of frames in the filterCallback differs from the number in the audioUnitRenderCallback (e.g. usually 512 in the filterCallback and more or fewer in the audioUnitRenderCallback). It seems I need something like this behavior for the AEBlockFilter, but I have no idea how to do it or whether it's possible...
