Render buffer sizes other than 512, 1024, 2048 or 4096?

I have a serious problem with my app: it comes up on certain systems, and I can't reproduce it on any of my own Macs or mobile devices.

The thing is, my render routines rely on the number of frames per render callback being one of 512, 1024, 2048 or 4096. I do an assert() to make sure it's one of these numbers, because otherwise my code would be a lot more complicated (I do FFT, equalization and other stuff).
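
Roughly, the check in my render callback is just this (frames being the callback's frame count):

    // Simplified: bail out if the host hands us an unexpected frame count
    assert(frames == 512 || frames == 1024 || frames == 2048 || frames == 4096);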

The documentation says 4096 is possible only on iOS, and only when the screen is locked. Fine. But then some of my OS X users reported crashes, and one user was kind enough to send me the details: the buffer size on his system is always 4096, so my assert() fails because I don't expect 4096 on OS X. I sent him a test version that accepts 4096, but now he says there is no audio: the short sound effects play, but the audio files don't. He has some pro video and audio apps installed on his system, including Adobe stuff, but otherwise it's the latest OS X 10.11.1.

Now iOS. I have some crash reports from App Analytics that point at that same assert(), which means that on at least some phones the buffer size is none of 512, 1024, 2048 or 4096. This one is difficult to debug, as I don't personally know anyone whose phone or iPad has the problem.

Can anyone give any hints as to what's going on?

The general approach (i.e. dropping any assumptions about what the buffer size can be) would complicate things for me big time. Is there any other way to solve this?

I would appreciate your help, guys. Thanks!

Comments

  • @crontab I had problems when something was forcing the phone to run at 48kHz while I was using 44.1kHz for my audioController's audioDescription. The sample rate converters would lead to non-2^n buffer sizes.

    It was fairly obscure, and my solution was to try to set 44.1kHz and, if that failed, use the AVAudioSession sample rate instead (sketched below).

    Now, with the iPhone 6S always running at 48kHz when it's playing through the device speaker, it's become a more common problem. So much so that newer versions of TAAE now automatically adopt the behaviour I described above.

See AEAudioControllerOptionUseHardwareSampleRate at the following link:

    http://theamazingaudioengine.com/doc/_a_e_audio_controller_8h.html#a2c0a90ff526ba7b89a519671536969f7
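
    Roughly, the fallback looks like this (a sketch using plain AVAudioSession calls; audioDescription stands for the AudioStreamBasicDescription you hand to your audio controller):

    // (requires <AVFoundation/AVFoundation.h>)
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setPreferredSampleRate:44100.0 error:NULL];
    [session setActive:YES error:NULL];
    
    // See what the hardware actually granted; a 6S speaker reports 48kHz
    double actualRate = session.sampleRate;
    if ( actualRate != 44100.0 ) {
        // Adopt the hardware rate instead of fighting it - this avoids the
        // sample rate converters that produce the non-2^n buffer sizes
        audioDescription.mSampleRate = actualRate;
    }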

    I hope this helps - our app, LoopTree, is only available on iOS so I'm not sure about your OS X symptoms.

  • @EdSharp thanks very much for the clarification.

Though I don't understand some aspects of it very well. If I, say, use AEAudioControllerOptionUseHardwareSampleRate, does this mean that I will always get 2^n buffers regardless of the audio source rate (a file in my case)? Besides, I use my own subclass of AEAudioPlayable to load the file, and again I'm not sure how it all plays together in terms of rate conversions.

    I would appreciate your help, thanks!

  • I have a gut feeling this is related to the 6s issues I've been unable to understand and resolve as well.

  • @warpling And not having a 6S on hand makes it even worse :(

  • Hardware changes are the worst! I don't understand what good reason they possibly had for this. Let me know if I can help test anything.

    I think I'm experiencing two issues: one with my circular buffer filling unpredictably, and the other with the RMS on the 6s in quiet settings being much different from the 5s (and maybe the 6??). Still trying to investigate…

  • There's a simple answer, but it's not very convenient, I'm afraid: at least on iOS, you can never just assume a particular buffer duration, as apps have no reliable control over this. Even if it appears to work in one context, as soon as a user plugs in some hardware with a limited set of supported sample rates, for example, you'll start seeing different buffer durations.

    If you rely on a particular set of buffer durations, for FFT etc., your only choice is to do your own buffering; use TPCircularBuffer to queue up samples until you reach the required number, then do your processing. It'll add a little bit of latency in the situations where there's a mismatch, but that's the only way.

    In pseudocode:

    const int kRequiredBufferSize = 1024; // for example
    TPCircularBuffer inputBuffer, outputBuffer;
    
    // At setup time: allocate both buffers (capacity in bytes)
    TPCircularBufferInit(&inputBuffer, ...);
    TPCircularBufferInit(&outputBuffer, ...);
    
    // Use non-atomic buffers for efficiency, as atomicity isn't required
    // when a buffer is only touched from the render thread
    TPCircularBufferSetAtomic(&inputBuffer, NO);
    TPCircularBufferSetAtomic(&outputBuffer, NO);
    
    filter callback(audio, length) {
      // Copy incoming samples onto the input buffer
      TPCircularBufferCopyAudioBufferList(&inputBuffer, audio, ...);
    
      // Process buffered audio in fixed-size blocks
      while ( TPCircularBufferPeek(&inputBuffer, ...) >= kRequiredBufferSize ) {
        // Prepare space on the output buffer
        AudioBufferList *scratchBuffer = TPCircularBufferPrepareEmptyAudioBufferList(&outputBuffer, ...);
        // Pull exactly kRequiredBufferSize frames from our input buffer
        TPCircularBufferDequeueBufferListFrames(&inputBuffer, kRequiredBufferSize, scratchBuffer, ...);
        // Process the audio (FFT, EQ, etc.)
        performAudioProcessing(scratchBuffer);
        // Mark the output buffer space as ready for reading, and move on
        // to the next free space in the buffer
        TPCircularBufferProduceAudioBufferList(&outputBuffer, ...);
      }
    
      // Pull up to `length` processed frames from the output buffer
      // (fewer may be available during the first few callbacks)
      TPCircularBufferDequeueBufferListFrames(&outputBuffer, length, audio, ...);
    }
    
  • Thanks for the example, Michael!
    I've been using a nonatomic circular buffer based very closely on the method you used in this post (http://forum.theamazingaudioengine.com/discussion/653/eq-bars). I notice you're using some different methods in your pseudocode - is this a better way to implement it?

  • @crontab I'd second what Michael said. Also, if you do use AEAudioControllerOptionUseHardwareSampleRate, you have to be wary of any action that can change the sample rate. For example, if I have my app playing through the 6S Plus speaker, the sample rate seems locked at 48kHz, as others have reported. However, the moment you plug headphones in (whilst using the app), the sample rate will usually drop to 44.1kHz in the middle of whatever you are doing, so you have to handle that gracefully too - one way to catch this is sketched below.

    I did have a comedy bug whereby the parts of my loops that were already loaded into memory would then either pitch shift down or up when the sample rate changed!!
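
    For the detection part, something along these lines works (a sketch; where you keep currentRate and how you rebuild the audio controller depend on your setup):

    // (requires <AVFoundation/AVFoundation.h>)
    // Watch for route changes (headphones in/out etc.) and react if the
    // hardware sample rate moved under us
    __block double currentRate = [AVAudioSession sharedInstance].sampleRate;
    [[NSNotificationCenter defaultCenter]
        addObserverForName:AVAudioSessionRouteChangeNotification
                    object:nil
                     queue:[NSOperationQueue mainQueue]
                usingBlock:^(NSNotification *note) {
            double newRate = [AVAudioSession sharedInstance].sampleRate;
            if ( newRate != currentRate ) {
                currentRate = newRate;
                // e.g. 48kHz speaker -> 44.1kHz headphones: reconfigure or
                // restart the audio controller for the new rate here
            }
        }];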

  • @warpling: No worries =) Ah, it's much the same. The AudioBufferList additions just make things a bit more convenient, and allow you to work with noninterleaved audio, which I prefer. I don't really use the base TPCircularBuffer at all these days; I always use the AudioBufferList stuff. (Nonatomic thing: good point, I forgot that part. You can use TPCircularBufferSetAtomic(buffer, NO) for that now.)

    @EdSharp: Oh, lovely - I was enjoying some humorous pitch shift stuff with a recent Audiobus SDK tweak =)

  • What do you mean TPCircularBuffer doesn't work with noninterleaved audio?…

  • What a doozy. The bug ended up being my own (of course): a nice coincidental bug that wasn't noticeable with the standard 1024 frames, but broke with the 940/941-frame buffers. Derp.

  • @warpling said:
    What do you mean TPCircularBuffer doesn't work with noninterleaved audio?…

    Noninterleaved audio means a separate buffer for each channel, whereas interleaved audio weaves all channels into a single buffer. TPCircularBuffer on its own just manages one contiguous buffer of bytes. The AudioBufferList additions add the ability to use noninterleaved audio, by storing additional metadata.
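
    For illustration, here are the two layouts as Core Audio stream descriptions (a sketch; 44.1kHz 32-bit float stereo assumed):

    #include <CoreAudio/CoreAudioTypes.h>
    
    static void layoutExample(void) {
        // Interleaved stereo: one buffer holds both channels, woven as L R L R ...
        AudioStreamBasicDescription interleaved = {
            .mSampleRate       = 44100,
            .mFormatID         = kAudioFormatLinearPCM,
            .mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
            .mChannelsPerFrame = 2,
            .mBitsPerChannel   = 32,
            .mFramesPerPacket  = 1,
            .mBytesPerFrame    = 2 * sizeof(float), // both channels per frame
            .mBytesPerPacket   = 2 * sizeof(float),
        };
    
        // Noninterleaved stereo: the IsNonInterleaved flag means one AudioBuffer
        // per channel, so the per-frame byte counts describe a single channel
        AudioStreamBasicDescription noninterleaved = interleaved;
        noninterleaved.mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
        noninterleaved.mBytesPerFrame  = sizeof(float);
        noninterleaved.mBytesPerPacket = sizeof(float);
    }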

  • Got it, makes sense :)

  • Hi, I'm just trying to clarify some behaviour regarding AEAudioControllerOptionUseHardwareSampleRate. It seems my queries have already been discussed here, but it would be extremely useful if somebody could clarify the precise behaviour for me.

    Regarding these two comments...

    @crontab said:
    If I, say, use AEAudioControllerOptionUseHardwareSampleRate, does this mean that I will always get 2^n buffers regardless of the audio source rate (a file in my case)?

    @EdSharp said:
    Also, if you do use AEAudioControllerOptionUseHardwareSampleRate, you have to be wary of any action that can change the sample rate. For example, if I have my app playing through the 6S Plus speaker, the sample rate seems locked at 48kHz, as others have reported. However, the moment you plug headphones in (whilst using the app), the sample rate will usually drop to 44.1kHz in the middle of whatever you are doing, so you have to handle that gracefully too.

    ... does this mean that if I use AEAudioControllerOptionUseHardwareSampleRate I will always get 2^n buffers, provided I also explicitly handle all sample rate changes (headphones, Bluetooth audio attached/detached, possibly others), at which point I simply restart the audio controller, syncing to the new hardware sample rate?

    I've tried this method and it seems to work great in all scenarios I am able to test with, but I would like to know if this behaviour is 100% deterministic.

    Any feedback would be greatly appreciated :)

  • Nope, @aliMunchkin - you can't depend on anything in particular, as it may change at any time, with any future iOS hardware or with audio hardware plugged in via the CCK. You pretty much just gotta take what you're given with buffer durations =)

  • @Michael Thank you so much for clarifying, this really helps us understand our problem.
