TAAE2: Pesky Questions Thread

Comments

  • Is AERendererRun a general method, or a specific method?
    It seems specific to the output unit, and thus: why is this call in the AERenderer file?
    Is it ever used for subrenderers? Currently the AERendererRun call seems relatively simple, which is probably good, but it seems that calling a subrenderer is fiendishly complex as a consequence...

  • @32BT said:
    Ha, "insane" generally means "innovative"... :wink:

    So innovative =)

  • @32BT said:
    Is AERendererRun a general method, or a specific method?
    It seems specific to the output unit, and thus: why is this call in the AERenderer file?
    Is it ever used for subrenderers? Currently the AERendererRun call seems relatively simple, which is probably good, but...

    It's the main interface to AERenderer - you can use it from any output or intermediate node. You'll see it used in AEAudioUnitModule for format converter types like the varispeed, for instance, as well as the output. I'm halfway through an AEAudioFileOutput too, which also uses it.
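
    For illustration, driving it from an output's render callback looks roughly like this. This is a sketch only: the callback wiring and the exact AERendererRun signature here are from memory, so check AERenderer.h for the real thing.

    ```objc
    // Sketch: a top-level output unit's render callback driving the renderer.
    // The AERendererRun signature here is recalled, not copied from the header.
    static OSStatus outputRenderCallback(void * inRefCon,
                                         AudioUnitRenderActionFlags * ioActionFlags,
                                         const AudioTimeStamp * inTimeStamp,
                                         UInt32 inBusNumber,
                                         UInt32 inNumberFrames,
                                         AudioBufferList * ioData) {
        __unsafe_unretained AERenderer * renderer = (__bridge AERenderer *)inRefCon;
        // Hand the output unit's buffer to the renderer; the module chain fills it
        AERendererRun(renderer, ioData, inNumberFrames, inTimeStamp);
        return noErr;
    }
    ```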

    ...it seems that calling a subrenderer is fiendishly complex as a consequence...

    I'm gonna whip up an AESubrenderModule which manages its own sub-renderer and keeps its sample rate and such in sync with the top-level one. Then it's just gonna push a buffer and call AERendererRun with that buffer as the output.
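
    Something like this, roughly. It's an untested sketch; the ivar and function names are placeholders for whatever the real AESubrenderModule ends up using.

    ```objc
    // Untested sketch of AESubrenderModule's process function: push an
    // intermediate buffer on the stack, then run the sub-renderer into it.
    // Names here are placeholders, not the shipped implementation.
    static void AESubrenderModuleProcess(__unsafe_unretained AESubrenderModule * THIS,
                                         const AERenderContext * context) {
        const AudioBufferList * buffer = AEBufferStackPush(context->stack, 1);
        if ( !buffer ) return; // stack exhausted; skip this cycle
        // Drive the sub-renderer, with the pushed buffer as its output
        AERendererRun(THIS->_subrenderer, buffer, context->frames, context->timestamp);
    }
    ```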

  • Continuing our discussion of renderers, contexts and sample rate:

    I feel right now like we've each got a separate vision for how it should look, and I'm having trouble pulling together the ideas into a coherent picture. So, in the interests of peeling it back to basics and building up from there, a restatement of the requirements:

    • Modules need to know the sample rate before they start rendering, so they can init appropriately (on the main thread, most likely).
    • Modules need to be told when the sample rate changes, on the main thread, and synchronously with the output unit's restart procedure.
    • There may be more than one top-level renderer/output, and they may be running at different sample rates (e.g. simultaneously running live audio output while doing an offline render to file at a different rate).
    • There may be sub-renderers that feed into the same output, which need to be run at the same rate.
    • There may be sub-renderers that need to be run at a different rate (through a format converter unit, for instance).
    • The hierarchy and structure need to be obvious and intuitive.

    Some things that fall out of that:

    • As we've discussed, there needs to be an explicit connection between modules and whatever is in control of the sample rate.
    • Sample rate probably either needs to be a property of the renderer or of a context class controlled by/associated with the renderer (on a one-to-one basis).
    • A singleton controller that manages sample rate isn't an option because the sample rate can differ across the system.
    • Because sample rate changes need to occur synchronously with audio system restart, the class responsible for the audio system (i.e. AEAudioUnitOutput) needs to be able to set the sample rate, which in turn propagates the change through the modules.
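
    To make that last point concrete, here's a rough sketch of the propagation path. The method names (stop/start, the sampleRate setter doing the notification) are hypothetical:

    ```objc
    // Hypothetical sketch: the output applies a sample rate change
    // synchronously with its own restart; the renderer's setter notifies
    // its bound modules on the main thread before rendering resumes.
    - (void)outputDidDetectSampleRateChange:(double)sampleRate {
        [self stop];                            // tear down the audio unit
        self.renderer.sampleRate = sampleRate;  // propagates to modules, main thread
        [self start];                           // restart at the new rate
    }
    ```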

    Seems to me there are some blurred lines between the renderer and the context; the renderer is really just a container for the render block plus some logic to set up state for the render block. The context is the renderer's state.

    I don't know what to take away from all that yet. My personal inclination is to keep things as they are, but if people are having problems with the relationship between module and renderer, perhaps some renaming is in order? I'm currently not a fan of making the context into its own class with its own responsibilities, as to me it just complicates things further and introduces yet another relationship.

  • At the very least we need to define when objects need to be re-initialized as a consequence of environment changes (in the majority of cases). The question then is: do you want to re-initialize, or simply re-instantiate? The latter is generally the best strategy for concurrent processes, since it lets you keep more ivars constant, which makes them easier to manage overall.

    The combination of a renderer and a block for example seems quite static, and probably doesn't require the added complexity of changing the block midway thru. In fact, having such functionality is no different than switching renderers one level up.

    Similarly, switching a renderer and parameter combination is likely not relevant, because either of these is directly or indirectly dependent on the environment changes, making them likely candidates for re-initialization, which in most cases means re-instantiation...

    Another point of interest: as indicated, there may be a live audio stream and an offline stream. The latter may have a structure that needs to be a copy of the former. So how can this copy be created? Which parameters propagate between copies, and which don't? How can these be separated, if at all?

  • @32BT said:
    At the very least we need to define when objects need to be re-initialized as a consequence of environment changes (in the majority of cases). The question then is: do you want to re-initialize, or simply re-instantiate? The latter is generally the best strategy for concurrent processes, since it lets you keep more ivars constant, which makes them easier to manage overall.

    Reinit, just as it is now. I can't see how you'd want to destroy and recreate your modules when changing sample rate.

    The combination of a renderer and a block for example seems quite static, and probably doesn't require the added complexity of changing the block midway thru. In fact, having such functionality is no different than switching renderers one level up.

    I think we're getting a bit off topic there...

    Similarly, switching a renderer and parameter combination is likely not relevant, because either of these is directly or indirectly dependent on the environment changes, making them likely candidates for re-initialization, which in most cases means re-instantiation...

    Not quite sure what you meant by parameter, there, I'm afraid...

    Another point of interest: as indicated, there may be a live audio stream and an offline stream. The latter may have a structure that needs to be a copy of the former. So how can this copy be created? Which parameters propagate between copies, and which don't? How can these be separated, if at all?

    You'd just use the same renderer, but driven by the offline output, rather than the live one.

    Anyway, I think it might be Executive Decision time, as we've been on this issue for ages... I think I'm going to keep everything pretty much as it is, unless there are really compelling reasons to change it. So: renderer is responsible for rendering and managing the render context (which is just a struct of values passed to modules). Output drives the renderer. Modules are bound to their renderer, and are initted with the renderer so they can watch it for changes. The output modifies renderer params (e.g. sample rate) as necessary, and modules update synchronously with these changes.
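
    To make that concrete, the context struct might look something along these lines. The field list is inferred from the requirements above, so don't take it as the final header:

    ```objc
    // Sketch of the render context struct handed to each module's process
    // function; fields inferred from this discussion, shipped version may differ.
    typedef struct {
        const AudioBufferList * output;    // output buffer to fill
        UInt32                  frames;    // frame count for this render cycle
        double                  sampleRate; // current rate, set by the output
        const AudioTimeStamp *  timestamp; // timestamp at the start of the cycle
        AEBufferStack *         stack;     // buffer stack for intermediate buffers
    } AERenderContext;
    ```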

  • Okay, my apologies for sidetracking. Didn't mean to start this discussion all over. I think you've already proven that the current setup works, especially for what you intended to do.

  • No problem at all! It's been great, actually, please do keep pushing as much as you like =)

  • I think this falls into the pesky questions thread. I wonder whether you tried the MatrixMixer out and ran into problems with it throwing errors when you mess with its inputs, outputs, and the AUs connected to it, and thus had to create essentially your own MatrixMixer that handles these things better. Or maybe you never explored Apple's matrix mixer and went the route you did for other reasons.

  • Hey @Lucas_Goossen =)

    I did always plan to use the matrix mixer in order to implement multichannel output for TAAE1, but I never got around to it - I began, but it was a huuuge job and I had more pressing things to do, so it just sat as a stash in my local git repo for ages.

    The new architecture of TAAE2 makes multichannel output trivial, as well as lots of other things; the idea was to go for power, as I needed quite a bit of it for Loopy Masterpiece Edition, and Jonatan's buffer stack idea served it quite well.
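
    As an aside, for anyone who does take on Apple's matrix mixer: most of the errors come down to ordering constraints. Element counts have to be configured while the unit is uninitialized, and every gain stage defaults to silent. From memory, and untested, the setup dance looks roughly like this:

    ```objc
    // From-memory sketch of the AUMatrixMixer setup order. Bus counts are
    // arbitrary here; set them before AudioUnitInitialize, volumes after.
    UInt32 inBuses = 2, outBuses = 1;
    AudioUnitSetProperty(mixer, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0, &inBuses, sizeof(inBuses));
    AudioUnitSetProperty(mixer, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Output, 0, &outBuses, sizeof(outBuses));
    AudioUnitInitialize(mixer);

    // Unmute the master, the buses, and the crosspoints you actually want
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume,
                          kAudioUnitScope_Global, 0xFFFFFFFF, 1.0, 0); // master
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume,
                          kAudioUnitScope_Input, 0, 1.0, 0);           // input bus 0
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume,
                          kAudioUnitScope_Output, 0, 1.0, 0);          // output bus 0
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume,
                          kAudioUnitScope_Global, (0 << 16) | 0, 1.0, 0); // in 0 to out 0
    ```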
