TAAE2: Pesky Questions Thread

  • ...if you try to formulate the situation here, it boils down to: some modules need to be notified through a non-realtime thread that the StreamFormat on the "external side" of a specific RIO unit changed. So I think it makes sense to express it like this in code as well.

  • edited April 2016

    @j_liljedahl said:
    Another alternative would be to make an explicit "connection" for those modules that need to handle sample rate changes that way, by hooking them into the StreamFormat callback.

    Way too manual; we don't want devs to have to concern themselves with that level of implementation detail per module.

  • @Michael said:
    Hmm. Interesting. Maybe that could be handled by the AEModule superclass too. It could do all the monitoring of the context passed to it and then dispatch/etc. to rendererDidChangeSampleRate on the main thread. It means modules now have to be concerned with rendering while changing their state, but that's not too hard. It moves the inelegance from one place to another, but it might be nicer.

    I don't think they need to be concerned about the changing-samplerate-state. iOS will do sample rate conversion if an IAA node is rendered while having a different streamFormat than the hardware rate:

    For the other built-in AUs (EQs and filters, etc.), and AUv3 extensions, it simply means there might be a small inconsistency in their DSP calculations during this short moment (cutoff frequencies, oscillator pitch, etc.). And again, I wouldn't expect a sample rate change to be glitch-free... It's not something you do during a performance or recording!

  • I mean parallel execution stuff - like if the module is uninitting and reinitting, it needs to make sure it doesn't render during that time cos it might crash. Wasn't an issue before because the system stops during the swapover. This way, the system restarts and then once it starts again, the modules get the new sample rate and restart themselves. Less neat a process on the inside. May be worth it though for the cleaner interface.

  • @Michael said:
    I mean parallel execution stuff - like if the module is uninitting and reinitting, it needs to make sure it doesn't render during that time cos it might crash. Wasn't an issue before because the system stops during the swapover. This way, the system restarts and then once it starts again, the modules get the new sample rate and restart themselves. Less neat a process on the inside. May be worth it though for the cleaner interface.

    Oh, right! I don't think it crashes though, it just returns errorAudioUnitNotInitialized or something?

    But yeah, it does feel better to stop the output unit, let the modules handle the sample rate changes, and then restart. The only way to do that is for the object that manages the output unit (or rather, its streamformat callback listener) to know which modules should be notified. I wonder if it's not cleanest and clearest to keep it as is, but use initWithRenderer only on those modules that really need it. They would then add a rate-changed notification handler for that renderer, and the renderer posts the notification before restarting the unit. This expresses exactly what's happening.

  • I like the idea of the context as controller, might need to be an object (or at least encapsulated in an object) to do that, so you can make those obj-c connections. The connection should be observer-type, so it remains optional.

    The problem then boils down to: can a module handle context changes in real time, or does it require re-initialization? And the question is whether that switch is a responsibility of the module, or of the context. The context could decide the best course of action:

    1. stop running altogether and allow modules to update
    2. stop running the affected modules until they have updated themselves
    3. keep running and let modules take care of the decision to process

    I believe the problem is the same as deciding what to do when one module returns an error from the render loop. What action to take?

    The side question is: can the context notify observers on the main thread, in sync with the render loop?
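To make option 3 in the list above concrete, here's a minimal sketch in plain C (hedged: `Module`, `module_process`, and `module_set_sample_rate` are made-up names, not TAAE2 API). Each module carries an atomic "enabled" flag; the main thread clears it while re-initializing, and the render thread outputs silence instead of touching state that is mid-update:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

// Hypothetical sketch, not TAAE2 API: a module with an atomic "enabled"
// flag. The render thread checks the flag and emits silence while the
// main thread is re-initializing the module's state.
typedef struct {
    atomic_bool enabled;
    double sampleRate;
} Module;

void module_process(Module *m, float *buffer, int frames) {
    if (!atomic_load_explicit(&m->enabled, memory_order_acquire)) {
        // Module is mid-reinit: render silence rather than touch its state
        memset(buffer, 0, frames * sizeof(float));
        return;
    }
    for (int i = 0; i < frames; i++) buffer[i] = 1.0f; // placeholder DSP
}

void module_set_sample_rate(Module *m, double rate) {
    // Main thread: disable, reconfigure, re-enable
    atomic_store_explicit(&m->enabled, false, memory_order_release);
    m->sampleRate = rate; // re-initialization happens here, off the render thread
    atomic_store_explicit(&m->enabled, true, memory_order_release);
}
```

This is the "let modules take care of it" extreme; the cost is exactly the extra render-time check Michael mentions above.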

  • initWithRenderer needs to go entirely, since it suggests that the module controls the renderer, or that it is a fixed relation. Neither is true. The first time I saw that code, I figured that the renderer was merely a shell around the processing logic of the module, so each module got its own renderer.

    I currently believe that modules should only be concerned with the stack.

    initWithSampleRate or initWithStreamFormat would, for example, be a better initializer for modules that have to reinitialize when those parameters change.

    A side question: should we consider a module to be a controller object between its parameters and its processing function?

  • @32BT said:
    initWithRenderer needs to go entirely, since it suggests that the module controls the renderer, or that it is a fixed relation. Neither is true.

    For what it's worth, I certainly came away with the assumption, in the back of my mind, that module<-->renderer was a fixed relation.

  • @j_liljedahl said:
    Oh, right! I don't think it crashes though, it just returns errorAudioUnitNotInitialized or something?

    For AUs, yep, but I can imagine a custom module that needs reinit that would crash if it weren't careful.

    But yeah, it does feel better to stop the output unit, let the modules handle the sample rate changes, and then restart. The only way to do that is for the object that manages the output unit (or rather, its streamformat callback listener) to know which modules should be notified. I wonder if it's not cleanest and clearest to keep it as is, but use initWithRenderer only on those modules that really need it. They would then add a rate-changed notification handler for that renderer, and the renderer posts the notification before restarting the unit. This expresses exactly what's happening.

    I agree, it does feel better. But using initWithRenderer only with those modules that need it isn't an option. How does the developer know which modules those are? That would be a horrible experience.

    No, I think it's a matter of choosing where to put the ugly. It's ugly either way: either it's the way it is now, where modules need to know about the renderer, or modules have to look at the sample rate and handle changes themselves. The latter can be handled in the AEModule superclass, so that will be nicely encapsulated, and I think it'll use dispatch_async to fire the main thread event so that it's fast. As you've observed, a sample rate change will not be a smooth event at the best of times, so we don't really care about glitches for that transition.

    I was actually wondering about stopping the output, then running the renderer for 0 frames from the main thread, then starting the output again, so that the change can be synchronous. I think that's too "magic" though, and could well introduce multithreading issues.

    @32BT said:
    The problem then boils down to: can a module handle context changes in real time, or does it require re-initialization? And the question is whether that switch is a responsibility of the module, or of the context. The context could decide the best course of action:

    Following the simpler-is-better philosophy, I think the context needs to be utterly oblivious to what modules need to update in response to sample rate changes. Keep internal details internal.

    I believe the problem is the same as deciding what to do when one module returns an error from the render loop. What action to take?

    A module won't - AEModuleProcess has a void return type deliberately; if a generator module fails, it should push a silent buffer. If a processor module fails, it should leave the top buffer untouched, etc. Simple =)
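A sketch of that convention, with simplified stand-ins for TAAE2's buffer stack (`BufferStack`, `stack_push`, and `read_source` are all hypothetical names): the generator's process function returns void, and on internal failure it still pushes a buffer, just a silent one, so the stack is left in the state the next module expects:

```c
#include <stdbool.h>
#include <string.h>

// Illustrative stand-ins for TAAE2's buffer stack; none of these names
// are real API. The point: a generator's process function returns void,
// and on failure it pushes a *silent* buffer rather than nothing, so the
// stack depth stays exactly what downstream modules expect.
#define STACK_MAX 16
#define SLICE 256

typedef struct {
    float buffers[STACK_MAX][SLICE];
    int count;
} BufferStack;

static float *stack_push(BufferStack *s) {
    return s->count < STACK_MAX ? s->buffers[s->count++] : NULL;
}

// Simulated audio source that fails (e.g. a file read error)
static bool read_source(float *out, int frames) {
    (void)out; (void)frames;
    return false;
}

void generator_process(BufferStack *stack, int frames) {
    float *buf = stack_push(stack);
    if (!buf) return;
    if (!read_source(buf, frames)) {
        memset(buf, 0, frames * sizeof(float)); // failure: push silence, not garbage
    }
}
```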

    The side question is: can the context notify observers on main, in sync with the renderloop?

    Not in sync, no.

  • @32BT said:
    initWithRenderer needs to go entirely,

    Yeah, quite possibly. It's rather confusing, by the sounds of it.

    initWithSampleRate or initWithStreamFormat would for example be a better initializer for modules that have to reinitialize when those parameters change.

    Probably better to pick these details from the context at render time, otherwise it's a burden for developers. Downside: module's going to have to defer final initialization until the first render, because it'll have to reconfigure to fit the sample rate it finds in the context. That's a pretty big downside.

    A side question: should we consider a module to be a controller object between its parameters and its processing function?

    Could you elaborate, there?

    @leothiessen said:

    @32BT said:
    initWithRenderer needs to go entirely, since that suggest that the module controls the renderer, or that it is a fixed relation. Neither is true.

    For what it's worth, I certainly came away with the assumption, in the back of my mind, that module<-->renderer was a fixed relation.

    Yeah, it is a fixed relation, right now.

  • The nice thing about obj-c is that it is very hierarchical, so as long as you keep the responsibility for parameters with the owning objects, it should stay clean and simple.

    I think that the context is the owner & keeper of sampleRate (and streamFormat) and should therefore be the controller to go to for updates.

    I also realized that stopping the render loop is probably necessary for reinitializing the context itself as well, is it not? e.g. the stack?

    There used to be a different type of construct: initUsing....

    So this might be a solution:

    context->initWithStreamFormat

    module->initUsingContext

    "using" indicates that you can read the parameters from the context but that you don't control it, or keep it. And it will be fairly simple to write a default initializer to jump to with the relevant parameters.

    module->initUsingContext
    { module->initWithSampleRate:context->sampleRate }
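The pseudocode above could be sketched in plain C roughly as follows (hypothetical names; TAAE2's real classes are Objective-C): the "using" initializer reads what it needs from the context and forwards to a designated initializer, without keeping or controlling the context:

```c
// Hypothetical C rendering of the initUsingContext idea (TAAE2's real
// classes are Objective-C; these names are made up for illustration).
typedef struct {
    double sampleRate;
    int channelCount;
} Context;

typedef struct {
    double sampleRate; // copied at init; refreshed via change notifications
} Module;

// Designated initializer: takes the concrete parameters
void module_init_with_sample_rate(Module *m, double rate) {
    m->sampleRate = rate;
}

// "Using" initializer: reads parameters from the context and forwards,
// without keeping a reference to (or control over) the context
void module_init_using_context(Module *m, const Context *ctx) {
    module_init_with_sample_rate(m, ctx->sampleRate);
}
```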

  • @Michael said:
    A module won't - AEModuleProcess is a void return time deliberately; if a generator module fails, it should push a silent buffer. If a processor module fails, it should leave the top buffer untouched, etc. Simple =)

    Simple to whom? :wink: Yes, the process function should do those things, unless it can't, so in addition it should return an error and the environment should respond appropriately. (Solving the error handling solves the hierarchy problem as well, and will result in a more elegant and graceful environment for both user and developer.)

  • How about this:

    module->initUsingContext
    default implementation adds "self" as observer to context for significant context changes

    context observes system for significant changes like sampleRate

    1. a system-induced sample rate change occurs
    2. context receives notification
    3. context stops main renderloop
    4. context reinitializes stack etc
    5. context notifies observers of change
    6. context restarts main renderloop
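A rough C sketch of steps 3-6 above (hypothetical names, not TAAE2 API; in Objective-C the observer list would be NSNotificationCenter or target-selector observers). Note that delivery is synchronous: by the time `context_sample_rate_changed` returns, every registered module has seen the new rate and the loop can safely restart:

```c
// Hypothetical sketch of steps 3-6 above, not TAAE2 API. In Objective-C
// the observer list would be NSNotificationCenter or target-selector
// observers; here it's a plain callback table. Delivery is synchronous.
#define MAX_OBSERVERS 8

typedef void (*RateObserver)(void *module, double newRate);

typedef struct {
    double sampleRate;
    int running;
    void *targets[MAX_OBSERVERS];
    RateObserver observers[MAX_OBSERVERS];
    int observerCount;
} Context;

void context_add_observer(Context *c, void *target, RateObserver fn) {
    if (c->observerCount < MAX_OBSERVERS) {
        c->targets[c->observerCount] = target;
        c->observers[c->observerCount++] = fn;
    }
}

void context_sample_rate_changed(Context *c, double newRate) {
    c->running = 0;           // 3. stop the main render loop
    c->sampleRate = newRate;  // 4. reinitialize the context itself
    for (int i = 0; i < c->observerCount; i++) {
        c->observers[i](c->targets[i], newRate); // 5. notify observers
    }
    c->running = 1;           // 6. restart the main render loop
}
```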
  • Note that we can implement a weak-observer table, so that you only need to worry about adding; deallocation (and removal) is then not a concern.

  • Sounds feasible - lemme get back to you on that, gotta take some time off to work on Loopy a little

  • Yeah, I think the idea of detecting changes in the render code and deferring it to main queue is a bad idea. The downside you mentioned, that modules need a first render to initialize, is pretty bad! It all feels ugly, a good sign that it's the wrong way :)

    So let's go with initUsingContext or similar. Note that instead of KVO it might as well use NSNotification. The module would add a handler for the notifications it needs (e.g. samplerateChanged) for the object it wants to observe (the context/renderer). It doesn't need to keep a reference to it, and there's no need for any table to keep track of it. Posting the notification is synchronous, so when the postNotification method returns, all modules will have handled it. (Don't use the handle-notification-on-other-queue thing in the module, just the plain old target-selector observer.)

    I'm still a bit unsure about the context vs renderer thing. Isn't it the renderer that is the controller object? The context is just a data structure, owned by the renderer, to pass information to the nodes (numFrames, current sample rate, timestamps, etc.). So then it should rather be [module initUsingRenderer].

  • renderer sounds like an engine object (which it basically is).
    When initializing an object with an engine object, I want to think it is contained and controlled by that object. Context is more like a parameter object, and therefore probably the better choice for initialization.

    The module effectively could/should be the controller object managing the translation between its parameters and its engine.

    The other thing to watch for is the initialization of submodules, such as in the varispeed render block, which uses submodules that might require different parameters than the main context provides. Of course, we can always look at CoreGraphics and PostScript as a parallel, where all this stuff has practically been figured out for us: popping and pushing a current context... (just to keep Michael busy). :sunglasses:

  • @j_liljedahl said:
    So let's go with initUsingContext or similar. Note that instead of KVO it might as well use NSNotification. The module would add a handler for the notifications it needs (e.g. samplerateChanged) for the object it wants to observe (the context/renderer). It doesn't need to keep a reference to it, and there's no need for any table to keep track of it. Posting the notification is synchronous, so when the postNotification method returns, all modules will have handled it. (Don't use the handle-notification-on-other-queue thing in the module, just the plain old target-selector observer.)

    I like the plain old target-selector observer, but I would like to see a weak observer table, because modules are more-or-less ad-hoc objects. You might use them temporarily. I don't want to go for observing every specific parameter change just yet, but would initially go for a general context change.

  • Intuitively the following proposition seems reasonable:

    context->init
    { context->initWithRenderer:renderer->new }

    context->initWithRenderer:

    module-initUsingContext:
    and if you need the renderer here, use context->renderer...

  • @32BT said:

    @j_liljedahl said:
    So let's go with initUsingContext or similar. Note that instead of KVO it might as well use NSNotification. The module would add a handler for the notifications it needs (e.g. samplerateChanged) for the object it wants to observe (the context/renderer). It doesn't need to keep a reference to it, and there's no need for any table to keep track of it. Posting the notification is synchronous, so when the postNotification method returns, all modules will have handled it. (Don't use the handle-notification-on-other-queue thing in the module, just the plain old target-selector observer.)

    I like the plain old target-selector observer, but I would like to see a weak observer table, because modules are more-or-less ad-hoc objects. You might use them temporarily. I don't want to go for observing every specific parameter change just yet, but would initially go for a general context change.

    Not sure what the observer table would be used for? A module would just start observing on init, and stop observing on dealloc. Sure, one could have a generic AEContextChangedNotification, but the only thing that actually changes from the outside is the streamformat, and the only thing that can change in the streamformat is the sample rate. The other parameters are local to the current render cycle (inNumberFrames, inAudioTimeStamp, etc.) and are only valid during rendering. One could just as well pass them as arguments to AEModuleProcess(), but it's more convenient to wrap them in a struct.

  • @j_liljedahl said:
    Not sure what the observer table would be used for? A module would just start observing on init, and stop observing on dealloc. Sure, one could have a generic AEContextChangedNotification, but the only thing that actually changes from the outside is the streamformat, and the only thing that can change in the streamformat is the sample rate. The other parameters are local to the current render cycle (inNumberFrames, inAudioTimeStamp, etc.) and are only valid during rendering. One could just as well pass them as arguments to AEModuleProcess(), but it's more convenient to wrap them in a struct.

    It's simpler than KVO:
    1. you only need to add and not worry about removal,
    2. you could add concurrent updates if desired,
    3. you can write a specific message for clarity (e.g. didChangeAEContext:),
    4. none of the overhead of KVO

    If the sampleRate changes, you likely need to change maxFramesPerSlice as well. But otherwise: if there is little that can actually change, then there is little reason to observe individual parameters, imo.

    Can't we make a shortlist of what parameters can actually change from the environment?
    In addition: is there a possible use case for being able to render off-line to a file, implying that the context is not always interested in environment changes?

  • I'm not saying it should use KVO, but NSNotificationCenter. A module starts observing the notification for the context at init, and stops observing at dealloc. No KVO overhead, no observer table needed, etc. Also, only modules that need it can observe the notification, and neither the modules or context need to have any other connection between them.

    I think it's best to just keep maxFramesPerSlice at 4096.

    As far as I know, sample rate is the only thing that can change, and which is not local to the current render cycle. inNumberFrames can of course change for each render cycle, and every module must render the number of frames asked for.

  • @j_liljedahl said:
    I'm not saying it should use KVO, but NSNotificationCenter. A module starts observing the notification for the context at init, and stops observing at dealloc. No KVO overhead, no observer table needed, etc. Also, only modules that need it can observe the notification, and neither the modules or context need to have any other connection between them.

    Okay, fair enough. (Although NSNotificationCenter is like a manager yelling in the dark, because it doesn't have any workers to command... :wink: )

    I think it's best to just keep maxFramesPerSlice at 4096.

    I'm also thinking desktop, where we may have to cope with 192kHz.
    The context defines the buffer sizes based on desired latency, and may have to adjust based on varispeed? Not sure yet, but perhaps 4k is enough.

    As far as I know, sample rate is the only thing that can change, and which is not local to the current render cycle. inNumberFrames can of course change for each render cycle, and every module must render the number of frames asked for.

    maybe route changes? HW volume changes? External clock? MIDI exceptions? Not sure if these are relevant for the AEContext though...

  • edited April 2016

    @32BT and @j_liljedahl said:

    Solving the error handling solves the hierarchy problem as well, and will result in a more elegant and graceful environment for both user and developer

    I think I beg to differ, there - too much error handling/checking and you lose any advantage of having it. I'd argue that having to handle errors on modules at the point you run the module is going to make it a whole load of not fun to work with. I'd be fine with a BOOL return (maybe even an OSStatus), so you can react if you need to, but the error state should leave the stack in the expected state.

    Yeah, I think the idea of detecting changes in the render code and deferring it to main queue is a bad idea. The downside you mentioned, that modules need a first render to initialize, is pretty bad! It all feels ugly, a good sign that it's the wrong way

    I concur! Feels wrong

    Note that instead of KVO it might as well use NSNotification

    Yep, I agree, notifications are the easiest way to go

    I think it's best to just keep maxFramesPerSlice at 4096.

    Agreed - this simplifies module implementation a lot

    As far as I know, sample rate is the only thing that can change, and which is not local to the current render cycle. inNumberFrames can of course change for each render cycle, and every module must render the number of frames asked for.

    maybe route changes? HW volume changes? External clock? MIDI exceptions? Not sure if these are relevant for the AEContext though...

    Just sample rate, and the number of input/output channels. All those other things aren't properties of the context.

    may have to adjust based on varispeed? not sure yet, but perhaps 4k is enough.

    I think we're okay there - Apple's docs say, with respect to kAudioUnitProperty_MaximumFramesPerSlice, "Additionally, an Audio Unit will not ask its input for more samples than the maximum frames. If more frames are required the Audio Unit will pull multiple times. This can arise in situations, for example, when using the Varispeed Audio Unit where it may be performing playback at 4X speed."
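That pull-splitting behavior could be sketched like this (hypothetical names; the real mechanism lives inside the audio unit's render machinery): a request larger than the maximum slice is satisfied by multiple pulls of at most kMaxFramesPerSlice frames each:

```c
// Sketch of the multiple-pull behavior Apple's docs describe; the names
// here are hypothetical (the real logic lives inside the audio unit).
// A request for more than kMaxFramesPerSlice frames -- e.g. varispeed
// playback at 4X -- becomes several smaller pulls.
#define kMaxFramesPerSlice 4096

typedef void (*InputPull)(void *source, int frames);

int pull_input(void *source, InputPull pull, int totalFrames) {
    int pulls = 0;
    while (totalFrames > 0) {
        int chunk = totalFrames < kMaxFramesPerSlice ? totalFrames
                                                     : kMaxFramesPerSlice;
        pull(source, chunk); // each pull stays within the slice maximum
        totalFrames -= chunk;
        pulls++;
    }
    return pulls;
}
```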

    Intuitively the following proposition seems reasonable:

    context->init
    { context->initWithRenderer:renderer->new }

    context->initWithRenderer:

    module-initUsingContext:
    and if you need the renderer here, use context->renderer...

    I'm in two minds about this. It adds some complexity to the init process, which means more boilerplate. It also complicates the scenario where a varispeed/etc format converter is used, like in the sample app - now you need two separate contexts, which you need to tie to the correct renderer, as opposed to the way it is right now, where the renderer is the only entity involved. I could live with the renderer owning the context, so you init modules with renderer.context, perhaps, but it still adds some complexity, and it may not be immediately obvious where one's meant to get the context from, as a new initiate.

    It seems to me like a trade-off between minimizing boilerplate and init simplicity on one side, and avoiding confusion with a module owning a renderer reference, on the other side. I have trouble putting myself in a newbie's shoes, so I don't completely understand the latter confusion, but I wonder if there's a third solution, here: renaming the renderer class. AERenderer is a very simple object, really just a wrapper for the render block and a few properties, and you could make the argument it isn't really a renderer at all - that's the thing that uses AERenderer, like AEAudioUnitOutput and the soon-to-come AEAudioFileOutput. So, is there something we could call it that would avoid that confusion? I'm a little tempted to call it AEContext and be done with it.

  • @Michael said:
    @32BT and @j_liljedahl said:

    Solving the error handling solves the hierarchy problem as well, and will result in a more elegant and graceful environment for both user and developer

    I think I beg to differ, there - too much error handling/checking and you lose any advantage of having it. I'd argue that having to handle errors on modules at the point you run the module is going to make it a whole load of not fun to work with. I'd be fine with a BOOL return (maybe even an OSStatus), so you can react if you need to, but the error state should leave the stack in the expected state.

    A) "you can react if you need to" is obviously implied
    B) asserts and exceptions are the worst possible solution
    C) A module can't make decisions about the state of the context based on internal errors. The lower down the chain of command, the "dumber" it should be. It knows how to render samples, but if the state of the environment doesn't allow it to do its job, then it should signal that it didn't do its job, and the environment can decide what to do next: continue, or call the whole thing off...

    As far as I know, sample rate is the only thing that can change, and which is not local to the current render cycle. inNumberFrames can of course change for each render cycle, and every module must render the number of frames asked for.

    maybe route changes? HW volume changes? External clock? MIDI exceptions? Not sure if these are relevant for the AEContext though...

    Just sample rate, and the number of input/output channels. All those other things aren't properties of the context.

    Can input/output channels change "on the fly" through outside influences?

    And btw: clock synchronization most certainly is part of the context, since you're using AudioTimeStamp. But d*mn, don't these look fine?: http://wooaudio.com/products/wa7fireflies.html

    Intuitively the following proposition seems reasonable:

    context->init
    { context->initWithRenderer:renderer->new }

    context->initWithRenderer:

    module-initUsingContext:
    and if you need the renderer here, use context->renderer...

    I'm in two minds about this. It adds some complexity to the init process, which means more boilerplate. It also complicates the scenario where a varispeed/etc format converter is used, like in the sample app - now you need two separate contexts, which you need to tie to the correct renderer, as opposed to the way it is right now, where the renderer is the only entity involved. I could live with the renderer owning the context, so you init modules with renderer.context, perhaps, but it still adds some complexity, and it may not be immediately obvious where one's meant to get the context from, as a new initiate.

    Yes, two separate contexts, just like you also use a subrenderer. So the problem remains either way. In graphics you have an equivalent: a current graphics context that you have to set explicitly before rendering. When you create a system environment like Cocoa, it sets things up before calling your specific drawRect: routine. Similarly, I'd like TAAE to set up the environment before calling my module's render routine.

    The only thing that remains in my mind is the question of whether a context contains a main render engine, or whether a main render engine should contain a context.

    I think of a renderer as a PostScript interpreter, and a context as the canvas to draw to.
    The canvas has a resolution and a size, just like the AEContext contains a sampleRate (and, for example, a duration)

    The PS interpreter used to render in bands, just like the context is split into slices...

    Again, the further down the (containment) chain, the dumber an object should be. That is, it may be smart at sample processing, but should be dumb with regard to the environment. It either can do its job, or it can't, and if it can't, there are some heavy decisions to make. Not the module's problem, but certainly not to be ignored either.

    It seems to me like a trade-off between minimizing boilerplate and init simplicity on one side, and avoiding confusion with a module owning a renderer reference, on the other side.

    No, that is not the issue. Setting up a correct structure automatically results in simplicity and less confusion, and, in my experience, goes hand-in-hand with reasonable error handling and a more hierarchical chain of command. Unfortunately, finding the correct structure isn't always an easy process.

  • edited April 2016

    @32BT said:
    But d*mn, don't these look fine?: http://wooaudio.com/products/wa7fireflies.html

    Wow, so purty!

    Lemme get back to you on the rest

  • Is there ever a reason to initialize a renderer without a block?

  • @32BT said:
    Is there ever a reason to initialize a renderer without a block?

    I don't think so. As I see it, a renderer is the block (render loop code), plus some additional state.

  • @32BT said:
    Can input/output channels change "on the fly" through outside influences?

    During route changes, they can; otherwise, not that I'm aware.

    Intuitively the following proposition seems reasonable:

    context->init
    { context->initWithRenderer:renderer->new }

    context->initWithRenderer:

    module-initUsingContext:
    and if you need the renderer here, use context->renderer...

    I'm in two minds about this. It adds some complexity to the init process, which means more boilerplate. It also complicates the scenario where a varispeed/etc format converter is used, like in the sample app - now you need two separate contexts, which you need to tie to the correct renderer, as opposed to the way it is right now, where the renderer is the only entity involved. I could live with the renderer owning the context, so you init modules with renderer.context, perhaps, but it still adds some complexity, and it may not be immediately obvious where one's meant to get the context from, as a new initiate.

    Yes, two separate contexts, just like you also use a subrenderer. So the problem remains either way.

    I'd prefer for the renderer to create the context, rather than requiring the developer to do that. It's just more work, and more stuff that could go wrong.

    The only thing that remains in my mind is the question of whether a context contains a main render engine, or whether a main render engine should contain a context.

    Pretty sure it's the latter. Compare with, say, UIView's drawInRect: - the developer doesn't create a CGContextRef when they create the UIView. It's created for them, and passed in when needed. The alternative would be kinda insane.

  • @Michael said:

    The only thing that remains in my mind is the question of whether a context contains a main render engine, or whether a main render engine should contain a context.

    Pretty sure it's the latter. Compare with, say, UIView's drawInRect: - the developer doesn't create a CGContextRef when they create the UIView. It's created for them, and passed in when needed. The alternative would be kinda insane.

    Ha, "insane" generally means "innovative"... :wink:
