ABetterMetronome using TAAE (a source code example)

edited March 2014

Over the past few weeks I've come across an unusual amount (for me :-) of talk about metronomes and timing. After looking through a bunch of code I found that a lot of people are using NSTimer or performSelector:withObject:afterDelay: or some other "not the most accurate" mechanism. Sometimes this may be fine for your app, but if you want rock-solid metronome performance, IMO, the best way is to count frames. The audio sub-system will always deliver 44,100 (for example) samples per second, so if you piggy-back off of this fact you can have the tightest metronome performance around.

So, I put together an example - hope it helps someone! Please comment and make suggestions if you know of a better way to handle anything here.

https://github.com/zobkiw/ABetterMetronome
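
In a nutshell, the approach looks something like this (heavily simplified - see the repo for the real thing; the full example fills in an actual click sound at -12dB):

    static UInt64 total_frames = 0;      // frames rendered since the channel started
    static UInt64 next_beat_frame = 0;   // the exact frame the next beat starts on

    AEBlockChannel *metronome = [AEBlockChannel channelWithBlock:
        ^(const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
            UInt64 frames_between_beats = 44100 / (120.0 / 60.0); // 120 BPM at 44.1kHz
            for (int i = 0; i < frames; i++) {
                if (total_frames == next_beat_frame) {
                    // Time for a beat: generate (or copy) the click's samples into
                    // audio->mBuffers[n].mData starting at index i, then schedule the next one
                    next_beat_frame += frames_between_beats;
                }
                total_frames++;
            }
        }];
    [_audioController addChannels:@[metronome]];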


Comments

  • Zobkiw,
    Great effort, had a play with this today and it's a great walkthrough (also good to see how to generate audio in blocks, as I hadn't done that before).

    Only one thing, I did the following:
    1. Added block channel to an existing record-playback project
    2. Hit record, which has the metronome playing simultaneously.
    3. Hit stop, pressed play.
    4. The metronome (created by the block) and the metronome recorded (in the track, picked up via the mic) are noticeably out of sync. What's more, if I stop / start playback, the metronome seems to 'beep' at different points every time the track is played.

    This makes me think that the metronome, because of its reliance on counting off a number of samples before 'going', doesn't effectively reset to zero on playback.

    Any thoughts on how I might achieve this, enabling perfect metronome sync?

  • Thanks Ross,

    The metronome in my example is stand-alone and was not implemented to sync with anything in particular other than maybe someone playing to it - much like a metronome in the physical world.

    Having said that, if you want to synchronize it to another source of audio you can easily do so. Note the next_beat_frame variable. This defines the next frame that the beat sound will play on, exactly.

    Depending on how your existing project records/plays there may be significant delay (in frames) from when audio actually begins recording/playing after you trigger the action. The synchronization between multiple tracks is currently an exercise left to the reader (coder) but it isn't any harder than syncing ANY number of tracks.

    I think the key here is to reset the metronome appropriately but also know exactly what frame the downbeat of your recorded audio occurs on and offset the metronome accordingly.

    If you are recording in any way other than grabbing the frames yourself you likely will have some offset issues. You have to get down to the frame level across the board if you want multiple tracks to be truly in sync. For example, AVAudioRecorder and AVAudioPlayer are not really made for perfect synchronization in my experience.

    Hope this helps.

  • @zobkiw,

    Any ideas on how to achieve grabbing the 'downbeat' (e.g. timecode for audio start) from AAE?

  • Also - Just regarding your point above. As far as I was aware, syncing multiple tracks would happen automatically in AAE, as nothing in playback/record (using AEFilePlayer, AERecorder) gets triggered until you start the audio controller (e.g. _audiocontroller start). Given that I start/stop, rather than just mute/unmute channels, I think I get around broader sync issues (though correct me if I'm totally wrong on that one, as it's an important assumption!).

    So really, I just need to have this block per your example aligned to the start time that everything else is using.

  • edited March 2014

    Hey RossBeaf,

    I'm guessing the audio timestamp passed into the various block functions would be helpful. It's going to depend on managing an offset I suspect.

    You may be correct about starting the audio controller and keeping things in sync, but I did notice a slight discrepancy recently in a project I've been working on regarding the audio written to disk vs the reported time - you may take a look at this bug to see if it affects you in any way.

    https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine/issues/81

    I solved the problem by tweaking the code to count samples - so at the point the samples are being written to disk, they are counted. This way things are exact.

    My personal favorite way to test sync issues is to let something run overnight. In the morning if it's completely out of whack (sync wise) then you know you have a problem somewhere.

  • Interesting - good testing methodology too!

    I've managed to crack it (I think!).
    Given that my audio app is either recording or playing back, I can pull the 'current time' from the AEFilePlayer or AERecorder classes (which is 0 at playback/record start).

    Setting total_frames to 0 effectively resets the counter at track start.

    Then, for mid-track playback, all that's necessary is to take the current time modulo the frames per beat (e.g. 44100 / (120 bpm / 60) = 22050 frames) to derive next_beat_frame (0 if we're at the start of a tune).

    This results in rock solid timings :-)
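
    In code, the alignment works out to something like this (a sketch - names like bpm and currentTimeInSeconds are just illustrative):

    UInt64 frames_per_beat = 44100 / (bpm / 60.0);             // e.g. 22050 frames at 120 BPM
    UInt64 current_frame   = currentTimeInSeconds * 44100;     // position pulled from the player/recorder
    UInt64 into_beat       = current_frame % frames_per_beat;  // how far past the previous beat we are

    total_frames    = current_frame;
    next_beat_frame = (into_beat == 0) ? current_frame
                                       : current_frame + (frames_per_beat - into_beat);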

    There's still a bit of a latency issue to work out re. recording with a metronome, which I'm not quite sure how to overcome as of yet - obviously there's a delay between the speaker pushing out sound and the mic picking it back up - but for now this is a whole lot better.

    Thanks a lot for your input!

  • Very good! Glad you got it working!

    I've read of ways to calculate hardware latency regarding the output/input like you mention but I'm blanking at the moment. I want to say one of Michael's examples even has some code that takes into account hardware latency - maybe his Engine Sample or maybe I'm thinking of something else.
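
    If memory serves, it's along these lines - a rough, untested sketch using AVAudioSession's latency properties (names illustrative):

    #import <AVFoundation/AVFoundation.h>

    // Estimate the output -> air -> input round trip and convert it to frames,
    // so the recorded audio (or the metronome) can be shifted by that amount.
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSTimeInterval roundTrip = session.outputLatency    // speaker path
                             + session.inputLatency;    // microphone path
    UInt64 latencyFrames = (UInt64)(roundTrip * session.sampleRate);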

  • It's an interesting conundrum.

    The first time you record you have a blank slate, so to speak. So if the metronome is created at sample 10 and recorded at sample 20, you'll hear the slight delay on playback.

    But even if you corrected the metronome (e.g. moved it back 10), it'd still result in the audio recording picking up 10 samples later (e.g. 30). I'm not sure there's a lot you can do - In fact, I just tried this on Loopy and it's definitely an issue there as well, so I wonder if there can be a fix.

    In a studio this isn't a problem, as you push out the click to your headphones and everyone records to the click as a baseline, so the audio, regardless of delay, is at least in sync across multiple tracks. So maybe I just need to prompt the user to wear headphones (which they should anyway) when using the click.

  • Headphones and a visible mechanism as well, like I think Loopy does. Some sort of flash, etc.

  • Hey guys,

    I agree that NSTimer and performSelector:afterDelay aren't accurate enough when it comes to precisely timing sounds.

    @zobkiw, I've taken a look at your code in ABetterMetronome, and I have some comments:

    • Will this pattern work if I try to synchronise two AEBlockChannels?

    • Why use an AEBlockChannel and not an AEBlockScheduler? The docs seem to favour the latter for managing sounds at precise times.

    • If I use your pattern to implement a metronome that uses audio files as its sounds, then I run into trouble. To see what I mean, add a small audio file to the Xcode project and try to insert this code inside the if (making_beat) { block:

    AEAudioFilePlayer *clickPlayer = [AEAudioFilePlayer audioFilePlayerWithURL:[[NSBundle mainBundle] URLForResource:@"click" withExtension:@"caf"] audioController:_audioController error:NULL];
    [clickPlayer setRemoveUponFinish:YES];
    [_audioController addChannels:@[clickPlayer]];
    
    • Is keeping track of the number of frames a good way to track time? Why not use the AudioTimeStamp?
  • Thanks for the comments. To respond to your questions:

    • You can likely use a similar method to keep multiple AEBlockChannels in sync - depending on what you're actually doing, mind. You may be able to have one AEBlockChannel grab multiple sources of audio and mix them together as well.

    • Regarding AEBlockScheduler - this will schedule and execute a block of code, but you still need to generate samples and get them into the audio stream. So you would still need a generator of some sort running - maybe use AEBlockScheduler to set a flag? As with most things in programming there may be multiple ways to accomplish a task - I just decided to use AEBlockChannel :-)

    • Loading an audio file in that call is not the recommended approach. I would recommend instead that you load the samples from the audio file into a buffer when the application starts up. Then, when it's time to make the sound, return those samples in audio->mBuffers[0].mData and audio->mBuffers[1].mData, as opposed to the way I do it, which is to generate the samples based on a sine wave (there's a rough sketch of this after these points). REMEMBER, in the block you should do as LITTLE work as possible.

    • The AudioTimeStamp is the time at which the buffer of frames is being generated - it's really only truly valid for the first frame. We may be dealing with 512 frames per callback, so as you can imagine, the closer you get to frame 512 the less valid that time becomes. Therefore, by counting frames, I'm able to be much more accurate and begin the click on the EXACT frame where it needs to occur.
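
    To illustrate the third point, here's a rough sketch (this isn't from the example project - names like clickSamples are purely illustrative, and it assumes the click file is already mono, 16-bit, 44.1kHz so no rate conversion is needed). First, load the click into memory once at startup, for instance with Core Audio's ExtAudioFile:

    #import <AudioToolbox/AudioToolbox.h>

    static SInt16 *clickSamples = NULL;   // the click, as raw samples
    static UInt32  clickLength  = 0;      // its length in frames

    // Load a short click file into memory ONCE at startup - never inside the render block.
    void LoadClick(NSURL *url) {
        ExtAudioFileRef file = NULL;
        if (ExtAudioFileOpenURL((__bridge CFURLRef)url, &file) != noErr) return;

        // Ask ExtAudioFile to hand us packed 16-bit signed integer mono samples
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mBitsPerChannel   = 16;
        fmt.mChannelsPerFrame = 1;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerFrame    = 2;
        fmt.mBytesPerPacket   = 2;
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat, sizeof(fmt), &fmt);

        SInt64 fileFrames = 0;
        UInt32 propSize = sizeof(fileFrames);
        ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames, &propSize, &fileFrames);

        clickSamples = malloc((size_t)fileFrames * sizeof(SInt16));
        AudioBufferList buf;
        buf.mNumberBuffers = 1;
        buf.mBuffers[0].mNumberChannels = 1;
        buf.mBuffers[0].mDataByteSize   = (UInt32)(fileFrames * sizeof(SInt16));
        buf.mBuffers[0].mData           = clickSamples;

        UInt32 framesToRead = (UInt32)fileFrames;
        ExtAudioFileRead(file, &framesToRead, &buf);
        clickLength = framesToRead;
        ExtAudioFileDispose(file);
    }

    Then, inside the block, in place of the sine-wave math, copy from that buffer while a beat is in progress:

    // Per frame, inside the AEBlockChannel block (click_position is another
    // static/__block variable, like total_frames):
    if (total_frames == next_beat_frame) {
        next_beat_frame += frames_between_beats;
        making_beat = YES;
        click_position = 0;
    }
    if (making_beat) {
        SInt16 sample = clickSamples[click_position++];
        ((SInt16*)audio->mBuffers[0].mData)[i] += sample;      // left
        ((SInt16*)audio->mBuffers[1].mData)[i] += sample;      // right
        if (click_position >= clickLength) making_beat = NO;   // the click has finished
    }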

    Hope this helps.

  • Hi @zobkiw:

    First of all, thank you for sharing your knowledge and taking the time to prepare a demo project.

    I'm trying to build your example both for an iPad and the iOS Simulator, and I get the following:

    clang: error: no such file or directory: '/Volumes/Data/DEV/SOURCES/ABetterMetronome-master/TheAmazingAudioEngine/TheAmazingAudioEngine/AEUtilities.c'
    clang: error: no input files
    Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang failed with exit code 1

    Any ideas on how to solve this compilation issue?

    Thanks

    Hernan

  • Whoops... no TAAE included... just found the problem... Thanks

  • edited April 2014

    Glad you got the problem solved! TAAE is actually a git submodule, but if you don't tell git to fetch it (e.g. git submodule update --init after cloning) it won't :-)

  • Hi zobkiw,

    Could you please post some sample code where you load a sound into TAAE and then play it at arbitrary times?

    I just don't want to create the channel, load the sound and removeUponFinish every time the click plays...

    Any ideas?

    Thanks

  • So far I've tried creating an AEAudioFilePlayer to hold the sound:

    NSError* audioError = nil;
    self.upBeatPlayer = [AEAudioFilePlayer audioFilePlayerWithURL:[[NSBundle mainBundle] URLForResource:LSCLICK_UP withExtension:@"aif"] audioController:[LSAudioManager sharedAudioManager].audioController error:&audioError];
    if(audioError)
        LSLog(@"We got an Error Loading the CLICK (UP): %@", audioError);
    
    self.upBeatPlayer.channelIsMuted = NO;
    self.upBeatPlayer.loop = NO;
    self.upBeatPlayer.removeUponFinish = YES;
    

    And then, when it is time to play, I call:

        [[LSAudioManager sharedAudioManager].audioController addChannels:[NSArray arrayWithObject:self.upBeatPlayer]];
    

    RESULT: it works one time and that's it :-(

  • You definitely don't want to remove on finish - I would suggest loading the samples into buffers from the audio files so they are always ready to play. Then, create a channel maybe specifically for the metronome (so you don't have to manually multiply samples from any other audio you have going on) then simply copy the samples from the storage buffers to the output buffers when the time rolls around.

    No time to implement that example at the moment but I'll add it to the wish list. :)

  • Is counting samples significantly more accurate than just using an AUSampler and adding midi events on each click?

  • There are usually multiple ways of doing these things in the audio world. Depending on your need one may be better/easier than the other.

  • edited October 2014

    Thanks for sharing

    I saw there are two comments talking about creating an audio player to hold the sound. You suggested loading the samples into a buffer and then returning those samples in audio->mBuffers[0].mData and audio->mBuffers[1].mData. Here is how I did it:

    In viewDidLoad I have:

    - (void)viewDidLoad {
        self.audioController = [[AEAudioController alloc]
                                initWithAudioDescription:[AEAudioController nonInterleavedFloatStereoAudioDescription]
                                inputEnabled:YES];
        NSError *error;
        [_audioController start:&error];
        self.recorder = [[AERecorder alloc] initWithAudioController:_audioController];

        self.tick = [AEAudioFilePlayer audioFilePlayerWithURL:[[NSBundle mainBundle] URLForResource:@"XXXX" withExtension:@"m4a"]
                                              audioController:_audioController
                                                        error:NULL];

        self.beat = [AEAudioFilePlayer audioFilePlayerWithURL:[[NSBundle mainBundle] URLForResource:@"XXXX" withExtension:@"m4a"]
                                              audioController:_audioController
                                                        error:NULL];
    }
    

    Both the recorder and the sounds are loaded before recording.

    In the beat function I have the following, but I wonder: what type should I cast audio to?

    -(void)beat {
        if (making_tick) {
            (audio->mBuffers[0].mData)[i] = self.tick;
            (audio->mBuffers[1].mData)[i] = self.tick;
        }

        if (making_beat) {
            (audio->mBuffers[0].mData)[i] = self.beat;
            (audio->mBuffers[1].mData)[i] = self.beat;
        }
    }
    

    When the record button is pressed, the following function will run:

    - (IBAction)record:(id)sender {
    AERecorderStartRecording(_recorder);
    [_audioController addOutputReceiver:_recorder];
    [_audioController addInputReceiver:_recorder];
    [self beat];
    }
    

    Do I understand loading the samples into the buffer and then returning those samples in audio->mBuffers[0].mData and audio->mBuffers[1].mData correctly? Based on my code, what potential problems will I have?

    Thanks, I've just started to learn Core Audio, so the question may be stupid.

    Thanks in advance

  • edited October 2014

    Hi Sean,

    You can not assign the beat/tick like you are doing. I'm surprised you didn't receive a warning or error when trying to compile - maybe you did.

    What I'm referring to is actually reading the RAW samples of the beat/tick, keeping them in a buffer, and then copying them (sample by sample) into the output audio stream at the exact point they are needed.

    You can not make a metronome any other way and have it be rock-solid in my experience.

    I would suggest reading up on some audio programming topics before diving into this - there is some basic theory that you'll want to study up on first when dealing with audio at the sample level.

    Hope this helps!

  • zobkiw, why don't you write an example of what you are talking about instead of repeating the same thing over again? Most people want to load a sample for the metronome... A simple example of your buffers would be very helpful, and we could then look for more on Google. Thank you ;)

  • The answer is: because there is only one of me :-)

    Maybe someday but it isn't a priority at the moment, sorry.

  • edited October 2014

    Hey zobkiw,

    Thanks for your code. I am looking for a way to stop playing the beats inside the block. Here is what I have so far; I copied most of the code from you. The only thing I changed is that when the current frame reaches a certain value, it stops. I feel I did something wrong, because I remove the block channel from inside the block channel itself. Can you give me some advice on how to stop executing the block and remove the beats from the channel?

    Sometimes Xcode also shows an EXC_BAD_ACCESS (code=1) on the following code:

      __attribute__((weak)) BOOL ABSenderPortIsConnected(ABSenderPort *senderPort) {
        printf("ABSenderPortIsConnected stub called\n");
        return NO;
     }
    

    Here is my code:

        static UInt64 total_frames = 0;
        
        // The next frame that the beat will play on
        static UInt64 next_beat_frame = 0;
        static UInt64 next_tick_frame = 0;
       
        _upper=2;
        static BOOL making_beat = NO;
        static BOOL tick=NO;
        _bpm=self.tempo_slider.value;
        // Oscillator specifics - instead you can easily load the samples from cowbell.aif or somesuch
        float oscillatorRate = 440./44100.0;
        __block float oscillatorPosition = 0; // this is outside the block since beats can span calls to the block
        
        self.blockChannel.volume=0.5;
            // The block that is our metronome
        self.blockChannel = [AEBlockChannel channelWithBlock:^(const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
            UInt64 frames_between_beats = 44100/(_bpm/60.);
            UInt64 end_frame=frames_between_beats*_length;
            // For each frame, count and if we reach the frame that should start a beat, start the beat
            for (int i=0; i<frames; i++) { // frame...by frame...
                if (total_frames == next_beat_frame) {          // time for the next beat?
                    next_beat_frame += frames_between_beats;
                    making_beat = YES;
                    oscillatorPosition = 0;                     // start the click from the beginning
                }
                if (total_frames == next_tick_frame) {          // time for the next subdivision tick?
                    next_tick_frame += frames_between_beats / _upper;
                    tick = YES;
                    oscillatorPosition = 0;
                }
                if (making_beat) {
                    float x = oscillatorPosition;
                    x *= x; x -= 1.0; x *= x;       // x now in the range 0...1
                    x *= INT16_MAX;
                    x -= INT16_MAX / 2;
                    oscillatorPosition += oscillatorRate;
                    if (oscillatorPosition > 2.0) { /* oscillatorPosition -= 2.0; */ making_beat = NO; } // turn off the beat, just a quick tick!
                    ((SInt16*)audio->mBuffers[0].mData)[i] += x * 0.25; // -12dB
                }
                
                
                if (tick){
                    float x = oscillatorPosition;
                    x *= x; x -= 1.0; x *= x;       // x now in the range 0...1
                    x *= INT16_MAX;
                    x -= INT16_MAX / 2;
                    oscillatorPosition += oscillatorRate;
                    if (oscillatorPosition > 1.0) { /* oscillatorPosition -= 2.0; */tick = NO; } // turn off the beat, just a quick tick!
                    ((SInt16*)audio->mBuffers[0].mData)[i] += x * 0.25; // -12dB
                }
                // Increment the count
                total_frames++;
             
            }
        }];
        
        // Add the block channel to the audio controller
        [_audioController addChannels:[NSArray arrayWithObject:_blockChannel]];
    

    Thanks so much

  • Thanks a lot zobkiw for sharing such a great idea and code.
    Would you know how to add other actions to your metronome example in addition to the sound: a flashing light, a beat/bar counter, vibration, a synchronized visual effect in the UI...?
    It would be great if they could all be perfectly in sync with the frame approach.
    I went through several discussions on time accuracy and searched the AAE forum, but did not manage to find any direction on this (I am new to iOS development).
    Thanks in advance.

  • sean - I think you just want to be able to mute the beats? Unless your app is having trouble on particular devices you can probably leave the block in place (don't remove it) and simply set a flag to either do work in it or do nothing - i.e.: return silence.
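
    Something along these lines, right at the top of the block (a sketch - the flag name is just illustrative):

    if (metronome_muted) {
        // Output silence instead of beats, and reset the counters so that
        // un-muting starts cleanly on a downbeat again.
        for (int b = 0; b < audio->mNumberBuffers; b++) {
            memset(audio->mBuffers[b].mData, 0, audio->mBuffers[b].mDataByteSize);
        }
        total_frames = 0;
        next_beat_frame = 0;
        return;
    }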

    heuristic - so the blocks are going to be executing on a thread that will not allow you to do UI work. However, that doesn't mean you can't trigger code on the main thread to do UI work. The goal would be to kick off a block on another thread asynchronously as quickly as possible - or maybe even simply set a flag that a block on another thread keeps an eye on and triggers the work. You may have to experiment a bit to get it right but there isn't any reason I can think of that you can't do what you're asking.

  • thanks zobkiw, will experiment following your suggestions

  • heuristic - here is what I did in the block; just add the following code:

    dispatch_async(dispatch_get_main_queue(), ^{
        // UI work here
    });

    But I do not know whether they will all be perfectly in sync.

    zobkiw - I am able to mute the beats by using _blockChannel.channelIsMuted = YES and _blockChannel.channelIsPlaying = NO. Let's say I mute the beat by tapping pause; when I tap restart I want total_frames to start from 0. However, total_frames will not reset to 0. I tried setting total_frames = 0, but it does not work.

    for (int i=0; i<frames; i++) { // frame...by frame...
        if (end_frame == total_frames) {
            if (_recordStatus != 0) {
                NSLog(@"here");
                [_recorder finishRecording];
                [_audioController removeOutputReceiver:_recorder];
                //[_audioController removeInputReceiver:_recorder];
                //[_audioController removeChannels:[NSArray arrayWithObject:_blockChannel]];

                _blockChannel.channelIsMuted = YES;
                _blockChannel.channelIsPlaying = NO;
                [self getMusicRatingResult:_midiFile audioPathin:_recordFile];
                dispatch_async(dispatch_get_main_queue(), ^{
                    _textview.text = _midi_result;
                    _recordStatus = 0;
                });
                total_frames = 0;
            }
            return;
        }

        total_frames++;
        // ... (rest of the block as before)
    }

    One more thing - maybe this is an unrelated question: if I want to mute the beats when I jump to another view, what should I do?

  • If you can not reset a variable (total_frames in this case) you likely just have a logic error. Walk through your code very carefully and fully understand what the code is doing. When you are dealing with asynchronous code things can get confusing. It's a learning process.

    Jumping to another view should not be an issue. Get the code working to allow you to mute beats like you want. Then you should be able to trigger that code from any situation - pushing a button, displaying another view, etc. Don't confuse the functionality of muting with the triggering of that functionality.

    Hope this helps.

  • I've just seen this thread - just chiming in, I'm not 100% certain that the sample rate frames per second thing is totally reliable, as the hardware oscillator can be out slightly, IIRC. Safest way is to use host ticks (mach_absolute_time()); use a fixed time base (on clock start, _startedTime = mach_absolute_time();), then when you're asked for a buffer, figure out how long since then (uint64_t timeSinceStart = mach_absolute_time() - _startedTime;), and use that to determine when to generate tick noises.
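
    Roughly (a sketch):

    #include <mach/mach_time.h>

    static uint64_t _startedTime;
    static double   _hostTicksToSeconds;

    // On clock start: remember the start time, and work out how to convert host ticks to seconds.
    void ClockStart(void) {
        mach_timebase_info_data_t timebase;
        mach_timebase_info(&timebase);
        _hostTicksToSeconds = ((double)timebase.numer / (double)timebase.denom) * 1.0e-9;
        _startedTime = mach_absolute_time();
    }

    // When asked for a buffer: how long has it been, and which frame does "now" correspond to?
    // Use that (rather than a running frame counter) to decide where the ticks go.
    uint64_t elapsedTicks   = mach_absolute_time() - _startedTime;
    double   elapsedSeconds = elapsedTicks * _hostTicksToSeconds;
    uint64_t elapsedFrames  = (uint64_t)(elapsedSeconds * 44100.0);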
