
ios - AVAudioEngine multiple AVAudioInputNodes do not play in perfect sync

I've been trying to use AVAudioEngine to schedule multiple audio files to play in perfect sync, but when listening to the output there seems to be a very slight delay between input nodes. The audio engine is implemented using the following graph:

//
//AVAudioPlayerNode1 -->
//AVAudioPlayerNode2 -->
//AVAudioPlayerNode3 --> AVAudioMixerNode --> AVAudioUnitVarispeed ---> AVAudioOutputNode
//AVAudioPlayerNode4 -->                                            |
//AVAudioPlayerNode5 -->                                        AudioTap
//      |                                                         
//AVAudioPCMBuffers    
//
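For reference, the graph is wired up roughly like this (a minimal sketch; the format, buffer size, and channel count are placeholders, not the exact values from my project):

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioMixerNode *mixer = [[AVAudioMixerNode alloc] init];
AVAudioUnitVarispeed *varispeed = [[AVAudioUnitVarispeed alloc] init];
NSMutableArray<AVAudioPlayerNode *> *inputNodes = [NSMutableArray array];

[engine attachNode:mixer];
[engine attachNode:varispeed];

AVAudioFormat *format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0 channels:2];

// Five player nodes feeding the mixer, as in the diagram above.
for (int i = 0; i < 5; i++) {
    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    [engine attachNode:player];
    [engine connect:player to:mixer format:format];
    [inputNodes addObject:player];
}

[engine connect:mixer to:varispeed format:format];
[engine connect:varispeed to:engine.outputNode format:format];

// Tap on the varispeed output, mirroring the AudioTap in the diagram.
[varispeed installTapOnBus:0 bufferSize:4096 format:format block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    // inspect or record the mixed output here
}];

NSError *error = nil;
[engine startAndReturnError:&error];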

And I am using the following code to load the samples and schedule them at the same time:

- (void)scheduleInitialAudioBlock:(SBScheduledAudioBlock *)block {
    for (int i = 0; i < 5; i++) {
        NSString *path = [self assetPathForChannel:i trackItem:block.trackItem]; //this fetches the right audio file path to be played
        AVAudioPCMBuffer *buffer = [self bufferFromFile:path];
        [block.buffers addObject:buffer];
    }

    AVAudioTime *time = [[AVAudioTime alloc] initWithSampleTime:0 atRate:1.0];
    for (int i = 0; i < 5; i++) {
        [inputNodes[i] scheduleBuffer:block.buffers[i]
                                   atTime:time
                                  options:AVAudioPlayerNodeBufferInterrupts
                        completionHandler:nil];
    }
}

- (AVAudioPCMBuffer *)bufferFromFile:(NSString *)filePath {
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];
    NSError *error = nil;
    AVAudioFile *audioFile = [[AVAudioFile alloc] initForReading:fileURL commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:&error];
    if (!audioFile) {
        return nil;
    }

    // AVAudioFile.length is an AVAudioFramePosition (int64), so cast it
    // down to the AVAudioFrameCount the buffer APIs expect.
    AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:audioFile.processingFormat frameCapacity:(AVAudioFrameCount)audioFile.length];
    if (![audioFile readIntoBuffer:buffer frameCount:(AVAudioFrameCount)audioFile.length error:&error]) {
        return nil;
    }

    return buffer;
}

I've noticed the issue is only perceivable on devices (I'm testing with an iPhone 5s), but I cannot figure out why the audio files are playing out of sync. Any help would be greatly appreciated.

** ANSWER **

We ended up solving the issue with the following code:

AVAudioTime *startTime = nil;

for (AVAudioPlayerNode *node in inputNodes) {
    if (startTime == nil) {
        const float kStartDelayTime = 0.1; // sec
        AVAudioFormat *outputFormat = [node outputFormatForBus:0];
        AVAudioFramePosition startSampleTime = node.lastRenderTime.sampleTime + kStartDelayTime * outputFormat.sampleRate;
        startTime = [AVAudioTime timeWithSampleTime:startSampleTime atRate:outputFormat.sampleRate];
    }

    [node playAtTime:startTime];
}

This gave each AVAudioPlayerNode enough time to load its buffers and fixed all our audio syncing issues. Hope this helps others!


1 Reply


Problem:

Well, the problem is that you retrieve your player.lastRenderTime in every run of the for-loop, before each playAtTime: call.

So, you'll actually get a different now-time for every player!

The way you do it, you might as well start all players in the loop with play: or playAtTime:nil! You would get the same result: a loss of sync...

For the same reason, your players run out of sync in different ways on different devices, depending on the speed of the machine ;-) Your now-times are random magic numbers - so don't assume they will always work just because they happen to work in your scenario. Even the smallest delay caused by a busy run loop or CPU will throw you out of sync again...

Solution:

What you really have to do is take ONE discrete snapshot of now = player.lastRenderTime before the loop and use this very same anchor to get a batched, synchronized start for all your players.

This way you do not even need to delay your players' start. Admittedly, the system will clip some of the leading frames (but of course the same amount for every player ;-) to compensate for the difference between your recently captured now (which is actually already in the past and gone) and the actual play time (which still lies ahead in the very near future), but it will eventually start all your players exactly in sync, as if you really had started them at now in the past. These clipped frames are almost never noticeable, and you'll have peace of mind regarding responsiveness...

If you happen to need these frames - because of audible clicks or artifacts at file/segment/buffer start - shift your now into the future by starting your players delayed. Of course you'll then get this little lag after hitting the start button - although still in perfect sync...
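Applied to the scheduling code from the question, a minimal sketch of this single-snapshot approach could look like this (assuming the inputNodes array and block from the question, and a running engine - lastRenderTime is only valid while the engine renders):

// ONE shared anchor for all players - taken BEFORE any scheduling/starting.
AVAudioPlayerNode *referenceNode = inputNodes[0];
AVAudioFormat *outputFormat = [referenceNode outputFormatForBus:0];
AVAudioFramePosition now = referenceNode.lastRenderTime.sampleTime;
AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:now atRate:outputFormat.sampleRate];

// Schedule the buffers first (atTime:nil = "whenever play starts")...
for (int i = 0; i < 5; i++) {
    [inputNodes[i] scheduleBuffer:block.buffers[i]
                           atTime:nil
                          options:AVAudioPlayerNodeBufferInterrupts
                completionHandler:nil];
}

// ...then start every player against the very same anchor, back to back.
for (int i = 0; i < 5; i++) {
    [inputNodes[i] playAtTime:startTime];
}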

Conclusion:

The point here is to have one single reference now-time for all players and to call your playAtTime:now methods as soon as possible after capturing this now reference. The bigger the gap, the bigger the portion of clipped leading frames - unless you provide a reasonable start delay and add it to your now-time, which of course causes unresponsiveness in the form of a delayed start after hitting your start button.

And always be aware of the fact that, whatever delay is produced on whatever device by the audio buffering mechanisms, it DOESN'T affect the synchronicity of any number of players if done in the proper way described above! It DOESN'T delay your audio, either! Just the window that actually lets you hear your audio gets opened at a later point in time...


Be advised that:

  • If you go for the un-delayed (super-responsive) start option and for whatever reason happen to produce a big delay (between capturing now and the actual start of your players), you will clip off a big leading portion (up to about ~300ms/0.3sec) of your audio. This means that when you start a player, it will start right away, but it will not resume from the position where you recently paused it; it will resume (up to ~300ms) later in your audio. So the acoustic perception is that pause-play cuts out a portion of your audio on the go, even though everything stays perfectly in sync.
  • As the start delay that you provide in the playAtTime:now + myProvidedDelay method call is a fixed constant value (one that doesn't get dynamically adjusted to accommodate buffering delay or other varying parameters under heavy system load), even the delayed option with a provided delay time smaller than about ~300ms can cause clipping of leading audio samples if the device-dependent preparation time exceeds your provided delay time.
  • The maximum amount of clipping does (by design) not get bigger than these ~300ms. To prove it, just force a controlled (sample-accurate) clipping of leading frames by, e.g., adding a negative delay time to now, and you will perceive a growing clipped audio portion as you increase this negative value. Every negative value bigger than ~300ms gets rectified to ~300ms. So a provided negative delay of 30 seconds leads to the same behavior as a negative value of 10, 6, 3 or 1 seconds, including negative 0.8 and 0.5 seconds down to ~0.3.

This experiment serves well for demonstration purposes (a sketch follows below), but negative delay values shouldn't be used in production code.
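For instance (demonstration only - playerA as in the code example further below):

// Demonstration only: a negative delay puts the start time in the past,
// so the engine clips the corresponding leading frames (capped by the
// system at roughly ~300ms, as described above).
const float kNegativeDelayTime = -0.2; // seconds "in the past"
AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];
AVAudioFramePosition now = playerA.lastRenderTime.sampleTime;
AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:(now + (kNegativeDelayTime * outputFormat.sampleRate)) atRate:outputFormat.sampleRate];
[playerA playAtTime:startTime];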


ATTENTION:

The most important thing of all in a multi-player setup is to keep your player.pause calls in sync. As of June 2016, there is still no synchronized exit strategy in AVAudioPlayerNode.

Just a little method look-up or logging something to the console in between two player.pause calls could force the latter one to execute one or even more frames/samples later than the former. So your players wouldn't actually stop at the same relative position in time. And above all, different devices would yield different behavior...

If you now start them in the above-mentioned (synced) manner, these out-of-sync player positions from your last pause will get force-synced to your new now position at every playAtTime: - which essentially means that you propagate the lost sample/frame(s) into the future with every new start of your players. This of course adds up with every new start/pause cycle and widens the gap. Do this fifty or a hundred times and you get a nice delay effect without using an effect audio unit ;-)

As we don't have any (system-provided) control over this factor, the only remedy is to put all calls to player.pause straight one after another, in a tight sequence with nothing in between them, as sketched right below. Don't throw them in a for-loop or anything similar - that would be a guarantee for ending up out of sync at the next pause/start of your players...
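For instance (playerA through playerD as in the start example further below):

// Nothing between these calls - no logging, no look-ups, no loop.
[playerA pause];
[playerB pause];
[playerC pause];
[playerD pause];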

Whether keeping these calls together is a 100% perfect solution, or whether the run loop under heavy CPU load could by chance force the pause calls apart and cause frame drops, I don't know. In weeks of messing around with the AVAudioNode API I could in no way force my multi-player set to get out of sync - but I still don't feel very comfortable or safe with this unsynchronized, random-magic-number pause solution...


Code example and alternative:

If your engine is already running, you've got the lastRenderTime property of AVAudioNode - your players' superclass. This is your ticket to 100% sample-frame accurate sync...

AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];

const float kStartDelayTime = 0.0; // seconds - in case you wanna delay the start

AVAudioFramePosition now = playerA.lastRenderTime.sampleTime;

AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:(now + (kStartDelayTime * outputFormat.sampleRate)) atRate:outputFormat.sampleRate];

[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];

[player...

By the way - you can achieve the same 100% sample-frame accurate result with the AVAudioPlayer/AVAudioRecorder classes...

NSTimeInterval startDelayTime = 0.0; // seconds - in case you wanna delay the start
NSTimeInterval now = playerA.deviceCurrentTime;

NSTimeInterval startTime = now + startDelayTime;

[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];

[player...

With no startDelayTime, the first 100-200ms of all players will get clipped off, because the start command actually takes its time to reach the run loop, although the players have already started (well, been scheduled) 100% in sync at now. But with a startDelayTime = 0.25 you are good to go. And never forget to prepareToPlay your players in advance, so that at start time no additional buffering or setup has to be done - just starting them guys ;-)
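A sketch of that preparation step (same players as above):

// Prepare all players up front so that playAtTime: has no buffering
// or setup left to do at start time.
[playerA prepareToPlay];
[playerB prepareToPlay];
[playerC prepareToPlay];
[playerD prepareToPlay];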

