
theamazingaudioengine's Introduction

Important Notice: The Amazing Audio Engine has been retired. See the announcement here

License

Copyright (C) 2012-2015 A Tasty Pixel

This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.

Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:

  1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.

  2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.

  3. This notice may not be removed or altered from any source distribution.

Changelog

1.5.8

  • Fixed a crash that can occur with rapid adding/removing of channels

1.5.7

1.5.6

  • Replaced internal use of synchronous cross-thread messaging with async messaging, to avoid risk of deadlocks and other timing issues
  • Fixed a crash that can occur when the input format changes
  • Watch for audio unit stream format changes in order to better react to sample rate changes
  • Implemented AEMessageQueue message exchange blocks

1.5.5

  • Added AEAudioBufferManager class, to enable management of AudioBufferList structures using normal ARC/retain-release memory management techniques
  • Addressed a problem introduced in 1.5.3 that could cause a 30s hang when restarting the audio system
  • Revised timestamp management with Audiobus/IAA: now, TAAE will pass uncompensated timestamps to ABReceiverPortReceive, and will assume incoming timestamps, when hosted via IAA or Audiobus, are uncompensated.

1.5.4

  • Fixed an output latency compensation issue when hosted via Inter-App Audio
  • Deprecated "audiobusSenderPort" facility (use ABSenderPort's audioUnit initializer instead, with AEAudioController's audioUnit property)
  • Improved performance reports (made these less verbose, added percentage of render budget)
  • Fixed a crash when using AEPlaythroughChannel and changing the sample rate

1.5.3

  • Added AEAudioBufferListCreateOnStack utility
  • Enabled automaticLatencyManagement by default
  • Fixed a race condition when using setAudiobusSenderPort*
  • Added tvOS support (thanks to Florian Doyon)
  • Added playAtTime: facility to AEMemoryBufferPlayer (thanks to Anton Holmberg)
  • Added setup/teardown methods to AEInputReceiver
  • Fixed missing setup/teardown calls to input filters
  • Replaced AEPlaythroughChannel initializer

1.5.2

  • Added composite setAudioDescription:inputEnabled:outputEnabled: update method
  • Added new initializer with AEAudioControllerOptions bitmask (thanks to Jonatan Liljedahl)
  • Added setting to always use the hardware sample rate (thanks to Jonatan Liljedahl)
  • Added missing teardown procedure for channels and filters
  • Fixed incorrect audio input conversion for interleaved formats
  • Fixed conversion issue with AEAudioUnitFilter
  • Fixed OS X build issue by removing AEReverbFilter for OS X (not supported on that platform)
  • Added 'audioGraphNode' properties to ABAudioUnitFilter/Channel
  • Updated TPCircularBuffer with added safety measures that will refuse to compile or crash early when a version mismatch is detected with other instances in your project
  • Addressed Audiobus issues for apps with both receiver and filter ports

1.5.1

  • Important fixes for the iPhone 6S
  • Added some AudioStreamBasicDescription utilities
  • Added extra AudioBufferList utilities and renamed existing ones for consistent naming
  • Added wrapper classes for Apple's effect audio units (thanks to Dream Engine's Jeremy Flores!)
  • Added Audio Unit parameter facilities (setParameterValue:forId: and getParameterValue:forId:)
  • Added AEMemoryBufferPlayer (a reincarnation of the previous in-memory AEAudioFilePlayer class)
  • Implemented 'playAtTime:' synchronisation method on AEAudioFilePlayer
  • Refactored out cross-thread messaging system into AEMessageQueue (thanks Jonatan Liljedahl!)
  • Replaced 'updateWithAudioDescription:...' mechanism with separate 'setAudioDescription', 'setInputEnabled' and 'setOutputEnabled' methods
  • Bunch of other little improvements; see git log for details.

1.5

  • OS X support! Many, many thanks to Steve Rubin!
  • Replaced in-memory version of AEAudioFilePlayer with an audio unit version which streams from disk (thanks to Ryan King and Jeremy Huff of Hello World Engineering, and Ryan Holmes for their contributions to this great enhancement).
  • Bunch of other little improvements; see git log for details.

theamazingaudioengine's People

Contributors

5ke, akumpf, arielelkin, bryansum, caoer, ciphercom, cjhanson, crontab, develophant, essej, fdoyon, goonzoid, jayrhynas, jlugia, kyleweiner, lanephillips, lijon, macbuildserver, manderson-productions, manide, michaeltyson, nilknarfuw, ploenne, psobot, rokgregoric, sampage, srubin


theamazingaudioengine's Issues

Make checkResult function globally available

We often have to debug calls in our code that return an OSStatus, such as AudioUnitSetParameter(). Many TAAE classes contain a very useful checkResult function that does this; for example, in AEMixerBuffer.m:

checkResult(AudioUnitSetParameter(_mixerUnit, kMultiChannelMixerParam_Pan, kAudioUnitScope_Input, index, value, 0),
            "AudioUnitSetParameter(kMultiChannelMixerParam_Pan)");

It would be useful if checkResult were available whenever one imports TAAE classes!
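For illustration, a minimal sketch of what a shared helper could look like (the name AECheckOSStatus, the header placement, and the formatting are assumptions, not TAAE's actual code):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical globally-available OSStatus checker: logs the operation, the
// numeric code, and its four-character-code form; returns YES on success.
static inline BOOL AECheckOSStatus(OSStatus result, const char *operation) {
    if ( result != noErr ) {
        UInt32 fourCC = CFSwapInt32HostToBig((UInt32)result);
        fprintf(stderr, "%s failed: %d '%4.4s'\n", operation, (int)result, (char *)&fourCC);
        return NO;
    }
    return YES;
}

Usage would then mirror the snippet above: AECheckOSStatus(AudioUnitSetParameter(...), "AudioUnitSetParameter(kMultiChannelMixerParam_Pan)").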

AEAudioController not delivering requested sample rate

We have just migrated to The Amazing Audio Engine and released a new version of our app on TestFlight. After the release, our beta testers reported that audio is recorded at a much higher sample rate than requested. The current workaround is for the user to reinstall the app or restart the phone, after which recordings are made at the right sample rate.

I can't find an exact way to reproduce the bug. After the workaround, the bug cannot be reproduced on the same device again, even when I tried removing, reinstalling, or upgrading from an older version.

After looking into the source code, I found the line that I suspect is causing the bug. The following snippet is from AEAudioController.m:

- (BOOL)initAudioSession {
    NSMutableString *extraInfo = [NSMutableString string];

    ..... // code omitted

    // Set sample rate
    Float64 sampleRate = _audioDescription.mSampleRate;
    result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate, sizeof(sampleRate), &sampleRate);
    checkResult(result, "AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate)");

    // Fetch sample rate, in case we didn't get quite what we requested
    Float64 achievedSampleRate;
    UInt32 size = sizeof(achievedSampleRate);
    result = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &achievedSampleRate);
    checkResult(result, "AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate)");
    if ( achievedSampleRate != sampleRate ) {
        NSLog(@"Warning: Delivered sample rate is %f", achievedSampleRate);
        _audioDescription.mSampleRate = achievedSampleRate;  // <- Problem here
        [extraInfo appendFormat:@", sample rate %f", achievedSampleRate];
    }
}

At line 2046, AEAudioController sets the ASBD sample rate to the current hardware rate if it failed to change the preferred hardware sample rate! I don't think this is expected behaviour, as AEAudioController should record at the sample rate the user requested. From my understanding, Core Audio will honour the sample rate requested in the ASBD even when it differs from the hardware sample rate (albeit with some performance penalty).
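If that analysis is right, one possible direction (a sketch only; _ioAudioUnit and the surrounding context are assumptions, not a confirmed fix) would be to keep the client-side stream format at the requested rate and let the IO unit's converter bridge to the hardware rate, rather than mutating the ASBD:

// Hypothetical sketch: preserve the requested rate in the client format and
// let the remote IO unit resample, instead of overwriting _audioDescription.
if ( achievedSampleRate != sampleRate ) {
    NSLog(@"Warning: hardware is running at %f", achievedSampleRate);
    AudioStreamBasicDescription clientFormat = _audioDescription; // still at the requested rate
    checkResult(AudioUnitSetProperty(_ioAudioUnit, kAudioUnitProperty_StreamFormat,
                                     kAudioUnitScope_Output, 1 /* input element */,
                                     &clientFormat, sizeof(clientFormat)),
                "AudioUnitSetProperty(kAudioUnitProperty_StreamFormat)");
}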

Can anyone verify that this is indeed causing the bug I described?

(P.S. I just want to thank @michaeltyson for creating The Amazing Audio Engine; it really is amazing!)

Regards,

Soares

Possible memory leak in AEAudioController

Copied and pasted from TAAE forum:

I'm finding what I think is a memory leak caused by AEAudioController.

In my app, I have a custom audio file player that is modeled closely after AEAudioFilePlayer, and I add the channel to AEAudioController when I load a song. channelIsPlaying is set to NO initially, then it is set to YES when I want the file to start playing. The song is stopped by setting channelIsPlaying to NO, and the channel is removed from AEAudioController when I unload the song.

I'm finding that the dealloc function of my audio file player is not being called when I have loaded a song, played a portion of the file, and then unload the song. However it IS called if I simply load a song and unload it without playing. This leads me to believe that AEAudioController must be adding an extra retain to the channel when it is played (when channelIsPlaying is set to YES). I have scoured my code and don't see anything that would cause extra retains when a channel is played.

I should note that I have not updated my version of TheAmazingAudioEngine for a few months.

Any idea what's going on? Thanks.

Engine crashes on iOS7 device

Hey,

Found this critical issue today. I'm currently trying to add iOS7 support to my app, but I'm faced with a crash at AEAudioController.m:1725, called from AEAudioController.m:355: EXC_BAD_ACCESS (code=1).

This crash can only be reproduced on a device (I used an iPhone 4S); the Simulator works fine. Additionally, on app start it asks the user to grant audio recording permission.

22 Xcode analyzer issues

Hi,

I ran the Xcode analyzer tool and it came out with 22 issues. Some of them make me really nervous and prevent me from using this really amazing audio engine in a production app. Is it possible for you to run through these issues and at least reduce the number of critical ones?

Thanks.

output level metering shows input level

I have a setup with one channel and one filter.
Adding output level metering shows the level from the channel, not the main audio coming out of the engine (through the filter).

UPDATE: This is when adding the filter as an output filter. Adding it as an input filter works fine, it seems. Perhaps output level metering reads the signal at the main mixer output instead of at the output of the last output filter?

Small AEAudioFileLoaderOperation.m cleanup

Line 209: scratchBufferList->mBuffers[i].mDataByteSize = MIN(16384, (fileLengthInFrames-readFrames) * _targetAudioDescription.mBytesPerFrame);

Should "16384" be replaced by "4 * kIncrementalLoadBufferSize" ?

why is channelIsMuted/Playing readonly?

It seems that since channelIsMuted and channelIsPlaying are used internally by TAAE, an AEAudioPlayable doesn't actually need to implement anything to make these properties work; it only needs to declare them as readwrite.

Why not change this in the AEAudioPlayable protocol? Otherwise it breaks polymorphism in cases like this:

id<AEAudioPlayable> someChannel;
...
someChannel.channelIsMuted = YES;
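For reference, the proposed change would amount to something like this in the protocol declaration (a sketch; the exact property attributes are assumptions, and the protocol's other required members are omitted):

@protocol AEAudioPlayable <NSObject>
// Declared readwrite so callers can toggle them polymorphically:
@property (nonatomic, readwrite) BOOL channelIsPlaying;
@property (nonatomic, readwrite) BOOL channelIsMuted;
@end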

Doesn't work in iOS7

IOKitUser-920.1.11/hid.subproj/IOHIDEventQueue.c, line: 512

2013-09-23 15:02:08.531 NORDIC[9767:60b] TAAE: Setting audio session category to PlayAndRecord

AEAudioController.m:

if ( _topChannel->audiobusOutputPort && ABOutputPortGetConnectedPortAttributes(_topChannel->audiobusOutputPort) & ABInputPortAttributePlaysLiveAudio ) {
    return @"Audiobus";
} else {
    return _audioRoute;
}

AudioSession -> AVAudioSession

Hi Michael - do you have any plans to switch over to the newer AVAudioSession now that AudioSession is deprecated in iOS7? Also, do you have any knowledge of when the AudioSession API is slated to be removed for good? I imagine this will impact a lot of developers.

Allow microfades when changing graph

Suggestion: the ability to microfade the input to the AudioUnit when you addFilter/removeFilter. In fact, a microfade option on both of these methods may prove quite useful, as many effects simply work better when they gradually receive a signal rather than suddenly being blasted with samples.
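To illustrate the idea: a microfade is just a short linear gain ramp applied after the graph change. A minimal sketch for non-interleaved float buffers follows (the function name and buffer layout are assumptions):

// Apply a linear fade-in over fadeFrames total frames; framesFaded tracks
// progress across successive render callbacks until the ramp completes.
static void apply_microfade_in(AudioBufferList *audio, UInt32 frames,
                               UInt32 fadeFrames, UInt32 *framesFaded) {
    for ( UInt32 i = 0; i < frames && *framesFaded < fadeFrames; i++, (*framesFaded)++ ) {
        float gain = (float)*framesFaded / (float)fadeFrames;
        for ( UInt32 b = 0; b < audio->mNumberBuffers; b++ ) {
            ((float *)audio->mBuffers[b].mData)[i] *= gain;
        }
    }
}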

Enhancement: AUSplitter

Hi Michael,

I have just started looking at TAAE and so far I am loving it.
There's just one thing I need that it doesn't appear to support: the AUSplitter audio unit.
Would it be possible to get support for this added?

Also, I'd like to be able to set a specific channel or group as the source for the IO unit's output, so that not everything is output to the speakers.

Thanks.

AERecorder renders file at 2x speed

I'm attempting to record input from the built-in mic and mix the result with my app's audio output (which is a combination of audio files loaded into a channel group).

For recording, I'm using the example code here: http://theamazingaudioengine.com/doc/_receiving-_audio.html#Recording

However, after I stop recording, the resulting audio file is twice the speed of the original audio and has some pops and clicks.

If I record without using addOutputReceiver then the audio is recorded correctly.

I've also tried changing the audio format that I'm recording to with no luck.

Any ideas of what's going on?

(un)plugging headphones on iOS7

While my app is in the background, regardless of whether the audio engine is running, plugging in the headphones gives me:

TAAE: Changed audio route to HeadphonesAndMicrophone
AEAudioController.m:2580: AudioConverterNew result 1718449215 666D743F fmt?
AEAudioController.m:2581: AudioConverterSetProperty(kAudioConverterChannelMap result -50 FFFFFFCE ˇˇˇŒ
/ABInputPort.m:243: AudioConverterNew result 1718449215 666D743F ?tmf
/ABInputPort.m:245: AudioConverterNew result 1718449215 666D743F ?tmf
TAAE: Input status updated (0 channel, non-interleaved)

Then disconnecting them again, nothing happens. Bringing the app to the foreground then shows:

TAAE: Starting Engine
TAAE: Input status updated (1 channel, non-interleaved, with converter)
ERROR:     [0x3d33d18c] 1207: AUIOClient_StartIO failed (-66628)
AEAudioController.m:924: AUGraphStart result -66628 FFFEFBBC ˇ˛˚º
TAAE: Trying to recover from system error (3 retries remain)
TAAE: Stopping Engine
TAAE: Input status updated (1 channel, non-interleaved, with converter)
TAAE: Engine setup
TAAE: Starting Engine
TAAE: Successfully recovered from system error
TAAE: Changed audio route to SpeakerAndMicrophone
TAAE: Changed audio route to SpeakerAndMicrophone

The same thing happens the other way around: unplugging the headphones while in the background.

request: audioDescription on receivers and filters

It would be nice to be able to set an audioDescription on receivers and have it converted when needed. And/or expose an AECopyAndConvertBuffer(AudioBufferList *source, AudioStreamBasicDescription *sourceAudioDescription, AudioBufferList *destination, AudioStreamBasicDescription *destinationAudioDescription).
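A rough sketch of what the proposed utility might look like for same-sample-rate PCM conversions, built on AudioConverterConvertComplexBuffer (the frames parameter and exact semantics are assumptions; sample rate conversion would need the pull-based converter API instead):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical sketch: convert `frames` frames between two PCM formats.
// Creating a converter per call keeps the sketch short, but is not
// realtime-safe; a real implementation would cache the converter.
BOOL AECopyAndConvertBuffer(const AudioBufferList *source,
                            AudioStreamBasicDescription *sourceAudioDescription,
                            AudioBufferList *destination,
                            AudioStreamBasicDescription *destinationAudioDescription,
                            UInt32 frames) {
    AudioConverterRef converter = NULL;
    if ( AudioConverterNew(sourceAudioDescription, destinationAudioDescription,
                           &converter) != noErr ) return NO;
    OSStatus result = AudioConverterConvertComplexBuffer(converter, frames,
                                                         source, destination);
    AudioConverterDispose(converter);
    return result == noErr;
}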

Audio stops playing when device sleeps.

Pressing the lock button causes audio to stop playing. Audio should continue to play if all AudioUnits have kAudioUnitProperty_MaximumFramesPerSlice set to 4096. It appears from the code that this is being done for some audio units, but apparently it's not set on all of them.
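For reference, setting this on a single unit looks like the following; the fix would be applying it to every unit in the graph:

// 4096 frames covers the larger render slices used while the screen is locked.
UInt32 maxFPS = 4096;
AudioUnitSetProperty(audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFPS, sizeof(maxFPS));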

Please add semantic version tags

I’ve recently added TheAmazingAudioEngine to the CocoaPods package manager repo.

CocoaPods is a tool for managing dependencies for OSX and iOS Xcode projects and provides a central repository for iOS/OSX libraries. This makes adding libraries to a project and updating them extremely easy and it will help users to resolve dependencies of the libraries they use.

However, TheAmazingAudioEngine doesn't have any version tags. I’ve added the current HEAD as version 0.0.1, but a version tag will make dependency resolution much easier.

Semantic version tags (instead of plain commit hashes/revisions) allow for resolution of cross-dependencies.

In case you didn't know this yet: you can tag the current HEAD as, for instance, version 1.0.0, like so:

$ git tag -a 1.0.0 -m "Tag release 1.0.0"
$ git push --tags

AEAudioController is a slow starter

I created a small demo with one channel (AEAudioPlayer) and it seems to take about a second for the AEAudioController to actually start playback.
Is there any way I can speed it up?
By the way, I can't prepare it in advance, because everything is real-time.

Allow explicit converter format for non-discoverable-format filters

Apple's AU3DMixerEmbedded audio unit has a poorly documented (gasp!) need for mono input to work correctly, which the AEAudioUnitFilter setup-check misses.

Would a way to pass an explicit input format to a filter solve this issue? Or is the 3D mixer better off integrated as something other than a filter, somewhere else in the chain? (Perhaps as a replacement to the default multi-channel mixer?)

I considered the latter, but obviously the default mixer is pretty tightly integrated.

AEAudioFilters always pass host mSampleTime to producers

This is an odd edge case, but it might be a broader bug that should be fixed.

I've tried to use my own AEAudioFilter that implements a basic time-stretching algorithm (as the AUNewTimePitch audio unit is broken on iOS6) but I've run into an odd issue. Take the following code, which calls producer(producerToken, audio, &frames) when it needs to grab more frames. If the time stretching algorithm has enough frames already, producer does not get called.

static OSStatus filterCallback(id                               filter,
                               AEAudioController                *audioController,
                               AEAudioControllerFilterProducer  producer,
                               void                             *producerToken,
                               const AudioTimeStamp             *time,
                               UInt32                           frames,
                               AudioBufferList                  *audio)
{
    RXAudioStretchFilter *THIS = (RXAudioStretchFilter*)filter;

    while (THIS->soundTouchEngine.numSamples() < frames) {
        OSStatus status = producer(producerToken, audio, &frames);
        if (status != noErr) return status;

        for (int i = 0; i < frames; i++) {
            THIS->tmp[2*i] = ((SInt16*)audio->mBuffers[0].mData)[i];
            THIS->tmp[(2*i) + 1] = ((SInt16*)audio->mBuffers[1].mData)[i];
        }

        THIS->soundTouchEngine.putSamples(THIS->tmp, frames);
    }

    UInt32 samplesWritten = THIS->soundTouchEngine.receiveSamples(THIS->tmp, frames);

    // ... more code omitted ...
}

The issue arises when producer has not been called and time has passed outside filterCallback. The next time producer is called, the mTimeStamp.mSampleTime it receives will be equal to the current host sample time, not the mSampleTime from the last time the function was called. This makes time-stretching impossible.

As a better example, imagine producer calls an AUAudioFilePlayer somewhere to play back an audio file. This audio file is exactly 5 seconds long. However, the AEAudioFilter I've implemented for time stretching wants to extend this clip to 10 seconds. Hence, on average, every other call to filterCallback will call producer. The expected behaviour is that the mSampleTime value passed to the audio unit starts at the current host time, but is then only incremented when producer is called - in essence, allowing the filter to fake the current time and fetch samples only as they're needed. The current behaviour instead sets mSampleTime to the current real sample time every time filterCallback is called, making the AUAudioFilePlayer play in real time, no matter what.
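A sketch of that expected behaviour (hypothetical, not TAAE code): the filter maintains its own sample clock, seeded once from the host time, and advances it only when the producer is actually pulled:

// Filter-local clock: mSampleTime advances only when frames are fetched.
typedef struct {
    Float64 sampleTime;
    BOOL    initialized;
} FilterClock;

static void clock_on_producer_pull(FilterClock *clock,
                                   const AudioTimeStamp *hostTime,
                                   UInt32 framesPulled) {
    if ( !clock->initialized ) {
        clock->sampleTime = hostTime->mSampleTime; // seed from host time once
        clock->initialized = YES;
    }
    clock->sampleTime += framesPulled; // advances only on actual pulls
}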

I've added a quick hack of a solution to my branch (at: psobot@cdd1398) but I'm not sure if this is something that TAAE should support more broadly.

Fix glitches when removing channels

There's an issue right now in the channel-removal routine that sometimes causes other channels to pause for a buffer or two.

TAAE stores the control structures for channels within an array of channel_t, and points each AURenderCallbackStruct's inputProcRefCon to the structure in that array associated with each channel.

The problem is that when removing channels, the channel structures that come later in the group's channel array are shuffled back on the Core Audio thread (to keep the channels contiguous in memory) before the inputProcRefCon pointers for the remaining channels are reassigned on the main thread.

The interval between shuffling the elements in the channel array and reassigning each channel's pointer may be large, and in that time channels may render the wrong channel's data, or may be silent. Because AUGraphs can only be updated on the main thread, this latter reconfiguration can't be done at the same time as the channel array is rearranged.

There are two potential solutions:

  1. Rather than shuffling the channels array so that the channels array is contiguous (as in, channel n can be found in element n of the array), instead blank out the element of the removed channel. When adding new channels, search for the first blank entry in the channel array. That means keeping track of which channel index is associated with each record in the channel array, but removes the requirement to sync the state of the channel array and the inputProcRefCon pointers when removing channels.
  2. Rather than using a fixed-size channel array to store channel control data, allocate a new structure each time a channel is added, storing a pointer to the structure in a dictionary keyed by the channel object. On removing channels, perform the graph update first, and free the structure afterwards.

The second option seems preferable.
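A rough sketch of option 2 (names like _channelRecords are hypothetical): allocate each channel's control structure on the heap and key it by the channel object, so nothing needs shuffling on removal:

// On add: allocate a record and associate it with the channel object.
// inputProcRefCon then points at heap memory that never moves.
channel_t *record = calloc(1, sizeof(channel_t));
CFDictionarySetValue(_channelRecords, (__bridge const void *)channelObj, record);
renderCallback.inputProcRefCon = record;

// On remove: take the record out of the dictionary, perform the graph
// update first, and only then free the structure.
CFDictionaryRemoveValue(_channelRecords, (__bridge const void *)channelObj);
// ... AUGraphUpdate / mixer reconfiguration happens here ...
free(record);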

AEMixerBuffer (bogus?) error when setting up AERecorder & Out of buffer space

The error at AEMixerBuffer.m:1039 is returned, yet everything still functions as expected, with one exception... sometimes (in the simulator at least; I have yet to test on device) AEMixerBuffer will repeatedly spit out...

<AEMixerBuffer 0xb0bac00>: Out of buffer space

...this is logged the entire time the AERecorder is attempting to record. If I set a breakpoint and pause temporarily during the init call, the memory error is less likely to occur.

UPDATE: In LIMITED testing, if I put a sleepForTimeInterval immediately after AERecorder's init, the buffer error does NOT occur. Removing the sleep, it occurs every time.

... [[AERecorder alloc] initWithAudioController:...];
[NSThread sleepForTimeInterval:1.0];

However, the following error is returned every time (at least in simulator):
AEMixerBuffer.m:1039: AudioUnitSetProperty(kAudioUnitProperty_StreamFormat) result -10868 FFFFD58C å’ˇˇ

Details of this follow:

Having set up an AEAudioController with a number of elements that work, when I try to set up an AERecorder I get the error shown above. The error occurs in the call to AERecorder's initWithAudioController.

[[AERecorder alloc] initWithAudioController:

Specifically, audioController.inputAudioDescription is zeroed out, and therefore its mChannelsPerFrame doesn't match audioController.audioDescription, which is:

(AudioStreamBasicDescription) $1 = {
    mSampleRate = 44100
    mFormatID = 'lpcm'
    mFormatFlags = 12
    mBytesPerPacket = 4
    mFramesPerPacket = 1
    mBytesPerFrame = 4
    mChannelsPerFrame = 2
    mBitsPerChannel = 16
    mReserved = 0
}

AEMixerBuffer's setAudioDescription: is then called to fix this. The audioDescription returned from the inputCallbacks is:

Printing description of THIS->_inputCallbacks->audioDescription:

(AudioStreamBasicDescription) audioDescription = {
    mSampleRate = 0
    mFormatID = 0
    mFormatFlags = 0
    mBytesPerPacket = 0
    mFramesPerPacket = 0
    mBytesPerFrame = 0
    mChannelsPerFrame = 0
    mBitsPerChannel = 0
    mReserved = 0
}

Ultimately, on the call to AudioUnitSetProperty to set the mixer unit's stream format, the above (empty) audioDescription is passed, which returns the error.

The AERecorder is still returned and functions (seemingly) normally.

unit on/off (bypass) switch

Add the ability to bypass a unit in the graph (i.e. a filter) by implementing an on/off mechanism. This would eliminate the need to manually remove the unit and add it back just to temporarily turn it off.
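A minimal sketch of how this could look inside a filter callback (MyFilter and the _bypassed flag are hypothetical): when bypassed, pull from the producer and return the audio untouched:

static OSStatus filterCallback(id                               filter,
                               AEAudioController                *audioController,
                               AEAudioControllerFilterProducer  producer,
                               void                             *producerToken,
                               const AudioTimeStamp             *time,
                               UInt32                           frames,
                               AudioBufferList                  *audio)
{
    MyFilter *THIS = (MyFilter *)filter;

    if ( THIS->_bypassed ) {
        // Pass the signal straight through, skipping all processing.
        return producer(producerToken, audio, &frames);
    }

    // ... normal processing ...
    return noErr;
}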

Asynchronous loading of audio files

It looks like +[AEAudioFilePlayer audioFilePlayerWithURL:audioController:error:] loads an entire audio file from disk synchronously. In my tests, this method takes about 1.6s on an iPhone 5 for a 7MB AAC file. That's not counting the initialization of the AEAudioController, nor adding the player as a channel.

I see the implementation starts a loading operation on the current thread but blocks until completion. Does this mean you have it in mind to do asynchronous loading with a completion block? I'd really like to be able to do that.

Failing that, is it safe to create the audio file player on a low-priority thread and add it to the controller later on the main thread?
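In the meantime, a workaround sketch along those lines (whether the loading path is safe off the main thread is exactly the open question, so treat this as an assumption):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    // Load the file on a background queue (the blocking happens here)...
    NSError *error = nil;
    AEAudioFilePlayer *player = [AEAudioFilePlayer audioFilePlayerWithURL:url
                                                          audioController:audioController
                                                                    error:&error];
    dispatch_async(dispatch_get_main_queue(), ^{
        // ...then hand the finished player to the controller on the main thread.
        if ( player ) {
            [audioController addChannels:@[player]];
        } else {
            NSLog(@"Failed to load audio file: %@", error);
        }
    });
});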

time->mHostTime no longer increments in channel renderCallback functions

Commit 3bcb6f1 changed AEAudioController.m:440 so that &channel->timeStamp is passed to the AEAudioControllerRenderCallback instead of &arg->inTimeStamp.

But after mHostTime is incremented in renderCallback() at AEAudioController.m:477, it never gets copied back to the channel. Now, when channelAudioProducer() calls the AEAudioControllerRenderCallback, the host time remains static.

To see this in action in the Example Project, add this line inside renderCallback() at AEAudioUnitChannel.m:155:

printf("host %lld, sample %0.f \n", time->mHostTime, time->mSampleTime);

I'm not sure it is the right place for it, but adding this line right before arg is created at AEAudioController.m:484 appears to fix the problem:

channel->timeStamp.mHostTime = timestamp.mHostTime;

Scheduling a file using AEBlockScheduler fails for playback

The scheduler triggers like it should when adding this code:

self.scheduler = [[AEBlockScheduler alloc] initWithAudioController:_audioController];
[_audioController addTimingReceiver:_scheduler];

and this:

[self.scheduler scheduleBlock:^(const AudioTimeStamp *intervalStartTime, UInt32 offsetInFrames) {
    NSLog(@"Hit Me!");
} atTime:[AEBlockScheduler timestampWithSecondsFromNow:startTimeSecondsFromNow] timingContext:AEAudioTimingContextOutput identifier:@"clip1"];

But let's say I add this line in the schedule block to play a clip at a specific point in time:

[self.audioController addChannels:[NSArray arrayWithObject:clip] toChannelGroup:self.mainChannelGroup];

I then get this error:

AEAudioController.m:1003: Update graph result -10863 FFFFD591 ˇˇ’ë
TAAE: Timed out while performing message exchange

Moving the addChannels line of code out of the scheduling block plays the audio clip like it should.
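A possible workaround sketch (an assumption, not a confirmed fix): since the block runs in the output timing context, defer the graph change to the main thread rather than performing it inline:

[self.scheduler scheduleBlock:^(const AudioTimeStamp *intervalStartTime, UInt32 offsetInFrames) {
    // Adding channels from the timing context appears to time out; hop to
    // the main thread for the graph change instead.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.audioController addChannels:[NSArray arrayWithObject:clip]
                           toChannelGroup:self.mainChannelGroup];
    });
} atTime:[AEBlockScheduler timestampWithSecondsFromNow:startTimeSecondsFromNow] timingContext:AEAudioTimingContextOutput identifier:@"clip1"];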

request: sharing signals between channels

It would be nice to have a way to share signals between channels: for example, an AEAuxBus, where one channel can push audio and other channels read it back. The bus buffer would contain one block of audio per render cycle, replaced only when a channel (or filter or receiver) writes to it. So, regarding the order of execution: channels reading the bus before the writers would get it delayed by one buffer cycle, which is OK and expected. I haven't tested, but I assume channels execute in the order they were added. Such a feature would allow one to implement sends to global effects, etc.

ChannelGroup mute not working as expected

Hi there,
there seems to be an issue with the -[AEAudioController setMuted: forChannelGroup:] method. I was trying to mute/unmute a channel group and did not succeed until I patched it:

diff --git a/TheAmazingAudioEngine/AEAudioController.m b/TheAmazingAudioEngine/AEAudioController.m
index 1a040aa..2716852 100644
--- a/TheAmazingAudioEngine/AEAudioController.m
+++ b/TheAmazingAudioEngine/AEAudioController.m
@@ -1162,7 +1162,8 @@ static OSStatus topRenderNotifyCallback(void *inRefCon, AudioUnitRenderActionFla
     AEChannelGroupRef parentGroup = [self searchForGroupContainingChannelMatchingPtr:group userInfo:NULL index:&index];
     NSAssert(parentGroup != NULL, @"Channel not found");

-    AudioUnitParameterValue value = group->channel->muted = muted;
+    AudioUnitParameterValue value = !(group->channel->muted = muted);
+  
     OSStatus result = AudioUnitSetParameter(parentGroup->mixerAudioUnit, kMultiChannelMixerParam_Enable, kAudioUnitScope_Input, index, value, 0);
     checkResult(result, "AudioUnitSetParameter(kMultiChannelMixerParam_Enable)");
 }

The thing is that the property kMultiChannelMixerParam_Enable is the inverse of muted.

Regards,
Thierry

Deadlock on main thread for -[AEAudioFileWriter finishWriting]

Xcode Trace

This is the 4th time I have initialized the audioController, but otherwise recording multiple times works okay. Specifically it deadlocks on the main thread here: https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine/blob/master/TheAmazingAudioEngine/AEAudioFileWriter.m#L220

- (void)finishWriting {
    if ( !_writing ) return;

    _writing = NO;

    // Gets stuck here
    checkResult(ExtAudioFileDispose(_audioFile), "AudioFileClose"); 

    if ( _priorMixOverrideValue ) {
        checkResult(AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(_priorMixOverrideValue), &_priorMixOverrideValue),
                    "AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers)");
    }
}

First I used TAAE to record an AAC file two times, then used the hardware video encoder to record a video, then used TAAE to record another file. I am using a fork of TAAE that "fixes" the crash mentioned here: #27 via this method: OpenWatch@eb92e78

Here is the end of my logs:

2013-05-16 12:12:26.522 OpenWatch[482:907] TAAE: Setting audio session category to PlayAndRecord
2013-05-16 12:12:27.215 OpenWatch[482:907] TAAE: Audio session initialized (input available, audio route 'SpeakerAndMicrophone')
2013-05-16 12:12:27.217 OpenWatch[482:907] TAAE: Input status updated (1 channel, non-interleaved)
2013-05-16 12:12:27.222 OpenWatch[482:907] TAAE: Engine setup
2013-05-16 12:12:27.224 OpenWatch[482:907] TAAE: Starting Engine
2013-05-16 12:12:27.623 OpenWatch[482:907] TAAE: Changed audio route to SpeakerAndMicrophone
2013-05-16 12:12:27.626 OpenWatch[482:907] TAAE: Changed audio route to SpeakerAndMicrophone
2013-05-16 12:12:27.629 OpenWatch[482:907] TAAE: Changed audio route to SpeakerAndMicrophone
2013-05-16 12:12:37.227 OpenWatch[482:907] 12:12:37.227 <com.apple.main-thread> AudioSessionGetProperty posting message to kill mediaserverd (51)
2013-05-16 12:12:38.241 OpenWatch[482:907] TAAE: Timed out while performing message exchange
2013-05-16 12:12:39.247 OpenWatch[482:907] TAAE: Timed out while performing message exchange

interruptionListener crash when switching out of app

Great library, by the way!

I noticed that the interruptionListener is being called even after my recording activities are complete and when I switch out of the app.

https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine/blob/master/TheAmazingAudioEngine/AEAudioController.m#L303

Specifically it crashes in this area of that function:

        if ( THIS->_runningPriorToInterruption ) {
            [THIS stop];
        }

        [[NSNotificationCenter defaultCenter] postNotificationName:AEAudioControllerSessionInterruptionBeganNotification object:THIS];

It crashes either on "stop" or on the postNotification. The THIS data that comes in is basically just garbage, usually some sort of garbled string value. It seems that the interruptionListener isn't being cleared when I'm done recording; perhaps I'm not stopping my recording properly.

Here's how I use it if that helps:
https://github.com/OpenWatch/OpenWatch-iOS/blob/master/OpenWatch/OWAudioRecordingViewController.m#L95

Setting the pan property on a channel does not work as expected. (With code example)

UPDATE 2: This was not a bug in TAAE. It was actually a hardware issue that TAAE helped me solve.

UPDATE: The link is now correct.

The AEAudioPlayable protocol has an attribute named pan. If this is set to -1.0 or 1.0 at the time a channel is created, then the channel will be panned fully left or right. However, if the value is set later while the app is running, the expected behavior is not seen.

Here is the TAAE example that has been modified to demonstrate the issue.

http://ericjknapp.com/samples/EJKSample.zip

The app was modified to add a new slider in the table view and code was added to the ViewController.m file. All added code has been marked with a comment line that starts with // EJK: and has detailed explanations.

Either the code is not behaving as expected or I am not understanding how the pan property is supposed to work.

How to allow simultaneous playback and recording of audio using The Amazing Audio Engine

The problem I am facing is that once the user completes recording audio, and then plays it back while starting another recording at the same time, the playing audio gets merged into the new recording.
I don't want the output from the speaker to be merged into the recording. I have been stuck on this issue for more than a week; all suggestions are welcome.

Puneet
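One possible direction (an assumption, not a confirmed fix): speaker output naturally bleeds into the built-in microphone, so separating it in software alone is hard; the voice-processing audio session mode enables echo cancellation, which may reduce the bleed:

#import <AVFoundation/AVFoundation.h>

// Hypothetical mitigation sketch: AVAudioSessionModeVoiceChat engages the
// voice-processing IO path, which applies echo cancellation to the mic input.
NSError *error = nil;
if ( ![[AVAudioSession sharedInstance] setMode:AVAudioSessionModeVoiceChat error:&error] ) {
    NSLog(@"Couldn't set voice chat mode: %@", error);
}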

inputAveragePowerLevel with Audiobus

When setting audioController.audiobusInputPort to receive audio from Audiobus instead of the hardware input, inputAveragePowerLevel still meters the hardware input instead of the Audiobus port.

iOS7 restoring audio doesn't work anymore

Hi guys, I'm not sure if this issue has been reported. I pulled the latest code last week, and on iOS7, if you press home and come back, no audio is heard anymore. Playback doesn't respond for new audio either.

It works fine running on iOS6.

How I handle it:

applicationWillResignActive {
    [self.audioController stop];
}

applicationWillEnterForeground {
    [self.audioController start:NULL];
}

If I don't do those calls at all, the behaviour is even worse. The whole application becomes very unresponsive, and runs at like 1 fps.
