Comments (13)
Definitely option 2, and just store a pointer directly to the single struct. Perhaps you also don't need to allocate/free when adding/removing, but store it with the channel object? Or can one channel be added multiple times?
from theamazingaudioengine.
While a channel should only be added once, storing this in the channel structure itself would require replacing the AEAudioPlayable protocol with an actual class, which I'd prefer to avoid. Should be relatively straightforward implementing option 2 though.
Ok, I see. Another alternative would be to use objc_setAssociatedObject()
to attach the structure to the playable object.
True, that would work =)
I'm an idiot. Option 2 is already how it works - I'd forgotten that I'd already made this change. So now I have to figure out why it still glitches!
Okay, here's the answer: They're not glitches per se. In fact, they're microfades!
I've come across this once before while working on Loopy's track merge feature: The MultiChannelMixer audio unit sometimes applies a microfade to its channels. There's no documentation on when it does so, and I've not been able to come up with a solid theory yet.
You can see it quite clearly here, in a test that, at 0.25-second intervals, adds four AEAudioFilePlayers, each playing a 1-second AIFF containing a sine wave:
Here's a second run of the same code, with no changes - you can see it seems to apply that microfade pretty randomly:
I can say that the reason it's happening is because when a channel's removed, the mixer's buses are shuffled backwards. For example, say you:
- Add channel A - it's assigned to bus 0. Mixer's bus count is set to 1.
- Add channel B - it's assigned to bus 1. Mixer's bus count is set to 2.
- Add channel C - it's assigned to bus 2. Mixer's bus count is set to 3.
- Remove channel A - channel B is now assigned to bus 0, and channel C is assigned to bus 1. The mixer's bus count is set to 2.
At that point, as far as the mixer's concerned, buses 0 and 1 are now playing totally different audio, after which, I would assume, some algorithm runs that decides whether or not to apply the microfade. Perhaps it looks to see whether the first sample is 0, and applies a microfade if not? That would explain the apparent randomness of the microfade's use.
I suppose one possible solution would be to never remove buses on the mixer, and instead only disable them when a channel's removed. Then when adding a new channel, look for a disabled bus before incrementing the bus count. That's not very efficient, though, as it doesn't provide any way to scale back resources after playing a large number of channels.
The only other solution I can think of is to pass, say, 256 samples of silence through each of the mixer's channels after making a change. That's what I ended up doing with Loopy's merge stuff: Before it begins running the offline mixer to mix the tracks together, it pumps a little silence through it first.
If only Apple offered an interface to enable/disable that feature!
Some thoughts about this issue:
While moving channels between mixer inputs might seem economical, I don't think it's a good idea. Better to let a channel stay on the same mixer bus as long as it's playing, and only throw out unused mixer busses when they don't have any channel playing on them. I don't think this AUGraph-with-mixer-units approach is suitable for large-scale dynamic voice allocation anyhow; in those cases one would probably want pre-allocated voice channels that just sit there, with custom voice allocation to distribute sounds among them.
As I said before, I don't think reconfiguring the AUGraph is suitable for low-level musical precision work. In a synth app, I wouldn't add/remove channels as notes are played/stopped, but rather have a pre-configured graph. The graph would only be reconfigured when the user changes the polyphony (number-of-voices) setting. Similarly, in a DAW I would let the graph reflect the tracks, and only reconfigure it when the number of tracks changes, not each time a region starts or stops playing.
Hearty agreement =)
More: Yep, channels shouldn't be removed and re-added; instead, they should be muted/unmuted.
Good ideas here - I agree with lijon's points as well. My only request would be that TAAE works as well as possible in those cases where you do have to remove a channel and add another - in the case of the AEAudioFilePlayer for example.
Another option might be to allow an AEAudioFilePlayer instance to be assigned a different file so it can stay in the graph as it is and continue to play the subsequent file. This could also address that particular use case.
Yeah, I think AEAudioFilePlayer should be replaced with an AEBufferPlayer that holds an instance of a separate AESoundBuffer class. This buffer can then be swapped on the fly. It would use performAsync under the hood to swap the buffer for a new one in a safe way and release the old one on the main thread when finished. There would also be a C interface to start/stop it to get precise synchronization.
I like that =)
Closing this issue: I've tweaked the track removal procedure in SHA e1bd0fb, and what remains is good enough for now, I think, given the expense of building a workaround to the mixer issue.