Comments (13)

lijon commented on July 18, 2024

Definitely option 2, and just store a pointer directly to the single struct. Perhaps you also don't need to allocate/free when adding/removing, but store it with the channel object? Or can one channel be added multiple times?

michaeltyson commented on July 18, 2024

While a channel should only be added once, storing this in the channel structure itself would require replacing the AEAudioPlayable protocol with an actual class, which I'd prefer to avoid. Should be relatively straightforward implementing option 2 though.

lijon commented on July 18, 2024

Ok, I see. Another alternative would be to use objc_setAssociatedObject()
to attach the structure to the playable object.
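
For illustration, here's a minimal sketch of the associated-object approach; the struct and key names are hypothetical, not TAAE's actual types. Note that objc_getAssociatedObject isn't something you'd want to call from the render thread, so the raw pointer would still be looked up and cached on the main thread when the channel is added:

```objc
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Hypothetical per-channel render state (stand-in for TAAE's real struct)
typedef struct {
    float volume;
    float pan;
    BOOL  playing;
} channel_state_t;

static const void *kChannelStateKey = &kChannelStateKey;

// Attach the struct to the playable object by boxing it in an NSMutableData
static void AEChannelStateAttach(id channel, channel_state_t initial) {
    NSMutableData *boxed = [NSMutableData dataWithBytes:&initial length:sizeof(initial)];
    objc_setAssociatedObject(channel, kChannelStateKey, boxed,
                             OBJC_ASSOCIATION_RETAIN_NONATOMIC);
}

// Fetch a pointer to the attached struct (main thread only; cache the pointer
// before handing it to the render thread)
static channel_state_t *AEChannelStateGet(id channel) {
    NSMutableData *boxed = objc_getAssociatedObject(channel, kChannelStateKey);
    return (channel_state_t *)boxed.mutableBytes;
}
```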

michaeltyson commented on July 18, 2024

True, that would work =)

michaeltyson commented on July 18, 2024

I'm an idiot. Option 2 is already how it works - I'd forgotten that I'd already made this change. So now I have to figure out why it still glitches!

michaeltyson commented on July 18, 2024

Okay, here's the answer: They're not glitches per se. In fact, they're microfades!

I've come across this once before while working on Loopy's track merge feature: The MultiChannelMixer audio unit sometimes applies a microfade to its channels. There's no documentation on when it does so, and I've not been able to come up with a solid theory yet.

You can see it quite clearly here, in a test that adds four AEAudioFilePlayers at 0.25-second intervals, each playing a 1-second AIFF containing a sine wave:

[Waveform screenshot: test-1]

Here's a second run of the same code, with no changes - you can see it seems to apply that microfade pretty randomly:

[Waveform screenshot: test-2]

What I can say is that it happens because, when a channel is removed, the mixer's buses are shuffled backwards. For example, say you:

  1. Add channel A - it's assigned to bus 0. Mixer's bus count is set to 1.
  2. Add channel B - it's assigned to bus 1. Mixer's bus count is set to 2.
  3. Add channel C - it's assigned to bus 2. Mixer's bus count is set to 3.
  4. Remove channel A - channel B is now assigned to bus 0, and channel C is assigned to bus 1. The mixer's bus count is set to 2.

At that point, as far as the mixer's concerned, the remaining buses (0 and 1) are now playing totally different audio, after which, I would assume, some algorithm runs that decides whether or not to apply the microfade. Perhaps it checks whether the first sample is 0, and applies a microfade if not? That would explain the apparent randomness of the microfade.
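
To make the removal sequence concrete, here's a rough sketch of what a removal like step 4 amounts to at the Core Audio level - this is illustrative only, not TAAE's actual code: the remaining input callbacks slide down one bus, and then the mixer's input element count is reduced.

```objc
#import <AudioToolbox/AudioToolbox.h>

// Illustrative only: shuffle buses down after removing `removedBus`, then
// shrink the mixer's input bus count. From the mixer's perspective, every
// shifted bus is suddenly playing different audio, which seems to be what
// triggers the microfade.
static OSStatus RemoveMixerBus(AudioUnit mixer, UInt32 removedBus, UInt32 busCount,
                               AURenderCallbackStruct *callbacks /* one per bus */) {
    OSStatus status = noErr;
    for (UInt32 bus = removedBus; bus < busCount - 1; bus++) {
        callbacks[bus] = callbacks[bus + 1];
        status = AudioUnitSetProperty(mixer, kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Input, bus,
                                      &callbacks[bus], sizeof(AURenderCallbackStruct));
        if (status != noErr) return status;
    }
    UInt32 newCount = busCount - 1;
    return AudioUnitSetProperty(mixer, kAudioUnitProperty_ElementCount,
                                kAudioUnitScope_Input, 0,
                                &newCount, sizeof(newCount));
}
```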

I suppose one possible solution would be to never remove buses on the mixer, and instead only disable them when a channel's removed. Then when adding a new channel, look for a disabled bus before incrementing the bus count. That's not very efficient, though, as it doesn't provide any way to scale back resources after playing a large number of channels.
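
For what it's worth, a sketch of the disable-instead-of-remove idea, assuming the multichannel mixer's per-input enable parameter; the search for a free (disabled) bus when adding a channel would sit alongside it:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Leave the bus allocated and just switch it off; reuse the first disabled
// bus the next time a channel is added instead of growing the bus count.
static OSStatus AEMixerSetBusEnabled(AudioUnit mixer, UInt32 bus, BOOL enabled) {
    return AudioUnitSetParameter(mixer, kMultiChannelMixerParam_Enable,
                                 kAudioUnitScope_Input, bus,
                                 enabled ? 1.0f : 0.0f, 0);
}
```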

The only other solution I can think of is to pass, say, 256 samples of silence through each of the mixer's channels after making a change. That's what I ended up doing with Loopy's merge stuff: Before it begins running the offline mixer to mix the tracks together, it pumps a little silence through it first.
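
Presumably something along these lines - a sketch of pumping a block of silence through an offline mixer before real rendering starts, assuming a two-channel non-interleaved float format (the channel layout and frame count are assumptions, not Loopy's actual code):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Pull 256 frames through the mixer while its inputs are still rendering
// silence, so any microfade the mixer applies lands on silence rather than
// on real audio.
static OSStatus AEPumpSilenceThroughMixer(AudioUnit mixer) {
    const UInt32 kFrames = 256;
    float left[256] = {0}, right[256] = {0};

    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp timestamp = { .mSampleTime = 0, .mFlags = kAudioTimeStampSampleTimeValid };

    // Two-buffer AudioBufferList for non-interleaved stereo
    struct { AudioBufferList list; AudioBuffer extraBuffer; } abl;
    abl.list.mNumberBuffers = 2;
    abl.list.mBuffers[0] = (AudioBuffer){ .mNumberChannels = 1,
                                          .mDataByteSize = kFrames * sizeof(float),
                                          .mData = left };
    abl.extraBuffer      = (AudioBuffer){ .mNumberChannels = 1,
                                          .mDataByteSize = kFrames * sizeof(float),
                                          .mData = right };

    return AudioUnitRender(mixer, &flags, &timestamp, 0 /* output element */, kFrames, &abl.list);
}
```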

If only Apple offered an interface to enable/disable that feature!

lijon commented on July 18, 2024

Some thoughts about this issue:

While moving channels between mixer inputs might seem economical, I don't think it's a good idea. Better to let a channel stay on the same mixer bus as long as it's playing, and only throw out unused mixer buses when they don't have any channel playing on them. I don't think this AUGraph-with-mixer-units approach is suitable for large-scale dynamic voice allocation anyhow; in those cases one would probably want pre-allocated voice channels that just sit there, plus custom voice allocation to distribute sounds among them.

As I said before, I don't think reconfiguring the AUGraph is suitable for stuff that needs low-level musical precision. In a synth app, I wouldn't add/remove channels as notes are played and stopped, but rather have a pre-configured graph. The graph would only be reconfigured when the user changes the polyphony (number of voices) setting. Similarly, in a DAW I would let the graph reflect the tracks, and only reconfigure it when the number of tracks changes, not each time a region starts or stops playing.
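
As a purely hypothetical illustration of the pre-allocated voice idea (none of these names exist in TAAE): the graph is built once with a fixed number of channels, and note-ons just claim a free slot rather than touching the graph:

```objc
#import <Foundation/Foundation.h>

#define kMaxVoices 16

typedef struct {
    BOOL active;   // whether this voice (mixer bus / channel) is in use
    int  note;     // MIDI note currently assigned to the voice
} Voice;

static Voice voices[kMaxVoices];

// Claim a free voice for a note-on; returns the voice index, or -1 if all
// voices are busy (a real allocator would steal the oldest/quietest voice).
static int AllocateVoice(int note) {
    for (int i = 0; i < kMaxVoices; i++) {
        if (!voices[i].active) {
            voices[i].active = YES;
            voices[i].note   = note;
            return i;
        }
    }
    return -1;
}

// Release the voice on note-off; the channel stays in the graph, just silent.
static void ReleaseVoice(int index) {
    if (index >= 0 && index < kMaxVoices) voices[index].active = NO;
}
```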

michaeltyson commented on July 18, 2024

Hearty agreement =)

michaeltyson commented on July 18, 2024

More: Yep, channels shouldn't be removed and re-added; instead, they should be muted/unmuted.

zobkiw commented on July 18, 2024

Good ideas here - I agree with lijon's points as well. My only request would be that TAAE works as well as possible in those cases where you do have to remove a channel and add another - in the case of the AEAudioFilePlayer for example.

Another option might be to allow an AEAudioFilePlayer instance to be assigned a different file so it can stay in the graph as it is and continue to play the subsequent file. This could also address that particular use case.

lijon commented on July 18, 2024

Yeah, I think AEAudioFilePlayer should be replaced with an AEBufferPlayer that holds an instance of a separate AESoundBuffer class. This buffer can then be swapped on the fly. It would use performAsync under the hood to swap the buffer for a new one in a safe way and release the old one on the main thread when finished. There would also be a C interface to start/stop it to get precise synchronization.
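
A rough sketch of what that interface might look like; AEBufferPlayer and AESoundBuffer don't exist in TAAE at this point, so all of these names and signatures are hypothetical:

```objc
#import <Foundation/Foundation.h>
#import "TheAmazingAudioEngine.h"   // for the AEAudioPlayable protocol

@class AESoundBuffer;   // hypothetical buffer class holding the audio data

@interface AEBufferPlayer : NSObject <AEAudioPlayable>

// Setting a new buffer would hand the pointer to the render thread via the
// engine's asynchronous message mechanism, then release the old buffer on
// the main thread once the swap is complete.
@property (nonatomic, strong) AESoundBuffer *buffer;

@end

// C interface, safe to call from the realtime thread, for sample-accurate
// start/stop without Objective-C messaging.
void AEBufferPlayerStart(__unsafe_unretained AEBufferPlayer *player);
void AEBufferPlayerStop(__unsafe_unretained AEBufferPlayer *player);
```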

michaeltyson commented on July 18, 2024

I like that =)

michaeltyson commented on July 18, 2024

Closing this issue: I've tweaked the track removal procedure in SHA e1bd0fb, and what remains is good enough for now, I think, given the expense of building a workaround for the mixer issue.
