
Comments (4)

igorski commented on August 30, 2024

So "2 beats" would be measure 1, start of beat 2 and "7 beats" would be measure 2, start of beat 3 ? Makes sense.

What you need to know is the duration of a measure in seconds. You can leverage the helper math in bufferUtility for this; all it requires is the current sample rate and the tempo and time signature of the sequencer, which are properties you defined during setup.

// calculate the length of a single measure and of a single beat, both in samples and in seconds
int samplesPerBar    = ( int ) bufferUtility.getSamplesPerBar( sampleRate, tempo, timeSigBeatAmount, timeSigBeatUnit );
float secondsPerBar  = bufferUtility.bufferToSeconds( samplesPerBar, sampleRate );
int samplesPerBeat   = samplesPerBar / timeSigBeatAmount;
float secondsPerBeat = bufferUtility.bufferToSeconds( samplesPerBeat, sampleRate );
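To put some concrete numbers to this (assuming, as an example, a 44.1 kHz sample rate, a tempo of 120 BPM and a 4/4 time signature): a beat lasts 60 / 120 = 0.5 seconds, so samplesPerBeat = 22050, samplesPerBar = 88200 and secondsPerBar = 2.0.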

Note: when your song switches time signature or tempo, you must recalculate the above. If you change tempo or time signature between individual measures (instead of adjusting these globally), you must recalculate more often. Basically, you must ensure that the above values for samplesPerBar, secondsPerBar, samplesPerBeat and secondsPerBeat are valid for the measure the Sequencer is currently playing.
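As a rough sketch of that bookkeeping (the method name and the idea of keeping the values as member fields are assumptions for illustration, not part of MWEngine itself), you could centralise the recalculation like this:

// hypothetical helper: invoke on setup and again whenever tempo or time signature changes,
// or whenever the Sequencer advances into a measure that uses different settings
void recalculateTimings( int sampleRate, float tempo, int timeSigBeatAmount, int timeSigBeatUnit ) {
    samplesPerBar  = ( int ) bufferUtility.getSamplesPerBar( sampleRate, tempo, timeSigBeatAmount, timeSigBeatUnit );
    secondsPerBar  = bufferUtility.bufferToSeconds( samplesPerBar, sampleRate );
    samplesPerBeat = samplesPerBar / timeSigBeatAmount;
    secondsPerBeat = bufferUtility.bufferToSeconds( samplesPerBeat, sampleRate );
}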

// convenience method to position events using beats

void positionEventUsingBeats( AudioEvent event, int eventStartMeasure, float startOffsetInBeats, float durationInBeats ) {

    // 1. note we subtract one beat as beats start at 1, not 0
    float startOffsetInSeconds = ( startOffsetInBeats - 1 ) * secondsPerBeat;

    // 2. assumption here is that all measures have the same duration
    // if not, this must be calculated differently, see explanation below
    // note we subtract one measure as measures also start at 1, not 0
    float eventStartMeasureInSeconds = ( eventStartMeasure - 1 ) * secondsPerBar;

    // 3. use positioning in seconds
    event.setStartPosition( eventStartMeasureInSeconds + startOffsetInSeconds );

    // 4. set duration in seconds; the duration is a length rather than a position, so no offset is subtracted
    event.setDuration( durationInBeats * secondsPerBeat );
}

Where timeSigBeatAmount is the "3" in 3/4 and timeSigBeatUnit is the "4" in 3/4.

When setting the event's start offset, be sure to add the duration of the measures preceding it. For instance, to position an event half a beat in length on the fourth beat of the third measure:

positionEventUsingBeats( event, 3, 4f, .5f );
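Continuing the 120 BPM / 4/4 example above, this call positions the event at ( 3 - 1 ) * 2.0 + ( 4 - 1 ) * 0.5 = 5.5 seconds from the start of the song, with a duration of 0.5 * 0.5 = 0.25 seconds.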

Note on using SampleEvents: if the duration of the event should equal the total duration of the sample, it is preferable to replace step 4 in positionEventUsingBeats with:

event.setSampleLength( event.getBuffer().getBufferSize());
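A minimal sketch of that variant (the helper name is hypothetical; it reuses the member fields from the snippet above):

// position a SampleEvent so its duration equals the full length of its sample
void positionSampleEventUsingBeats( SampleEvent event, int eventStartMeasure, float startOffsetInBeats ) {
    float startOffsetInSeconds       = ( startOffsetInBeats - 1 ) * secondsPerBeat;
    float eventStartMeasureInSeconds = ( eventStartMeasure  - 1 ) * secondsPerBar;

    event.setStartPosition( eventStartMeasureInSeconds + startOffsetInSeconds );

    // the duration is taken from the sample buffer itself instead of from a beat value
    event.setSampleLength( event.getBuffer().getBufferSize());
}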


igorski commented on August 30, 2024

If I understand correctly, you want to design a model that represents notes both visually and, of course, audibly. I would suggest creating a value object that has a property holding an MWEngine AudioEvent instance (like SynthEvent or SampleEvent).

Define your remaining properties and methods to accommodate the creation of a note in the piano roll, and internally translate these to event positions (either using buffer samples or seconds, see the Wiki). Whenever the position of such a note changes horizontally, update the position of its AudioEvent accordingly.
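As a rough sketch of such a value object (all names below are hypothetical, the choice between SampleEvent and SynthEvent depends on your instrument, and it reuses the positionEventUsingBeats helper from the previous comment):

// hypothetical value object tying a piano roll note to an MWEngine AudioEvent
class NoteModel {
    private SampleEvent audioEvent; // could equally be a SynthEvent
    private int   startMeasure;     // 1-based measure the note starts in
    private float startBeat;        // 1-based (float) beat within that measure
    private float lengthInBeats;    // note length, drives both the UI width and the audio duration

    // call whenever the note is created, moved or resized in the piano roll
    void syncToEngine() {
        positionEventUsingBeats( audioEvent, startMeasure, startBeat, lengthInBeats );
    }
}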

The Sequencer is indeed a container for any event. You don't have to consciously think about "shoving in events as needed": merely construct events, define at what point in time they start playing, and invoke addToSequencer() once. Whenever the Sequencer's playhead reaches that point in time, the event will become audible. Just ensure that when an event should play at a different time, its position properties are updated.
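In other words (a hypothetical example, assuming a previously constructed event and the seconds-based helper from the previous comment):

// position the event once and register it with the Sequencer once
positionEventUsingBeats( noteEvent, 1, 1f, 1f );
noteEvent.addToSequencer();

// later, when the user moves the note, only its position properties are updated;
// the event does not need to be removed from or re-added to the Sequencer
positionEventUsingBeats( noteEvent, 2, 3f, 1f );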

For non-quantization, you have absolute freedom to position an event at any point in time, regardless of whether, strictly speaking, that is the most musical interval. Choose your preferred way of performing this calculation (either at the sample level or using seconds) and you can "offset" your events from the grid as you please.
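For instance (hypothetical values, using the seconds-based approach), an event can be nudged slightly off the grid:

// any value in seconds is valid, e.g. 10 ms after beat 3 of measure 2
float gridPositionInSeconds = ( 2 - 1 ) * secondsPerBar + ( 3 - 1 ) * secondsPerBeat;
noteEvent.setStartPosition( gridPositionInSeconds + 0.010f );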


teotigraphix commented on August 30, 2024

Yes, this seems like it was more of a mental gymnastics thing for me to overcome.

I was using float beats for my Caustic apps, so ALL my logic in my other apps is based around floating point numbers in the sequencers, not samples.

I am thinking about writing an adapter and seeing if all the note edit logic I have built up with float beats can be transferred to samples without any loss of detail.

I think the idea was that Rej gave me OSC messages to communicate with the core; those messages then "posted" float beats to the OSC parser.

Once the parser handled them, the second-level C++ layer handled everything in float beats. But once in process() etc., everything was converted to audio sample locations.

So the ONLY thing that knew about audio samples was the deep core audio engine.

From what you have said, it seems like there are really three ways to set a note event:

  • Sample location (as precise as it gets)
  • float time location, with a ratio to convert to samples
  • float beat location, with a ratio to convert to samples

So 1.5 beats would be measure 1, beat 1, halfway through the first beat.

Does this make sense to you?
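A rough sketch of that last conversion (a hypothetical helper, assuming 1-based float beats and the samplesPerBeat value from the snippet in the first comment):

// convert a 1-based float beat position into an absolute offset in samples
int floatBeatsToSamples( float beats, int samplesPerBeat ) {
    return ( int )(( beats - 1f ) * samplesPerBeat );
}

// 1.5 beats -> 0.5 * samplesPerBeat, i.e. halfway through the first beat of the first measure
int sampleOffset = floatBeatsToSamples( 1.5f, samplesPerBeat );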


teotigraphix commented on August 30, 2024

EDIT: Yes, you have the concept of float beats correct; it's just another linear key. What I found great about this is that it has a 1-to-1 relationship with UI components like piano rolls, so it's really easy to get that data into the UI.

Thanks! I am going to try this out in a proto app.

I have plenty of "material" for which I know exactly how it should work, so now I will get it to work.

I'll update with my progress.

