
smplr


smplr is a collection of sampled instruments for Web Audio API ready to be used with no setup required.

Examples:

import { Soundfont } from "smplr";

const context = new AudioContext();
const marimba = new Soundfont(context, { instrument: "marimba" });
marimba.start({ note: 60, velocity: 80 });

import { DrumMachine } from "smplr";

const context = new AudioContext();
const dm = new DrumMachine(context);
dm.start({ note: "kick" });

import { SplendidGrandPiano, Reverb } from "smplr";

const context = new AudioContext();
const piano = new SplendidGrandPiano(context);
piano.output.addEffect("reverb", new Reverb(context), 0.2);

piano.start({ note: "C4" });

See demo: https://danigb.github.io/smplr/

smplr is still under development and features are considered unstable until v1.0.

Read CHANGELOG for changes.

Library goals

  • No setup: specifically, all samples are online, so there is no need for a server.
  • Easy to use: everything should be intuitive for non-experienced developers.
  • Decent sounding: uses high-quality open source samples. For better or worse, it is sample-based 🤷

Setup

You can install the library with a package manager or use it directly by importing from the browser.

Samples are stored at https://github.com/smpldsnds and there is no need to download them. Kudos to all samplerists 🙌

Using a package manager

Use npm or your favourite package manager to install the library to use it in your project:

npm i smplr

Usage from the browser

You can import directly from the browser. For example:

<html>
  <body>
    <button id="btn">play</button>
  </body>
  <script type="module">
    import { SplendidGrandPiano } from "https://unpkg.com/smplr/dist/index.mjs"; // needs to be a url
    const context = new AudioContext(); // create the audio context
    const piano = new SplendidGrandPiano(context); // create and load the instrument

    document.getElementById("btn").onclick = () => {
      context.resume(); // enable audio context after a user interaction
      piano.start({ note: 60, velocity: 80 }); // play the note
    };
    };
  </script>
</html>

The package needs to be served as a URL from a service like unpkg or similar.

Documentation

Create and load an instrument

All instruments follow the same pattern: new Instrument(context, options). For example:

import { SplendidGrandPiano, Soundfont } from "smplr";

const context = new AudioContext();
const piano = new SplendidGrandPiano(context, { decayTime: 0.5 });
const marimba = new Soundfont(context, { instrument: "marimba" });

Wait for audio loading

You can start playing notes as soon as the first sample is loaded. But if you want to wait for all of them, you can use the load property, which returns a promise:

piano.load.then(() => {
  // now the piano is fully loaded
});

Since the promise resolves to the instrument instance, you can create and wait in a single line:

const piano = await new SplendidGrandPiano(context).load;

⚠️ In versions lower than 0.8.0 a loaded() function was exposed instead.

Shared configuration options

All instruments share some configuration options that are passed as the second argument of the constructor. As the name implies, all fields are optional:

  • volume: a number from 0 to 127 representing the instrument's global volume. 100 by default.
  • destination: an AudioNode that is the output of the instrument. AudioContext.destination is used by default.
  • volumeToGain: a function to convert volume to gain. It uses the MIDI standard by default.
  • scheduleLookaheadMs: the lookahead of the scheduler. If a note's start time is less than the current time plus this lookahead, the note is started. 200ms by default.
  • scheduleIntervalMs: the interval of the scheduler. 50ms by default.
  • onStart: a function called when a note starts. It receives the started note as a parameter. Bear in mind that the time this function is called is not precise; it is determined by the lookahead.
  • onEnded: a function called when a note ends. It receives the started note as a parameter.
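
As a sketch of the volumeToGain option, here is a hypothetical custom curve: a plain linear mapping from the 0-127 volume range to a 0-1 gain (the built-in default follows the MIDI-standard curve instead, which is non-linear):

```javascript
// Hypothetical custom curve for the volumeToGain option:
// map volume 0–127 linearly onto gain 0–1.
const linearVolumeToGain = (volume) => volume / 127;

// Passed at construction time (sketch):
// const piano = new SplendidGrandPiano(context, { volumeToGain: linearVolumeToGain });

console.log(linearVolumeToGain(0));   // 0
console.log(linearVolumeToGain(127)); // 1
```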

Usage with standardized-audio-context

This package should be compatible with standardized-audio-context:

import { AudioContext } from "standardized-audio-context";

const context = new AudioContext();
const piano = new SplendidGrandPiano(context);

However, if you are using TypeScript, you might need to "force cast" the types:

import { Soundfont } from "smplr";
import { AudioContext as StandardizedAudioContext } from "standardized-audio-context";

const context = new StandardizedAudioContext() as unknown as AudioContext;
const marimba = new Soundfont(context, { instrument: "marimba" });

Play

Start and stop notes

The start function accepts a bunch of options:

piano.start({ note: "C4", velocity: 80, time: 5, duration: 1 });

The velocity is a number between 0 and 127 that represents how hard the key is pressed: the bigger the number, the louder the sound. But velocity does not only control loudness; in some instruments, it also affects the timbre.

The start function returns a stop function for the given note:

const stopNote = piano.start({ note: 60 });
stopNote({ time: 10 });

Bear in mind that you may need to call context.resume() before playing a note.

Instruments have a global stop function that can be used to stop all notes:

// This will stop all notes
piano.stop();

Or stop the specified one:

// This will stop C4 note
piano.stop(60);

Schedule notes

You can schedule notes using the time and duration properties. Both are measured in seconds. time is the number of seconds since the AudioContext was created, as in audioContext.currentTime.

For example, the following plays a C major arpeggio, one note per second:

const now = context.currentTime;
["C4", "E4", "G4", "C5"].forEach((note, i) => {
  piano.start({ note, time: now + i, duration: 0.5 });
});
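
Since time is plain seconds, tempo-based scheduling only needs a small conversion step. The helper below is a sketch (beatsToSeconds is a hypothetical name, not part of smplr):

```javascript
// Hypothetical helper: convert a beat offset at a given tempo (BPM)
// into the seconds expected by the `time` option of start().
const beatsToSeconds = (beats, bpm) => (beats * 60) / bpm;

// Sketch: play the same arpeggio as eighth notes at 120 BPM.
// const now = context.currentTime;
// ["C4", "E4", "G4", "C5"].forEach((note, i) => {
//   piano.start({ note, time: now + beatsToSeconds(i * 0.5, 120) });
// });

console.log(beatsToSeconds(4, 120)); // 2
```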

Looping

You can loop a note by using loop, loopStart and loopEnd:

const sampler = new Sampler(audioContext, {
  buffers: { duh: "duh-duh-ah.mp3" },
});
sampler.start({
  note: "duh",
  loop: true,
  loopStart: 1.0,
  loopEnd: 9.0,
});

If loop is true but loopStart or loopEnd are not specified, 0 and total duration will be used by default, respectively.
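
Loop points are expressed in seconds. If your sample editor reports them in frames, divide by the file's sample rate first (the frame values below are hypothetical, for illustration only):

```javascript
// Convert loop points from sample frames to the seconds that
// loopStart/loopEnd expect (hypothetical frame values):
const sampleRate = 44100;
const loopStart = 13230 / sampleRate;  // 0.3 seconds
const loopEnd = 176400 / sampleRate;   // 4 seconds
console.log(loopStart, loopEnd);
```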

Change volume

The instrument's output attribute represents the main output of the instrument. The output.setVolume method accepts a number where 0 means no volume and 127 is maximum volume without amplification:

piano.output.setVolume(80);

⚠️ volume is global to the instrument, while velocity is specific to each note.

Events

Two events are supported: onStart and onEnded. Both callbacks receive the started note as a parameter.

Events can be configured globally:

const context = new AudioContext();
const sampler = new Sampler(context, {
  onStart: (note) => {
    console.log(note.time, context.currentTime);
  },
});

or per note basis:

piano.start({
  note: "C4",
  duration: 1,
  onEnded: () => {
    // will be called after 1 second
  },
});

Global callbacks will be invoked regardless of whether local events are defined.

⚠️ The invocation time of onStart is not exact. It triggers slightly before the actual start time and is influenced by the scheduleLookaheadMs parameter.

Effects

Reverb

A packed version of the DattorroReverbNode algorithmic reverb is included.

Use output.addEffect(name, effect, mix) to connect an effect using a send bus:

import { Reverb, SplendidGrandPiano } from "smplr";

const context = new AudioContext();
const reverb = new Reverb(context);
const piano = new SplendidGrandPiano(context);
piano.output.addEffect("reverb", reverb, 0.2);

To change the mix level, use output.sendEffect(name, mix):

piano.output.sendEffect("reverb", 0.5);

Experimental features

Cache requests

If you use the default samples, they are stored on GitHub Pages. GitHub rate-limits the number of requests per second, which can be a problem, especially if you are using a development environment with hot reload (like most React frameworks).

If you want to cache the samples in the browser, you can use a CacheStorage object:

import { SplendidGrandPiano, CacheStorage } from "smplr";

const context = new AudioContext();
const storage = new CacheStorage();
// The first time the instrument loads, samples are fetched over HTTP. Subsequent loads read from the cache.
const piano = new SplendidGrandPiano(context, { storage });

⚠️ CacheStorage is based on the Cache API and only works in secure environments that run over https. Read your framework's documentation for setup instructions. For example, in Next.js you can use https://www.npmjs.com/package/next-dev-https. For Vite there's https://github.com/liuweiGL/vite-plugin-mkcert. Find the appropriate solution for your environment.

Instruments

Sampler

An audio buffer sampler. Pass a buffers object with the files to be loaded:

import { Sampler } from "smplr";

const buffers = {
  kick: "https://smpldsnds.github.io/drum-machines/808-mini/kick.m4a",
  snare: "https://smpldsnds.github.io/drum-machines/808-mini/snare-1.m4a",
};
const sampler = new Sampler(new AudioContext(), { buffers });

And then use the name of the buffer as note name:

sampler.start({ note: "kick" });
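
For larger kits, the buffers object can be generated from a list of sample names. This is a sketch: the BASE_URL and the name list are assumptions for illustration, not a smplr API:

```javascript
// Build a buffers map for the Sampler from sample names and a base URL
// (hypothetical values):
const BASE_URL = "https://smpldsnds.github.io/drum-machines/808-mini/";
const names = ["kick", "snare-1", "hihat-open"];
const buffers = Object.fromEntries(
  names.map((name) => [name, `${BASE_URL}${name}.m4a`])
);

// const sampler = new Sampler(new AudioContext(), { buffers });
console.log(buffers["kick"]);
// https://smpldsnds.github.io/drum-machines/808-mini/kick.m4a
```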

Soundfont

A Soundfont player. By default it loads audio from Benjamin Gleitzman's package of pre-rendered sound fonts.

import { Soundfont, getSoundfontNames, getSoundfontKits } from "smplr";

const marimba = new Soundfont(new AudioContext(), { instrument: "marimba" });
marimba.start({ note: "C4" });

It's intended to be a modern replacement for soundfont-player.

Soundfont instruments and kits

Use getSoundfontNames to get all available instrument names and getSoundfontKits to get kit names.

There are two kits available: MusyngKite and FluidR3_GM. The first is used by default: it sounds better, but the samples are larger.

const marimba = new Soundfont(context, {
  instrument: "clavinet",
  kit: "FluidR3_GM", // "MusyngKite" is used by default if not specified
});

Alternatively, you can pass a custom URL using the instrumentUrl option. In that case, kit is ignored:

const marimba = new Soundfont(context, {
  instrumentUrl:
    "https://gleitz.github.io/midi-js-soundfonts/MusyngKite/marimba-mp3.js",
});

Soundfont sustained notes

You can enable note looping to make notes sustain indefinitely by loading loop data:

const marimba = new Soundfont(context, {
  instrument: "cello",
  loadLoopData: true,
});

⚠️ This feature is still experimental and can produce clicks on a lot of instruments.

SplendidGrandPiano

A sampled acoustic piano. It uses Steinway samples with 4 velocity groups from SplendidGrandPiano.

import { SplendidGrandPiano } from "smplr";

const piano = new SplendidGrandPiano(new AudioContext());

piano.start({ note: "C4" });

SplendidGrandPiano constructor

The second argument of the constructor accepts the following options:

  • baseUrl:
  • detune: global detune in cents (0 if not specified)
  • velocity: default velocity (100 if not specified)
  • volume: default volume (100 if not specified)
  • decayTime: default decay time (0.5 seconds)
  • notesToLoad: an object with the following shape: { notes: number[], velocityRange: [number, number]} to specify a subset of notes to load

Example:

const piano = new SplendidGrandPiano(context, {
  detune: -20,
  volume: 80,
  notesToLoad: {
    notes: [60],
    velocityRange: [1, 127],
  },
});

Electric Piano

Sampled electric pianos. Samples from https://github.com/sfzinstruments/GregSullivan.E-Pianos

import { ElectricPiano, getElectricPianoNames } from "smplr";

const instruments = getElectricPianoNames(); // => ["CP80", "PianetT", "WurlitzerEP200"]

const epiano = new ElectricPiano(new AudioContext(), {
  instrument: "PianetT",
});

epiano.start({ note: "C4" });

// Includes a (basic) tremolo effect:
epiano.tremolo.level(30);

Available instruments:

  • CP80: Yamaha CP80 Electric Grand Piano v1.3 (29-Sep-2004)
  • PianetT: Hohner Pianet T (type 2) v1.3 (24-Sep-2004)
  • WurlitzerEP200: Wurlitzer EP200 Electric Piano v1.1 (16-May-1999)

Mallets

Samples from The Versilian Community Sample Library

import { Mallet, getMalletNames } from "smplr";

const instruments = getMalletNames();

const mallet = new Mallet(new AudioContext(), {
  instrument: instruments[0],
});

Mellotron

Samples from archive.org

import { Mellotron, getMellotronNames } from "smplr";

const instruments = getMellotronNames();

const mellotron = new Mellotron(new AudioContext(), {
  instrument: instruments[0],
});

Drum Machines

Sampled drum machines. Samples from different sources:

import { DrumMachine, getDrumMachineNames } from "smplr";

const instruments = getDrumMachineNames();

const context = new AudioContext();
const drums = new DrumMachine(context, { instrument: "TR-808" });
drums.start({ note: "kick" });

// Drum samples could have variations:
const now = context.currentTime;
drums.getVariations("kick").forEach((variation, index) => {
  drums.start({ note: variation, time: now + index });
});

Smolken double bass

import { Smolken, getSmolkenNames } from "smplr";

const instruments = getSmolkenNames(); // => Arco, Pizzicato & Switched

// Create an instrument
const context = new AudioContext();
const doubleBass = await new Smolken(context, { instrument: "Arco" }).load;

Versilian

Versilian is a sampler capable of playing sounds from the Versilian Community Sample Library.

⚠️ Not all features are implemented. Some instruments may sound incorrect ⚠️

import { Versilian, getVersilianInstruments } from "smplr";

// getVersilianInstruments returns a Promise
const instrumentNames = await getVersilianInstruments();

const context = new AudioContext();
const sampler = new Versilian(context, { instrument: instrumentNames[0] });

Soundfont2Sampler

Sampler capable of reading .sf2 files directly:

import { Soundfont2Sampler } from "smplr";
import { SoundFont2 } from "soundfont2";

const context = new AudioContext();
const sampler = new Soundfont2Sampler(context, {
  url: "https://smpldsnds.github.io/soundfonts/soundfonts/galaxy-electric-pianos.sf2",
  createSoundfont: (data) => new SoundFont2(data),
});

sampler.load.then(() => {
  // list all available instruments for the soundfont
  console.log(sampler.instrumentNames);

  // load the first available instrument
  sampler.loadInstrument(sampler.instrumentNames[0]);
});

Support is still limited and the API may change.

License

MIT License

smplr's People

Contributors

danigb, drscottlobo, georgecartridge


smplr's Issues

Support for using standardized-audio-context

Hello,

Thanks for this awesome library!
I have a pretty large project that also uses ToneJS (mainly for the transport control), but it seems that it's impossible to use them both together.

The reason seems to stem from the fact that Tone uses Standardized Audio Context, while your project uses native audio nodes by default.

I'm creating this issue to ask you to consider supporting standardized audio context with this library as well; I think it could really benefit from it.

Relevant issue in ToneJS

Drum Machine: Roland missing ogg files, sampleNames issue with MFB-512

Hi danigb! OK, so playing around with the drum machines, and they mostly work great! I found a couple issues:

The Roland CR-8000 appears to be missing its samples; when I load it, they all come back with an error like this one:

"Error loading buffer. Invalid status: 404 https://danigb.github.io/samples/drum-machines/Roland-CR-8000/cr8kcowb.ogg"

All the sampleNames in that instrument appear to be formatted differently from the others as well, so perhaps the sample names are the problem. When I look at the list, rather than human-readable names I get:

Cr8kbass
Cr8kchat
Cr8kclap
Cr8kclav
Cr8kcowb
Cr8kcymb
Cr8khitm
Cr8klcng
Cr8klotm
Cr8kmcng
Cr8kohat
Cr8krim
Cr8ksnar

A similar issue happens with the MFB-512: the samples on that drum machine load fine, but the sampleNames come up as:

512bdrum
512clap
512cymb
512hhcl
512hhop
512hitom
512lotom
512mdtom
512snare

All the other Drum Machines work as expected!

You can see my work-in-progress drum sampler: if you click the dropdown and choose either the Roland or the MFB, you should see the incorrect(?) sampleNames coming up along the lanes, and the console will show the missing files on the Roland.

Outdated Documentation

Hey, just wanted to point out (after spending half an hour wondering what I was doing wrong) that the method for handling load times has changed in name.


setVolume is not a function

Hi @danigb, hope all goes well!

Looks like the Soundfont player object is giving the following error: player.setVolume() is not a function.

I had a look at the object in the console and it looks like that function is currently in player.output, so player.output.setVolume() works. Is that the intended functionality, or should the function be moved?

I still want to have a way to buy you a coffee! This library is saving me so much time!

loading mp3 file

This example doesn't seem right:
const sampler = new Sampler(audioContext, { duh: "duh-duh-ah.mp3" }); sampler.start({ note: "duh" loop: true loopStart: 1.0, loopEnd: 9.0, });

It's throwing this error: Object literal may only specify known properties, and 'duh' does not exist in type 'Partial'.ts(2353)

I have an MP3 file and I would like to know how I can load it.

Version 0.11.0 - onStart does not fire when added to player.start({})

Hi there! I'm so excited that you have added an onStart to smplr! I've just tested it as follows: onEnded fires after 1 second, but onStart doesn't seem to fire. (This is using the Soundfont player; I think onStart might just be missing from that class?)

player.start({
  note: "C4",
  duration: 1,
  onEnded: () => { console.log("I ended!"); },
  onStart: () => { console.log("started!"); },
});

When scheduling large number of notes, no sounds is being output

Hi @danigb,

Thank you for such an amazing library. I am using the piano sampler in one of my projects, and when I schedule more than 2000+ notes scattered across minutes, no sound is output.

I tried to schedule fewer notes and it worked, so I am wondering if this is a bug?

I also experimented with batch-scheduling the notes using the onEnded callback, but the cleanup is really messy. Would you mind letting me know if there's a workaround for this issue? Thank you very much.

Store the SoundFont objects in a list, and cannot reference them

The reason I'm doing this is to have an arbitrary number of instruments (from MIDI files); it seems like it won't produce a sound after the instrument is pushed to the array:

instruments = [];
const instr = new Soundfont(context, { instrument: this.trackSettings[0].instrument });

instr.load.then(() => {
  this.instruments.push(instr);
});

this.instruments[0].start({ note: "C4", velocity: 80, time: 5, duration: 1 });

Or schedule events at a given time

Here's how to schedule activities at a given time, just like soundfont-player:

soundFont.schedule(0, [
  { note: key, duration: lengthInMs / 1000, gain: sampleVolume * this.GainMultiplier },
]);

Error while importing Soundfont

I'm getting this weird GET error:
http://localhost:5173/node_modules/.vite/deps/smplr.js?v=eaf8f5cd 504 (Outdated Optimize Dep)

I was just trying to do like

import { Soundfont } from 'smplr'
const context = new AudioContext()
const piano = new Soundfont(context, { instrument: 'acoustic_grand_piano' })

Playing back mp3 in Buffer - onEnded doesn't fire

Currently using smplr 12.1. I'm playing back a single mp3 file in a buffer per the instructions in the docs, and when the mp3 finishes I need to set the "isPlaying" state back to false with the onEnded event, but it doesn't fire when using Sampler to play back an mp3. Can onEnded be added to the Sampler class when playing back from a buffer, or perhaps it just needs a fix?

const sampler = new Sampler(ctx, {buffers: {example: 'audio/example.mp3'}}).load
sampler.start({note: 'example'}, {onEnded: () => console.log('ended!')})

Thanks!

Providing frequency for playback instead of notename/noteNumber

How feasible is it to provide frequencies for playback to soundfont.start?

I understand there is a 'detune' option; I couldn't quite get that to work. But regardless,
I'm interested in providing specific frequencies for the instrument to play back.
Would that be complex to implement?

setVolume and reverb effect no longer working.

Hi, been loving package and appreciate the quick responses.

I think this issue was here on recent prior versions but only just tested now.

Setting the volume to 0 and the reverb to max results in the piano still outputting sound, with no reverb. Updating the volume or reverb with sendEffect afterwards has no effect either. I believe velocity still works as intended though.

https://codesandbox.io/s/young-monad-forked-lw8pyj?file=/src/index.js

  • Not sure if it's a codesandbox issue or part of the bug; refresh the sandbox to ensure it plays

Ideas to handle clicking when looping and custom soundfonts

Hey @danigb, I wanted to first thank you for your incredible work! This project, as well as your other work, is truly amazing and inspiring!

Alright, now, I have a few things to discuss so I'll introduce some background context first:

I want to use my custom soundfonts in Web Audio related projects.

Initially, I came across jet2jet/js-synthesizer, which is a fantastic project and worked like a charm until ... well ... until it didn't.

Essentially the issue I had with it is that it doesn't allow for a nice way to share the AudioContext with standardized-audio-context (Hard requirement for me, as I have several other audio nodes that rely on that)

I want to design and use my own soundfonts.

That was super easy with jet2jet/js-synthesizer since it allows me to load the raw .sf2 file in its web assembly module approach.

I noticed with smplr, however, you seem to have adopted a standard of pre-rendered samples, without the need for web assembly parsers that can read the binary .sf2, but rather using raw .mp3 / .ogg files through data:audio/mpeg;base64, encodings.

I think that this is an acceptable trade-off, I read through how this happens for smplr and ended up landing on this soundfont_builder.rb script.

Using that script was remarkably easy, and I managed to generate my custom soundfonts in a smplr friendly format, so that's cool!

Except, it looks like smplr took the approach of getting the loop points from the process @goldst came up with through this generate-loop-data.js script. This is where I was a bit lost, to be honest. I think it would be nice to integrate that with the soundfont_builder.rb approach from @gleitz, but I'm not sure how to do it currently; I would appreciate some advice here 😅

PS: Do correct me if any of what I said is nonsense.

Looping!

I want looping, so I can have long sustained notes. Like an 8 second pad chord.

Despite not having a nice mycustomsoundfont-loop.json that smplr can automatically load with loadLoopData: true, it looks like I can still achieve looping by doing:

const pads = new Soundfont(context, {
    instrumentUrl: './pads/acoustic_grand_piano-ogg.js', // ignore the acoustic_grand_piano name, that's because soundfont_builder.rb turns program 0 into that and I didn't rename it
    loadLoopData: true, // this param is effectively a noOp since I don't have a -loop.json file
})
["C4", "E4", "G4", "A4"].forEach((note, i) => {
  pads.start({
    note,
    duration: 8,
    loop: true,
    loopStart: 0.25, // fine tuned manually
    loopEnd: 2.85, // fined tuned manually
  });
});

Interestingly enough, my actual loop point data from Polyphony was 13926 and 176400 over 44100 samples, which translates to 0.33 and 4 seconds I believe, but the samples created by soundfont_builder.rb seem to be capped at 3 seconds, so it didn't even make sense to use them. I would appreciate some advice on this as well 🙇

Another thing I noticed is that the way you load the loop points seems to assume a sample rate of 41000, but a lot of the soundfonts use 44100:

const sampleRate = 41000; // this is sample rate from the repository samples

I would assume part of the reason you're getting clicks in some instruments is due to the sample rate mismatch and potential downsampling from conversion loss during this step in soundfont_builder.rb:

https://github.com/gleitz/MIDI.js/blob/1433d3913d26c1e5f80b3fa0ab63c98584b0087d/generator/ruby/soundfont_builder.rb#L405-L407

Alright, what next

That all said, I was super happy that it worked and I could hear my custom soundfont playing a chord for a whole 8 seconds!

Finally, what I want to know is related to sometimes getting clicking or the feeling of the note being interrupted. I noticed in the docs you say "This feature is still experimental and can produces clicks on lot of instruments."

Why do you think that this is happening? How can I help or what do you recommend looking into to enable better support for looping?

I am wondering if it could also be a clock/sync issue, or if we can soften this with some attack/decay parameters in an envelope filter.

Happy to connect and chat more about this if you want too!

Error: "Relative module specifiers must start with "./", "../" or "/"."

Hi,
I'm getting this error when trying to make a simple sound:
Uncaught TypeError: The specifier "smplr" was a bare specifier, but was not remapped to anything. Relative module specifiers must start with "./", "../" or "/".
The code I run (after installing using 'npm i smplr') is like this:

<script type="module">
import { Soundfont } from "smplr";
const context = new AudioContext();
const marimba = new Soundfont(context, { instrument: "marimba" });
marimba.start({ note: 60, velocity: 80 });
</script>

onEnded has stopped working with version 0.10

Hi, anyone else having issues with onEnded no longer firing? Plays back fine, then no console.log("I ended") after the note completes.

Here's a quick functional component showing the issue:


"use client";

import { Soundfont } from "smplr";
import { useState, useRef, useEffect } from "react";

export default function MIDIPlayer() {
	const [player, setPlayer] = useState(null);
	const [isLoading, setIsLoading] = useState(true);
	const ctx = useRef();

	async function loadInstrument() {
		const newPlayer = await new Soundfont(ctx.current, {
			instrument: "marimba",
		}).load;
		setPlayer(newPlayer);
		setIsLoading(false);
	}

	useEffect(() => {
		ctx.current = new AudioContext();
		async function init() {
			await loadInstrument();
		}
		init();
	}, []);

	function handlePlay() {
		player.start({
			note: "C4",
			duration: 1,
			onEnded: () => {
				console.log("I ended!");
				// will be called after 1 second
			},
		});
	}

	if (isLoading) {
		return "loading...";
	}

	return (
		<div>
			<button onClick={handlePlay}>Click me</button>
		</div>
	);
}

Possible bug? No notes longer than about 3 seconds in duration when using Soundfont

Hi @danigb !

OK, I just ran into something strange. I was playing some long tones against a metronome click at 60 BPM, using instruments in the soundfonts list capable of a long tone. I've tried several, but right now I'm using "drawbar_organ". For some reason, nothing lasts longer than about 3.5 seconds; it cuts off there every time, with every instrument I load from the default set. I'm not sure why it won't play a note any longer than that. Is that just a limitation of those sample libraries?

Here's a basic mock-up:

const ctx = new AudioContext()
const organ = await new Soundfont(ctx, { instrument: 'drawbar_organ' }).loaded();

organ.start({note: 'C4', time: 0, duration: 5}) /// always cuts off at around 3.5 seconds

Are you getting the same behavior? Any reason why the sample won't go above that duration?

Thanks for any help!

Non-integer note values do not work with Soundfont class

Hey, ran into something interesting. If you pass a non-integer note value to Soundfont.start, no sound will play.

I did some digging, this essentially is because spreadRegions produces regions like this:

{
  midiHigh: 36
  midiLow: 36
  midiPitch: 36
  sampleName: "C2"
}

This means that any note that isn't exactly 36 cannot trigger this region (or any other, for that matter).

No region is found, so start no-ops.

The fix here, I think, would be to set midiHigh to 37 instead of 36, and then correspondingly change the following line in findSampleInRegion:

- const matchMidi = midi >= (region.midiLow ?? 0) && midi <= (region.midiHigh ?? 127);
+ const matchMidi = midi >= (region.midiLow ?? 0) && midi < (region.midiHigh ?? 128);

This way the entire number line from 36 up to (but not including) 37 gets mapped to region 36.

After that, I think the logic you already have written should take care of detuning the sample appropriately.

I would submit a PR, but I am not in a great position to test this change, so I wanted to float this as an issue first.

Does this seem reasonable, or am I missing something?
