
beads's People

Contributors

aengusm, angelofraietta, benitoelbonito, ceefour, chadawagner, cwong4311, daveho, david290, jeremydouglass, kevinnorth, orsjb, zalinius


beads's Issues

Building beads

Hi @orsjb, @cwong4311, I'm trying to build beads and running into a bit of trouble. Could you please advise? I'm happy to take the information and add it to build documentation.

  1. git clone https://github.com/orsjb/beads.git
  2. Eclipse 4.8.0 > File > Import > Gradle > Existing Gradle Project
  3. all defaults on Gradle Buildship wizard (official Buildship Gradle Integration 3.0 from Eclipse marketplace)

fails with:

Could not get unknown property 'ossrhUsername' for object of type
org.gradle.api.publication.maven.internal.deployer.DefaultGroovyMavenDeployer.

  1. edit build.gradle and remove
      repository(url: "https://oss.sonatype.org/service/local/staging/deploy/maven2/") {
        authentication(userName: ossrhUsername, password: ossrhPassword)
      }

      snapshotRepository(url: "https://oss.sonatype.org/content/repositories/snapshots/") {
        authentication(userName: ossrhUsername, password: ossrhPassword)
      }
  2. redo Import -- import succeeds, with a warning

  3. if the "Gradle Tasks" pane is not visible, add it with Window > Show View > Gradle > Gradle Tasks

  4. run the task Gradle Tasks > beads > build > build

Build fails with:

Gradle Executions > Run build > Run tasks > :signArchives > Execute generate for :signArchives

So what I'm seeing here, without digging any deeper, is that either the build tasks are fully integrated with Maven publishing (for which I would need credentials) or I am using the wrong entry point to start a local build. I just want to build and test locally without having to ask for credentials or push test builds to Maven. Is that possible?

cheers,
Jeremy
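
A possible workaround (an assumption about how this build.gradle could be restructured, not a verified fix): guard the Sonatype credentials with `hasProperty`, so contributors without accounts can still run the build task without editing the script:

```groovy
// Sketch (assumption): resolve credentials only if the gradle properties exist,
// falling back to empty strings for local builds.
def ossrhUser = project.hasProperty('ossrhUsername') ? project.property('ossrhUsername') : ''
def ossrhPass = project.hasProperty('ossrhPassword') ? project.property('ossrhPassword') : ''

repository(url: "https://oss.sonatype.org/service/local/staging/deploy/maven2/") {
    authentication(userName: ossrhUser, password: ossrhPass)
}
```

Alternatively, the signing step that fails above can be skipped on the command line with Gradle's exclude-task flag, e.g. `gradle build -x signArchives` (the task name is taken from the failure log; whether other upload tasks also need excluding is untested).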

FFT is slow compared to older release

I just tried upgrading to this newer release - thanks for the continued development! In my experience, Beads is by far the most stable & predictable audio library for Java/Processing.

I noticed that simply by switching to the new .jars, without changing any code, the FFT results update at a much slower frame rate than before. I switched back to the old .jars, and the FFT was fast again.

This is the code that I'm using to do audio analysis and feed the results into an abstracted audio data object:

https://github.com/cacheflowe/haxademic/blob/master/src/com/haxademic/core/media/audio/analysis/AudioInputBeads.java

You can see the difference by watching the waveform between these two uploaded videos. The first shows the older library running fast, and the second shows the new library running slow. The FFT bars are misleading because I'm doing my own smoothing at 60fps.

I'm happy to provide more info or help test if anybody knows what might be happening here. I'm on the latest Windows 10, Java 1.8, and the latest Processing 3.

cacheflowe-interphase-av-01.mp4.crop.0.01-10s.mp4
cacheflowe-interphase-av-03.mp4.crop.0.01-10s.mp4
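
For anyone comparing the two builds: the apparent FFT frame rate is just the sample rate divided by the segmenter's hop size, so a larger default buffer or hop in the new jars would produce exactly this kind of slowdown. A plain-Java sketch of that arithmetic (the hop values below are illustrative, not the library's actual defaults):

```java
// FftRate.java -- spectral update rate as a function of hop size.
// Hypothetical hop sizes; the real values depend on ShortFrameSegmenter's settings.
public class FftRate {
    // Number of FFT frames produced per second for a given hop (in samples).
    public static double framesPerSecond(double sampleRate, int hopSamples) {
        return sampleRate / hopSamples;
    }

    public static void main(String[] args) {
        System.out.println(framesPerSecond(44100.0, 512));  // ~86 updates/s
        System.out.println(framesPerSecond(44100.0, 4096)); // ~10.8 updates/s
    }
}
```

If the new release ships a bigger default analysis buffer, explicitly setting the segmenter's chunk and hop sizes back to the old values should restore the old behaviour.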

My old classpath contains these jars:

beads.jar
jarjar-1.0.jar
jl1.0.1.jar
org-jaudiolibs-audioservers-jack.jar
org-jaudiolibs-audioservers-javasound.jar
org-jaudiolibs-audioservers.jar
org-jaudiolibs-jnajack.jar
tools.jar
tritonus_aos-0.3.6.jar
tritonus_share.jar

And the new classpath had these before reverting:

audioservers-api-2.0.0.jar
audioservers-jack-2.0.0.jar
audioservers-javasound-2.0.0.jar
beads.jar
jlayer-1.0.1.4.jar
jlayer-1.0.1.jar
jna-5.6.0.jar
jnajack-1.4.0.jar
junit-3.8.2.jar
mp3spi-1.9.5.4.jar
tritonus-aos-1.0.0.jar
tritonus-share-0.3.7.4.jar
tritonus-share-1.0.0.jar

Output Buffer Looping Issue when using an Audio Input

Hey, I'm new to beads, so maybe I'm just doing things wrong, but googling doesn't really help in that regard, so maybe you can help me pinpoint the problem.

This issue only appears when I use the audio input (commenting out `gains[0].addInput(inputContext.getAudioInput());` is enough to "fix" the problem):

I currently have multiple gains which are fed by a microphone (the inputContext below) and, for testing, a sine wave on another gain. Whenever the microphone is part of the output chain, I can clearly hear the audio buffer looping for both the microphone and the sine wave (the bug is also there without the sine wave, so the microphone alone seems to be enough):

        input = new JavaSoundAudioIO();
        float latency = 500f;
        int bufSize = Math.round(48000f / (1000f / latency));
        outputContext = new AudioContext(bufSize);
        inputContext = new AudioContext(input, bufSize, new IOAudioFormat(48000f, 16, 1, 1, true, false));

        gains[0] = new Gain(outputContext, 1, 1f);
        gains[1] = new Gain(outputContext, 1, 1f);
        outputContext.out.addInput(gains[0]);
        outputContext.out.addInput(gains[1]);

        inputContext.start();
        outputContext.start();
        gains[0].addInput(inputContext.getAudioInput());

So I can hear the sound perfectly fine for 500 ms, then it drops out for some time, and then it starts again.
Also, I guess a buffer size of 50 ms would be appropriate for voice? (If I go as low as 5 ms, I get a flanger effect.)

Edit: After some debugging I found that this issue is always present when two AudioContexts exist. I tried using a second AudioContext to output the same sound to two devices, and got exactly the same bug. I guess this is expected? Or should one be able to use multiple AudioContexts?
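
For reference, the latency-to-buffer-size conversion in the snippet above is just milliseconds times sample rate divided by 1000. A small standalone sketch of that arithmetic (plain Java, independent of beads):

```java
// BufferMath.java -- convert a target latency in milliseconds to a buffer
// length in samples, matching the calculation in the snippet above.
public class BufferMath {
    public static int latencyToSamples(float sampleRate, float latencyMs) {
        return Math.round(sampleRate * latencyMs / 1000f);
    }

    public static void main(String[] args) {
        System.out.println(latencyToSamples(48000f, 500f)); // 24000 samples, as above
        System.out.println(latencyToSamples(48000f, 50f));  // 2400 samples (~50 ms, suggested for voice)
    }
}
```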

GranularSamplePlayer and Reverb cause no audio

Putting a GSP through a Reverb causes audio to stop processing.

Code:

AudioContext ac = new AudioContext();
String audioFileName = selection.getAbsolutePath();
SamplePlayer player = new GranularSamplePlayer(ac, SampleManager.sample(audioFileName));
Gain g = new Gain(ac, 2, 1);
g.addInput(player);
Reverb rb = new Reverb(ac);
rb.addInput(g);
ac.out.addInput(g);
ac.out.addInput(rb); // comment this out (or switch to SamplePlayer above)
ac.start();

GranularSamplePlayer clicks

Sometimes GranularSamplePlayer causes a click on first playback. I haven't managed to isolate this yet as it is occurring in a giant program and may be related to processor overload.

SampleManager cannot load .mp3

The sample example mentions that you can load an .mp3 file, but there's no mp3 class to load the file, and I couldn't load one exported from Ableton Live. Is this a missing feature or a bug?

Please add Readme.md file with project description and overview

I suggest adding a project description file to your project, so it is easy to find this project and understand what it is. I spent a lot of time googling before I found this project, which is useful to me.
Anyway, this is just a suggestion. Thanks for your work.

newbie: ac.start() --> line unsupported error

It's not making a sound, unfortunately. Any ideas for a fix?
AudioContext : no AudioIO specified, using default => net.beadsproject.beads.core.io.JavaSoundAudioIO.
JavaSoundAudioIO: Chosen mixer is Port Speakers (Conexant SmartAudio H.
Exception in thread "Thread-0" java.lang.IllegalArgumentException: Line unsupported: interface SourceDataLine supporting format PCM_SIGNED 44100.0 Hz, 16 bit, stereo, 4 bytes/frame, big-endian
at java.desktop/com.sun.media.sound.PortMixer.getLine(Unknown Source)
at net.beadsproject.beads.core.io.JavaSoundAudioIO.create(Unknown Source)
at net.beadsproject.beads.core.io.JavaSoundAudioIO$1.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
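
The "Line unsupported" error usually means the chosen mixer (here a port mixer for the speakers) cannot open a stereo 16-bit SourceDataLine. JavaSoundAudioIO has a selectMixer(int) method (used elsewhere in these issues), and the available indices can be listed with the standard javax.sound.sampled API; this sketch needs no beads jar:

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;

// ListMixers.java -- print the index and name of every installed mixer,
// so a working one can be passed to JavaSoundAudioIO.selectMixer(index).
public class ListMixers {
    public static Mixer.Info[] mixers() {
        return AudioSystem.getMixerInfo();
    }

    public static void main(String[] args) {
        Mixer.Info[] infos = mixers();
        for (int i = 0; i < infos.length; i++) {
            System.out.println(i + ": " + infos[i].getName() + " -- " + infos[i].getDescription());
        }
    }
}
```

Picking a mixer whose description mentions a direct audio device (rather than a "Port" mixer) typically avoids this exception.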

Cannot update to 3.2

The Contribution Manager in Processing 3.5.4 shows an available update for Beads to 3.2, but the update doesn't seem to take effect, and Beads still shows as 1.02.

FFT has frequent unexpected pauses using Windows audio input

I was testing some things to confirm the other issue that I submitted about FFT being slow with the newer Beads release. I found another issue that exists in both the new release and the older release of Beads. When using the computer's microphone (or any system audio input) as an input to the FFT node, the spectrum & waveform data results have very frequent momentary pauses/freezes (1-3x per second). This doesn't happen when using FFT on sounds (audio samples or oscillators) that are triggered within Beads. This might suggest that there's an issue with the incoming audio stream via ac.getAudioInput()? Other Java/Processing audio libraries that do FFT on real-time audio input don't have this problem (ESS, Minim, & Processing Sound library), so I don't think it's my crappy system sound card.

Below is a basic Processing sketch if anyone else wants to test. This is based on the example code Lesson09_Analysis.pde except the FFT is listening to the system audio input instead of an audio file output.

I'm testing with Windows 10 and both Processing 3.5.4 and 4.0 alpha 3. I've tested on 2 different Windows machines, and all of these combinations exhibit the same FFT stutters/freezes.

import beads.*;

AudioContext ac;
PowerSpectrum ps;

void setup() {
  size(300,300);
  ac = AudioContext.getDefaultContext();
  Gain g = new Gain(2, 0.1);
  g.addInput(ac.getAudioInput());   // <- use system audio input instead of ac.out
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  sfs.addInput(g);
  FFT fft = new FFT();
  ps = new PowerSpectrum();
  sfs.addListener(fft);
  fft.addListener(ps);
  ac.out.addDependent(sfs);
  ac.start();
}

color fore = color(255, 102, 204);
color back = color(0,0,0);

void draw()
{
  background(back);
  stroke(fore);
  if(ps == null) return;
  float[] features = ps.getFeatures();
  if(features != null) {
    //scan across the pixels
    for(int i = 0; i < width; i++) {
      int featureIndex = i * features.length / width;
      int vOffset = height - 1 - Math.min((int)(features[featureIndex] * height), height - 1);
      line(i,height,i,vOffset);
    }
  }
}

This is the output I get when I start the app:

Beads System Buffer size=5000
AudioContext : no AudioIO specified, using default => beads.JavaSoundAudioIO.
Beads Output buffer size=40000
CHOSEN INPUT: interface TargetDataLine supporting 8 audio formats, and buffers of at least 32 bytes, buffer size in bytes: 5000

Spatial Sound Tutorial

Hello!
First of all, thank you all for your work.

I'm interested in using your Spatial method.
I have been working with JOAL for 3D audio, but it's outdated, while your project is still maintained.

This is the code that I already tried: (Based on tutorial Lesson04)

	AudioContext ac;

	JavaSoundAudioIO jsaIO = new JavaSoundAudioIO();
	jsaIO.selectMixer(3);
	
	ac = new AudioContext(jsaIO);

	String audioFile = "audio/1234.aif";
	SamplePlayer player = new SamplePlayer(ac, SampleManager.sample(audioFile));


	Gain g = new Gain(ac, 1, 0.2f);
	g.addInput(player);
	
	Spatial spatial = new Spatial(ac, 3, 1.0f);
			
	/*
	float[] pos = {0.0f, 0.0f, 0.0f};
	float[] pos2 = {1.0f, 1.0f, 1.0f};
	float[][] sPos = {pos, pos2};
	spatial.setLocation(g, 0, pos);
	spatial.setSpeakerPositions(sPos);
	*/

	spatial.addInput(g);
	ac.out.addInput(spatial);
	ac.start();

All of the tutorials work perfectly in my Eclipse project,
but I cannot hear any sound...
I already tried changing the position of the "source" as well as the position of the "Speaker", but nothing changes.

[Edit:] I explored your Tutorial Manual, but there is no information about spatial sound.
All of the other methods work: Gain, Reverb, Panner, Compressor, etc...

Thanks in advance!
Gaspar

Question re: using with Android

More a question than anything, but is it possible to use the Beads library on an Android device? I've been trying to run the Beads example projects via Processing for Android, but none seem to run. I'm trying to determine whether this is an issue with running on an emulator or something else, and wanted to figure that out before I purchase an actual Android device. Thanks!

sonifying processing code listing 4.3.1 granular 01 getWidth() does not exist

hi there,

I can't seem to get the documented code from the Sonifying Processing book, example 'code listing 4.3.1 granular_01.pde', working.

An exception comes up on running, stating 'The function getWidth() does not exist.'

Can anybody advise?

// Sampler_Interface_01.pde

// this is a complex, mouse-driven sampler
// make sure that you understand the examples in Sampling_01, Sampling_02 and Sampling_03 before trying to tackle this

// import the java File library
// this will be used to locate the audio files that will be loaded into our sampler
import java.io.File;

import beads.*; // import the beads library

AudioContext ac; // and declare our parent AudioContext as usual

// these variables store mouse position and change in mouse position along each axis
int xChange = 0;
int yChange = 0;
int lastMouseX = 0;
int lastMouseY = 0;

int numSamples = 0; // how many samples are being loaded?
int sampleWidth = 0; // how much space will a sample take on screen? how wide will be the invisible trigger area?
String sourceFile[]; // an array that will contain our sample filenames
Gain g[]; // an array of Gains
Glide gainValue[];
Glide rateValue[];
Glide pitchValue[];
SamplePlayer sp[]; // an array of SamplePlayer

// these objects allow us to add a delay effect
TapIn delayIn;
TapOut delayOut;
Gain delayGain;

void setup()
{
  size(800, 600); // create a reasonably-sized playing field for our sampler
  
  ac = new AudioContext(); // initialize our AudioContext

  // this block of code counts the number of samples in the /samples subfolder
  File folder = new File(sketchPath("") + "samples/");
  File[] listOfFiles = folder.listFiles();
  for (int i = 0; i < listOfFiles.length; i++)
  {
      if (listOfFiles[i].isFile())
      {
        if( listOfFiles[i].getName().endsWith(".wav") )
        {
          numSamples++;
        }
      }
  }
  
  // if no samples are found, then end
  if( numSamples <= 0 )
  {
    println("no samples found in " + sketchPath("") + "samples/");
    println("exiting...");
    exit();
  }
    
  sampleWidth = (int)(this.getWidth() / (float)numSamples); // how many pixels along the x-axis will each sample occupy?
  
  // this block of code reads and stores the filename for each sample
  sourceFile = new String[numSamples];
  int count = 0;
  for (int i = 0; i < listOfFiles.length; i++)
  {
      if (listOfFiles[i].isFile())
      {
        if( listOfFiles[i].getName().endsWith(".wav") )
        {
          sourceFile[count] = listOfFiles[i].getName();
          count++;
        }
      }
  }
  
  // set the size of our arrays of unit generators in order to accommodate the number of samples that will be loaded
  g = new Gain[numSamples];
  gainValue = new Glide[numSamples];
  rateValue = new Glide[numSamples];
  pitchValue = new Glide[numSamples];
  sp = new SamplePlayer[numSamples];

  // set up our delay - this is just for taste, to fill out the texture  
  delayIn = new TapIn(ac, 2000);
  delayOut = new TapOut(ac, delayIn, 200.0);
  delayGain = new Gain(ac, 1, 0.15);
  delayGain.addInput(delayOut);
  
  ac.out.addInput(delayGain); // connect the delay to the master output

  // enclose the file-loading in a try-catch block
  try {  
    // run through each file
    for( count = 0; count < numSamples; count++ )
    {
      println("loading " + sketchPath("") + "samples/" + sourceFile[count]); // print a message to show which file we are loading
      
      // create the SamplePlayer that will run this particular file
      sp[count] = new SamplePlayer(ac, new Sample(sketchPath("") + "samples/" + sourceFile[count]));
      //sp[count].setLoopPointsFraction(0.0, 1.0);
      sp[count].setKillOnEnd(false);

      // these unit generators will control aspects of the sample player
      gainValue[count] = new Glide(ac, 0.0);
      gainValue[count].setGlideTime(20);
      g[count] = new Gain(ac, 1, gainValue[count]);
      rateValue[count] = new Glide(ac, 1);
      rateValue[count].setGlideTime(20);
      pitchValue[count] = new Glide(ac, 1);
      pitchValue[count].setGlideTime(20);

      sp[count].setRate(rateValue[count]);
      sp[count].setPitch(pitchValue[count]);
      g[count].addInput(sp[count]);

      // finally, connect this chain to the delay and to the main out    
      delayIn.addInput(g[count]);
      ac.out.addInput(g[count]);
    }
  }
  // if there is an error while loading the samples
  catch(Exception e)
  {
    // show that error in the space underneath the processing code
    println("Exception while attempting to load sample!");
    e.printStackTrace();
    exit();
  }

  ac.start(); // begin audio processing
  
  background(0); // set the background to black
  text("Move the mouse quickly to trigger playback.", 100, 100); // tell the user what to do!
  text("Faster movement will trigger more and louder sounds.", 100, 120); // tell the user what to do!
}

// the main draw function
void draw()
{
  background(0);

  // calculate the mouse speed and location
  xChange = abs(lastMouseX - mouseX);
  yChange = lastMouseY - mouseY;
  lastMouseX = mouseX;
  lastMouseY = mouseY;

  // calculate the gain of newly triggered samples
  float newGain = (abs(yChange) + xChange) / 2.0;
  newGain /= this.getWidth();
  if( newGain > 1.0 ) newGain = 1.0;

  // calculate the pitch range  
  float pitchRange = yChange / 200.0;
  
  // should we trigger the sample that the mouse is over?
  if( newGain > 0.09 )
  {
    // get the index of the sample that is coordinated with the mouse location
    int currentSampleIndex = (int)(mouseX / sampleWidth);
    if( currentSampleIndex < 0 ) currentSampleIndex = 0;
    else if( currentSampleIndex >= numSamples ) currentSampleIndex = numSamples - 1;
    
    // trigger that sample
    // if the mouse is moving upwards, then play it in reverse
    triggerSample(currentSampleIndex, (boolean)(yChange < 0), newGain, pitchRange);
  }
  
  // randomly trigger other samples, based loosely on the mouse speed
  // loop through each sample
  for( int currentSample = 0; currentSample < numSamples; currentSample++ )
  {
    // if a random number is less than the current gain
    if( random(1.0) < (newGain / 2.0) )
    {
      // trigger that sample
      triggerSample(currentSample, (boolean)(yChange < 0 && random(1.0) < 0.33), newGain, pitchRange);
    }
  }
  


}

// trigger a sample
void triggerSample(int index, boolean reverse, float newGain, float pitchRange)
{
  if( index >= 0 && index < numSamples )
  {
    println("triggering sample " + index); // show a message that indicates which sample we are triggering
  
    gainValue[index].setValue(newGain); // set the gain value
    pitchValue[index].setValue(random(1.0-pitchRange, 1.0+pitchRange)); // and set the pitch value (which is really just another rate controller)
  
    // if we should play the sample in reverse
    if( reverse )
    {
      if( !sp[index].inLoop() )
      {
        rateValue[index].setValue(-1.0);
        sp[index].setToEnd();
      }
    }
    else // if we should play the sample forwards
    {
      if( !sp[index].inLoop() )
      {
        rateValue[index].setValue(1.0);
        sp[index].setToLoopStart();
      }
    }

    sp[index].start();
  }
}
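
For anyone hitting the same error: Processing 3 sketches expose the canvas size through the built-in `width` and `height` variables rather than a `getWidth()` method (PApplet no longer extends the AWT Applet class), so the two `this.getWidth()` call sites in the listing should read `width` instead. The two calculations, rewritten as a standalone class against a plain width value (the `width` parameter here stands in for Processing's built-in variable):

```java
// SketchMath.java -- the two getWidth() call sites from the listing,
// rewritten against an explicit width value (hypothetical stand-in for
// Processing's built-in `width` variable).
public class SketchMath {
    // How many pixels along the x-axis each sample occupies.
    public static int sampleWidth(int width, int numSamples) {
        return (int)(width / (float)numSamples);
    }

    // Gain of newly triggered samples, scaled by canvas width and clamped to 1.
    public static float newGain(int xChange, int yChange, int width) {
        float g = (Math.abs(yChange) + xChange) / 2.0f;
        g /= width;
        return Math.min(g, 1.0f);
    }

    public static void main(String[] args) {
        System.out.println(sampleWidth(800, 5));  // 160 px per sample on an 800 px canvas
        System.out.println(newGain(0, 80, 800));  // 0.05
    }
}
```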

How to switch audio inputs/outputs on OS X

Trying to figure out how to specify the audio system and then (hopefully) the specific audio inputs/outputs to use...

OSX 10.13
jackd 0.125.0

AudioContext : no AudioIO specified, using default => net.beadsproject.beads.core.io.JavaSoundAudioIO.
JavaSoundAudioIO: Chosen mixer is Default Audio Device.
CHOSEN INPUT: interface TargetDataLine supporting 14 audio formats, and buffers of at least 32 bytes, buffer size in bytes: 5000

examples - wrong index into pixel array

In the examples/lessons, pixels[vOffset * height + i] = fore; only works by accident (as long as height == width); it should be pixels[vOffset * width + i] = fore;
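
To make the fix concrete: the pixel buffer is row-major, so the pixel at column i, row v of a w-wide canvas lives at index v * w + i; using height as the stride only coincides with the correct index when the canvas is square. A minimal illustration:

```java
// PixelIndex.java -- row-major index arithmetic for a w x h pixel buffer.
public class PixelIndex {
    // Index of the pixel at (column, row) in a buffer that is `width` pixels wide.
    public static int index(int column, int row, int width) {
        return row * width + column;
    }

    public static void main(String[] args) {
        int w = 4, h = 3;
        // Correct: row 2, column 1 of a 4-wide buffer.
        System.out.println(index(1, 2, w)); // 9
        // The buggy form row * height + column gives a different pixel here.
        System.out.println(2 * h + 1);      // 7
    }
}
```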

Granulating a stream from Audio Input

I'm working with the example from the Processing tutorial called '7.3. Granulating from Audio Input (Granulating_Input_01)'. I'm wondering if it's possible to granulate the entire audio stream (from the mic) rather than just a small recorded bit?

BiquadFilter, setQ and setGain methods assign incorrect initial values

In BiquadFilter.java, Q and Gain are assigned incorrect initial values.

q = freqUGen.getValue();
should be
q = qUGen.getValue();

and

gain = freqUGen.getValue();
should be
gain = gainUGen.getValue();

See method:

public BiquadFilter setQ(UGen nqval) {
    if (nqval == null) {
        setQ(q);
    } else {
        qUGen = nqval;
        qUGen.update();
        q = freqUGen.getValue();
        isQStatic = false;
        areAllStatic = false;
    }
    vc.calcVals();
    return this;
}

And this method:

public BiquadFilter setGain(UGen ngain) {
    if (ngain == null) {
        setGain(gain);
    } else {
        gainUGen = ngain;
        gainUGen.update();
        gain = freqUGen.getValue();
        isGainStatic = false;
        areAllStatic = false;
    }
    vc.calcVals();
    return this;
}

Proper way to deal with external events reflecting in changes to UGen parameters from concurrency standpoint

I am playing a bit with the library and I'm at the stage I want to wire up external events (callbacks happening in a thread not known to beads) to influence the state of my UGens (e.g. set the freq on a square wave) while the audio context is running.

Is there any suggested approach to do that safely from a concurrency standpoint?

Simply accessing UGens (e.g. ugen.setFrequency(freq)) from another thread is a concurrency error: from what I could see, all the UGens I've inspected are oblivious to concurrency (understandably), and I think there is an underlying assumption that they are used in a thread-confined fashion.

An idea I have is to set up a "transparent" UGen (output equals input at all times) somewhere in my network (either first or last) and equip it with a thread-safe queue of tasks (i.e. Runnables) to execute.
I would then enqueue from the external threads, and the queue would be drained (with every task executed immediately) on every calculateBuffer() call; the kind of tasks I have in mind mostly set parameters on UGens (certainly nothing blocking). In short, it hijacks calculateBuffer() for the sake of concurrency safety.

Is there a better way?
I might be missing something big, BTW...
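
The queue-draining idea described above can be written with a ConcurrentLinkedQueue. This is a standalone illustration of the pattern only; the hook into calculateBuffer() and the UGen subclassing are assumptions, so just the queue mechanics are shown:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// AudioTaskQueue.java -- thread-safe handoff of parameter changes to the audio thread.
// External threads call enqueue(); the audio thread calls drain() once per buffer
// (e.g. at the top of a UGen's calculateBuffer() override -- an assumption here).
public class AudioTaskQueue {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();

    // Safe to call from any thread.
    public void enqueue(Runnable task) {
        tasks.add(task);
    }

    // Runs every pending task on the calling (audio) thread; returns the count.
    public int drain() {
        int executed = 0;
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run();
            executed++;
        }
        return executed;
    }

    public static void main(String[] args) {
        AudioTaskQueue q = new AudioTaskQueue();
        final float[] freq = {220f};
        q.enqueue(() -> freq[0] = 440f); // e.g. a deferred ugen.setFrequency(440f)
        System.out.println(q.drain());   // 1 task executed
        System.out.println(freq[0]);     // 440.0
    }
}
```

Because drain() runs the tasks on the audio thread itself, the UGens stay thread-confined and need no locking; the only shared structure is the lock-free queue.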
