
audioMotion-analyzer

About

audioMotion-analyzer is a high-resolution real-time audio spectrum analyzer built upon Web Audio and Canvas JavaScript APIs.

It was originally conceived as part of my full-featured media player called audioMotion, but I later decided to make the spectrum analyzer available as a self-contained module, so other developers could use it in their own JS projects.

My goal is to make this the best looking, most accurate and customizable spectrum analyzer around, in a small-footprint and high-performance package.

What users are saying:

I still, to this day, haven't found anything close to audioMotion in terms of beauty. — Weakky@github

I've been visualizing input with FFT with p5.js for a while, but got sick of how much code was needed.
This looks way better and works better too.
— Staijn1@github

It works amazing! The spectrum is so easy readable even for complex sound. — davay42@github

Features

  • Dual-channel high-resolution real-time audio spectrum analyzer
  • Logarithmic, linear and perceptual (Bark and Mel) frequency scales, with customizable range
  • Visualization of discrete FFT frequencies or up to 240 frequency bands (supports ANSI and equal-tempered octave bands)
  • Decibel and linear amplitude scales, with customizable sensitivity
  • Optional A, B, C, D and ITU-R 468 weighting filters
  • Additional effects: LED bars, luminance bars, mirroring and reflection, radial spectrum
  • Choose from 5 built-in color gradients or easily add your own!
  • Fullscreen support, ready for retina / HiDPI displays
  • Zero-dependency native ES6+ module (ESM), ~30kB minified

Online demos


?> https://audiomotion.dev/demo/

Live code examples

Usage

Node.js project

Install via npm:

npm i audiomotion-analyzer

Use ES6 import:

import AudioMotionAnalyzer from 'audiomotion-analyzer';

As a native ES module, the package can't be loaded with a plain CommonJS require(). In CommonJS code, use a dynamic import() instead:

import('audiomotion-analyzer').then( ( { default: AudioMotionAnalyzer } ) => {
  // your code here
} );

In the browser using native ES6 module (ESM)

Load from Skypack CDN:

<script type="module">
  import AudioMotionAnalyzer from 'https://cdn.skypack.dev/audiomotion-analyzer?min';
  // your code here
</script>

Or download the latest version and copy the audioMotion-analyzer.js file from the src/ folder into your project folder.

In the browser using global variable

Since the package ships as an ES module, load it with a module script and expose it on window if you need a global (this example assumes Unpkg's ?module resolution):

<script type="module">
  import AudioMotionAnalyzer from 'https://unpkg.com/audiomotion-analyzer?module';
  window.AudioMotionAnalyzer = AudioMotionAnalyzer; // now available as a global
</script>

Constructor

new AudioMotionAnalyzer()
new AudioMotionAnalyzer( container )
new AudioMotionAnalyzer( container, {options} )
new AudioMotionAnalyzer( {options} )

Creates a new instance of audioMotion-analyzer.

container is the DOM element into which the canvas created for the analyzer should be inserted.

If not defined, defaults to document.body, unless canvas is defined in the options, in which case its parent element will be considered the container.

options must be an Options object.

Usage example:

const audioMotion = new AudioMotionAnalyzer(
	document.getElementById('container'),
	{
		source: document.getElementById('audio')
	}
);

This will insert the analyzer canvas inside the #container element and start the visualization of audio coming from the #audio element.

?> By default, audioMotion will try to use all available container space for the canvas. To prevent it from growing indefinitely, you must either constrain the dimensions of the container via CSS or explicitly define height and/or width properties in the constructor options.

Options object

Valid properties and default values are shown below.

Properties marked as constructor only can only be set in the constructor call; the others can also be set at any time, either via the setOptions() method or directly as properties of the audioMotion instance.

options = {
  alphaBars: false,
  ansiBands: false,
  audioCtx: undefined, // constructor only
  barSpace: 0.1,
  bgAlpha: 0.7,
  canvas: undefined, // constructor only
  channelLayout: 'single',
  colorMode: 'gradient',
  connectSpeakers: true, // constructor only
  fadePeaks: false,
  fftSize: 8192,
  fillAlpha: 1,
  frequencyScale: 'log',
  fsElement: undefined, // constructor only
  gradient: 'classic',
  gradientLeft: undefined,
  gradientRight: undefined,
  gravity: 3.8,
  height: undefined,
  ledBars: false,
  linearAmplitude: false,
  linearBoost: 1,
  lineWidth: 0,
  loRes: false,
  lumiBars: false,
  maxDecibels: -25,
  maxFPS: 0,
  maxFreq: 22000,
  minDecibels: -85,
  minFreq: 20,
  mirror: 0,
  mode: 0,
  noteLabels: false,
  onCanvasDraw: undefined,
  onCanvasResize: undefined,
  outlineBars: false,
  overlay: false,
  peakFadeTime: 750,
  peakHoldTime: 500,
  peakLine: false,
  radial: false,
  radialInvert: false,
  radius: 0.3,
  reflexAlpha: 0.15,
  reflexBright: 1,
  reflexFit: true,
  reflexRatio: 0,
  roundBars: false,
  showBgColor: true,
  showFPS: false,
  showPeaks: true,
  showScaleX: true,
  showScaleY: false,
  smoothing: 0.5,
  source: undefined, // constructor only
  spinSpeed: 0,
  splitGradient: false,
  start: true, // constructor only
  trueLeds: false,
  useCanvas: true,
  volume: 1,
  weightingFilter: '',
  width: undefined
}

Constructor-specific options

audioCtx AudioContext object

Available since v2.0.0

Allows you to provide an external AudioContext for audioMotion-analyzer, for connection with other Web Audio nodes or sound-processing modules.

Since version 3.2.0, audioCtx will be automatically inferred from the source property if that's an AudioNode.

If neither is defined, a new audio context will be created. After instantiation, audioCtx will be available as a read-only property.

See this live code and the multi-instance demo for more usage examples.

canvas HTMLCanvasElement object

Available since v4.4.0

Allows you to provide an existing Canvas where audioMotion should render its visualizations.

If not defined, a new canvas will be created. After instantiation, you can obtain its reference from the canvas read-only property.

connectSpeakers boolean

Available since v3.2.0

Whether or not to connect the analyzer output to the speakers (technically, the AudioContext destination node).

Some scenarios where you may want to set this to false:

  1. when running multiple instances of audioMotion-analyzer sharing the same audio input (see the multi demo), only one of them needs to be connected to the speakers, otherwise the volume will be amplified due to multiple outputs;
  2. when audio input comes from the microphone and you're not using headphones, to prevent a feedback loop from the speakers;
  3. when you're using audioMotion-analyzer with an audio player which already outputs sound to the speakers (same reason as 1).

After instantiation, use connectOutput() and disconnectOutput() to connect or disconnect the output from the speakers (or other nodes).

See also connectedTo.

Defaults to true.

fsElement HTMLElement object

Available since v3.4.0

HTML element affected by the toggleFullscreen() method.

If not defined, defaults to the canvas. Set it to a container <div> to keep additional interface elements available in fullscreen mode.

See the overlay demo or this pen for usage examples.

After instantiation, fsElement is available as a read-only property.

source HTMLMediaElement or AudioNode object

If source is specified, connects an HTMLMediaElement (<audio> or <video> HTML element) or AudioNode object to the analyzer.

At least one audio source is required for the analyzer to work. You can also connect audio sources after instantiation, using the connectInput() method.

start boolean

If start: false is specified, the analyzer will be created in a stopped state. You can then start it with the start() or toggleAnalyzer() methods.

Defaults to true, so the analyzer will start running right after initialization.

Properties

alphaBars boolean

Available since v3.6.0

When set to true each bar's amplitude affects its opacity, i.e., higher bars are rendered more opaque while shorter bars are more transparent.

This is similar to the lumiBars effect, but bars' amplitudes are preserved and it also works on Discrete mode and radial spectrum.

For effect priority when combined with other settings, see isAlphaBars.

Defaults to false.

!> See related known issue

ansiBands boolean

Available since v4.0.0

When set to true, ANSI/IEC preferred frequencies are used to generate the bands for octave bands modes (see mode). The preferred base-10 scale is used to compute the center and bandedge frequencies, as specified in the ANSI S1.11-2004 standard.

When false, bands are based on the equal-tempered scale, so that in 1/12 octave bands the center of each band is perfectly tuned to a musical note.

ansiBands  band center standard
false      Equal temperament (A-440 Hz)
true       ANSI S1.11-2004 preferred base-10 scale

Defaults to false.
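To make the distinction concrete, here is a small sketch of how band center frequencies are derived under each scheme (hypothetical helper functions for illustration, not the library's internal code):

```javascript
// Equal-tempered centers: semitone steps relative to A-440, so every
// 1/12-octave band lands exactly on a musical note.
const equalTemperedCenter = semitonesFromA440 =>
  440 * 2 ** ( semitonesFromA440 / 12 );

// ANSI S1.11-2004 preferred base-10 scale: the octave ratio is
// G = 10^(3/10) ~ 1.9953 (not exactly 2), referenced to 1 kHz.
const G10 = 10 ** 0.3;
const ansiCenter = ( bandsFromRef, bandsPerOctave ) =>
  1000 * G10 ** ( bandsFromRef / bandsPerOctave );
```

With ansiCenter, ten 1/10-octave steps above 1 kHz give 1995.26 Hz rather than an exact 2 kHz — the hallmark of the base-10 scale.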

audioCtx AudioContext object (Read only)

AudioContext used by audioMotion-analyzer.

Use this object to create additional audio sources to be connected to the analyzer, like oscillator nodes, gain nodes and media streams.

The code fragment below creates an oscillator and a gain node using audioMotion's AudioContext, and then connects them to the analyzer:

const audioMotion = new AudioMotionAnalyzer( document.getElementById('container') ),
      audioCtx    = audioMotion.audioCtx,
      oscillator  = audioCtx.createOscillator(),
      gainNode    = audioCtx.createGain();

oscillator.frequency.value = 440; // set 440Hz frequency
oscillator.connect( gainNode ); // connect oscillator -> gainNode

gainNode.gain.value = .5; // set volume to 50%
audioMotion.connectInput( gainNode ); // connect gainNode -> audioMotion

oscillator.start(); // play tone

You can provide your own AudioContext via the audioCtx property in the constructor options.

See also the fluid demo and the multi-instance demo for more usage examples.

barSpace number

Available since v2.0.0

Customize the spacing between bars in frequency bands modes (see mode).

Use a value between 0 and 1 for spacing proportional to the band width. Values >= 1 are treated as a literal number of pixels.

For example, barSpace = 0.5 will use half the width available to each band for spacing and half for the bar itself. On the other hand, barSpace = 2 will set a fixed spacing of 2 pixels, independent of the width of bars. Prefer proportional spacing to obtain consistent results among different resolutions and screen sizes.

barSpace = 0 will effectively show contiguous bars, except when ledBars is true, in which case a minimum spacing is enforced (this can be customized via setLedParams() method).

Defaults to 0.1.
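The rule above can be summarized in a tiny helper (hypothetical, for illustration only):

```javascript
// barSpace < 1: fraction of the band width; barSpace >= 1: literal pixels
function computeBarSpacing( bandWidth, barSpace ) {
  return barSpace >= 1 ? barSpace : bandWidth * barSpace;
}
```

For a 20px-wide band, computeBarSpacing(20, 0.5) yields 10px of spacing, while computeBarSpacing(20, 2) always yields 2px regardless of band width.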

bgAlpha number

Available since v2.2.0

Controls the opacity of the background, when overlay and showBgColor are both set to true.

It must be a number between 0 (completely transparent) and 1 (completely opaque).

Defaults to 0.7.

canvas HTMLCanvasElement object (Read only)

Canvas element where audioMotion renders its visualizations.

See also the canvas constructor option.

canvasCtx CanvasRenderingContext2D object (Read only)

2D rendering context used for drawing in audioMotion's canvas.

channelLayout string

Available since v4.0.0

Defines the number and layout of analyzer channels.

channelLayout     description
'single'          Single channel analyzer, representing the combined output of both left and right channels.
'dual-combined'   Dual channel analyzer with both channels overlaid. Works best with a semi-transparent Graph mode or outlineBars.
'dual-horizontal' Dual channel, side by side - see mirror for additional layout options. (since v4.3.0)
'dual-vertical'   Dual channel, left channel at the top half of the canvas and right channel at the bottom.

!> When a dual layout is selected, any mono (single channel) audio source connected to the analyzer will output sound only from the left speaker, unless a stereo source is simultaneously connected to the analyzer, which will force the mono input to be upmixed to stereo.

See also gradientLeft, gradientRight and splitGradient.

colorMode string

Available since v4.1.0

Selects the desired mode for coloring the analyzer bars. This property has no effect in Graph mode.

colorMode    description
'gradient'   Analyzer bars are painted with the currently selected gradient. This is the default behavior.
'bar-index'  Each analyzer bar is painted with a single color from the selected gradient's colorStops, starting with the first color applied to the first bar, cycling through the available colorStops.
'bar-level'  Colors from the selected gradient are used to paint each bar, according to its current level (amplitude).

See also registerGradient().

Defaults to 'gradient'.

connectedSources array (Read only)

Available since v3.0.0

An array of AudioNode objects connected to the analyzer input via the source constructor option, or by using the connectInput() method.

connectedTo array (Read only)

Available since v3.2.0

An array of AudioNode objects to which the analyzer output is connected.

By default, audioMotion-analyzer is connected to the AudioContext destination node (the speakers) upon instantiation, unless you set connectSpeakers: false in the constructor options.

See also connectOutput().

fadePeaks boolean

Available since v4.5.0

When true, peaks fade out instead of falling down. It has no effect when peakLine is active.

Fade time can be customized via peakFadeTime.

See also peakHoldTime and showPeaks.

Defaults to false.

fftSize number

Number of samples used for the FFT performed by the AnalyzerNode. It must be a power of 2 between 32 and 32768, so valid values are: 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, and 32768.

Higher values provide more detail in the frequency domain, but less detail in the time domain (slower response), so you may need to adjust smoothing accordingly.

Defaults to 8192.
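The trade-off is easy to quantify: each FFT bin covers sampleRate / fftSize hertz. A quick sketch (plain math, not library code):

```javascript
// width of one FFT bin, in Hz
const binWidth = ( sampleRate, fftSize ) => sampleRate / fftSize;

// center frequency represented by a given bin index
const binFrequency = ( bin, sampleRate, fftSize ) =>
  bin * sampleRate / fftSize;
```

At a 48 kHz sample rate, fftSize 8192 yields bins about 5.86 Hz wide — fine enough to separate adjacent bass notes, at the cost of a slower time response.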

fillAlpha number

Available since v2.0.0

Opacity of the area fill in Graph mode, or inner fill of bars in frequency bands modes when outlineBars is true.

It must be a number between 0 (completely transparent) and 1 (completely opaque).

Please note that the line stroke (when lineWidth > 0) is always drawn at full opacity, regardless of the fillAlpha value.

Also, for frequency bands modes, alphaBars set to true takes precedence over fillAlpha.

Defaults to 1.

!> See related known issue

fps number (Read only)

Current frame rate.

frequencyScale string

Available since v4.0.0

Scale used to represent frequencies in the horizontal axis.

frequencyScale  description
'bark'          Bark scale
'linear'        Linear scale
'log'           Logarithmic scale
'mel'           Mel scale

Logarithmic scale allows visualization of proper octave bands (see mode) and it's also recommended when using noteLabels.

Bark and Mel are perceptual pitch scales, which may provide better visualization of mid-range frequencies, when compared to log or linear scales.

Defaults to 'log'.
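For reference, these scales are commonly defined by the formulas below (an illustrative sketch using the standard Traunmüller Bark approximation and the usual Mel formula — the library's exact mapping may differ in implementation details):

```javascript
// position functions for each scale (Hz in, unnormalized position out)
const scalePos = {
  linear: f => f,
  log:    f => Math.log2( f ),
  bark:   f => 26.81 * f / ( 1960 + f ) - 0.53, // Traunmüller approximation
  mel:    f => 2595 * Math.log10( 1 + f / 700 )
};

// normalized 0..1 horizontal position between minFreq and maxFreq
function xPosition( f, minFreq, maxFreq, scale = 'log' ) {
  const fn = scalePos[ scale ];
  return ( fn( f ) - fn( minFreq ) ) / ( fn( maxFreq ) - fn( minFreq ) );
}
```

On the log scale, 1 kHz sits near the middle of a 20 Hz - 20 kHz range; on the linear scale the same 1 kHz sits at only about 5% of the width, which is why linear scale devotes most of the canvas to high frequencies.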

fsElement HTMLElement object (Read only)

Available since v3.4.0

HTML element affected by the toggleFullscreen() method.

See fsElement in the constructor options context for more information.

fsHeight number (Read only)

fsWidth number (Read only)

Canvas dimensions used during fullscreen mode. These take the current pixel ratio into account and will change accordingly when low-resolution mode is set.

gradient string

Name of the color gradient used for analyzer graphs.

It must be a built-in or registered gradient name (see registerGradient()).

gradient sets the gradient for both analyzer channels, but its read value represents only the gradient on the left (or single) channel.

When using a dual channelLayout, use gradientLeft and gradientRight to set/read the gradient on each channel individually.

The built-in gradients are: 'classic', 'orangered', 'prism', 'rainbow' and 'steelblue'.

See also splitGradient.

Defaults to 'classic'.

gradientLeft string

gradientRight string

Available since v4.0.0

Select gradients for the left and right analyzer channels independently, for use with a dual channelLayout.

Single channel layout will use the gradient selected by gradientLeft.

For dual-combined channel layout or radial spectrum, only the background color defined by gradientLeft will be applied when showBgColor is true.

See also gradient and splitGradient.

gravity number

Available since v4.5.0

Customize the acceleration of falling peaks.

It must be a number greater than zero, representing thousands of pixels per second squared. Invalid values are ignored and no error is thrown.

With the default value and analyzer height of 1080px, a peak at maximum amplitude takes approximately 750ms to fall to zero.

You can use the peak drop analysis tool to see the decay curve for different values of gravity.

See also peakHoldTime and showPeaks.

Defaults to 3.8.
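The quoted ~750ms follows from simple constant-acceleration kinematics; here is the back-of-envelope check (illustrative math, not library code):

```javascript
// time (ms) for a peak to fall `heightPx` pixels under constant
// acceleration of gravity * 1000 px/s^2, from t = sqrt( 2h / a )
function peakFallTimeMs( heightPx, gravity = 3.8 ) {
  const accel = gravity * 1000; // px / s^2
  return Math.sqrt( 2 * heightPx / accel ) * 1000;
}
```

peakFallTimeMs(1080) evaluates to roughly 754ms, matching the figure above; doubling gravity shortens the fall time by a factor of √2.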

height number

width number

Nominal dimensions of the analyzer.

Setting one or both properties to undefined (default) will trigger the fluid/responsive behavior and the analyzer will try to adjust to the container's height and/or width. In that case, it's important that you constrain the dimensions of the container via CSS to prevent the canvas from growing indefinitely.

You can set both values at once using the setCanvasSize() method.

See also onCanvasResize.

?> The actual dimensions of the canvas may differ from these values, depending on the device's pixelRatio, the loRes setting and while in fullscreen. For the actual pixel values, read height and width directly from the canvas object.

isAlphaBars boolean (Read only)

Available since v3.6.0

true when alpha bars are effectively being displayed, i.e., alphaBars is set to true and mode is set to discrete frequencies or one of the frequency bands modes, in which case lumiBars must be set to false or radial must be set to true.

isBandsMode boolean (Read only)

Available since v4.0.0

true when mode is set to one of the bands modes (modes 1 to 8).

See also isOctaveBands.

isDestroyed boolean (Read only)

Available since v4.2.0

true when the object has been destroyed with destroy().

isFullscreen boolean (Read only)

true when the analyzer is being displayed in fullscreen, or false otherwise.

See toggleFullscreen().

isLedBars boolean (Read only)

Available since v3.6.0; formerly isLedDisplay (since v3.0.0)

true when LED bars are effectively being displayed, i.e., isBandsMode is true, ledBars is set to true and radial is set to false.

isLumiBars boolean (Read only)

Available since v3.0.0

true when luminance bars are effectively being displayed, i.e., isBandsMode is true, lumiBars is set to true and radial is set to false.

isOctaveBands boolean (Read only)

Available since v3.0.0

true when isBandsMode is true and frequencyScale is set to 'log'.

isOn boolean (Read only)

true if the analyzer process is running, or false if it's stopped.

See start(), stop() and toggleAnalyzer().

isOutlineBars boolean (Read only)

Available since v3.6.0

true when outlined bars are effectively being displayed, i.e., isBandsMode is true, outlineBars is set to true and both ledBars and lumiBars are set to false, or radial is set to true.

isRoundBars boolean (Read only)

Available since v4.1.0

true when round bars are effectively being displayed, i.e., isBandsMode is true, roundBars is set to true and ledBars and lumiBars are both set to false.

ledBars boolean

Available since v3.6.0; formerly showLeds (since v1.0.0)

true to activate the LED bars effect for frequency bands modes (see mode).

This effect can be customized via setLedParams() method.

For effect priority when combined with other settings, see isLedBars.

See also trueLeds.

Defaults to false.

linearAmplitude boolean

Available since v4.0.0

When set to true, spectrum amplitudes are represented in linear scale instead of decibels (logarithmic).

This may improve the visualization of predominant tones, especially at higher frequencies, but it will make the entire spectrum look much quieter.

See also linearBoost.

Defaults to false.

linearBoost number

Available since v4.0.0

Performs an nth-root operation to amplify low energy values when using linear scale for the amplitude.

It should be a number >= 1, while 1 means no boosting. Only effective when linearAmplitude is set to true.

Defaults to 1.
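The relationship between decibel values, linear amplitude and the boost can be sketched as follows (assumed formulas for illustration, not the library's internal code):

```javascript
// decibels to linear amplitude
const dbToLinear = db => 10 ** ( db / 20 );

// nth-root boost: linearBoost = 1 means no change
const applyBoost = ( linearValue, linearBoost = 1 ) =>
  linearValue ** ( 1 / linearBoost );
```

A -40 dB signal is only 0.01 in linear amplitude — nearly invisible on screen — but a linearBoost of 2 (square root) lifts it to 0.1.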

lineWidth number

Available since v2.0.0

Line width for Graph mode, or outline stroke in frequency bands modes when outlineBars is true.

For the line to be distinguishable, set also fillAlpha < 1.

Defaults to 0.

loRes boolean

true for low resolution mode. Defaults to false.

Low resolution mode halves the effective pixel ratio, resulting in four times fewer pixels to render. This may improve performance significantly, especially on 4K+ monitors.

?> If you want to allow users to interactively toggle low resolution mode, you may need to set a fixed size for the canvas via CSS, like so:

canvas {
    display: block;
    width: 100%;
}

This will prevent the canvas size from changing, when switching the low resolution mode on and off.

lumiBars boolean

Available since v1.1.0

This is only effective for frequency bands modes (see mode).

When set to true all analyzer bars will be displayed at full height with varying luminance (opacity, actually) instead.

lumiBars takes precedence over alphaBars and outlineBars, except on radial spectrum.

For effect priority when combined with other settings, see isLumiBars.

Defaults to false.

maxDecibels number

minDecibels number

Highest and lowest decibel values represented in the Y-axis of the analyzer. The loudest volume possible is 0.

You can set both values at once using the setSensitivity() method.

For more info, see AnalyserNode.minDecibels.

minDecibels defaults to -85 and maxDecibels defaults to -25.
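These two values define the sensitivity window. A hypothetical helper showing how a dB level maps onto a normalized bar height, mirroring the clamping that AnalyserNode applies when scaling frequency data:

```javascript
// map a dB level into the 0..1 sensitivity window, clamping at both ends
function normalizeDb( db, minDecibels = -85, maxDecibels = -25 ) {
  const n = ( db - minDecibels ) / ( maxDecibels - minDecibels );
  return Math.min( 1, Math.max( 0, n ) );
}
```

Anything at or below -85 dB renders as zero height; anything at or above -25 dB is a full bar.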

maxFPS number

Available since v4.2.0

Sets the maximum desired animation frame rate. This can help reduce CPU usage, especially on high refresh rate monitors.

It must be a number, indicating frames per second. A value of 0 means the animation will run at the highest frame rate possible.

Defaults to 0.

maxFreq number

minFreq number

Highest and lowest frequencies represented in the X-axis of the analyzer. Values in Hertz.

The minimum allowed value is 1. Trying to set a lower value will throw an ERR_FREQUENCY_TOO_LOW error.

The maximum allowed value is half the sampling rate (audioCtx.sampleRate), known as the Nyquist frequency. Values higher than that will be capped.

It is preferable to use the setFreqRange() method and set both values at once, to prevent minFreq being higher than the current maxFreq or vice-versa at a given moment.

minFreq defaults to 20 and maxFreq defaults to 22000.
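The validation rules above can be sketched like this (a hypothetical helper; the library performs equivalent checks internally via setFreqRange()):

```javascript
function validateFreqRange( minFreq, maxFreq, sampleRate = 44100 ) {
  if ( minFreq < 1 || maxFreq < 1 )
    throw new Error('ERR_FREQUENCY_TOO_LOW');
  const nyquist = sampleRate / 2; // highest representable frequency
  return [ minFreq, Math.min( maxFreq, nyquist ) ];
}
```

At a 44.1 kHz sample rate, asking for maxFreq = 24000 silently caps it to the 22050 Hz Nyquist limit.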

mirror number

Available since v3.3.0

When channelLayout is dual-horizontal, this property controls the orientation of the X-axis (frequencies) on both channels.

For other layouts, it horizontally mirrors the spectrum image to the left or right side.

Valid values are:

mirror Description
-1 Low frequencies meet at the center of the screen (mirror left)
0 No mirror effect or change to axis orientation (default)
1 High frequencies meet at the center of the screen (mirror right)

Note: On radial spectrum with channel layouts other than dual-horizontal, both 1 and -1 have the same effect.

Defaults to 0.

mode number

Visualization mode.

mode description notes
0 Discrete frequencies default
1 1/24th octave bands or 240 bands use 'log' frequencyScale for octave bands
2 1/12th octave bands or 120 bands use 'log' frequencyScale for octave bands
3 1/8th octave bands or 80 bands use 'log' frequencyScale for octave bands
4 1/6th octave bands or 60 bands use 'log' frequencyScale for octave bands
5 1/4th octave bands or 40 bands use 'log' frequencyScale for octave bands
6 1/3rd octave bands or 30 bands use 'log' frequencyScale for octave bands
7 Half octave bands or 20 bands use 'log' frequencyScale for octave bands
8 Full octave bands or 10 bands use 'log' frequencyScale for octave bands
9 (not valid) reserved
10 Graph since v1.1.0
  • Mode 0 provides the highest resolution, allowing you to visualize individual frequencies as provided by the FFT computation;
  • Modes 1 - 8 divide the frequency spectrum in bands; when using the default logarithmic frequencyScale, each band represents the nth part of an octave; otherwise, a fixed number of bands is used for each mode;
  • Mode 10 uses the discrete FFT data points to draw a continuous line and/or a filled area graph (see fillAlpha and lineWidth properties).

See also ansiBands.

Defaults to 0.
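The band counts in the table follow a simple pattern — ten full octaves across the spectrum, subdivided per mode. A sketch (hypothetical lookup, not library code):

```javascript
// bands per octave for each frequency bands mode (1-8)
const BANDS_PER_OCTAVE = { 1: 24, 2: 12, 3: 8, 4: 6, 5: 4, 6: 3, 7: 2, 8: 1 };

// fixed band count used for non-log frequency scales: 10 octaves' worth
const fixedBandCount = mode => 10 * BANDS_PER_OCTAVE[ mode ];
```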

noteLabels boolean

Available since v4.0.0

When set to true, displays musical note labels instead of frequency values on the X axis (when showScaleX is also set to true).

For best visualization in octave bands modes, make sure frequencyScale is set to 'log' and ansiBands is set to false, so bands are tuned to the equal temperament musical scale.

Defaults to false.
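Mapping a frequency to its nearest note name is standard equal-temperament math; a sketch of the idea (not the library's code):

```javascript
// note names, starting from A (index 0 = A-440's pitch class)
const NOTES = [ 'A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#' ];

// nearest note name for a frequency, by rounded semitones from A-440
function noteName( freq ) {
  const semitones = Math.round( 12 * Math.log2( freq / 440 ) );
  return NOTES[ ( ( semitones % 12 ) + 12 ) % 12 ];
}
```

noteName(261.63) yields 'C' (middle C); labels like this only line up with the analyzer bands when the bands themselves follow the equal-tempered scale, hence the recommendation above.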

outlineBars boolean

Available since v3.6.0

When true and mode is set to one of the bands modes, analyzer bars are rendered outlined, with customizable fillAlpha and lineWidth.

For effect priority when combined with other settings, see isOutlineBars.

Defaults to false.

overlay boolean

Available since v2.2.0

Allows the analyzer to be displayed over other content, by making the canvas background transparent, when set to true.

When showBgColor is also true, bgAlpha controls the background opacity.

Defaults to false.

?> In order to keep elements other than the canvas visible in fullscreen, you'll need to set the fsElement property in the constructor options.

peakFadeTime number

Available since v4.5.0

Time in milliseconds for peaks to completely fade out, when fadePeaks is active.

It must be a number greater than or equal to zero. Invalid values are ignored and no error is thrown.

See also peakHoldTime and showPeaks.

Defaults to 750.

peakHoldTime number

Available since v4.5.0

Time in milliseconds for peaks to hold their value before they begin to fall or fade.

It must be a number greater than or equal to zero. Invalid values are ignored and no error is thrown.

See also fadePeaks, gravity, peakFadeTime and showPeaks.

Defaults to 500.

peakLine boolean

Available since v4.2.0

When true and mode is 10 (Graph) and showPeaks is true, peaks are connected into a continuous line. It has no effect in other modes.

Defaults to false.

pixelRatio number (Read only)

Current devicePixelRatio. This is usually 1 for standard displays and 2 for retina / Hi-DPI screens.

When loRes is true, the value of pixelRatio is halved, i.e. 0.5 for standard displays and 1 for retina / Hi-DPI.

You can refer to this value to adjust any additional drawings done in the canvas (via callback function).
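The relationship between CSS size, pixelRatio and loRes can be summarized as (illustrative helpers, not library code):

```javascript
// effective pixel ratio: halved when low resolution mode is on
const effectivePixelRatio = ( devicePixelRatio, loRes = false ) =>
  loRes ? devicePixelRatio / 2 : devicePixelRatio;

// actual canvas pixels for a given CSS dimension
const canvasPixels = ( cssSize, devicePixelRatio, loRes = false ) =>
  cssSize * effectivePixelRatio( devicePixelRatio, loRes );
```

On a retina display (ratio 2) with loRes on, an 800px-wide element gets an 800-pixel-wide canvas — the same count a standard display would use at full resolution.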

radial boolean

Available since v2.4.0

When true, the spectrum analyzer is rendered in a circular shape, with radial frequency bars spreading from its center.

In radial view, ledBars and lumiBars effects are disabled.

When channelLayout is set to 'dual-vertical', graphs for the right channel are rendered towards the center of the screen.

See also radialInvert, radius and spinSpeed.

Defaults to false.

!> See related known issue

radialInvert boolean

Available since v4.4.0

When set to true (and radial is also true) creates a radial spectrum with maximum size and bars growing towards the center of the screen.

This property has no effect when channelLayout is set to 'dual-vertical'.

See also radius.

Defaults to false.

radius number

Available since v4.4.0

Defines the internal radius of radial spectrum. It should be a number between 0 and 1.

This property has no effect when channelLayout is set to 'dual-vertical'.

When radialInvert is true, this property controls how close to the center of the screen the bars can get.

Defaults to 0.3.

reflexAlpha number

Available since v2.1.0

Reflection opacity (when reflexRatio > 0).

It must be a number between 0 (completely transparent) and 1 (completely opaque).

Defaults to 0.15.

reflexBright number

Available since v2.3.0

Reflection brightness (when reflexRatio > 0).

It must be a number. Values below 1 darken the reflection and above 1 make it brighter. A value of 0 will render the reflected image completely black, while a value of 1 will preserve the original brightness.

Defaults to 1.

!> See related known issue

reflexFit boolean

Available since v2.1.0

When true, the reflection will be adjusted (stretched or shrunk) to fit the canvas. If set to false, the reflected image may be cut at the bottom (when reflexRatio < 0.5) or not fill the entire canvas (when reflexRatio > 0.5).

Defaults to true.

reflexRatio number

Available since v2.1.0

Percentage of canvas height used for reflection. It must be a number greater than or equal to 0, and less than 1. Trying to set a value out of this range will throw an ERR_REFLEX_OUT_OF_RANGE error.

For a perfect mirrored effect, set reflexRatio to 0.5 and both reflexAlpha and reflexBright to 1.

This has no effect when lumiBars is true.

Defaults to 0 (no reflection).

roundBars boolean

Available since v4.1.0

When true and mode is set to one of the bands modes, analyzer bars are rendered with rounded corners at the top.

In radial view this makes the top and bottom of the bars follow the curvatures of the outer and inner circles, respectively, although the effect can be barely noticeable with a band count greater than 20 (half-octave bands).

This has no effect when ledBars or lumiBars are set to true.

See also isRoundBars.

Defaults to false.

showBgColor boolean

Determines whether the canvas background should be painted.

If true, the background color defined by the current gradient will be used. Opacity can be adjusted via bgAlpha property, when overlay is true.

If false, the canvas background will be painted black when overlay is false, or transparent when overlay is true.

See also registerGradient().

Defaults to true.

?> Please note that when overlay is false and ledBars is true, the background color will always be black, and setting showBgColor to true will make the "unlit" LEDs visible instead.

showFPS boolean

true to display the current frame rate. Defaults to false.

showPeaks boolean

true to show amplitude peaks.

See also gravity, peakFadeTime, peakHoldTime and peakLine.

Defaults to true.

showScaleX boolean

Available since v3.0.0; formerly showScale (since v1.0.0)

true to display scale labels on the X axis.

See also noteLabels.

Defaults to true.

showScaleY boolean

Available since v2.4.0

true to display the level/amplitude scale on the Y axis.

This option has no effect when radial or lumiBars are set to true.

When linearAmplitude is set to false (default), labels are shown in decibels (dB); otherwise, values represent a percentage (0-100%) of the maximum amplitude.

See also minDecibels and maxDecibels.

Defaults to false.

smoothing number

Sets the analyzer's smoothingTimeConstant.

It must be a number between 0 and 1. Lower values make the analyzer respond faster to changes.

Defaults to 0.5.
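Under the hood this is the AnalyserNode's smoothingTimeConstant, which blends each new FFT frame with the previous one. Sketched in plain code (illustrative, not the library's internals):

```javascript
// blend the previous frame into the new one: tau = 0 reacts instantly,
// values near 1 change very slowly
function smoothFrame( prevFrame, newFrame, tau = 0.5 ) {
  return newFrame.map( ( v, i ) => tau * prevFrame[ i ] + ( 1 - tau ) * v );
}
```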

spinSpeed number

Available since v2.4.0

When radial is true, this property defines the analyzer rotation speed, in revolutions per minute.

Positive values will make the analyzer rotate clockwise, while negative values will make it rotate counterclockwise. A value of 0 results in no rotation.

Defaults to 0.

splitGradient boolean

Available since v3.0.0

When set to true and channelLayout is dual-vertical, the gradient will be split between channels.

When false, both channels will use the full gradient. The effect is illustrated below, using the 'classic' gradient.

[preview images: splitGradient off vs. on]

This option has no effect on horizontal gradients, except on radial spectrum - see note in registerGradient().

Defaults to false.

stereo (DEPRECATED) boolean

This property will be removed in version 5 - Use channelLayout instead.

trueLeds boolean

Available since v4.1.0

When set to true, LEDs are painted with individual colors from the current gradient, instead of using the gradient itself.

The effect is illustrated below, using the 'classic' gradient.

(side-by-side images comparing trueLeds: false and trueLeds: true)

The threshold for each color can be adjusted via the level property when registering a gradient. See registerGradient().

This option is only effective for frequency bands modes, when ledBars is true and colorMode is set to 'gradient'.

Defaults to false.

useCanvas boolean

Available since v3.5.0

When set to false, analyzer graphics are not rendered to the canvas. Setting it to false in the constructor options also prevents the canvas from being added to the document/container.

Please note that the analyzer processing runs regardless of the value of useCanvas and any callback defined for onCanvasDraw will still be triggered on every animation frame, so you can use the getBars() method to create your own visualizations.

If you want to completely stop the analyzer's data processing, see stop().

Defaults to true.

volume number

Available since v3.0.0

Read or set the output volume.

A value of 0 (zero) will mute the sound output, while a value of 1 will keep the same input volume. Higher values can be used to amplify the input, but it may cause distortion.

Please note that changing the audio element volume directly will affect the amplitude of analyzer graphs, while this property does not.

Defaults to 1.
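For example, a hypothetical mute toggle that remembers the previous volume (a sketch, assuming an existing `audioMotion` instance):

```javascript
// Toggle mute while preserving the last volume setting.
// `audioMotion` is assumed to be an existing AudioMotionAnalyzer instance.
let lastVolume = 1;

function toggleMute( audioMotion ) {
    if ( audioMotion.volume > 0 ) {
        lastVolume = audioMotion.volume; // remember current volume
        audioMotion.volume = 0;          // mute the output
    }
    else
        audioMotion.volume = lastVolume; // restore previous volume
    return audioMotion.volume;
}
```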

weightingFilter string

Available since v4.0.0

Weighting filter applied to frequency data for spectrum visualization.

?> Selecting a weighting filter does NOT affect the audio output.

Each filter applies a different curve of gain/attenuation to specific frequency ranges, but the general idea is to adjust the visualization of frequencies to which the human ear is more or less sensitive.

Refer to the weighting filters viewer tool for response tables and an interactive version of the curves graph seen below.

weightingFilter description
'' (empty string) No weighting applied (default)
'A' A-weighting
'B' B-weighting
'C' C-weighting
'D' D-weighting
'468' ITU-R 468 weighting

Defaults to ''.
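As a sketch, a hypothetical helper that toggles A-weighting on and off (assumes an existing `audioMotion` instance):

```javascript
// Toggle A-weighting for the visualization only - audio output is unaffected.
// `audioMotion` is assumed to be an existing AudioMotionAnalyzer instance.
function toggleAWeighting( audioMotion ) {
    audioMotion.weightingFilter = audioMotion.weightingFilter === 'A' ? '' : 'A';
    return audioMotion.weightingFilter;
}
```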

Static properties

AudioMotionAnalyzer.version string (Read only)

Available since v3.0.0

Returns the version of the audioMotion-analyzer package.

Since this is a static property, you should always access it as AudioMotionAnalyzer.version - this allows you to check the package version even before instantiating your object.
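For instance, a hypothetical helper could check the major version before relying on newer features (the version string format, e.g. `'4.4.0'`, is as returned by the property):

```javascript
// Hypothetical helper: check the package's major version against a minimum.
function meetsMinMajor( version, minMajor ) {
    return Number( version.split('.')[0] ) >= minMajor;
}

// usage (assumes the module is imported):
// if ( ! meetsMinMajor( AudioMotionAnalyzer.version, 4 ) )
//     console.warn( 'weightingFilter requires v4.0.0 or later' );
```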

Callback functions

onCanvasDraw function

If defined, this function will be called after audioMotion-analyzer finishes rendering each animation frame.

The callback function is passed two arguments: an AudioMotionAnalyzer object, and an object with the following properties:

  • timestamp, a DOMHighResTimeStamp which indicates the elapsed time in milliseconds since the analyzer started running;
  • canvasGradients, an array of CanvasGradient objects currently in use on the left (or single) and right analyzer channels.

The canvas properties fillStyle and strokeStyle will be set to the left/single channel gradient before the function is called.

Usage example:

const audioMotion = new AudioMotionAnalyzer(
    document.getElementById('container'),
    {
        source: document.getElementById('audio'),
        onCanvasDraw: drawCallback
    }
);

function drawCallback( instance, info ) {
    const baseSize  = ( instance.isFullscreen ? 40 : 20 ) * instance.pixelRatio,
          canvas    = instance.canvas,
          centerX   = canvas.width / 2,
          centerY   = canvas.height / 2,
          ctx       = instance.canvasCtx,
          maxHeight = centerY / 2,
          maxWidth  = centerX - baseSize * 5,
          time      = info.timestamp / 1e4;

    // the energy value is used here to increase the font size and make the logo pulsate to the beat
    ctx.font = `${ baseSize + instance.getEnergy() * 25 * instance.pixelRatio }px Orbitron, sans-serif`;

    // use the right-channel gradient to fill text
    ctx.fillStyle = info.canvasGradients[1];
    ctx.textAlign = 'center';
    ctx.globalCompositeOperation = 'lighter';

    // the timestamp can be used to create effects and animations based on the elapsed time
    ctx.fillText( 'audioMotion', centerX + maxWidth * Math.cos( time % Math.PI * 2 ), centerY + maxHeight * Math.sin( time % Math.PI * 16 ) );
}

For more examples, see the fluid demo source code or this pen.

onCanvasResize function

If defined, this function will be called whenever the canvas is resized.

The callback function is passed two arguments: a string which indicates the reason that triggered the call (see below) and the AudioMotionAnalyzer object.

Reason Description
'create' canvas created by the audioMotion-analyzer constructor
'fschange' analyzer entered or left fullscreen mode
'lores' low resolution option toggled on or off
'resize' browser window or canvas container element were resized
'user' canvas dimensions changed by user script, via height and width properties, setCanvasSize() or setOptions() methods

?> As of version 2.5.0, the 'resize' reason is no longer sent on fullscreen changes and the callback is triggered only when canvas dimensions effectively change from the previous state.

Usage example:

const audioMotion = new AudioMotionAnalyzer(
    document.getElementById('container'),
    {
        source: document.getElementById('audio'),
        onCanvasResize: ( reason, instance ) => {
            console.log( `[${reason}] canvas size is: ${instance.canvas.width} x ${instance.canvas.height}` );
        }
    }
);

Methods

connectInput( source )

Available since v3.0.0

Connects an HTMLMediaElement or an AudioNode (or any of its descendants) to the analyzer.

If source is an HTMLMediaElement, the method returns a MediaElementAudioSourceNode created for that element; if source is an AudioNode instance, it returns the source object itself; if it's neither, an ERR_INVALID_AUDIO_SOURCE error is thrown.

See also disconnectInput() and connectedSources.
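A common use is connecting a microphone. The sketch below is a hypothetical helper, assuming an existing `audioMotion` instance; it mutes the analyzer output to avoid a feedback loop:

```javascript
// Connect a microphone stream to the analyzer.
// `audioMotion` is assumed to be an existing AudioMotionAnalyzer instance.
async function connectMicrophone( audioMotion, mediaDevices = navigator.mediaDevices ) {
    const stream  = await mediaDevices.getUserMedia( { audio: true } ),
          micNode = audioMotion.audioCtx.createMediaStreamSource( stream );

    audioMotion.connectInput( micNode );
    audioMotion.volume = 0; // mute the output to avoid a feedback loop
    return micNode;         // keep this reference to disconnect the mic later
}
```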

connectOutput( [node] )

Available since v3.0.0

This method allows connecting the analyzer output to other audio processing modules that use the Web Audio API.

node must be an AudioNode instance.

By default, the analyzer is connected to the speakers upon instantiation, unless you set connectSpeakers: false in the constructor options.

See also disconnectOutput() and connectedTo.

?> If called with no argument, analyzer output is connected to the speakers (the AudioContext destination node).
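As a sketch, a hypothetical helper that routes the analyzer output through a GainNode before the speakers (assumes an existing `audioMotion` instance):

```javascript
// Route the analyzer output through a GainNode before the speakers.
// `audioMotion` is assumed to be an existing AudioMotionAnalyzer instance.
function insertOutputGain( audioMotion, gain = .5 ) {
    const ctx      = audioMotion.audioCtx,
          gainNode = ctx.createGain();

    gainNode.gain.value = gain;
    audioMotion.disconnectOutput();        // detach from the speakers
    audioMotion.connectOutput( gainNode ); // analyzer -> gain
    gainNode.connect( ctx.destination );   // gain -> speakers
    return gainNode;
}
```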

destroy()

Available since v4.2.0

Destroys the audioMotion-analyzer instance and releases its resources. A destroyed analyzer cannot be started again.

This method:

  • Stops the analyzer data processing and animation;
  • Disconnects all input and output nodes;
  • Clears event listeners and callback functions;
  • Stops the AudioContext created by this instance (won't affect context provided to the constructor via audioCtx property or an AudioNode source);
  • Removes the canvas from the DOM.

See usage example in the minimal demo.

See also isDestroyed.

disconnectInput( [node], [stopTracks] )

Available since v3.0.0; stopTracks parameter since v4.2.0

Disconnects audio source nodes previously connected to the analyzer.

node may be an AudioNode instance or an array of such objects. If it's undefined (or any falsy value), all connected sources are disconnected.

stopTracks is a boolean value; if true, permanently stops all audio tracks from any MediaStreams being disconnected, e.g. a microphone. Use it to effectively release the stream if it's no longer needed.

Please note that when you have connected an <audio> or <video> element, you need to disconnect the respective MediaElementAudioSourceNode created for it. The node reference is returned by connectInput(), or can be obtained from connectedSources if the element was connected via source constructor option.
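For example, releasing a microphone could look like the sketch below (`micNode` is assumed to be the node returned by connectInput() for the stream):

```javascript
// Release a microphone previously connected to the analyzer.
// `audioMotion` and `micNode` are assumed to exist (see connectInput()).
function releaseMicrophone( audioMotion, micNode ) {
    // `true` stops the stream's audio tracks, releasing the device
    audioMotion.disconnectInput( micNode, true );
}
```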

disconnectOutput( [node] )

Available since v3.0.0

Disconnects the analyzer output from previously connected audio nodes.

node must be a connected AudioNode.

See also connectOutput().

?> If called with no argument, analyzer output is disconnected from all nodes, including the speakers!

getBars()

Available since v3.5.0

Returns an array with current data for each analyzer bar. Each array element is an object with the format below:

{
	posX: <number>,   // horizontal position of this bar on the canvas
	freq: <number>,   // center frequency for this bar (added in v4.0.0)
	freqLo: <number>, // lower edge frequency
	freqHi: <number>, // upper edge frequency
	peak: <array>,    // peak values for left and right channels
	hold: <array>,    // peak hold frames for left and right channels - values < 0 mean the peak is falling down
	value: <array>    // current amplitude on left and right channels
}

peak and value elements are floats between 0 and 1, relative to the lowest and highest volume levels defined by minDecibels and maxDecibels.

hold values are integers and indicate the hold time (in frames) for the current peak. The maximum value is 30 and means the peak has just been set, while negative values mean the peak is currently falling down.

Please note that hold and value will have only one element when channelLayout is set to 'single', but peak is always a two-element array.

You can use this method to create your own visualizations using the analyzer data. See this pen for usage example.
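As a sketch, a hypothetical helper that reads per-band levels for a custom (non-canvas) display, e.g. from inside an onCanvasDraw callback:

```javascript
// Read per-band levels for a custom visualization.
// `instance` is assumed to be the AudioMotionAnalyzer object passed to onCanvasDraw.
function readLevels( instance ) {
    return instance.getBars().map( bar => ( {
        freq:  Math.round( bar.freq ),
        level: Math.round( bar.value[0] * 100 ) // left/single channel, as a percentage
    } ) );
}
```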

getEnergy( [preset | startFreq [, endFreq] ] )

Available since v3.2.0

Returns a number between 0 and 1, representing the amplitude of a specific frequency, or the average energy of a frequency range.

If called with no parameters, it returns the overall spectrum energy obtained by the average of amplitudes of the currently displayed frequency bands.

Preset strings are available for predefined ranges plus the "peak" functionality (see table below), or you can specify the desired frequency and an optional ending frequency for a range. Frequency values must be specified in Hz.

preset description
'peak' peak overall energy value of the last 30 frames (approximately 0.5s)
'bass' average energy between 20 and 250 Hz
'lowMid' average energy between 250 and 500 Hz
'mid' average energy between 500 and 2000 Hz
'highMid' average energy between 2000 and 4000 Hz
'treble' average energy between 4000 and 16000 Hz

Please note that preset names are case-sensitive. If the specified preset is not recognized the method will return null.

Use this method inside your callback function to create additional visual effects. See the fluid demo or this pen for examples.
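For instance, a hypothetical effect that makes a DOM element pulsate to the bass (a sketch; `audioMotion` and `logo` are assumed to exist):

```javascript
// Scale a DOM element to the beat, using the 'bass' energy preset.
// `audioMotion` is an existing instance; `logo` is any DOM element.
function pulse( audioMotion, logo ) {
    const energy = audioMotion.getEnergy( 'bass' ); // 0 to 1
    logo.style.transform = `scale( ${ 1 + energy * .25 } )`;
    return logo.style.transform;
}
```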

getOptions( [ignore] )

Available since v4.4.0

Returns an Options object with all the current analyzer settings.

ignore can be a single property name or an array of property names that should not be included in the returned object.

Callbacks and constructor-specific properties are NOT included in the object.

?> If the same gradient is selected for both channels, only the gradient property is included in the object; otherwise, only gradientLeft and gradientRight are included (not gradient). If 'gradient' is added to ignore, none of the gradient properties will be included.

See also setOptions().
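A typical use is persisting the current settings, as in the hypothetical sketch below (assumes an existing `audioMotion` instance; the storage key is arbitrary):

```javascript
// Persist analyzer settings and restore them later, e.g. on the next page load.
// `audioMotion` is assumed to be an existing AudioMotionAnalyzer instance.
function saveSettings( audioMotion, storage = localStorage ) {
    storage.setItem( 'analyzer-settings', JSON.stringify( audioMotion.getOptions() ) );
}

function loadSettings( audioMotion, storage = localStorage ) {
    const saved = storage.getItem( 'analyzer-settings' );
    if ( saved )
        audioMotion.setOptions( JSON.parse( saved ) );
}
```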

registerGradient( name, options )

Registers a custom color gradient.

name must be a non-empty string that will be used to select this gradient, via the gradient property. Names are case sensitive.

options must be an object as shown below:

audioMotion.registerGradient( 'myGradient', {
    bgColor: '#011a35', // background color (optional) - defaults to '#111'
    dir: 'h',           // add this property to create a horizontal gradient (optional)
    colorStops: [       // list your gradient colors in this array (at least one color is required)
        'hsl( 0, 100%, 50% )',        // colors can be defined in any valid CSS format
        { color: 'yellow', pos: .6 }, // in an object, use `pos` to adjust the offset (0 to 1) of a colorStop
        { color: '#0f0', level: .5 }  // use `level` to set the max bar amplitude (0 to 1) to use this color
    ]
});

The dir property has no effect on radial spectrum or when trueLeds is in effect.

Each element of colorStops may be either a string (color only), or an object with at least a color property and optional pos and level properties.

  • pos defines the relative position of a color in the gradient, when colorMode is set to 'gradient'. It must be a number between 0 and 1, where 0 represents the top of the screen and 1 the bottom (or left and right sides, for horizontal gradients);
  • level defines the level threshold of a color, when colorMode is set to 'bar-level' or trueLeds is active. The color will be applied to bars (or LED elements) with amplitude less than or equal to level. It must be a number between 0 and 1, where 1 is the maximum amplitude (top of screen);
  • If pos or level are not explicitly defined, colors will be evenly distributed across the gradient or amplitude range;
  • Defining level: 0 for a colorStop will effectively prevent that color from being used for 'bar-level' colorMode and trueLeds effect.

?> Any gradient, including the built-in ones, may be modified at any time by (re-)registering the same gradient name.

setCanvasSize( width, height )

Sets the analyzer nominal dimensions in pixels. See height and width properties for details.

setFreqRange( minFreq, maxFreq )

Sets the desired frequency range. Values are expressed in Hz (Hertz).

See minFreq and maxFreq for lower and upper limit values.

setLedParams( [params] )

Available since v3.2.0

Customize parameters used to create the ledBars effect.

params should be an object with the following structure:

const params = {
    maxLeds: 128, // integer, > 0
    spaceV: 1,    // > 0
    spaceH: .5    // >= 0
}
property description
maxLeds maximum desired number of LED elements per analyzer bar
spaceV vertical spacing ratio, relative to the LED height (1 means spacing is the same as the LED height)
spaceH minimum horizontal spacing ratio, relative to the available width for each band, or a literal pixel value if >= 1;
this behaves exactly like barSpace and the largest spacing (resulting from either barSpace or spaceH) will prevail.

The available canvas height is initially divided by maxLeds and the vertical spacing is calculated observing the spaceV ratio; if necessary, the LED count is decreased until both the LED segment and the vertical spacing are at least 2px tall.

You can try different values in the fluid demo.

?> If called with no arguments or any invalid property, clears custom parameters previously set.

setOptions( [options] )

Shorthand method for setting several analyzer properties at once.

options must be an Options object.

?> If called with no argument (or options is undefined), resets all configuration options to their default values.

See also getOptions().

setSensitivity( minDecibels, maxDecibels )

Adjust the analyzer's sensitivity. See minDecibels and maxDecibels properties.

start()

Available since v4.2.0

Starts the analyzer data processing and animation.

The analyzer is started by default after initialization, unless you specify start: false in the constructor options.

See also stop(), toggleAnalyzer() and isOn.

stop()

Available since v4.2.0

Stops the analyzer process.

When the analyzer is off, no audio data is processed and no callbacks to onCanvasDraw will be triggered.

The analyzer can be resumed with start() or toggleAnalyzer().

See also destroy() and isOn.

toggleAnalyzer( [boolean] )

Toggles the analyzer data processing and animation. The boolean argument can be used to force the desired state: true to start or false to stop the analyzer.

Returns the resulting state.

See also start(), stop() and isOn.

toggleFullscreen()

Toggles fullscreen mode on / off.

By default, only the canvas is sent to fullscreen. You can set the fsElement constructor option to a parent container, to keep desired interface elements visible during fullscreen.

?> Fullscreen requests must be triggered by user action, like a key press or mouse click, so you must call this method from within a user-generated event handler.
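For example, binding the call to a button click (a user gesture), as in the sketch below (`button` and the existing `audioMotion` instance are assumptions):

```javascript
// Bind fullscreen toggling to a button click - a user gesture,
// as required by browser fullscreen policies.
function bindFullscreenButton( audioMotion, button ) {
    button.addEventListener( 'click', () => audioMotion.toggleFullscreen() );
}
```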

Custom Errors

Available since v2.0.0

audioMotion-analyzer uses a custom error object to throw errors for some critical operations.

The code property is a string label that can be checked to identify the specific error in a reliable way.

code Error description
ERR_AUDIO_CONTEXT_FAIL Could not create audio context. The user agent may lack support for the Web Audio API.
ERR_INVALID_AUDIO_CONTEXT Audio context provided by user is not valid.
ERR_INVALID_AUDIO_SOURCE Audio source provided in source option or connectInput() method is not an instance of HTMLMediaElement or AudioNode.
ERR_INVALID_MODE User tried to set the visualization mode to an invalid value.
ERR_FREQUENCY_TOO_LOW User tried to set the minFreq or maxFreq properties to a value lower than 1.
ERR_GRADIENT_INVALID_NAME The name parameter for registerGradient() must be a non-empty string.
ERR_GRADIENT_NOT_AN_OBJECT The options parameter for registerGradient() must be an object.
ERR_GRADIENT_MISSING_COLOR The options parameter for registerGradient() must define at least one color-stop.
ERR_REFLEX_OUT_OF_RANGE Tried to assign a value < 0 or >= 1 to reflexRatio property.
ERR_UNKNOWN_GRADIENT User tried to select a gradient not previously registered.

Known Issues

reflexBright won't work on some browsers

reflexBright feature relies on the filter property of the Canvas API, which is currently not supported in some browsers (notably, Opera and Safari).

alphaBars and fillAlpha won't work with Radial on Firefox

On Firefox, alphaBars and fillAlpha won't work with radial spectrum when using hardware acceleration, due to this bug.

Visualization of live streams won't work on Safari {docsify-ignore}

Safari's implementation of Web Audio won't return analyzer data for live streams, as documented in this bug report.

Troubleshooting

Common problems and solutions. Remember to check the browser console for error messages.

Error message: Cannot use import statement outside a module

The import statement must be inside a script tag with the type="module" attribute (and not type="text/javascript"), like so:

  <script type="module">
    import AudioMotionAnalyzer from 'https://cdn.skypack.dev/audiomotion-analyzer?min';

    // your code here
  </script>

Or

  <script src="main.js" type="module"></script>

Error message: MediaElementAudioSource outputs zeroes due to CORS access restrictions

Make sure the media element (audio or video tag) connected to audioMotion-analyzer has the crossorigin = "anonymous" property, like so:

<audio id="myAudio" src="https://example.com/stream" controls crossorigin="anonymous"></audio>

You can also set the crossOrigin (mind the uppercase "O") property via JavaScript, like so:

myAudio.crossOrigin = 'anonymous';

Sound only plays after the user clicks somewhere on the page.

Browser autoplay policy dictates that audio output can only be initiated by a user gesture, and this policy is enforced by Web Audio API by putting AudioContext objects into suspended mode if they're not created on user action.

audioMotion-analyzer tries to automatically start its AudioContext on the first click on the page. However, if you're using an audio or video element with the controls property, clicks on those native media controls cannot be detected by JavaScript, so the audio will only be enabled if/when the user clicks somewhere else.

Possible solutions are:

  1. Ensure your users have to click somewhere else before using the native media controls, like a "power on" button;

  2. Don't use the native controls at all, and create your own custom play and stop buttons;

  3. Even better, instantiate your audioMotion-analyzer object within a function triggered by a user click. This will allow the AudioContext to be started right away and will also prevent the "The AudioContext was not allowed to start" warning message from appearing in the browser console.

See the minimal demo code for an example.

References and acknowledgments

Changelog

See Changelog.md

Contributing

I kindly request that you only open an issue for submitting bug reports.

If you need help integrating audioMotion-analyzer with your project, have ideas for new features or any other questions or feedback, please use the Discussions section on GitHub.

Additionally, I would love it if you could showcase your project using audioMotion-analyzer in Show and Tell, and share your custom gradients with the community in Gradients!

When submitting a Pull Request, please branch it off the project's develop branch.

And if you're feeling generous, maybe consider supporting the project with a donation!

License

audioMotion-analyzer copyright (c) 2018-2024 Henrique Avila Vianna
Licensed under the GNU Affero General Public License, version 3 or later.

audiomotion-analyzer's People

Contributors

alex-greff · cprussin · hvianna · shahkashani · staijn1



audiomotion-analyzer's Issues

Use without canvas

I want to use the library in my Vue.js project. I need the 1/12-octave bands generated from the FFT data, but I'm going to visualize them with reactive SVG. So I just need the bars on every onCanvasDraw, but I don't need a canvas.

I'm using it in a composable function, so I don't want to create a hidden canvas. Is there an option to totally omit canvas?

So we can use the lib as simple as that:

  const audioMotion = new AudioMotionAnalyzer(null, {
    source: chain.analyse,
    onCanvasDraw: analyse,
  })

  function analyse(AM) {
    state.bars = AM.bars  // state is reactive and is the carrier to get the data to the UI
  }

How to switch audio sources?

I have an <audio>-tag. I wanna change the src of that tag. How can audioMotion-analyer pick up this change?
When I try to assign my tag via audioMotion.connectAudio(document.getElementById("audio")) I get this error:

Failed to execute 'createMediaElementSource' on 'AudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode.

Do I have to "disconnect" something? Reconnect? reload?
My page is a React page and I am very new to the Audio API

I have to "connect" after the instanciation. Analyzer should "pick up" changing tracks.

Here is my complete code (it's a React app):
(I also created a CodeSandbox with the exact code below)
https://codesandbox.io/s/strange-moon-0jjx8?file=/src/MusicList.jsx
Feel free to change whatever it needs.

Thank you very much

import React, { useEffect, useRef, useState } from "react";
import AudioMotionAnalyzer from "audiomotion-analyzer";

import track1 from "./tracks/track1.mp3";
import track2 from "./tracks/track2.mp3";

const tracks = [
  { source: track1, title: "Zimt" },
  { source: track2, title: "Ingwer" }
];

const MusicList = () => {
  const audioRef = useRef(); // will hold the <audio>-tag
  const [track, setTrack] = useState(tracks[1]);

  const changeTrack = () => {
    if (track === tracks[0]) {
      setTrack(tracks[1]);
    } else {
      setTrack(tracks[0]);
    }
  };

  useEffect(() => {
    player = document.getElementById("player");
    analyzer = document.getElementById("analyzer");
    const audioMotion = new AudioMotionAnalyzer(analyzer);

    audioRef.current.volume = 0.05
    audioRef.current.load();
    audioRef.current.play();
    audioMotion.connectAudio(audioRef.current);
    console.log("audioRef", audioRef.current);
  });

  return (
    <>
      <button
        onClick={() => {
          changeTrack();
        }}
      >
        Click to alter track
      </button>
      <audio
        id="player"
        controls
        ref={audioRef}
        src={track.source}
        type="audio"
      />
      <div id="analyzer"></div>
    </>
  );
};
export default MusicList;

Use analyzer with existing FFT stream

Would it be practically possible to use the analyzer/visualization with an already existing FFT stream, which by the way is not necessarily audio, but other spectrum data.

It seems like the AudioContext and the built-in FFT are an integral part of the library.

beginner question

Hi

Can I use the script in a web browser locally?
Is there an example file I can look at?

I just want the script to work on audio I play on my computer, using an offline HTML file.

Is this possible?

Option to bypass connecting the output

It seems that audioMotion is intended to be a permanent component of the Web Audio signal path, and one instance is created and kept for the lifetime of the app. In my case, I already have a player with its own AudioContext, and input and output nodes, and I want to create an audioMotion instance on demand in a modal window, so that when the modal is opened, I connect the output node from my player as the source for audioMotion to analyze the signal. The problem is that audioMotion is connecting its own output to the same AudioContext destination, so both my player, and audioMotion, are sending their outputs to the speakers together, resulting in boosted volume:

(source) ---> player ---> output ---> speakers
                            |
                            +---> analyzer ---> output ---> speakers

As a workaround I can set audioMotion.volume = 0 after creating an instance:

    const audioMotion = new AudioMotionAnalyzer(container, {
        audioCtx: myApp.player.audioContext,
        source: myApp.player.outputNode,
    });

    audioMotion.volume = 0;

Can I suggest adding a boolean connectOutput (default: true) option in the constructor to allow for bypassing the connecting of the output?

[Question] createMediaStreamSource with stereo stream

Hello, I'm trying to use a stereo microphone with chrome (on my mac I created an aggregate of the two entries of my sound interface).

I wanted to know: if I manage to get a stereo stream from my two microphones inside Chrome, could I use it with createMediaStreamSource to handle this stereo stream as if I were passing a stereo track from my computer?
Right now I haven't found much about handling a stereo microphone source. Is there a limitation somewhere?

Thank you so much for your library it is life changing!

Support: Is it possible to change visualization color during play?

Hi, hvianna!!!

I'm trying to change the hsl color while the stream is playing! Thank you for your past support! I just need to know how to approach changing the color!

I'm trying to do:

audioMotion.options = {

	// Gradient definitions
		gradients: {
		classic: {
			bgColor: '#000',
			colorStops: [
			 // 'hsl( 0, 100%, 50% )',
			'hsl( 0, 0%, 100% )',
	
			 // 'hsl( 29, 100%, 50% )',
				hslcolor,
			]
		},
	}
	}

Thank you!

I also tried:
audioMotion.options.gradients {}

I'm trying to access: audioMotion -> options -> gradients

Gain keeps getting higher every time a AudioMotionAnalyzer is initialized

Hi,

First of all: great software! I'm implementing this with Lit with a web component that is only added to the DOM if needed.
I have a hacky workaround to bypass having to disconnect on destroy of the component:

let audioCtx;
let source;
if (!window._audioCtx) {
          audioCtx = new AudioContext();
          source = audioCtx.createMediaElementSource(window._player);

          // store in memory for reuse
          window._audioCtx = audioCtx;
          window._source = source;
        } else {
          audioCtx = window._audioCtx;
          source = window._source;
        }

Every time I call new AudioMotionAnalyzer() the gain of this chain is increased and it starts clipping. Setting the volume has no effect.

const visualizer = new AudioMotionAnalyzer(canvas, {
          audioCtx,
          source,
});

Behaviour when container element is not found

Hi,

I was running into an issue today where the element I wanted the visualizer in was not loaded yet.
I did not receive an error, it just didn't work.

So I went into the sourcecode and saw in the constructor:

this._container = container || document.body;

I am wondering if this is the best way to handle this, because in my environment (angular) it didn't put the visualizer in the body, but also didn't throw an error.

It may be easier to just throw an error because if, for some reason, you want the visualizer in the body element you can always get the element yourself.

I can create a pull request for it, because it ain't hard to implement, but I wanted to discuss it first if this is desired.

Greetings,
Staijn

Error setting reflexRatio in Angular / Electron app

ERROR TypeError: Failed to execute 'setTransform' on 'CanvasRenderingContext2D': 6 arguments required, but only 0 present.
    at AudioMotionAnalyzer._draw (audioMotion-analyzer.js:910)
    at audioMotion-analyzer.js:501
    at ZoneDelegate.invokeTask (zone.js:421)
    at Object.onInvokeTask (core.js:28183)
    at ZoneDelegate.invokeTask (zone.js:420)
    at Zone.runTask (zone.js:188)
    at ZoneTask.invokeTask (zone.js:503)
    at ZoneTask.invoke (zone.js:492)
    at timer (zone.js:3034)

Original report: hvianna/audioMotion.js#12

Typescript error when building with angular project: Version 3.5.0

Version: 3.5.0
Last known working version: 3.3.0

When using audiomotion-analyzer in an angular project, running dist to pack for production fails with the following error.

ERROR in node_modules/audiomotion-analyzer/src/index.d.ts:53:23 - error TS1005: ',' expected.

53 hold: [ mono_or_left: number, right?: number ];

Reverted to 3.3.0 and it is working fine.

Feature: Please consider creating time-domain graphs

Now everything works fine, thank you so much for developing this tool. It's great, no doubt.

Can you construct the audioCtx without logarithmic graphs? The idea is to include a linear mode, and I can help if necessary.

CommonJS?

Hi there! Thank you so much for this amazing library ☺️

I was wondering if there's any chance you'd consider including a CommonJS build? I'm running into a lot of issues integrating this library into an existing application (especially with the jest tests) and it would make life a lot easier.

Many thanks in advance!

[Feature Request] Dynamic gradients, spectrogram display, and combined channel spectrum

A dynamic gradient like in WhiteCap visualizer (like changing colors based on magnitude of the FFT, the current time, the center frequency for each band, or even stereo differences) would be cool (especially on spectrogram displays), but here's a caveat, setting colors for each frequency band can only apply to frequency bars display, but it can be easily done using color gradients that change over time, which works best on linear spectrum.

As for the spectrogram display, I prefer the showcqt-style spectrum/spectrogram display (where the top is the spectrum and the bottom is spectrogram) and I think it pairs well with dynamic gradients.

I'd like to see combined channel spectrum display, because I think it is easier to see stereo differences simply based on different colors for each channel.

Google Chrome won't play the audio. It works on other browsers.

Hi, I'm getting the following error on Google Chrome:

"Uncaught (in promise) DOMException: The play() request was interrupted by a new load request"

It won't play after I click the button, and I'm forced to go to site settings in Chrome to enable audio. However, the stream URLs play fine if I don't use audioMotion-analyzer.

It only happens on Google Chrome. Is there a fix or workaround I can try?

Thank you!
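For what it's worth, this is usually Chrome's autoplay policy: an AudioContext created without a user gesture starts in the `suspended` state, and play() fails until the context is resumed from inside a gesture handler. A hedged sketch (the helper name is mine; `audioMotion.audioCtx` is the analyzer's documented AudioContext property):

```javascript
// Resume a suspended AudioContext before starting playback;
// call this from a user-gesture handler such as a click.
async function resumeAndPlay( audioCtx, mediaElement ) {
  if ( audioCtx.state === 'suspended' )
    await audioCtx.resume();
  return mediaElement.play();
}

// usage sketch: playButton.addEventListener( 'click', () =>
//   resumeAndPlay( audioMotion.audioCtx, audioElement ) );
```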

AudioMotion and Pizzicato

Hi

audioMotion demos are impressive !

I'm trying to use both https://github.com/alemangui/pizzicato and audioMotion.

I've tested many things (i'm completely new with Web Audio). No luck. My last try was

const el = document.getElementById('container');
const audioMotion = new AudioMotionAnalyzer(el, {});
audioMotion.connectInput(Pizzicato.context.destination);

const soundPath = '/path/to/sound.ogg';
const mySound = new Pizzicato.Sound(soundPath, () => {
    mySound.play();
});

Error

Uncaught DOMException: Failed to execute 'connect' on 'AudioNode': output index (0) exceeds number of outputs (0)

I have no idea how to "connect" Pizzicato with AudioMotion :(

Any help is welcome, thank you !
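In case it helps: AudioContext.destination is a terminal node, with inputs but zero outputs, which is exactly what the error says, so the signal can't be tapped there. A hedged sketch of one approach, tapping one node earlier (`Pizzicato.context` and the analyzer's `audioCtx` / `connectSpeakers` options are documented; `mySound.connect()` is an assumption about Pizzicato's API):

```javascript
// Create a pass-through GainNode that feeds the speakers; the analyzer
// can then tap this node instead of the (output-less) destination.
function createAnalyzerTap( audioCtx ) {
  const tap = audioCtx.createGain();
  tap.connect( audioCtx.destination );
  return tap;
}

// usage sketch:
// const tap = createAnalyzerTap( Pizzicato.context );
// const audioMotion = new AudioMotionAnalyzer( el, {
//   audioCtx: Pizzicato.context, // share Pizzicato's AudioContext
//   connectSpeakers: false       // the tap already reaches the speakers
// });
// audioMotion.connectInput( tap );
// mySound.connect( tap );        // assumption: route the sound into the tap
```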

Strange type definition for GradientColorStop

The type definition for GradientColorStop is giving me troubles when trying to define a simple gradient using a string array:

    const gradientOptions = {
      bgColor: '#011a35', 
      colorStops: [      
        '#dadfff',         
        '#f002c7'
      ]
    };
    audioMotion.registerGradient('fsr', gradientOptions);

error TS2345: Argument of type '{ bgColor: string; colorStops: string[]; }' is not assignable to parameter of type 'GradientOptions'.
Types of property 'colorStops' are incompatible.
Type 'string[]' is not assignable to type 'ArrayTwoOrMore'.
Type 'string[]' is missing the following properties from type '{ 0: GradientColorStop; 1: GradientColorStop; }': 0, 1

Edit: Likely due to following type definition in src/index.d.ts:

type ArrayTwoOrMore<T> = {
  0: T
  1: T
} & Array<T>;
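A workaround sketch until the typings change: give colorStops an explicit tuple type, which satisfies the `{ 0: T, 1: T } & Array<T>` intersection that a plain string[] does not (the local ArrayTwoOrMore copy below just mirrors the library's definition for illustration):

```typescript
// mirrors the definition in src/index.d.ts
type ArrayTwoOrMore<T> = { 0: T, 1: T } & Array<T>;

const colorStops = [ '#dadfff', '#f002c7' ] as [ string, string ]; // tuple, not string[]
const stops: ArrayTwoOrMore<string> = colorStops;                  // now assignable

// usage sketch:
// audioMotion.registerGradient( 'fsr', { bgColor: '#011a35', colorStops } );
```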

"No Canvas" Example Does Not Work on Mobile

Thank you for creating this wonderful library. I found it yesterday and was up and running in minutes!

While building a small 3D visualizer, I observed unexpected behavior on mobile (iOS). Debugging on mobile is a little difficult, but it appears that the values returned by getBars() are all zeros when called from the onCanvasDraw callback.

I have verified this behavior using the provided No Canvas example codepen, modifying the generated html as a hacky method of debugging on mobile:

const audioMotion = new AudioMotionAnalyzer( null, {
  source: audioEl,
  mode: 2,
  useCanvas: false, // don't use the canvas
  onCanvasDraw: instance => {   
    let html = '';
    // shows all zeros on mobile
    for ( const bar of instance.getBars() ) {
      html += bar.value[0].toFixed(1) + ", "
    }
    container.innerHTML = html;
  }
});

For whatever reason, the Using Microphone Input example seems to work as expected on mobile, while the other examples have issues when playing from other audio sources. Given that the mic stream works fine, the DSP itself appears to work as expected, and the problem may lie in streaming from other audio sources on mobile.

[Feature Request] Constant-Q Transform, custom FFT and perceptual frequency scales

Although FFTs are fine, they get really boring for me, so I prefer the constant-Q transform (actually the variable-Q transform) over FFT for octave band analysis. However, my CQT implementation (built from a bunch of Goertzel filters) is slow, and a real-time CQT would need a sliding DFT.

I'm also aware that spectrum analyzers built on the Web Audio API don't need to use AnalyserNode.getByteFrequencyData: you can use any FFT library with getFloatTimeDomainData as input, just like my sketch does. But beware: you need to apply a window function (Hann or similar) before the FFT, see #3.

I also think perceptual frequency scales like Mel and Bark should be added, because they show the bass frequencies less than a logarithmic scale and more than a linear scale.
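Since the Goertzel algorithm came up: a minimal sketch of a single-bin Goertzel magnitude, operating on a block of time-domain samples such as those returned by getFloatTimeDomainData (illustrative only, not part of the library):

```javascript
// Magnitude of a single frequency bin via the Goertzel algorithm.
function goertzel( samples, freq, sampleRate ) {
  const coeff = 2 * Math.cos( 2 * Math.PI * freq / sampleRate );
  let s1 = 0, s2 = 0;
  for ( const x of samples ) {
    const s0 = x + coeff * s1 - s2; // second-order IIR recurrence
    s2 = s1;
    s1 = s0;
  }
  // magnitude from the final two filter states
  return Math.sqrt( s1 * s1 + s2 * s2 - coeff * s1 * s2 );
}
```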

The microphone icon in Chrome stays open when disconnected

After stopping the use of the microphone, in Chrome, the mic icon stays on. This happens in my project and the examples on the websites as well. A solution is to also close the microphone track that is open when disconnecting.

An example solution is to add an option to the disconnect method to optionally close the track. This made the icon go away when used and should be backward compatible for anyone still needing it to not be closed when disconnecting.

/**
	 * Disconnects audio sources from the analyzer
	 *
	 * @param {object|array} [sources] a connected AudioNode object or an array of such objects; if undefined, all connected nodes are disconnected
	 * @param {boolean} [stopTracks] if true, also stops the media stream tracks of disconnected nodes
	 */
	disconnectInput( sources, stopTracks ) {
		if ( ! sources )
			sources = Array.from( this._sources );
		else if ( ! Array.isArray( sources ) )
			sources = [ sources ];
		for ( const node of sources ) {
			const idx = this._sources.indexOf( node );			
			if (stopTracks && node.mediaStream){
				for ( const ats of node.mediaStream.getAudioTracks() ){
					ats.stop();
				}
			}
			if ( idx >= 0 ) {
				node.disconnect( this._input );
				this._sources.splice( idx, 1 );
			}
		}
	}

Naturally it can still be called with the default (all) sources:
audioMotion.disconnectInput(false, true);

Precedence of gradient, gradientLeft and gradientRight in setOptions()

Hi there!

I'm implementing a new feature in my application that creates and sets a new gradient, with the colors of the album cover of the song you are listening to in Spotify.
In this application, the visualizer options are stored with Redux, meaning I don't have direct control of what the order is of attributes within this object.

To show that the order of attributes in the options object matter, I've created the following codepen based on the Microphone Input demo: https://codepen.io/Staijn1/pen/QWzPNLq

In this codepen you can see two functions that set the gradient property.
One function sets the gradientLeft before the gradient, resulting in both channels being updated.
The other function sets the gradientLeft after the gradient property, resulting in only the right channel being updated because the gradientLeft overwrites the value set by gradient.

I fixed my issue by setting gradientLeft and gradientRight to undefined so they won't interfere, since I can't control the order of attributes due to Redux; this could be another solution.

In hindsight this is a pretty obvious issue but it still took me some time. That's why I created this issue to discuss if anything can be done to make this more explicit. Maybe by documenting or extending the API (but I'm not sure what this would look like).

I'd love to hear your thoughts.

Greetings,
Stein
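One order-independent pattern, for what it's worth (a sketch; gradient, gradientLeft and gradientRight are documented options, the helper is mine): always set the channel-specific properties explicitly, so key order in the options object no longer matters:

```javascript
// Apply the same gradient to both channels regardless of the order in
// which option properties happen to be serialized by the state store.
function setGradientBothChannels( audioMotion, gradientName ) {
  audioMotion.setOptions( {
    gradientLeft:  gradientName,
    gradientRight: gradientName
  } );
}
```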

audio dropouts with mobile browsers

Hi,
first of all, thank you for this great tool, really impressive!!

Unfortunately I am experiencing little dropouts (glitches) when listening to music with FFT analysis on my mobile phone (Samsung S22, newest Android version) on browsers like Chrome and even worse on Firefox. Do you have any experience with this?

One interesting thing: The glitches continue, even if I switch off the FFT analysis via toggleAnalyzer(). So it seems to be a problem with the audio node, not with the FFT analysis or the canvas.

Without the Analyser plugin, the music plays smoothly.

Thank you for your help,
Holger

What scale does the y-axis display (showScaleY = true) ?

Hi there,

questions about the y-scale when

        mode: 6,
        alphaBars: false,
        ansiBands: true,
        barSpace: .5,
        channelLayout: 'single',
        colorMode: 'gradient',
        frequencyScale: 'log',
        gradient: 'classic',
        ledBars: true,
        lumiBars: false,
        maxFreq: 20000,
        minFreq: 25,
        mirror: 0,
        radial: false,
        reflexRatio: 0,
        showBgColor: true,
        showPeaks: true,
        trueLeds: true,
        showScaleY: true,
        minDecibels: -85,
        maxDecibels: 0

I have a song that peaks around 0 dBFS all the time.

With the given settings above, the meter peaks at around -20 dB.

I was wondering what exactly the y-axis displays. Is this peak level, or something like LUFS?

How can I configure the scale so it displays the actual peak value of around 0 dBFS?

Thanks for your help,
Moz

[feature request] add the ability to "listen" to a particular band

Not sure if this is possible, but it would be interesting if you could create a real time filter, which allowed you to isolate a selected band and send it to either a file (for later playback) or an output device (sound card).

For example, a function setAudioFilterIndex() which takes an integer indicating which band to pass to the output (or null/undefined to pass the unfiltered input through, the default situation).

getAudioFilterIndex() would return the current value.
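This isn't something the analyzer does today, but the idea can be sketched with a plain Web Audio BiquadFilterNode placed between the source and the output (setAudioFilterIndex / getAudioFilterIndex above are the requester's hypothetical API, and the helper below is equally hypothetical):

```javascript
// Band-pass filter centered on a band, suitable for "soloing" it.
// Q = sqrt(2^N) / (2^N - 1) maps a bandwidth of N octaves to filter Q.
function createBandSolo( audioCtx, centerFreq, bandwidthOctaves = 1 ) {
  const filter = audioCtx.createBiquadFilter();
  filter.type = 'bandpass';
  filter.frequency.value = centerFreq;
  const pow = 2 ** bandwidthOctaves;
  filter.Q.value = Math.sqrt( pow ) / ( pow - 1 );
  return filter;
}

// usage sketch:
// sourceNode.connect( createBandSolo( audioCtx, 1000 ) ).connect( audioCtx.destination );
```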

[Question] Can we seek and draw audio visualizer on a particular time?

Hi,

Can we draw frames of the audio visualizer and let it stay on that? I am trying to export the audio visualizer as a video file. For that, I need the ability to seek the audio visualizer at particular times and wait so that I can capture a screenshot. I can see there is an onCanvasDraw callback, however, is there any way to make it draw the visualizer with the exact preset that I have set on audiomotion object?

Audio Feedback loop when using a stream

Whenever I try to get audio from a stream, there is awful feedback which made this unusable.

I recreated this problem on this sandbox.

You'll need to make sure you select "Share Audio" in the popup.

I managed to fix it for my use case by removing line 91 -

this._analyzer.connect( this._audioCtx.destination );

but I'm sure this would have broken some other use case.

I'm far from an expert when it comes to the Web Audio API, but it looked like we were feeding the audio back into itself.

There aren't really any examples of integrating with streams in the docs either, so it would be great to have a stream implementation example there.

Thank you very much for the work you've done on this, by the way. It's sure to make my life easier!
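For the record, the library has a constructor option for exactly this: connectSpeakers (default true) controls whether the analyzer's output is routed to the context's destination. A hedged sketch for an already-audible stream (the helper is mine; audioCtx and connectInput() are documented):

```javascript
// Attach a MediaStream (e.g. from getDisplayMedia) to an analyzer that
// was constructed with { connectSpeakers: false }, avoiding the feedback
// loop created by routing an already-audible stream to the speakers again.
function connectStream( audioMotion, stream ) {
  const sourceNode = audioMotion.audioCtx.createMediaStreamSource( stream );
  audioMotion.connectInput( sourceNode );
  return sourceNode;
}

// usage sketch: new AudioMotionAnalyzer( el, { connectSpeakers: false } )
```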

Adjustable ScaleX Precision/Detail

Is there any way the precision/detail of the ScaleX could be made adjustable? For example, if a frequency range of 20 kHz to 22 kHz is used to zoom in on the ultrasonics, the most detailed mode currently is linear, but the markers jump from 20 kHz straight to 22 kHz and display nothing in between. Is there a way to increase the frequency of the markers to, for example, every 100 Hz instead of every 2 kHz when linear mode is being used?

Also, I know that 20 kHz to 22 kHz is not a range currently available in the tool, as the lowest lower limit is currently 16 Hz. However, modifying that is not an issue. I'm just having a problem modifying the precision/detail of the ScaleX.

canvas height growing indefinitely while using <div> but not when using <span> (setup: bun+vite)

I cannot understand what's going on here. When I use a <div> as the container element, the canvas height grows indefinitely. When I use a <span>, it does not.
I have a setup with bun and vite and I run the project using bunx --bun vite
I have tried with firefox and edge, same behavior. I have attached a video for you to see, at the end.

Thanks for the help!

here are my files to reproduce:

index.html

<!doctype html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <script type="module" src="/main2.js"></script>
</head>

<body>
  <span id="ai"></span>
</body>

</html>

my main2.js

import AudioMotionAnalyzer from "audiomotion-analyzer";
let ai = document.getElementById("ai");
if (ai) {
  const audioMotion = new AudioMotionAnalyzer(ai);
}

and my package.json

{
  "name": "neocertif",
  "private": true,
  "version": "0.0.1",
  "type": "module",
  "scripts": {
    "dev": "bunx --bun vite",
    "build": "bunx --bun vite build",
    "preview": "vite preview"
  },
  "devDependencies": {
    "vite": "^5.2.0"
  },
  "dependencies": {
    "@codemirror/lang-python": "^6.1.6",
    "audiomotion-analyzer": "^4.4.0",
    "codemirror": "^6.0.1",
    "prismjs": "^1.29.0"
  }
}
Enregistrement.2024-05-03.041714Compressed.mp4
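One hedged workaround while this is investigated: width and height are documented constructor options, and fixing the analyzer's size (or giving the <div> an explicit CSS height) removes the dependency of the canvas size on the container size. The numbers below are arbitrary examples:

```javascript
// Fixed-size analyzer options; the sizes are arbitrary examples.
const fixedSizeOptions = { width: 640, height: 270 };
// usage sketch:
// new AudioMotionAnalyzer( document.getElementById('ai'), fixedSizeOptions );
```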

[Feature Request] Linear spectrum and bins configurable

I have a feature request concerning the spectrum.

Would it be possible to make the frequency spectrum configurable between linear and logarithmic representation?
Also, would it be possible to make the bins independently configurable between linear and logarithmic representation?

Currently, I guess both are fixed to a logarithmic scale. With this, the visual effect that is possible in the abandonware MusicScope is not achievable; see for example some images here (only the frequency spectrum) and the software as a download here.

The setting could be done as in the software SonicVisualizer, see documentation and example here. The bins and the scale (visual) can be set independently.

The main idea for me would be to combine linear bins with a logarithmic frequency spectrum visualization, so that in the discrete case the bars become equally distributed without losing detail in the low frequencies; detail there would actually increase. The line and area visualizations would also become clearer in the low-frequency area. See for example a YouTube video of MusicScope here: in the video the spectrum is set to linear mode, but when you download the software you can click the text to switch to logarithmic mode. The bins seem to be linear in both cases; see the images linked above from that blog post, where the spectrum is in logarithmic mode but clearly has linear bins, as the low frequencies are high in resolution.

If you have further questions about this, please feel free to ask!

PS: Maybe with linear bins and discrete frequencies, a setting to make the bars wider than 1px would also be needed, for example to make them look similar to the octave-bands representation, or perhaps additional octave bands such as 1/48th or finer. Just some notes in case it does not look great from the start...

Thank you!

Best regards!

Audio gets stuck on mobile IOS safari

When the mobile Safari browser goes into the background, the audio stops; when it comes back to the foreground and the user hits play, the audio gets stuck and the waveform stops generating.
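A hedged workaround sketch: iOS suspends the page's AudioContext in the background, and it can stay non-running after the page returns; resuming it when the page becomes visible again may unstick playback. The helper name is mine:

```javascript
// Resume a non-running AudioContext whenever the page becomes visible again.
function resumeOnVisible( audioCtx, doc = document ) {
  doc.addEventListener( 'visibilitychange', () => {
    if ( doc.visibilityState === 'visible' && audioCtx.state !== 'running' )
      audioCtx.resume();
  } );
}

// usage sketch: resumeOnVisible( audioMotion.audioCtx );
```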

roundBars not working in radial mode

In the fluid demo I noticed roundBars does not work when in radial mode.

I selected half octave / 20 bands mode, enabled round bars (working) and then enabled radial. Then the bars are no longer rounded.

Is this by design?

good job

Dear @hvianna

Good job on this repository.

Do you think you will add:

  • VU and PPM metering:

whats the difference between PPM and VU meter

  • loudness metering:

loudness

  • correlation metering:

ppm correlation

I have a data model where I can stream all the data from an audio console :

                        {
                            "number": 3,
                            "path": "2.20.3",
                            "_subscribers": {},
                            "identifier": "Main Level",
                            "description": "Main Level",
                            "value": 480,
                            "minimum": -4096,
                            "maximum": 480,
                            "access": 1,
                            "format": "%8.2f°\ndB",
                            "enumeration": "",
                            "factor": 32,
                            "isOnline": true,
                            "default": -1024,
                            "type": 1,
                            "streamIdentifier": 281904
                        }

Do you think I could link your audioMotion-analyzer with my data model?

Best Regards,
Youssef
