
beamcoder's Issues

demuxer.read() performance for a large number of streams

Because demuxer.read() moves sequentially through the available streams, working with media that has a (relatively) large number of streams carries a performance cost: you must await read() repeatedly to cycle through to the stream index you want. For example, in a file with one video stream and eight audio streams, you will have made nine await read() calls before you reach the data on audio track 8.

Is there something we can do to short-circuit this? Perhaps a method, or an additional property on the read() call, that moves the read pointer internally? For the nine-stream example above, if we knew the offset to the stream_index we want, we could call read(9) instead. That would be equivalent to nine await demuxer.read() calls but would only need to return data for the last one, so it should be much faster on the native side if the read index pointer can simply be advanced.

Does this sound like it makes sense? Or is there something else that may be a better solution?
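For reference, the per-stream cost described above can be sketched as a small helper (the only beamcoder behaviour assumed is that demuxer.read() resolves to packets carrying a stream_index, and to null at end of file):

```javascript
// Sketch of today's workaround: await read() repeatedly, discarding packets
// from other streams, until one for the wanted stream_index arrives.
// Each discarded packet still costs a full native read() round trip.
async function readFromStream(demuxer, streamIndex) {
  let packet;
  do {
    packet = await demuxer.read();
  } while (packet && packet.stream_index !== streamIndex);
  return packet; // null at end of file
}
```

With one video and eight audio streams this can mean up to nine awaited reads per wanted packet, which is exactly the overhead a read(n) short-circuit would avoid.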

Governor used in MuxerStream doesn't allow node process to exit

I'm having an issue where sometimes my node process won't exit, even after calling process.exit.

The reason is that node won't exit while there are still outstanding async tasks in its thread pool. I tracked the outstanding task down to the readExecute call in governor.cc.

Here is a simple program to illustrate the problem:

const beamcoder = require("beamcoder");

const muxerStream = beamcoder.muxerStream({ highwaterMark: 65536 });

muxerStream.on('data', () => { console.log('data'); })

// At this point, the process should exit, but it won't because of the outstanding readExecute call in governor.cc
process.exit(0) // <- won't work

The workaround I've found is to force muxerStream to emit an "end" event by pushing null. This triggers an event handler in beamstreams.js that calls governor.finish(). There is probably a better way.

muxerStream.push(null);

This is never an issue when the muxer is allowed to run to completion. However, error cases often mean I need to abort the process early, and in that case I need a clean way to shut down the governor queue.

Muxing aac from pcm into mp4 file

I'm trying to make an mp4 from raw yuv video frames and pcm_f32le audio as input.
The video is OK, but the audio doesn't work: silence.

```javascript
let encParamsVideo = {
  name: 'h264_nvenc',
  width: 1280,
  height: 720,
  bit_rate: 20000000,
  time_base: [1, 30],
  framerate: [30, 1],
  gop_size: 10,
  max_b_frames: 0,
  pix_fmt: 'yuv420p',
};
let encParamsAudio = {
  name: 'aac',
  time_base: [1, 48000],
  sample_fmt: 'fltp',
  sample_rate: 48000,
  bit_rate: 48000,
  channel_layout: 'mono',
  channels: 1
};

let encoderVideo = await beamcoder.encoder(encParamsVideo);
let encoderAudio = await beamcoder.encoder(encParamsAudio);

encoderAudio.priv_data = { preset: 'LC' };

const muxer = beamcoder.muxer({ format_name: 'mp4' });

let vstr = muxer.newStream({
  name: 'h264_nvenc',
  time_base: [1, 90000],
  interleaved: false
});
Object.assign(vstr.codecpar, {
  width: 1280,
  height: 720,
  format: 'yuv420p',
  bit_rate: 20000000
});

let astr = muxer.newStream({
  name: 'aac',
  time_base: [1, 48000],
  interleaved: false
});
Object.assign(astr.codecpar, {
  name: 'aac',
  sample_rate: 48000,
  format: 'fltp',
  frame_size: 1024,
  channels: 1,
  channel_layout: 'mono',
  bit_rate: 48000,
  bits_per_sample: 32
});
```

Audio frame structure:

let destFrameAudio = beamcoder.frame({
  channels: 1,
  nb_samples: 1024,
  format: 'fltp',
  channel_layout: 'mono',
  sample_rate: 48000
}).alloc();

Then, if I inspect the output with ffprobe, the "profile" field is unknown and the bit rate is very high (75777075). Something is wrong.

index=1
codec_name=aac
codec_long_name=AAC (Advanced Audio Coding)
profile=unknown
codec_type=audio
codec_time_base=1/48000
codec_tag_string=mp4a
codec_tag=0x6134706d
sample_fmt=fltp
sample_rate=48000
channels=2
channel_layout=stereo
bits_per_sample=0
id=N/A
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/48000
start_pts=4464
start_time=0.093000
duration_ts=476
duration=0.009917
bit_rate=75777075
max_bit_rate=75777075
bits_per_raw_sample=N/A
nb_frames=476
nb_read_frames=N/A
nb_read_packets=N/A

What can I do?
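One thing worth checking, offered as an assumption rather than a confirmed diagnosis: with a [1, 48000] time base, each 1024-sample AAC frame must advance pts by exactly nb_samples, or the muxer's duration and bit-rate bookkeeping comes out wrong (which could explain the implausible bit_rate above). A minimal sketch:

```javascript
// With time_base = [1, sample_rate], pts is counted in samples, so the n-th
// audio frame of nbSamples samples starts at n * nbSamples.
function audioFramePts(frameIndex, nbSamples) {
  return frameIndex * nbSamples;
}
// e.g. destFrameAudio.pts = audioFramePts(i, 1024);
```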

Audio Encoder Error: Specified sample format -1 is invalid or not supported / No codec provided to avcodec_open2()

No matter what I do, when I try to encode an audio stream along with a video, I get this:

[aac @ 0x102808e00] Specified sample format -1 is invalid or not supported
[NULL @ 0x102808e00] No codec provided to avcodec_open2()

TL;DR: See the gist at https://gist.github.com/josiahbryan/5c149b97c8f350c7ff10a66a42ff8d2f - just run it and you (should) get those errors above.

I'm using the encoder/muxer params right from beamcoder. For example:

const mxs = beamcoder.muxers();
const muxerSpecs = mxs.mp4;
const mux = beamcoder.muxer({ format_name: muxerSpecs.name });

const encoders = beamcoder.encoders();
const audioEncoderSpecs = encoders[muxerSpecs.audio_codec];

const audioEncParams = {
	name:          muxerSpecs.audio_codec,
	sampleFormat:  audioEncoderSpecs.sample_fmts[0],
	channelLayout: 'stereo',
	sampleRate:    audioEncoderSpecs.supported_samplerates ? audioEncoderSpecs.supported_samplerates[0] : 44100,
	timeBase:     [1, 90000],
};

const audioEncoder = beamcoder.encoder({
	name:           audioEncParams.name,
	format:         audioEncParams.sampleFormat,
	channel_layout: audioEncParams.channelLayout,
	time_base:      audioEncParams.timeBase,
	sample_rate:    audioEncParams.sampleRate,
}); 

const audioStream = mux.newStream({
	name:        audioEncParams.name,
	time_base:   audioEncParams.timeBase,
	interleaved: false
});

Object.assign(audioStream.codecpar, { // Object.assign copies over all properties
	channels:       2,
	sample_rate:    audioEncParams.sampleRate,
	format:         audioEncParams.sampleFormat,
	channel_layout: audioEncParams.channelLayout,
	bit_rate:       audioEncParams.sampleRate * 4, //48000*4
	frame_size: 1024,
});

const audioFrame = ...;

let packets = await audioEncoder.encode(audioFrame);

As soon as the code hits the audioEncoder.encode(), the console shows:
[aac @ 0x102808e00] Specified sample format -1 is invalid or not supported (the first time); subsequent frames give the No codec provided to avcodec_open2() errors.

Self-contained small example that reproduces this problem is https://gist.github.com/josiahbryan/5c149b97c8f350c7ff10a66a42ff8d2f

Any ideas on how to fix this? Please? :)

beamcoder on mac

Hi,

I saw that the documentation mentions macOS support is not there yet. However, is there a way to install and use it manually anyway? Also is there an ETA for official macOS support? We're very interested in beamcoder's capabilities and would like to explore it.

Thanks!

Using streamDemuxer for rawvideo streams

I am trying to pipe a rawvideo stream that I get from avfoundation screencapture into a streamDemuxer:

const demuxerStream = beamcoder.demuxerStream({ highwaterMark: 65536 });

rawVideoStream.pipe(demuxerStream);

let demuxer = await demuxerStream.demuxer({
    iformat: beamcoder.demuxers()['rawvideo']
});

And i get this error:

[IMGUTILS @ 0x70000864f968] Picture size 0x0 is invalid
(node:3083) UnhandledPromiseRejectionWarning: Error: In file ../src/demux.cc on line 76, found error: Problem opening input format: Invalid argument
(node:3083) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:3083) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Here is my ffprobe result from running ffprobe -f avfoundation -i "1:":

Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 2560x1600, 1000k tbr, 1000k tbn, 1000k tbc

This is the rawvideo stream that I am piping to the demuxer

What other options should I pass to the .demuxer() function?
Also, how can I calculate what highwaterMark to use for my specific stream?
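On the highwaterMark question, a rough sizing sketch (the 2 bytes/pixel figure is specific to packed uyvy422; other raw formats differ): one raw frame from a 2560x1600 uyvy422 capture is far larger than the 65536 default, so the stream buffer should hold at least a frame or two.

```javascript
// Bytes in one packed raw video frame; uyvy422 stores 2 bytes per pixel.
function rawFrameBytes(width, height, bytesPerPixel) {
  return width * height * bytesPerPixel;
}

const frameBytes = rawFrameBytes(2560, 1600, 2); // 8192000 bytes, ~7.8 MiB
const highwaterMark = 2 * frameBytes;            // buffer roughly two frames
```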

GPU accelerated scale_npp, hw_frames context missing

Hi,

I'm trying to recreate the following ffmpeg graph in beamcoder:
./ffmpeg -loglevel verbose -y -c:v h264_cuvid -i input.ts -filter_complex "fps=30000/1001,hwupload_cuda,scale_npp=1920:1080:interp_algo=lanczos" -c:a copy -c:v h264_nvenc -f mpegts out.ts

The ffmpeg was built according to the instructions below, it works fine with ffmpeg itself.
https://devblogs.nvidia.com/nvidia-ffmpeg-transcoding-guid

But with beamcoder filter having pixfmt nv12 at input and cuda at output + filter string:
const scale_str = "hwupload_cuda,scale_npp=" + dst_width + ":" + dst_height + ':interp_algo=lanczos'

...it doesn't work:

[h264_nvenc @ 0x4ac5900] hw_frames_ctx must be set when using GPU frames as input
[h264_nvenc @ 0x4ac5900] Nvenc unloaded

Adding hwdownload and converting the pixfmt back to nv12 works, but performance degrades to a similar level as the regular CPU libav scale filter. Plain decode+encode (without scaling) works fine in beamcoder too. It seems this is not yet implemented in beamcoder?

// TODO hw_frames_ctx
// TODO hw_device_ctx
    { "hwaccel_flags", nullptr, nullptr,
      encoding ? nullptr : getCodecCtxHwAccelFlags,
      encoding ? failEncoding : setCodecCtxHwAccelFlags, nullptr,
      encoding ? napi_default : (napi_property_attributes) (napi_writable | napi_enumerable), codec},

Any hints on how to port it from ffmpeg? At minimum I would like to add support for copying the hw_frames_ctx of scale_npp and passing it to the encoder.

Thanks in advance!
JP.

-re equivalent in beamcoder

In FFmpeg I use the -re parameter to read an mp4 file in real time, extract frames, and play them in my system.
Something like this:

ffmpeg -re -i http://example.com/video.mp4 -ar 48000 -ac 1 -f f32le -c:a pcm_f32le pipe:0 -map 0:v:0 -pix_fmt rgb0 -f rawvideo -vcodec rawvideo -r 30 pipe:1

How can I do the same with beamcoder? Without this parameter, libav reads the file as fast as it can, and I can't store the converted rawvideo in a buffer because it's too big. I need to read frame by frame in real time. Is that possible?
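There is no single flag for this in the library as far as the thread shows; a common approach (a sketch, under the assumption that frames expose pts and the stream exposes time_base as [num, den]) is to pace the read loop yourself:

```javascript
// Milliseconds to wait before releasing a frame, so that delivery matches
// the media clock: pts * time_base seconds after the wall-clock start.
function msUntilDue(pts, timeBase, startMs, nowMs) {
  const mediaMs = (pts * timeBase[0] * 1000) / timeBase[1];
  return Math.max(0, startMs + mediaMs - nowMs);
}

// In a demuxer loop (beamcoder calls illustrative, not verified):
//   const wait = msUntilDue(frame.pts, stream.time_base, startMs, Date.now());
//   if (wait > 0) await new Promise(res => setTimeout(res, wait));
```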

help me about rgb frame.data

Hi, I used this code to make a movie of red frames, but it's not working:

const beamcoder = require("beamcoder");
let fs = require("fs");

(async function(){
	let outFile = fs.createWriteStream("./out3.mp4");
	let encParams = {
		name:         "libx264rgb",
		width:        400,
		height:       400,
		bit_rate:     2000000,
		time_base:    [1, 25],
		framerate:    [25, 1],
		gop_size:     10,
		max_b_frames: 1,
		pix_fmt:      "rgb24",
		priv_data:    {preset: "slow"}
	};
	let endcode = Buffer.from([0, 0, 1, 0xb7]);
	let encoder = await beamcoder.encoder(encParams);
	for(let i = 0; i < 30; i++){
		let frame = beamcoder.frame({
			width:  encParams.width,
			height: encParams.height,
			format: encParams.pix_fmt
		}).alloc();
		let ydata = frame.data;
		frame.pts = i;
		for(let y = 0; y < frame.height; y++){
			for(let x = 0; x < frame.width; x++){
				let offset = 3 * (x + (y * frame.width));
				ydata[offset + 0] = 255;
				ydata[offset + 1] = 0;
				ydata[offset + 2] = 0;
			}
		}
		let packets = await encoder.encode(frame);
		if(i % 10 === 0) console.log("Encoding frame", i);
		packets.packets.forEach(x => outFile.write(x.data));
	}
	let p2 = await encoder.flush();
	console.log("Flushing", p2.packets.length, "frames.");
	p2.packets.forEach(x => outFile.write(x.data));
	outFile.end(endcode);
})();
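A likely cause, my assumption from how FFmpeg lays frames out rather than a confirmed beamcoder diagnosis: frame.data is an array of plane Buffers, so the pixel bytes live in frame.data[0], and each row may be padded to frame.linesize[0] bytes rather than exactly 3 * width. A fill routine that honours both:

```javascript
// Fill an interleaved RGB plane with solid red, stepping rows by linesize
// (which may include per-row padding beyond 3 * width).
function fillRed(plane, width, height, linesize) {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const off = y * linesize + 3 * x;
      plane[off] = 255;   // R
      plane[off + 1] = 0; // G
      plane[off + 2] = 0; // B
    }
  }
}
// e.g. fillRed(frame.data[0], frame.width, frame.height, frame.linesize[0]);
```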

SigSegv upon calling encode()

Hi, I'm experimenting with this library trying to understand it better and decided to try and process Macadam frames:

const macadam = require('macadam')
const beamcoder = require('beamcoder')

console.log('Starting sdi-http')
console.log('Decklink API version:', macadam.deckLinkVersion())

macadam.capture({
    deviceInfo: 0, // Index relative to the 'macadam.getDeviceInfo()' array
    displayMode: macadam.bmdModeHD720p50,
    pixelFormat: macadam.bmdFormat8BitYUV
}).then(async capture => {
    console.log(capture)
    const f = await capture.frame()
    capture.stop()
    console.log(f)

    let enc = await beamcoder.encoder({ // Create an encoder for JPEG data
        name : 'mjpeg', // FFmpeg does not have an encoder called 'jpeg'
        width : f.video.width,
        height: f.video.height,
        pix_fmt: 'yuvj422p',
        time_base: [1, 1] })

    let frame = await beamcoder.frame({ // Create a frame for the encoder
        width : f.video.width,
        height: f.video.height,
        format: 'uyvy422', // format received from macadam
        pts: 0
    }).alloc()

    f.video.data.copy(frame.data[0]) // copy data from macadam
    let jpg = await enc.encode(frame)
    await enc.flush()

    console.log(jpg) // eventually do something useful here
})

However, when I call the enc.encode I get the following SIGSEGV error:

PID 26452 received SIGSEGV for address: 0x2da744da
SymInit: Symbol-SearchPath: '.;C:\Users\balte\Documents\devProjects\sdi-http;C:\Program Files\nodejs;C:\WINDOWS;C:\WINDOWS\system32;SRV*C:\websymbols*http://msdl.microsoft.com/download/symbols;', symOptions: 530, UserName: 'balte'
OS-Version: 10.0.17134 () 0x100-0x1
c:\users\balte\documents\devprojects\sdi-http\node_modules\segfault-handler\src\stackwalker.cpp (941): StackWalker::ShowCallstack
c:\users\balte\documents\devprojects\sdi-http\node_modules\segfault-handler\src\segfault-handler.cpp (235): segfault_handler
00007FFF2E6978D8 (ntdll): (filename not available): RtlInitializeCriticalSection
00007FFF2E63D4FA (ntdll): (filename not available): RtlWalkFrameChain
00007FFF2E6CE70E (ntdll): (filename not available): KiUserExceptionDispatcher
00007FFF2DA744DA (msvcrt): (filename not available): memmove
00007FFECC9F0426 (avcodec-58): (filename not available): avpriv_mpegaudio_decode_header
00007FFECC70C189 (avcodec-58): (filename not available): avcodec_encode_video2
00007FFECC70C506 (avcodec-58): (filename not available): avcodec_encode_video2
00007FFECC70C643 (avcodec-58): (filename not available): avcodec_send_frame
c:\users\balte\documents\devprojects\sdi-http\node_modules\beamcoder\src\encode.cc (162): encodeExecute
00007FF717FF54DF (node): (filename not available): uv_timer_set_repeat
00007FF717FE4058 (node): (filename not available): uv_once
00007FF718A03900 (node): (filename not available): v8::internal::compiler::OperationTyper::ToBoolean
00007FFF2BC83DC4 (KERNEL32): (filename not available): BaseThreadInitThunk
00007FFF2E6A3691 (ntdll): (filename not available): RtlUserThreadStart

It's a bit above my head to debug this, so hopefully someone is able to give me some pointers on how to make this work.

Cannot download dev headers of FFmpeg on Windows

The install_ffmpeg.js script downloads https://ffmpeg.zeranoe.com/builds/win64/dev/ffmpeg-4.2.1-win64-dev.zip on Windows systems. This zip file no longer exists on the server (it returns a status code of 404).

The website also says it will stop operating on Sep. 18. Maybe we need to find a way to host the binary file ourselves.

node SIGABRT when calling demuxer.read without await

Using:

demuxer.read().then();
demuxer.read().then();

The Node.js process finishes with Process finished with exit code 134 (interrupted by signal 6: SIGABRT).

Using await, I cannot read frames from a demuxerStream fast enough, so I wanted to stack reads using multiple promises.
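Until the underlying cause is fixed, overlapping reads can be avoided without awaiting between calls by chaining them on a promise queue (pure JavaScript sketch; the only beamcoder call assumed is demuxer.read()):

```javascript
// Serialise read() calls: each new call starts only after the previous one
// settles, so only one native read is ever in flight, yet callers may still
// queue several requests without awaiting each other.
function makeSerialReader(demuxer) {
  let chain = Promise.resolve();
  return () => {
    const result = chain.then(() => demuxer.read());
    chain = result.catch(() => {}); // keep the queue alive after an error
    return result;
  };
}
```

Usage: `const read = makeSerialReader(demuxer); read().then(...); read().then(...);` queues two reads back-to-back without ever running them concurrently.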

Is it possible to compile beamcoder with dynamically linked FFmpeg?

Is it possible to dynamically link FFmpeg? The main reason is to avoid any LGPL issues (so the user can easily replace the dll/dylib files, as the LGPL requires).

Also, is there any possibility that this project could be released not under the GPL but under the LGPL?

Thanks!

Can't pass hwaccel flag, cuda overlay filter broken

Hi,

I'm experimenting with beamcoder and cuda. I previously got GPU scaling and nvenc up and running (#43), but no luck so far with the overlay_cuda filter. The working ffmpeg overlay command (as described in this ffmpeg patch):

ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -i v.ts -i o.png -filter_complex "[1:v]format=yuva420p,hwupload_cuda[overlay],[0:v]scale_npp=format=yuv420p[video],[video][overlay]overlay_cuda" -an -c:v h264_nvenc /data/out.mp4

I haven't found a way to pass the -hwaccel cuvid option to beamcoder, but assumed that hwuploading the nv-decoded frames would work:

[1:v]format=yuva420p,hwupload_cuda[overlay],[0:v]hwupload_cuda,scale_npp=format=yuv420p[video]

Unfortunately this breaks the alpha blending of the overlay, which is perfectly fine with the global hwaccel flag. No luck with format=yuv420p,hwupload_cuda either. I would be super grateful for any hints on how to get this done with beamcoder.

thanks in advance,
Janek

Beamcoder resampling filter crashing from MP3 decoded frames, works fine with FLAC decoded frames

I am making a universal audio converter using Beamcoder, and I have successfully managed to set up the audio decoder. The idea is also to take these decoded frames and run them through a resampler before encoding them in the final step (not shown here).

I am having trouble with Beamcoder crashing with some cryptic errors if the decoded frames come from an MP3 file. Frames from a FLAC file don't give me any issues.

Here's the code:

const fs = require("fs");
const beamcoder = require("beamcoder");

let filterer = null;
let outSampleRate = 48000;
let outFormat = "s16";
let bytesPerSample = 2;
let outFile = fs.createWriteStream("resampled.raw");

decodeAudioFile("tri.mp3", async (metadata) => {
    filterer = await beamcoder.filterer({
        filterType: 'audio',
        inputParams: [
          {
            sampleRate: metadata.sampleRate,
            sampleFormat: outFormat,
            channelLayout: metadata.channelLayout,
            timeBase: metadata.timeBase
          }
        ],
        outputParams: [
          {
            sampleRate: outSampleRate,
            sampleFormat: outFormat,
            channelLayout: metadata.channelLayout
          }
        ],
        filterSpec: `aresample=isr=${metadata.sampleRate}:osr=${outSampleRate}:async=1, aformat=sample_fmts=${outFormat}:channel_layouts=${metadata.channelLayout}`
      });
}, async (frameData) => {
    let filteredData = await filterer.filter([frameData]);
    for(var i in filteredData) {
        for(var j in filteredData[i].frames) {
            for(var k in filteredData[i].frames[j].data) {
                let rawData = filteredData[i].frames[j].data[k];
                let cutData = rawData.slice(0, filteredData[i].frames[j].nb_samples*bytesPerSample*filteredData[i].frames[j].channels);
                outFile.write(cutData);
            }
        }
    }   
});


  async function decodeAudioFile(filePath, onInputMetadata, onFrame) {
    let data = {};
    try {
        let demuxer = await beamcoder.demuxer(filePath);
        let decoder = beamcoder.decoder({ demuxer, stream_index: 0, request_sample_fmt: outFormat });
        await onInputMetadata({
            sampleRate: decoder.sample_rate,
            channels: decoder.channels,
            timeBase: decoder.time_base,
            sampleFormat: decoder.sample_fmt,
            channelLayout: decoder.channel_layout
        });
        let packet;
        try {
            packet = await demuxer.read();
        }
        catch(err) {
            console.error(err);
        }
        while(packet != null) {
            try {
                let dec_result = await decoder.decode(packet);
                for(var i in dec_result.frames) {
                    for(var j in dec_result.frames[i].data) {
                        let data = dec_result.frames[i];
                        onFrame(data);
                    }
                }
            }
            catch(err) {
                console.error(err);
            }
            packet = await demuxer.read();
        }
        let flushed = await decoder.flush();
        for(var i in flushed.frames) {
            onFrame(flushed.frames[i]);
        }
    }
    catch(err) {
        data.err = err;
    }
    return data;
}

The decodeAudioFile function works properly: it decodes the audio from MP3 or FLAC into PCM frames. When I put these PCM frames through the resampler, FLAC works fine but MP3 fails with the following console output:

Aerostat Beam Coder  Copyright (C) 2019  Streampunk Media Ltd
GPL v3.0 or later license. This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. Conditions and warranty at:
https://github.com/Streampunk/beamcoder/blob/master/LICENSE
[mp3 @ 0000028017c20900] Could not update timestamps for skipped samples.
PID 18832 received SIGSEGV for address: 0xc34949e6
PID 18832 received SIGSEGV for address: 0xc46b9269
PID 18832 received SIGSEGV for address: 0xc349491a
SymInit: Symbol-SearchPath: '.;C:\Dev\audio-conv;C:\Program Files\nodejs;C:\WINDOWS;C:\WINDOWS\system32;SRV*C:\websymbols*http://msdl.microsoft.com/download/symbols;', symOptions: 530, UserName: 'Ermir Desktop'
SymInit: Symbol-SearchPath: '.;C:\Dev\audio-conv;C:\Program Files\nodejs;C:\WINDOWS;C:\WINDOWS\system32;SRV*C:\websymbols*http://msdl.microsoft.com/download/symbols;', symOptions: 530, UserName: 'Ermir Desktop'
SymInit: Symbol-SearchPath: '.;C:\Dev\audio-conv;C:\Program Files\nodejs;C:\WINDOWS;C:\WINDOWS\system32;SRV*C:\websymbols*http://msdl.microsoft.com/download/symbols;', symOptions: 530, UserName: 'Ermir Desktop'
OS-Version: 10.0.18362 () 0x100-0x1
OS-Version: 10.0.18362 () 0x100-0x1
OS-Version: 10.0.18362 () 0x100-0x1
c:\dev\audio-conv\node_modules\segfault-handler\src\stackwalker.cpp (924): StackWalker::ShowCallstack
c:\dev\audio-conv\node_modules\segfault-handler\src\stackwalker.cpp (924): StackWalker::ShowCallstack
c:\dev\audio-conv\node_modules\segfault-handler\src\stackwalker.cpp (924): StackWalker::ShowCallstack
c:\dev\audio-conv\node_modules\segfault-handler\src\segfault-handler.cpp (242): segfault_handler
c:\dev\audio-conv\node_modules\segfault-handler\src\segfault-handler.cpp (242): segfault_handler
c:\dev\audio-conv\node_modules\segfault-handler\src\segfault-handler.cpp (242): segfault_handler
00007FFBC4638636 (ntdll): (filename not available): RtlIsGenericTableEmpty
00007FFBC4638636 (ntdll): (filename not available): RtlIsGenericTableEmpty
00007FFBC4638636 (ntdll): (filename not available): RtlIsGenericTableEmpty
00007FFBC462A0D6 (ntdll): (filename not available): RtlRaiseException
00007FFBC462A0D6 (ntdll): (filename not available): RtlRaiseException
00007FFBC462A0D6 (ntdll): (filename not available): RtlRaiseException
00007FFBC465FE6E (ntdll): (filename not available): KiUserExceptionDispatcher
00007FFBC465FE6E (ntdll): (filename not available): KiUserExceptionDispatcher
00007FFBC462A043 (ntdll): (filename not available): RtlRaiseException
00007FFBC349491A (msvcrt): (filename not available): memcpy
00007FFBC34949E6 (msvcrt): (filename not available): memcpy
00007FFBC46B9269 (ntdll): (filename not available): RtlIsNonEmptyDirectoryReparsePointAllowed
00007FFBA03493A1 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBA03493A1 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBC46B9233 (ntdll): (filename not available): RtlIsNonEmptyDirectoryReparsePointAllowed
00007FFBA03496C9 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBA03496C9 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBC46C1622 (ntdll): (filename not available): RtlpNtMakeTemporaryKey
00007FFBA0349FD1 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBA0349FD1 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBC46C192A (ntdll): (filename not available): RtlpNtMakeTemporaryKey
00007FFBA034AA75 (swresample-3): (filename not available): swr_convert
00007FFBA034AA75 (swresample-3): (filename not available): swr_convert
00007FFBC46CA8E9 (ntdll): (filename not available): RtlpNtMakeTemporaryKey
00007FFBA034B105 (swresample-3): (filename not available): swr_convert
00007FFBA034B105 (swresample-3): (filename not available): swr_convert
00007FFBC46664AD (ntdll): (filename not available): memset
00007FFBA034AFF7 (swresample-3): (filename not available): swr_convert
00007FFBA034AFF7 (swresample-3): (filename not available): swr_convert
00007FFBC4606139 (ntdll): (filename not available): RtlReAllocateHeap
00007FFBA034B59C (swresample-3): (filename not available): swr_next_pts
00007FFBA034B59C (swresample-3): (filename not available): swr_next_pts
00007FFBC460533B (ntdll): (filename not available): RtlReAllocateHeap
00007FFB40571C08 (avfilter-7): (filename not available): (function-name not available)
00007FFB40571C08 (avfilter-7): (filename not available): (function-name not available)
00007FFBC4604C99 (ntdll): (filename not available): RtlReAllocateHeap
00007FFB405C25DD (avfilter-7): (filename not available): avfilter_pad_get_type
00007FFB405C25DD (avfilter-7): (filename not available): avfilter_pad_get_type
00007FFBC4601EB3 (ntdll): (filename not available): RtlGetCurrentServiceSessionId
00007FFB405C67B1 (avfilter-7): (filename not available): avfilter_graph_request_oldest
00007FFB405C67B1 (avfilter-7): (filename not available): avfilter_graph_request_oldest
00007FFBC4600810 (ntdll): (filename not available): RtlGetCurrentServiceSessionId
c:\dev\audio-conv\node_modules\beamcoder\src\filter.cc (1518): filterExecute
c:\dev\audio-conv\node_modules\beamcoder\src\filter.cc (1518): filterExecute
00007FFBC45FFC11 (ntdll): (filename not available): RtlFreeHeap
00007FF76BE9DF1E (node): (filename not available): uv_queue_work
00007FF76BE9DF1E (node): (filename not available): uv_queue_work
00007FFBC3439CFC (msvcrt): (filename not available): free
00007FF76BE8B71D (node): (filename not available): uv_poll_stop
00007FF76BE8B71D (node): (filename not available): uv_poll_stop
00007FFBC34393D6 (msvcrt): (filename not available): aligned_free
00007FF76CB0FD90 (node): (filename not available): v8::internal::SetupIsolateDelegate::SetupHeap
00007FF76CB0FD90 (node): (filename not available): v8::internal::SetupIsolateDelegate::SetupHeap
00007FFBA03493CA (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBC2B57BD4 (KERNEL32): (filename not available): BaseThreadInitThunk
00007FFBC2B57BD4 (KERNEL32): (filename not available): BaseThreadInitThunk
00007FFBA03496C9 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBC462CED1 (ntdll): (filename not available): RtlUserThreadStart
00007FFBC462CED1 (ntdll): (filename not available): RtlUserThreadStart
00007FFBA0349FD1 (swresample-3): (filename not available): swr_alloc_set_opts
00007FFBA034AA75 (swresample-3): (filename not available): swr_convert
00007FFBA034B105 (swresample-3): (filename not available): swr_convert
00007FFBA034AFF7 (swresample-3): (filename not available): swr_convert
00007FFBA034B59C (swresample-3): (filename not available): swr_next_pts
00007FFB40571C08 (avfilter-7): (filename not available): (function-name not

The output gets cut off mid-line at the end; I made no mistake when copying.
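One plausible cause, offered as an assumption rather than a diagnosis: the filterer's inputParams above declare the input sampleFormat as outFormat ("s16"), but FFmpeg's MP3 decoder emits planar float frames, whereas FLAC decodes to "s16" and so happens to match. Describing the input with the decoder's actual format avoids the mismatch:

```javascript
// Build filterer inputParams from what the decoder actually reports, rather
// than from the desired output format (hypothetical helper).
function inputParamsFromDecoder(metadata) {
  return [{
    sampleRate: metadata.sampleRate,
    sampleFormat: metadata.sampleFormat, // the decoder's real sample_fmt
    channelLayout: metadata.channelLayout,
    timeBase: metadata.timeBase
  }];
}
```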

Round-tripping MPEG-TS not working.

Before I do anything more complicated, I'm trying to just demux and decode an MPEG-TS file, and then re-encode and remux to get an equivalent file back out, using fs.createReadStream() for input and a PassThrough stream for output. (The end goal is to read from a network socket and write to another socket, with no file system involvement.)

My test code is here: https://gist.github.com/gliese1337/3e123764fec1e8792094e082a258d798

It creates a demuxer and a remuxer, creates an output stream for each input stream, then just loops over packets, decodes them, re-encodes them, and writes them back out to the remuxer.

Rather than actually producing an output file, though, I just get the following console output:

{ INIT_IN: 'INIT_OUTPUT' }
{ INIT_IN: 'INIT_OUTPUT' }
[aac @ 0000025ed8a2d300] The encoder timebase is not set.
[NULL @ 0000025ed8a2d300] No codec provided to avcodec_open2()
[NULL @ 0000025ed8a2d300] No codec provided to avcodec_open2()
[NULL @ 0000025ed8a2d300] No codec provided to avcodec_open2()
[libx264 @ 0000025ed8838100] The encoder timebase is not set.
[NULL @ 0000025ed8a2d300] No codec provided to avcodec_open2()
[NULL @ 0000025ed8838100] No codec provided to avcodec_open2()
{ a few hundred similar lines }
[NULL @ 0000025ed8838100] No codec provided to avcodec_open2()
[NULL @ 0000025ed8838100] No codec provided to avcodec_open2()
Flushing
[NULL @ 0000025ed8838100] No codec provided to avcodec_open2()
[NULL @ 0000025ed8838100] No codec provided to avcodec_open2()

I have checked that the input streams do have a timebase set.

Manually setting the output stream parameters doesn't fix it.

Any idea what I'm doing wrong?
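"The encoder timebase is not set" usually means an encoder was constructed without a time_base; a sketch of params that copy it (and the geometry) from the demuxed stream, where the helper name and the 'libx264' choice are illustrative and not taken from the gist:

```javascript
// Hypothetical encoder params derived from an input stream: the point is
// that name and time_base must both be set, otherwise avcodec_open2 is
// invoked without a real codec, producing the "No codec provided" errors.
function videoEncoderParams(stream) {
  return {
    name: 'libx264',                // an explicit encoder name
    width: stream.codecpar.width,
    height: stream.codecpar.height,
    pix_fmt: 'yuv420p',
    time_base: stream.time_base    // copied from the input stream
  };
}
```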

Audio/Video encoding example

I have raw frames in yuv420p and audio chunks in PCM, and I need to generate an mp4 file from this raw video with audio.

The examples folder contains encoding just for video and muxing just for audio.

Can someone share an example that mixes audio and video, from raw frames and chunks, to generate an mp4 file?

Thank you

watermark

Hi All!

I want to add a watermark (really, another type of processing) on each frame and encode to the same codec and container.

Is it possible with beamcoder? Maybe somebody have an example?

Thanks!

Encode as RTP stream

What would be the equivalent code for the following command? We plan to use beamcoder in a WebRTC project if possible.

ffmpeg \
  -re \
  -v info \
  -i video.mp4 \
  -map 0:v:0 \
  -pix_fmt yuv420p -c:v libvpx -b:v 1000k -deadline realtime -cpu-used 4 \
  -f tee \
  "[select=v:f=rtp:ssrc=2222:payload_type=101]rtp://192.168.1.100:10090"

How to set movflags?

Hi, I'm trying to stream via RTMP to social networks.
Everything works with Facebook and node-media-stream, but I have trouble with YouTube and Twitch.

I tested with classic ffmpeg and saw that it works well if I pass -movflags faststart; if I omit the movflags parameter it doesn't.

So I tried to set movflags with this call:
await muxer.initOutput({ movflags: 'faststart' })

but I'm not sure the syntax is correct. Nothing changes for the YouTube stream, and I don't know how to check whether this flag is actually set.

Anyone can help me?

init error on Debian

node: symbol lookup error: /root/test/node_modules/beamcoder/build/Release/beamcoder.node: undefined symbol: av_codec_iterate

ffmpeg version 4.2.1
Debian 9.0

Only a black screen appears in Safari / QuickTime when playing generated videos

I've tried to use makeStreams(params) to generate an mp4 video.
The resulting video plays in Chrome / Firefox, but in Safari / QuickTime only a black screen appears.

Then I tried running examples/make_mp4.js. The generated test.mp4 behaves the same (black screen when playing in Safari / QuickTime).

Play in Safari:
Screenshot 2019-07-12 at 12 49 32
Play in Chrome:
Screenshot 2019-07-12 at 12 55 47

typescript definitions

Quick googling didn't turn up anything.

are there typescript definitions of some sort?

If not, are there any plans to add them?

crash with "pointer being freed was not allocated"

When using demuxStream, node crashes after reading some packets.

node(37174,0x700007087000) malloc: *** error for object 0x10ac00000: pointer being freed was not allocated
node(37174,0x700007087000) malloc: *** set a breakpoint in malloc_error_break to debug

Beamcoder: 0.3.1
Node.js: 12.4.0
Happens on macOS 10.14.5 and Ubuntu 18.04.1 LTS

Sample code:

const fs = require('fs')
const beamcoder = require('beamcoder')

;(async function() {
    const rs = fs.createReadStream(/* a local media file  */)
    const demuxStream = await beamcoder.demuxerStream({ highwaterMark: 8*1024*1024 })
    rs.pipe(demuxStream)

    const demux = await demuxStream.demuxer()
    let packet = null
    while (packet = await demux.read()) {
        console.log('p')
    }
})()

Maybe it's related to #8?
Maybe it was introduced by ceae1eb; this issue does not occur in 0.2.0.

Mp4 muxer broken - missing h264 extradata (AVCC)

I'm struggling to get the mp4 muxer output working on the Windows 10 default video stack, and thus in the Adobe Premiere project bin (on third-party players like VLC or Chrome it's playable). The video stream is not recognized at all: no metadata, nothing. Audio is fine.

The muxer is set up similarly as in the mp4 example:

        let vstr = this.mux.newStream({
            name: 'h264',
            encoderName: 'libx264',
            time_base: [1, 90000],
            interleaved: true,
            sample_aspect_ratio: [1, 1],
        });

        Object.assign(vstr.codecpar, {
            width: this.clip.dst_probe.width,
            height: this.clip.dst_probe.height,
            format: 'yuv420p',
            color_space: 'bt709',
            sample_aspect_ratio: [1, 1]
        });

MP4Box -info output, beamcoder muxer:

[iso file] Box "avcC" size 8 invalid (read 17
(...)
Track # 1 Info - TrackID 1 - TimeScale 90000 - Media Duration 00:00:00.840
Track has 2 edit lists: track duration is 00:00:00.920
Media Info: Language "und (und)" - Type "vide:avc1" - 21 samples
Visual Track layout: x=0 y=0 width=1920 height=1080
MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
AVC/H264 Video - Visual Size 1920 x 1080
        AVC Info: 1 SPS - 0 PPS - Profile Unknown @ Level 1.6
        NAL Unit length bits: 8
[AVC/HEVC] Not enough bits in bitstream !!
[AVC/HEVC] Not enough bits in bitstream !!
[AVC/HEVC] Not enough bits in bitstream !!
[AVC/HEVC] Not enough bits in bitstream !!
[AVC/HEVC] Not enough bits in bitstream !!
[AVC/HEVC] Not enough bits in bitstream !!
[AVC/HEVC] Not enough bits in bitstream !!
        SPS#1 hash: 2C6E8EB7626F90A283FDE68C6FBE67EE19A48284
Self-synchronized
        RFC6381 Codec Parameters: avc1.000010
        Average GOP length: 21 samples

I've finally figured out that the problem is the missing h264 extradata (AVCC) in stream->codecpar. The source of the video is MPEG-TS, which does not provide it (null in the js object). I guess ffmpeg automatically adds a proper bitstream filter while transmuxing ts->mp4 (?).

Any idea how to achieve behaviour similar to ffmpeg with beamcoder? I've tried generating the AVCC bytes on my own, but got it a bit wrong, ending up with some missing frames in the video. It would be easier to get it done with libav itself.
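In case it helps anyone hitting the same thing: since AVCodecParameters are exposed on the stream, one possible workaround is to copy the extradata that a decoder builds once it has parsed SPS/PPS from the first packets. This is an unverified sketch (I haven't confirmed that beamcoder's decoder exposes extradata for every input; `vstr` and `firstPacket` are names for your output stream and first video packet):

```javascript
const beamcoder = require('beamcoder');

// Unverified sketch: after decoding the first packet the decoder should have
// parsed SPS/PPS; copying its extradata into the output stream's codecpar
// before writeHeader may let the mp4 muxer emit a valid avcC box.
async function copyAvccExtradata(demuxer, vstr, firstPacket) {
  const decoder = beamcoder.decoder({
    demuxer,
    stream_index: firstPacket.stream_index
  });
  await decoder.decode(firstPacket);
  if (decoder.extradata) {
    vstr.codecpar.extradata = decoder.extradata;
  }
}
```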

How to close an SDP/RTP demuxers early

I'm using beamcoder to save an RTP-OPUS stream into an OGG-OPUS file.

I have an SDP/RTP demuxer that is feeding into an OGG-OPUS muxer. It's using a very simple demuxer.read / muxer.writeFrame loop. It's all working fine, but I'm running into problems shutting it down when I'm done saving.

My first approach was to call demuxer.forceClose to try and unblock the unresolved demuxer.read promise. This, however, is generating a segmentation fault. I suspect it is related to this issue: #53 If you'd like, I can provide more debugging information, but it's probably redundant.

My next idea is to try and set the interrupt_callback in the AVFormatContext struct. See https://github.com/FFmpeg/FFmpeg/blob/3da35b7cc7e2f80ca4af89e55fef9a7bcb19c128/libavformat/avformat.h#L1503

This would require patching beamcoder. I think it would probably go before avformat_open_input is called. Maybe around here:

if ((ret = avformat_open_input(&c->format, c->filename, c->iformat, &c->options))) {

Could you let me know if I'm on the right path? Is there a better way to close an RTP demuxer early?

Thanks for your help! beamcoder has been a lifesaver!

make ogg

Hi!

I have opus packets stream. I want to save it to ogg container without reencode.
I know original sample rate (16000), channels number (1), frame size (60ms), frames in packet (2) and size in bytes of course(variable length).

My prototype code is

const sampleRate = 16000;
const channels = 1;
const frameSize = 60;
const framesInPacket = 2;

const muxer = beamcoder.muxer({ filename: 'output.ogg' });
const stream = muxer.newStream({ ... }); // how?
Object.assign(stream.codecpar, { ... }); // what?

await muxer.openIO();
await muxer.writeHeader({ ... }); // how?

while (...) {
    const data = Buffer.from([...]); // really got it from network
    const bytes = data.length;

    const frame = beamcoder.frame({ ... }); // how to put data, pts, dts, etc. properly?
    await muxer.writeFrame(frame);
}

await muxer.writeTrailer();

Please help me fill gaps properly.

P.S.: your project is cool
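Not an authoritative answer, but filling the gaps roughly the way the muxing examples do might look like the sketch below. It is unverified, and note two assumptions: opus timestamps in ogg run at 48 kHz regardless of the 16 kHz capture rate, and ffmpeg's ogg muxer needs an OpusHead blob in codecpar.extradata, which this sketch assumes you can supply (e.g. captured from the sending side):

```javascript
const beamcoder = require('beamcoder');

// Unverified sketch: remux raw opus packets into an ogg file, no re-encode.
// getNextChunk is a stand-in for your network source: () => Buffer | null.
async function muxOpusToOgg(getNextChunk) {
  const tickRate = 48000;                 // opus granule rate in ogg
  const samplesPerPacket = 2 * 60 * 48;   // 2 frames x 60 ms x 48 ticks/ms

  const muxer = beamcoder.muxer({ filename: 'file:output.ogg' });
  const stream = muxer.newStream({ name: 'opus', time_base: [1, tickRate] });
  Object.assign(stream.codecpar, {
    sample_rate: tickRate,
    channels: 1,
    channel_layout: 'mono'
    // extradata: <OpusHead bytes>  // required by the ogg muxer
  });

  await muxer.openIO();
  await muxer.writeHeader();

  let pts = 0;
  let data;
  while ((data = getNextChunk()) !== null) {
    const packet = beamcoder.packet({
      pts, dts: pts,
      duration: samplesPerPacket,
      stream_index: stream.index,
      data
    });
    await muxer.writeFrame(packet); // packets pass straight through
    pts += samplesPerPacket;
  }
  await muxer.writeTrailer();
}
```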

How to get log messages in a string to parse

I need to constantly check that everything is ok, track conversion progress and other things.

How can I get the log output from libav with beamcoder, then parse it and act on it?

"movflags" for segmented mp4 support

At the moment I'm trying to build a setup where beamcoder can be used to dynamically create portions of a fragmented mp4 stream for HLS and DASH playback. It seems I have encountered a potential non-starter, however: in order to correctly format the segmented mp4 container, movflags needs to be set to +frag_keyframe+empty_moov+default_base_moof.

It seems that at the moment, trying to set movflags in beamcoder returns unmapped type: flags on the muxer object. Is this not currently supported, or is there a trick to setting the flags on the underlying muxer instance?

flv and rtmp

Has anyone tried muxing to flv and streaming over rtmp (as for Facebook & YouTube)?
Any example?
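No full example to hand, but since ffmpeg's flv muxer can write straight to an rtmp URL, creating the muxer by format name with an rtmp filename may be all the container-level setup that's needed. Unverified sketch (the URL is a placeholder):

```javascript
const beamcoder = require('beamcoder');

// Unverified sketch: ffmpeg's flv muxer writes directly to rtmp:// URLs,
// so in principle the muxer can be pointed at the ingest endpoint.
function makeRtmpMuxer(url /* e.g. 'rtmp://live.example.com/app/streamkey' */) {
  const muxer = beamcoder.muxer({ format_name: 'flv', filename: url });
  // then: muxer.newStream(...) with h264/aac codecpar,
  // await muxer.openIO(); await muxer.writeHeader(); write packets ...
  return muxer;
}
```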

macOS build failed, due to didn't link to ffmpeg installed by homebrew

os: macOS 10.15.3

error message

CXX(target) Release/obj.target/beamcoder/src/beamcoder.o
In file included from ../src/beamcoder.cc:23:
../src/beamcoder_util.h:33:12: fatal error: 'libavutil/error.h' file not found
  #include <libavutil/error.h>
           ^~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [Release/obj.target/beamcoder/src/beamcoder.o] Error 1
gyp ERR! build error 

or

ld: library not found for -lavcodec
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Release/beamcoder.node] Error 1
gyp ERR! build error 

After adding include_dirs and library_dirs for 'OS=="mac"' to binding.gyp, I can build successfully.

{
  "targets": [{
    "target_name" : "beamcoder",
    "sources" : [ "src/beamcoder.cc", "src/beamcoder_util.cc",
                  "src/governor.cc", "src/demux.cc",
                  "src/decode.cc", "src/filter.cc",
                  "src/encode.cc", "src/mux.cc",
                  "src/packet.cc", "src/frame.cc",
                  "src/codec_par.cc", "src/format.cc",
                  "src/codec.cc" ],
    "conditions": [
+      ['OS=="mac"', {
+        "include_dirs" : [
+          "/usr/local/Cellar/ffmpeg/4.2.2/include"
+        ],
+        "library_dirs": [
+          "/usr/local/Cellar/ffmpeg/4.2.2/lib",
+        ]
+      }],
      ['OS!="win"', {
        "defines": [
          "__STDC_CONSTANT_MACROS"
        ],
        "cflags_cc!": [
          "-fno-rtti",
          "-fno-exceptions"
        ],
        "cflags_cc": [
          "-std=c++11",
          "-fexceptions"
        ],
        "link_settings": {
          "libraries": [
            "-lavcodec",
            "-lavdevice",
            "-lavfilter",
            "-lavformat",
            "-lavutil",
            "-lpostproc",
            "-lswresample",
            "-lswscale"
          ]
        }
      }],
      ['OS=="win"', {
        "configurations": {
          "Release": {
            "msvs_settings": {
              "VCCLCompilerTool": {
                "RuntimeTypeInfo": "true"
              }
            }
          }
        },
        "include_dirs" : [
          "ffmpeg/ffmpeg-4.2.1-win64-dev/include"
        ],
        "libraries": [
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/avcodec",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/avdevice",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/avfilter",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/avformat",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/avutil",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/postproc",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/swresample",
          "-l../ffmpeg/ffmpeg-4.2.1-win64-dev/lib/swscale"
        ],
        "copies": [
            {
              "destination": "build/Release/",
              "files": [
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/avcodec-58.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/avdevice-58.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/avfilter-7.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/avformat-58.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/avutil-56.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/postproc-55.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/swresample-3.dll",
                "ffmpeg/ffmpeg-4.2.1-win64-shared/bin/swscale-5.dll"
              ]
            }
          ]
    }]
  ]
}]
}

Same issue may also happen in Linux. #21 Add include dirs on linux

muxer.writeFrame not working with Audio Frames. Only working with Audio Packets

I have been trying to get the muxer.writeFrame function to accept uncompressed audio Frames instead of compressed audio Packets.

Performing await muxer.writeFrame(packet); works. No errors.
Performing await muxer.writeFrame({frame: dec_result.frames[0]}); does not work. It leads to a segfault.

https://github.com/Streampunk/beamcoder/blob/20e28d29d3bf7933b8a250005c7f43fc6c73cc9d/types/Muxer.d.ts

	 * Write media data to the file by sending a packet containing data for a media stream.
	 * @param options Object containing a stream index property and a frame property of an
	 * uncompressed Frame, which must contain the timestamps measured in the `time_base` of the stream.
	 * @returns Promise that resolves to _undefined_ on success
	 */
	writeFrame(options: { frame: Frame, stream_index: number }) : Promise<undefined>

The documentation suggests muxer.writeFrame should work with frames unless I'm missing something. Maybe it actually means video frames? Any input or sample code for using muxer.writeFrame with an audio frame object would be much appreciated.

	let audioDemuxer = await beamcoder.demuxer('file:input.mp3');

	let muxer = beamcoder.muxer({ filename: 'file:output.mp3' });

	let stream = muxer.newStream({
	  name: 'mp3',
	  time_base: [1, 44100 ],
	  interleaved: true });
	Object.assign(stream.codecpar, {
	  channels: 2,
	  sample_rate: 44100,
	  frame_size:1,
	  format: 's16p',
	  channel_layout: 'stereo',
	  bit_rate: 128000
	});

	await muxer.openIO();
	await muxer.writeHeader();

	let packet = {};
	let decoder = beamcoder.decoder({ demuxer: audioDemuxer, stream_index: 0 });
	while (true) {
		packet = await audioDemuxer.read();
		if(packet == null){
			break;
		}
		let dec_result = await decoder.decode(packet);
		 if (dec_result.frames.length === 0){ // Frame may be buffered, so flush it out
    		dec_result = await decoder.flush();
   		 }

		//Frame Option: does not work even with variation on frame object/frame array passed
		for(var i=0; i < dec_result.frames.length; i++){
			//await muxer.writeFrame({frame: dec_result.frames[0], stream_index: 0 });
		}
		//Packet Option:works
		//await muxer.writeFrame(packet);
	}
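For anyone else landing here: for a compressed container like mp3 the muxer needs packets, so uncompressed frames have to go through an encoder first. A hedged sketch of the missing step (unverified; it assumes ffmpeg was built with libmp3lame, and that `sample_fmt` is the codec-context name for the sample format):

```javascript
const beamcoder = require('beamcoder');

// Unverified sketch: re-encode decoded frames into packets before muxing.
async function encodeAndMux(muxer, streamIndex, frames) {
  const encoder = beamcoder.encoder({
    name: 'libmp3lame',        // assumption: ffmpeg built with libmp3lame
    sample_rate: 44100,
    channels: 2,
    channel_layout: 'stereo',
    sample_fmt: 's16p',
    bit_rate: 128000
  });
  const encResult = await encoder.encode(frames);
  for (const pkt of encResult.packets) {
    pkt.stream_index = streamIndex;
    await muxer.writeFrame(pkt);  // packets, not frames
  }
}
```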

How to use not-built in codecs in beamcoder?

I need to use the libfdk_aac codec but I can't see it in the available list.

How can I use it? Is recompiling avcodec-58.dll with libfdk_aac enabled enough, or do I need to do something in the beamcoder library?

If the codec exists in the dll, does it magically appear in the available codecs list?

PTS Mystery - How to reencode video properly? - With Gist

Gist

Gist showing the issue: https://gist.github.com/josiahbryan/2292b7b33860367755513a63f8228282

Steps:

  1. Download Gist https://gist.github.com/josiahbryan/2292b7b33860367755513a63f8228282
  2. Download http://dl5.webmfiles.org/big-buck-bunny_trailer.webm
  3. Run Gist
  4. Run ffplay reencode.mp4

What it does:

Reads the trailer .webm and sets up decoder/encoder to decode the file then reencode it as mp4. Runs for 200 frames then writes out the file.

Expected:

Expected result is that ffplay reencode.mp4 plays the file at normal speed (same as you would see if you did ffplay with the original .webm file

What actually happens:

ffplay plays the reencode.mp4 file at what seems like some incredibly small fraction of the original speed. Pressing right arrow to fast forward thru the file shows the frames are there, just not playing at the right speed

Help!

I suspect this has something to do with how the .pts is set on the outgoing packets, but no clue. I don't generate the pts, I just use the .pts on the frames. I based the encoding code on https://github.com/Streampunk/beamcoder/blob/master/examples/make_mp4.js

Any advice? Thanks for your help as always!
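One thing worth checking: frame pts coming out of the webm demuxer are in that stream's time_base (often [1, 1000] for webm), while the mp4 stream in make_mp4.js runs at [1, 90000], so timestamps need rescaling before writeFrame. A small helper (pure arithmetic; the time_base values in the comment are examples, not taken from your file):

```javascript
// Rescale a timestamp from one time_base (as [num, den]) to another.
// Same idea as ffmpeg's av_rescale_q.
function rescaleTs(ts, [inNum, inDen], [outNum, outDen]) {
  return Math.round((ts * inNum * outDen) / (inDen * outNum));
}

// e.g. a pts of 1000 in [1, 1000] (millisecond ticks) becomes
// rescaleTs(1000, [1, 1000], [1, 90000]) -> 90000 in [1, 90000]
```

Applied per packet before muxing, e.g. `pkt.pts = rescaleTs(frame.pts, demuxer.streams[0].time_base, vstr.time_base)`, instead of the fixed `* 90000/25` scaling from the example.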

Exception SIGSEGV while reconnect RTSP

Hello

I would like to use this lib to get JPEG images of an RTSP MPEG stream. I can already establish a connection and receive the single images.
When the connection to the camera is lost, or the camera is restarting, I want to close the demuxer so I can establish a new connection. How can I do that?

If I simply create a new demuxer, I get a SIGSEGV error.

Find attached my test script and the logs. For my test I used beamcoder v0.6.3

Thank you for your support.

test_and_log.zip

AAC encoding issues

I'm trying to write a script that takes any video file with an h264 stream and creates an mp4 file, transcoding the audio to AAC (using aformat and aresample filters to convert the frames to the correct format first) and just copying the h264 stream into the mp4 container.
I'm running into some issues with the AAC encoding, however. For context, this is what I'm currently working with: https://github.com/pith-media-server/ffmiddleware/blob/master/src/demo/transcodeManually.ts

The first issue is that when setting the profile to 'LC' in the encoder properties, I immediately get a SIGSEGV on creating the encoder. Setting it to 1 instead (the value of FF_PROFILE_AAC_LOW in avcodec.h) doesn't crash. However, it doesn't seem to actually be set, because if I inspect the output afterwards by opening it with the demuxer, the profile is -1:

      "codecpar": {
        "type": "CodecParameters",
        "codec_type": "audio",
        "codec_id": 86018,
        "name": "aac",
        "codec_tag": "mp4a",
        "format": "fltp",
        "bit_rate": 251238,
        "bits_per_coded_sample": 16,
        "profile": -1,
        "channel_layout": "stereo",
        "channels": 2,
        "sample_rate": 48000
      }

Additionally, neither the channels nor the channel_layout are properly stored (they were set to 8 channels and 7.1, the same as the input), and on playback the audio is garbled and the timing of the video is all off (the video plays very fast, and then the audio continues over blank video). The total duration of the video file is right, though. What am I doing wrong?

Use of deleted Adaptor in formatContextFinalizer()

Hi,

I've seen a number of odd issues when using Beamcoder heavily, and I believe this is due to a 'use after free' issue in the following line of code - https://github.com/Streampunk/beamcoder/blob/master/src/format.cc#L3831

Immediately prior to this, the Governor is collected, which deletes the Adaptor instance, so the Adaptor instance (passed as a hint) used in formatContextFinalizer() is no longer valid.

The issues I've seen are segfaults & hangs (waiting to lock an invalid mutex), and stability is much better removing the above mentioned call to 'adaptor->finish()'.

I'm not sure that this is the best fix for this issue though, which is why I've not submitted a pull request.

Thanks

AudioInputParam/AudioOutputParam typedefs incorrect

When setting up a filter, the typedefs specify channel_layout, sample_format and sample_rate properties. However, the implementation seems to expect channelLayout, sampleFormat and sampleRate instead (anything else results in exceptions).

Locking issues with >3 concurrent video streams

I've noticed that attempting to demux more than 3 videos concurrently results in the Node application locking up completely.

The following reproduction sample pipes a small video file into beamcoder. It does so asynchronously and waits for all streams to complete. On running this you will see that nothing is printed after the Beamcoder startup banner text. If you remove one of the tags on line 31 (so that only 3 streams are created) you'll see that all 3 are read successfully to completion.

https://gist.github.com/mandersan/00df1955d551fc225c7eec9191cf9613/raw/3939f2b9ba46f2e7da4b0214c9133888f668ff7f/beamcoder-test.zip

Thanks

where is your media folder?

Hello!

I'm trying to run your scratch examples, but without success.

For example - stream_wav.js

I replaced Media/sound/BBCNewsCountdown.wav with my own file.wav and received this:

TypeError: Cannot read property 'length' of undefined
    at getLast (/home/user/project/node_modules/beamcoder/beamstreams.js:85:20)
    at frameDicer.dice (/home/user/project/node_modules/beamcoder/beamstreams.js:106:9)
    at /home/user/project/node_modules/beamcoder/beamstreams.js:550:47
    at /home/user/project/node_modules/beamcoder/beamstreams.js:316:40
    at Transform.flush [as _flush] (/home/user/project/node_modules/beamcoder/beamstreams.js:319:9)
    at Transform.prefinish (_stream_transform.js:142:10)
    at Transform.emit (events.js:315:20)
    at prefinish (_stream_writable.js:640:14)
    at finishMaybe (_stream_writable.js:648:5)
    at endWritable (_stream_writable.js:670:3)
[aac @ 0x5c05140] Qavg: -nan

'text' streams don't get written / cannot be read back - with Gist

TL;DR

I can write custom 'text' packets, but they don't seem to get written to the file and I can't read them back. See the gist for specific working example of the problem:

Gist showing this issue: https://gist.github.com/josiahbryan/70a551d5afa949e57a223f8216dd03e1

Setup

I adapted the mp4 example to add a second stream using the 'text' codec ("Raw Subtitle Codec", codec_id 94210), and added a simple data packet that is written out after every video frame. See gist above, line 37 for stream addition, and lines 83-97 where the data packet is generated:

const dataPacket = beamcoder.packet({
	pts:  lastPts || 0,
	data: Buffer.from("Hello, world!")
});

for (const pkt of [ dataPacket ]) {
	pkt.duration = 1;
	pkt.stream_index = dataStream.index;
	pkt.pts = pkt.pts * 90000/25;
	pkt.dts = pkt.dts * 90000/25;
	await mux.writeFrame(pkt);
}

Then, just to test it out, I read back the file with demuxer = await beamcoder.demuxer(...file...) and then find the stream for the data with let dataStreamInput = demuxer.streams.find(x => x.codecpar.codec_type === 'data').

That works great! I get a dataStreamInput variable, with an index of 1.

However, here is the problem:

Problem

When I try to read in the packets from the video file and find a packet with a stream_index matching the dataStreamInput.index found, there are no packets in the file with that index.

See the gist for specific example - lines 120-128. You'll notice I even tried finding ANY non-video packets - none were found.

Can you offer any insight into what's going wrong here? Why does the stream exist in the output file (even VLC/ffplay show the stream is indexed), but no packets seem to be written anywhere for that stream?

Am I reading it back wrong? Or did I write the packets out wrong?

Thanks so much!

Beamcoder and nvidia hwaccel, h264_cuvid & h264_nvenc

Is there any way to achieve gpu accelerated (de/en)coding with beamcoder, similar to:
ffmpeg -c:v h264_cuvid -i x.ts -c:v h264_nvenc y.ts

I've compiled ffmpeg according to Nvidia's instructions and ran npm install against that ffmpeg build.

beamcoder.decoders() & encoders() list the appropriate codecs, but the performance of the whole flow is around 4 times slower than expected (aws g4 instance). The GPU utilization is a few percent, instead of ~45% with ffmpeg.

Seems like BC is copying the frames back and forth from gpu to ram and to gpu again, which is a huge bottleneck?

thanks in advance for any hints!

Transcoding rtmp to hls example

Hi all,
Thanks for this cool project. Can you provide a full example demonstrating how to transcode from an rtmp url to an hls stream?
As I can see from this example, it seems that I have to transcode frame by frame manually? Does this affect program performance?
Thanks.

flush()/gc/reclamation interaction unclear in docs; pr proposal

From looking at the docs and the code it appears to be the case (although it's by no means absolutely clear that)

  1. The garbage collector will not, by itself, free resources associated with a decoder (or encoder).

  2. The user must call decoder.flush() to both get any stray frames and to free resources associated with the decoder.

The user cannot flush the decoder (and thus pick up any stray frames) without freeing the resources associated with the decoder (which is why await decoder.flush() ; await decoder.flush() raises an error).

Is this correct?

If so, is there any reason why, as a result of FFMpeg's architecture, it has to be this way? As a user, I'd prefer that flush() just picked up the stray frames, and that resources were freed by the GC (e.g. finalizer invoking tidyCarrier, if I have that right). N-API seems to permit this for C/C++ stuff in general. That's why, not being an FFMpeg expert, I thought I'd ask if there was anything special about it which made this setup necessary before I looked into it in more detail.

If I am correct, and if there is no particular reason why FFMpeg prevented GC, would you accept PRs which changed it so that flush didn't free resources but the GC did?

YUV to RGB Pixel Format

Hi there,

Thanks for a great library. I'm reading a vp8 mkv video file and need to process the frames with TensorFlow JS. The frame data is given in YUV format that I need to get into RGB. I've tried some formulas but something is just not right. Is there an easy way to convert it that anyone knows about?

  let demuxer = await beamcoder.demuxer('/Users/ianrothmann/Downloads/RT76057bde97adbd0537bcdb89037c9034.mkv'); // Create a demuxer for a file
  let decoder = beamcoder.decoder({ name: 'vp8' }); // Codec asserted. Can pass in demuxer.
  let packet = {};

  for ( let x = 0 ; x < 1000 && packet != null ; x++ ) {
    packet = await demuxer.read(); // Read next frame. Note: returns null for EOF
    if (packet && packet.stream_index === 0) { // Check demuxer to find index of video stream
      let frames = await decoder.decode(packet);

      for(let f of frames.frames){
        const frameData=[];
        const width=f.width;
        const height=f.height;
        let [ y, u, v ] = f.data;
        //I need to convert here


      }
    }
  }
  let frames = await decoder.flush();

Much appreciated!
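In case it's useful to anyone else: for planar yuv420p the u/v planes are subsampled 2x in both directions, and a common gotcha is that each plane row is padded to f.linesize, not f.width. A plain-JS BT.601 (video/limited range) conversion, which is my own sketch and not part of beamcoder; for speed you'd normally use a filterer with a format filter instead:

```javascript
// Convert one yuv420p pixel triple (limited range, BT.601) to [r, g, b].
function yuv2rgb(y, u, v) {
  const c = y - 16, d = u - 128, e = v - 128;
  const clamp = x => Math.max(0, Math.min(255, Math.round(x)));
  return [
    clamp(1.164 * c + 1.596 * e),
    clamp(1.164 * c - 0.392 * d - 0.813 * e),
    clamp(1.164 * c + 2.017 * d)
  ];
}

// Walk a decoded yuv420p frame (data = [yPlane, uPlane, vPlane]) into a
// packed RGB buffer; linesize is assumed to come from f.linesize.
function frameToRgb(data, width, height, linesize) {
  const [yP, uP, vP] = data;
  const [yLs, uLs, vLs] = linesize;
  const rgb = new Uint8Array(width * height * 3);
  for (let row = 0; row < height; row++) {
    for (let col = 0; col < width; col++) {
      const y = yP[row * yLs + col];
      const u = uP[(row >> 1) * uLs + (col >> 1)]; // chroma subsampled 2x
      const v = vP[(row >> 1) * vLs + (col >> 1)];
      rgb.set(yuv2rgb(y, u, v), (row * width + col) * 3);
    }
  }
  return rgb;
}
```

Inside the loop above this would be something like `const frameData = frameToRgb(f.data, f.width, f.height, f.linesize);` before handing the pixels to TensorFlow.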

Creating AVFrame from jpeg images

I am trying to create an h264 video from jpeg images using beamcoder. I can read the jpeg data using Jimp/jpeg-js, but they give RGB data. How do I assign this data to an AVFrame (which takes yuv data)? For this purpose I could use ffmpeg's swsContext to convert the data, but I can't find a way to get an swsContext from beamcoder.
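A filterer can stand in for the swsContext here: a format filter does the rgb->yuv420p conversion inside libavfilter. Unverified sketch (parameter names as shown in the beamcoder README; jpeg-js actually decodes to rgba, so this routes through 'rgba'; the frame-filling usage is an assumption, not tested):

```javascript
const beamcoder = require('beamcoder');

// Unverified sketch: use a filterer as the swscale step, converting the
// rgba pixels that jpeg-js/Jimp produce into yuv420p for the h264 encoder.
async function makeRgbaToYuvFilter(width, height) {
  return beamcoder.filterer({
    filterType: 'video',
    inputParams: [{
      width, height,
      pixelFormat: 'rgba',
      timeBase: [1, 25],
      pixelAspect: [1, 1]
    }],
    outputParams: [{ pixelFormat: 'yuv420p' }],
    filterSpec: 'format=yuv420p'
  });
}

// Usage sketch:
//   const filt = await makeRgbaToYuvFilter(width, height);
//   const frame = beamcoder.frame({ width, height, format: 'rgba' }).alloc();
//   frame.data[0].set(jpegPixels);   // packed rgba from jpeg-js
//   const result = await filt.filter([frame]);
//   // result[0].frames would be yuv420p frames ready for the encoder
```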

muxerstream and MP4 incompatibility

Sorry to bother you again... So I'm successfully transcoding an MKV file to MP4 (copying the videostream and transcoding the audio), but now I'm unable to get a working MP4 output using muxerstreams. It only works fine if I open the muxer with a file directly.
There are some movflags that are required for this to work in ffmpeg; the following works fine from the command line:
ffmpeg -loglevel trace -i media/DTS_HD_OUT_OF_THE_BOX_60-thedigitaltheater.mkv -f mp4 -c:v copy -movflags +empty_moov - | vlc

FFMPEG writes to a pipe and VLC immediately starts playing it (doesn't wait for the encoding to finish).

With beamcoder, it doesn't work with those flags (I've also tried various combinations of +empty_moov, +faststart, +frag_keyframe and even +dash). Without empty_moov and frag_keyframe, I get the expected error of the muxer not supporting a non-seekable output. Adding those flags removes that error (so it's correctly picking them up), but the resulting file is not playable. FFprobe displays the following error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x555f94a37f00] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1920x1080): unspecified pixel format
The pixelformat is set correctly in the videostream object (outputVideoStream.codecpar.format === 'yuv420p').

Comparing the two files (only difference being one uses muxerstream and the other is written directly to file, movflags kept the same), I see quite a few offsets are 0 in the muxerstream result vs the direct file, and somewhere along the line the contents are shifted a couple of bytes (the muxerstream file is 24 bytes longer).

I've enabled trace logging in ffmpeg in the hopes to glean some insight into what it does differently, but I couldn't find anything. Any ideas as to what might be different?
