sipsorcerymedia.ffmpeg's Introduction

SIPSorceryMedia.FFmpeg

This project is an example of developing a C# library that can use features from FFmpeg native libraries and that integrates with the SIPSorcery real-time communications library.

This project has been tested successfully on Windows, macOS and Linux.

The classes in this project provide support for:

  • Video codecs: VP8, H264
  • Audio codecs: PCMU (G711), PCMA (G711), G722, G729 and Opus
  • Video input:
    • from a local file, or a remote file via URI
    • from a camera
    • from a screen
  • Audio input:
    • from a local file, or a remote file via URI
    • from a microphone

You can combine any video input (or none) with any audio input (or none).

There is no audio output in this library. For that you can use SIPSorcery.SDL2.
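
As a rough sketch of how these pieces combine (class names and constructor signatures are taken from the issues further down this page; treat the exact overloads as assumptions rather than documented API):

FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_VERBOSE);

// A file source that supplies both video and audio; the second argument is
// assumed to loop the file, and AudioEncoder supplies the audio codecs.
var fileSource = new FFmpegFileSource("demo.mp4", true, new AudioEncoder());
fileSource.RestrictFormats(f => f.Codec == VideoCodecsEnum.VP8);

// Or a video-only camera source, with no audio input at all:
// var cameraSource = new FFmpegCameraSource(FFmpegCameraManager.GetCameraDevices().First().Path);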

Installing FFmpeg

For Windows

Install the FFmpeg binaries using the packages at https://www.gyan.dev/ffmpeg/builds/#release-builds and add the directory containing them to your PATH so they can be found automatically.

As of 14 Jan 2024 the command below works on Windows 11 and installs the required FFmpeg binaries and libraries where they can be found by SIPSorceryMedia.FFmpeg:

winget install "FFmpeg (Shared)"
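
If the shared libraries are still not found via PATH, FFmpegInit.Initialise accepts an explicit library directory (the three-argument overload appears in several issues below). A minimal sketch; the path is only an example and will vary with the installed FFmpeg version:

// Hypothetical install location; point this at the folder containing the avcodec/avformat DLLs.
FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_FATAL, @"C:\ffmpeg\bin", null);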

For Linux

Install the FFmpeg binaries using the package manager for your distribution, e.g. on Debian and Ubuntu:

sudo apt install ffmpeg

For Mac

Install homebrew

brew install ffmpeg
brew install mono-libgdiplus


sipsorcerymedia.ffmpeg's People

Contributors

christophei, exyi, fredej, ha-ves, lostmsu, matoskidata, namaneo, rob-baily, sipsorcery


sipsorcerymedia.ffmpeg's Issues

Initialize: Method not supported

When I try to initialize FFmpeg I get the following error:

System.NotSupportedException
  HResult=0x80131515
  Message=Specified method is not supported.
  Source=FFmpeg.AutoGen
  StackTrace:
   at FFmpeg.AutoGen.DynamicallyLoadedBindings.<>c.<Initialize>b__2_1244()
   at FFmpeg.AutoGen.DynamicallyLoadedBindings.<>c.<Initialize>b__2_525()
   at FFmpeg.AutoGen.ffmpeg.avdevice_register_all()
   at SIPSorceryMedia.FFmpeg.FFmpegInit.SetFFmpegBinariesPath(String path)
   at SIPSorceryMedia.FFmpeg.FFmpegInit.RegisterFFmpegBinaries(String libPath)
   at SIPSorceryMedia.FFmpeg.FFmpegInit.Initialise(Nullable`1 logLevel, String libPath, ILogger appLogger)
   at Frodo.Services.JanusClasses.RTCSession..ctor(Int32 missionId) in C:\FrodoNet\Services\WebRTC\RTCSession.cs:line 21
   at Frodo.Services.JanusClasses.JanusSession.<OnOfferReceived>d__47.MoveNext() in 

Simulcast Video Display Issue with SipSorcery in Janus Media Server Setup

We are currently encountering an issue with our video setup using the Janus media server, and I'd appreciate your expertise in helping us resolve it.

Here's a breakdown of our scenario:

  1. Client A: An iOS device equipped with simulcast support. This device publishes video to Janus, making decisions based on simulcast.
  2. Client B: An Android device, which does not support simulcast.
  3. Client C: A Windows desktop device running SipSorcery, also without simulcast support.

When all these clients join a video room:

  • Client A is able to view video feeds from both Client B and Client C.
  • Client B can also display videos from Client A and Client C.
  • However, Client C only manages to display the video from Client B and fails to show the video from Client A.

For your reference, all participants are utilizing the VP8 video codec.

Could you provide some insights into what might be causing this issue? Your assistance will be greatly appreciated.

Exception "No such file or directory" while initializing(FFmpegVideoDecoder->InitialiseSource)

Thanks for all your effort and contribution to this open-source project; it lets me build my project on top of your hard work.
I try to use FFmpegCameraSource winVideoEP = new FFmpegCameraSource(FFmpegCameraManager.GetCameraDevices().First().Path); to get the camera video stream, but it reports the exception "No such file or directory", while WindowsVideoEndPoint winVideoEP = new(new FFmpegVideoEncoder()); works.

FFmpeg path in docker container?

How do I install FFmpeg with the needed libraries in Docker?
Every time I install it, ffmpeg -version shows that the --shared options are enabled, but I cannot find the shared files; only the ffmpeg executable exists in /usr/bin.
Setting the path in the init function to null also does not help.
I do these steps in Docker:

RUN apt-get update \
    && apt-get install -y ffmpeg \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

How do I set up FFmpeg, preferably so that I can set the path to the FFmpeg libraries:

FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_FATAL, @"C:\ffmpeg_build\bin", null);

as null? (The above is the Windows version.)
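
For what it's worth, on Debian/Ubuntu based images apt's ffmpeg package places the shared libraries under /usr/lib/x86_64-linux-gnu (a later report on this page uses exactly that path), so a hedged sketch of the initialisation would be:

// Assumes a Debian/Ubuntu x64 image; adjust the directory for other distributions or architectures.
FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_FATAL, "/usr/lib/x86_64-linux-gnu", null);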

Error: 'FFmpeg video source, selected video codec CELB is not supported.'

I am getting the error 'FFmpeg video source, selected video codec CELB is not supported.'

Code:

_videoSource = new FFmpegCameraSource(WebCamName);
_videoSink = new FFmpegVideoEndPoint();

VideoCodecsEnum VideoCodec = VideoCodecsEnum.VP8;
_videoSink.RestrictFormats(format => format.Codec == VideoCodec);
_videoSource.RestrictFormats(x => x.Codec == VideoCodec);

MediaStreamTrack videoTrack = new MediaStreamTrack(_videoSource.GetVideoSourceFormats(),
    MediaStreamStatusEnum.SendRecv);
_peerConnection.addTrack(videoTrack);

_videoSource.OnVideoSourceEncodedSample += _peerConnection.SendVideo;
_peerConnection.OnVideoFrameReceived += _videoSink.GotVideoFrame;

_peerConnection.OnVideoFormatsNegotiated += (formats) =>
{
    _videoSource.SetVideoSourceFormat(formats.First());
    _videoSink.SetVideoSinkFormat(formats.First());
};

_videoSource.OnVideoSourceRawSampleFaster += _videoSource_OnVideoSourceRawSampleFaster;

_videoSink.OnVideoSinkDecodedSampleFaster += (RawImage rawImage) =>
{
    Bitmap bmpImage = new Bitmap(rawImage.Width, rawImage.Height, rawImage.Stride, PixelFormat.Format24bppRgb, rawImage.Sample);
};

Switch file source in ffmpeg

I have an algorithm that manipulates a JPEG image coming out of a camera, and I want to send it to a WebRTC service.

I was able to use one of the code samples to create a mock app, but with a single static file.

private static Task<RTCPeerConnection> CreatePeerConnection()
{
    RTCConfiguration config = new RTCConfiguration
    {
        iceServers = new List<RTCIceServer> { new RTCIceServer { urls = STUN_URL } }
    };

    _peerConnection = new RTCPeerConnection(config);

    SIPSorceryMedia.FFmpeg.FFmpegFileSource fileSource = new SIPSorceryMedia.FFmpeg.FFmpegFileSource("path to file.jpg", true, null, 960, true);

    _videoSource = fileSource as IVideoSource;

    _videoSource.RestrictFormats(x => x.Codec == _videoCodec);
    MediaStreamTrack videoTrack = new MediaStreamTrack(_videoSource.GetVideoSourceFormats(), MediaStreamStatusEnum.SendRecv);

    _peerConnection.addTrack(videoTrack);
    _videoSource.OnVideoSourceEncodedSample += _peerConnection.SendVideo;
    _peerConnection.OnVideoFormatsNegotiated += (videoFormats) => _videoSource.SetVideoSourceFormat(videoFormats.First());

    return Task.FromResult(_peerConnection);
}

My question is: how do I change the file once I receive the next image from my algorithm service? Or is there a smarter way to do it, such as using a stream?

Thanks :-)
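
One possible direction, hinted at by the commented-out videoSink.ExternalVideoSourceRawSample line in a later issue on this page, is to push each new frame into the endpoint as a raw sample instead of re-creating a file source. A sketch, with the exact overload and pixel format treated as assumptions:

var videoEP = new FFmpegVideoEndPoint();
videoEP.RestrictFormats(f => f.Codec == VideoCodecsEnum.VP8);
videoEP.OnVideoSourceEncodedSample += _peerConnection.SendVideo;

// Called whenever the algorithm produces a new JPEG.
void SendJpegFrame(string path)
{
    using var bmp = new System.Drawing.Bitmap(path);
    var rect = new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height);
    var data = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    byte[] sample = new byte[data.Stride * data.Height];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, sample, 0, sample.Length);
    bmp.UnlockBits(data);
    // ~33 ms per frame at 30 fps; Format24bppRgb is laid out B,G,R in memory, hence Bgr.
    videoEP.ExternalVideoSourceRawSample(33, bmp.Width, bmp.Height, sample, VideoPixelFormatsEnum.Bgr);
}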

AV_PIX_FMT_RGB24 or AV_PIX_FMT_BGR24

Hi Team,

I am learning the source code of FFmpegVideoEncoder. On line 444, for the i420ToRgb converter, the target pixel format is AV_PIX_FMT_BGR24 instead of AV_PIX_FMT_RGB24. This baffles me: if I change it to AV_PIX_FMT_RGB24 to be consistent, the video is blue-washed (shows blue). Can you help me understand this?

_i420ToRgb = new VideoFrameConverter(
width, height,
AVPixelFormat.AV_PIX_FMT_YUV420P,
width, height,
AVPixelFormat.AV_PIX_FMT_BGR24);

Appreciate it,

Regards,
Rong Zhu
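
For context, a general System.Drawing fact rather than anything specific to this library: PixelFormat.Format24bppRgb stores each pixel as B,G,R bytes in memory on little-endian platforms, so a converter whose output will be wrapped in a Bitmap has to target AV_PIX_FMT_BGR24 for the channels to line up; targeting AV_PIX_FMT_RGB24 swaps red and blue, which is exactly the blue-washed picture described above. A sketch of the pairing:

// FFmpeg emits B,G,R triplets; Format24bppRgb expects B,G,R triplets, so the colours match.
var i420ToBgr = new VideoFrameConverter(
    width, height, AVPixelFormat.AV_PIX_FMT_YUV420P,
    width, height, AVPixelFormat.AV_PIX_FMT_BGR24);
// ... later: new Bitmap(width, height, stride, PixelFormat.Format24bppRgb, convertedSamplePtr);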

Some questions about Xamarin iOS Android~

Hey, here I am again :P

I have been watching this library for a long while, and I know it targets Linux, Mac and Windows.
Let me raise a possibility: if I build the FFmpeg libraries for iOS or Android, and make sure they can be found via environment variables,
will SIPSorceryMedia.FFmpeg then work as well as it does on the desktop platforms?
I want to run a little test, so would you recommend trying this?
Or is this just a fantasy for me? T_T

Waiting for an answer, and thanks.

In Ubuntu, I am getting Error: ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device

[video4linux2,v4l2 @ 0x558aa9181500] fd:207 capabilities:84a00001
[video4linux2,v4l2 @ 0x558aa9181500] fd:207 capabilities:84a00001
[video4linux2,v4l2 @ 0x558aa88de740] fd:212 capabilities:84a00001

[video4linux2,v4l2 @ 0x558aa88de740] ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device

Could you please help me resolve it?

BTW: my code works on Windows 11 and macOS.

Unable to load DLL 'avformat.58' in Ubuntu

I have successfully executed my app on Windows BUT I am getting the following error on Ubuntu:

Unhandled exception. System.DllNotFoundException: Unable to load DLL 'avformat.58

Please note that I have installed FFmpeg on Ubuntu with: sudo apt install ffmpeg -y

External camera not working in mac

In my application on Mac, an external camera device is not working; there is an error related to the frame rate not being supported.
It works on Windows.
In the SIPSorceryMedia.FFmpeg solution there is a test console application. I built it on Mac, and no feed shows there if I select the external USB camera.


Using a different codec than PCMU

Today, this library works well with the PCMU audio codec.

If I change to G722, a crash occurs in FFmpegAudioSource.AudioDecoder_OnAudioFrame when _audioEncoder.EncodeAudio() is called.
I'm using the sample FFmpegFileAndDevicesTest with AudioCodecsEnum AudioCodec = AudioCodecsEnum.G722;

So I tried to add support for this codec and I succeeded, but:

  • I'm not at all sure it's the correct way to do it, or that it will work for other codecs
  • It's necessary to change the IAudioEncoder interface, so several projects/libraries are impacted

How I modified the code:

  1. _audioEncoder.EncodeAudio() crashes because the clock rate is not the same between PCMU and G722, so I need to resample the stream
  2. For this I need the Resample() method from the AudioEncoder class, but it's not provided by IAudioEncoder ...
  3. So I did this: I added SIPSorcery as a new package and changed several constructor methods to use AudioEncoder instead of IAudioEncoder => yes, it's not good at all, but it was the only way to get access to the Resample() method
  4. Then I use this code (in FFmpegAudioSource.AudioDecoder_OnAudioFrame):
          // FFmpeg AV_SAMPLE_FMT_S16 will store the bytes in the correct endianess for the underlying platform.
          short[] pcm = buffer.Take(dstSampleCount * 2).Where((x, i) => i % 2 == 0).Select((y, i) => BitConverter.ToInt16(buffer, i * 2)).ToArray();
          if (_audioFormatManager.SelectedFormat.ClockRate != Helper.AUDIO_SAMPLING_RATE_PCMU)
              pcm = _audioEncoder.Resample(pcm, Helper.AUDIO_SAMPLING_RATE_PCMU, _audioFormatManager.SelectedFormat.ClockRate);
          var encodedSample = _audioEncoder.EncodeAudio(pcm, _audioFormatManager.SelectedFormat);

          OnAudioSourceEncodedSample?.Invoke((uint)encodedSample.Length, encodedSample);

instead of

          // FFmpeg AV_SAMPLE_FMT_S16 will store the bytes in the correct endianess for the underlying platform.
          short[] pcm = buffer.Take(dstSampleCount * 2).Where((x, i) => i % 2 == 0).Select((y, i) => BitConverter.ToInt16(buffer, i * 2)).ToArray();
          var encodedSample = _audioEncoder.EncodeAudio(pcm, _audioFormatManager.SelectedFormat);

          OnAudioSourceEncodedSample?.Invoke((uint)encodedSample.Length, encodedSample);

It's working, but I feel very lucky, because I'm not sure that this line is correct if another codec is used:

OnAudioSourceEncodedSample?.Invoke((uint)encodedSample.Length, encodedSample);

I also think that the final audio quality is not great, because in FFmpegAudioDecoder.InitialiseSource we first ask FFmpeg to change the format and rate to AVSampleFormat.AV_SAMPLE_FMT_S16 and Helper.AUDIO_SAMPLING_RATE.

So do you have any idea how to add support for G722?
And G729 later?

Thanks

frame dropped issue

I am getting the following:

[dshow @ 0000023879ecb8c0] real-time buffer [HD Webcam] [video input] too full or near too full (98% of size: 3041280 [rtbufsize parameter])! frame dropped!
dshow passing through packet of type video size 296574 timestamp 19785225640000 orig timestamp 19785225629393 graph timestamp 19785225640000 diff 10607 HD Webcam

[mjpeg @ 0000023879ed0b80] unable to decode APP fields: Invalid data found when processing input
Input #0, dshow, from 'video=HD Webcam':
Duration: N/A, start: 1978522.165000, bitrate: N/A
Stream #0:0: Video: mjpeg (Baseline), 1 reference frame (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown, center), 1280x720, 30 fps, 30 tbr, 10000k tbn, 10000k tbc

Could you please help me fix these issues?

Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

Exception information:
The I/O operation has been aborted because of either a thread exit or an application request.
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at SIPSorceryMedia.FFmpeg.FFmpegVideoEncoder.DecodeFaster(FFmpeg.AutoGen.AVCodecID, FFmpeg.AutoGen.AVPacket*, Int32 ByRef, Int32 ByRef)
at SIPSorceryMedia.FFmpeg.FFmpegVideoEncoder.DecodeFaster(FFmpeg.AutoGen.AVCodecID, Byte[], Int32 ByRef, Int32 ByRef)
at SIPSorceryMedia.FFmpeg.FFmpegVideoEndPoint.GotVideoFrame(System.Net.IPEndPoint, UInt32, Byte[], SIPSorceryMedia.Abstractions.VideoFormat)
at SIPSorcery.Net.RTPSession.RaisedOnOnVideoFrameReceived(Int32, System.Net.IPEndPoint, UInt32, Byte[], SIPSorceryMedia.Abstractions.VideoFormat)
at SIPSorcery.net.RTP.VideoStream.ProcessVideoRtpFrame(System.Net.IPEndPoint, SIPSorcery.Net.RTPPacket, SIPSorcery.Net.SDPAudioVideoMediaFormat)
at SIPSorcery.net.RTP.MediaStream.OnReceiveRTPPacket(SIPSorcery.Net.RTPHeader, Int32, System.Net.IPEndPoint, Byte[], SIPSorcery.net.RTP.VideoStream)
at SIPSorcery.Net.RTPSession.OnReceiveRTPPacket(Int32, System.Net.IPEndPoint, Byte[])
at SIPSorcery.Net.RTPSession.OnReceive(Int32, System.Net.IPEndPoint, Byte[])
at SIPSorcery.Net.RTCPeerConnection.OnRTPDataReceived(Int32, System.Net.IPEndPoint, Byte[])
at SIPSorcery.Net.RtpIceChannel.OnRTPPacketReceived(SIPSorcery.Net.UdpReceiver, Int32, System.Net.IPEndPoint, Byte[])
at SIPSorcery.Net.UdpReceiver.CallOnPacketReceivedCallback(Int32, System.Net.IPEndPoint, Byte[])
at SIPSorcery.Net.UdpReceiver.EndReceiveFrom(System.IAsyncResult)
at System.Runtime.CompilerServices.TaskAwaiter+<>c.b__12_0(System.Action, System.Threading.Tasks.Task)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Threading.Tasks.AwaitTaskContinuation.RunCallback(System.Threading.ContextCallback, System.Object, System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.RunContinuations(System.Object)
at System.Threading.Tasks.ValueTask`1+ValueTaskSourceAsTask+<>c[[System.Net.Sockets.SocketReceiveFromResult, System.Net.Sockets, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]].<.cctor>b__4_0(System.Object)
at System.Net.Sockets.Socket+AwaitableSocketAsyncEventArgs.InvokeContinuation(System.Action`1<System.Object>, System.Object, Boolean, Boolean)
at System.Net.Sockets.Socket+AwaitableSocketAsyncEventArgs.OnCompleted(System.Net.Sockets.SocketAsyncEventArgs)
at System.Net.Sockets.SocketAsyncEventArgs+<>c.<.cctor>b__176_0(UInt32, UInt32, System.Threading.NativeOverlapped*)
at System.Threading.ThreadPoolTypedWorkItemQueue`2[[System.Threading.PortableThreadPool+IOCompletionPoller+Event, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Threading.PortableThreadPool+IOCompletionPoller+Callback, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].System.Threading.IThreadPoolWorkItem.Execute()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()

.NET Framework support

Hi,

the latest release 0.0.12-pre targets .NET Standard 2.1 and therefore cannot be referenced from a .NET Framework project. I saw that the target version has been changed to .NET Standard 2.0 in the meantime, so it should be compatible with the full framework.

Would it be possible to create a new release in order to support referencing this library in a framework project?

Thanks a lot for this amazing library and all its components.

h264 encoding not working

If I create a video encoder and try to encode to H264, I get an error during initialization. If I comment out the profile=baseline option, it runs but does not produce video.

To reproduce:

IVideoEncoder videoEncoder = new FFmpegVideoEncoder();

var encodedBuffer = videoEncoder.EncodeVideoFaster(new RawImage()
{
    Height = (int)height,
    Width = (int)width,
    PixelFormat = VideoPixelFormatsEnum.Bgra,
    Stride = (int)stride,
    Sample = frameData
}, VideoCodecsEnum.H264);

Error:

[libopenh264enc @ 0000021723ed5d80] [Eval @ 00000051fd1fe0f0] Undefined constant or missing '(' in 'baseline'
[libopenh264enc @ 0000021723ed5d80] Unable to parse option value "baseline"

[Question]How to set quality of encoded images

As the title states, how do I control the quality of the encoded image?

I tried adding to the FFmpegVideoEncoder constructor a dictionary with
"quality", "best"
but that doesn't really affect quality, right?
I also tried:

var dictionary = new Dictionary<string, string>();
dictionary.Add("crf", "10");
dictionary.Add("quality", "best");
dictionary.Add("b:v", "2000k");
new FFmpegVideoEncoder(dictionary);

But I get the warning "option not found".
I have been using https://trac.ffmpeg.org/wiki/Encode/VP8 as a reference for the parameters, but I might be looking in the wrong direction?

If I don't provide any input I get a debug message:
[libvpx @ 000001da8decfb80] Neither bitrate nor constrained quality specified, using default CRF of 32 and bitrate of 256kbit/sec

Mixed video codecs. H264 and VP8

I am making a video call from MicroSIP to my application. My application uses WindowsVideoEndPoint with FFmpegVideoEncoder, and the media endpoints are passed to VoIPMediaSession. However, VoIPMediaSession.OnVideoSinkSample is not firing. That is because there are mixed video codecs, as seen from the logs: the video sink and source format is set to VP8, but the video depacketisation codec is set to H264.

The issue is fixed when I remove the VP8 codec from the Helper class as a supported video format, so everything is set to H264.

[12:32:14 DBG] Setting audio sink and source format to 0:PCMU 8000 (RTP clock rate 8000).
[12:32:14 DBG] Setting video sink and source format to 96:VP8.
[12:32:14 DBG] Set remote track (audio - index=0) SSRC to 1470613415.
[12:32:15 INF] Video capture device Integrated Webcam successfully initialised: 1280x720 30fps pixel format NV12.
[12:32:15 WRN] Video source for capture device failure. Hardware MFT failed to start streaming due to lack of hardware resources.
[12:32:15 DBG] Successfully initialised ffmpeg based image encoder: CodecId:[AV_CODEC_ID_VP8] - 640:480 - 30 Fps
[12:32:15 DBG] Set remote track (video - index=0) SSRC to 1714766501.
[12:32:15 DBG] Video depacketisation codec set to H264 for SSRC 1714766501.
[12:32:19 DBG] [InitialiseDecoder] CodecId:[AV_CODEC_ID_VP8

Getting distorted image after converting RawImage to System.Drawing.Bitmap

I am using SIPSorceryMedia.FFmpeg as a media library and Janus as the media server.
When I convert the RawImage of the publisher's camera feed, obtained from the OnVideoSourceRawSampleFaster event, with System.Drawing.Bitmap image = new Bitmap(rawImage.Width, rawImage.Height, rawImage.Stride, System.Drawing.Imaging.PixelFormat.Format24bppRgb, rawImage.Sample), the converted image is OK.

But when I convert the RawImage of a subscriber's feed, obtained from the OnVideoSinkDecodedSampleFaster event, in the same way, the converted image is distorted.

Please give me some clue to solve the issue.

Converting SIPSorceryMedia.Abstractions.RawImage to IronSoftware.Drawing.AnyBitmap

It is possible to convert SIPSorceryMedia.Abstractions.RawImage to System.Drawing.Bitmap as follows:

System.Drawing.Bitmap bmpImage = new System.Drawing.Bitmap(rawImage.Width, rawImage.Height, rawImage.Stride, System.Drawing.Imaging.PixelFormat.Format24bppRgb, rawImage.Sample);

Now I want to convert SIPSorceryMedia.Abstractions.RawImage to IronSoftware.Drawing.AnyBitmap.

Any help in this regard would be appreciated.
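
One route worth trying (the implicit cast is an assumption about the IronSoftware.Drawing API, not something verified here) is to go via System.Drawing.Bitmap as in the snippet above:

// Wrap the raw 24bpp BGR buffer, then rely on IronSoftware's cast operator.
var bmp = new System.Drawing.Bitmap(rawImage.Width, rawImage.Height, rawImage.Stride,
    System.Drawing.Imaging.PixelFormat.Format24bppRgb, rawImage.Sample);
IronSoftware.Drawing.AnyBitmap anyBmp = bmp; // implicit cast, assumed to be available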

FFmpeg in MacOS

I have run both brew install ffmpeg and brew install mono-libgdiplus.

But getting following error:

System.DllNotFoundException: Unable to load DLL avutil.56 - The specified module could not be found.

How do I find the following libraries for macOS:

libavcodec
libavformat
libavutil
libavfilter
libavdevice
libswresample
libswscale

It would be very helpful if you provided these as you do for Windows.
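
For reference, Homebrew normally places those dylibs under /opt/homebrew/lib on Apple Silicon and /usr/local/lib on Intel Macs (general Homebrew behaviour, not verified against this library), so passing that directory to the initialiser may help:

// The prefix is an assumption about the local Homebrew install.
FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_FATAL, "/opt/homebrew/lib", null);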

System.NotSupportedException: Specified method is not supported. Since 5b2dbe2dabef1131f214632346bde2ada9f8e229

Prior to the update to FFmpeg version 6 there were no errors, but now this code breaks with the error
System.NotSupportedException: Specified method is not supported

ffmpeg.av_hwdevice_iterate_types(type))

System.NotSupportedException: Specified method is not supported. at FFmpeg.AutoGen.DynamicallyLoadedBindings.<>c.<Initialize>b__2_953(AVHWDeviceType <p0>) at FFmpeg.AutoGen.DynamicallyLoadedBindings.<>c.<Initialize>b__2_243(AVHWDeviceType prev) at FFmpeg.AutoGen.ffmpeg.av_hwdevice_iterate_types(AVHWDeviceType prev)

I have installed the latest FFmpeg:
ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

Running on Windows 11.

Getting distorted images - possible error in decoder format?

Hello. I am using this decoder to decode a video stream coming from Janus WebRTC. It works, meaning it receives frames and decodes them, but the problem is that I am getting very distorted images.
This is the incoming stream configuration in Janus:

rtp-sample101: {
        type = "rtp"
        id = 101
        description = "herelink"
        metadata = "You can use this metadata section to put any info you want!"
        audio = false
        video = true
        videoport = 5101
        videopt = 96
        videortpmap = "H264/90000"
        secret = "adminpwd"
}

Now, I set up the FFmpeg endpoint to decode this incoming stream:

VideoFormat videoFormat = new(96, "H264");
_videoSink.RestrictFormats(format => format.Codec == VideoCodecsEnum.H264);
_videoSink.SetVideoSinkFormat(videoFormat);
_videoSink.SetVideoSourceFormat(videoFormat);

Then, I save the frame to a file:

Mat imageMatrix = new Mat((int)height, (int)width, DepthType.Cv8U, 3);
imageMatrix.SetTo(sample);
Image<Rgb, byte> image = imageMatrix.ToImage<Rgb, byte>();
Console.WriteLine("creating photo...");
image.Save($"MIS_{missionId}_photo_{photoCount + 1}.jpg");

The above code is implemented using Emgu.CV. I get the image below no matter whether I use the Rgb or Bgr format; it is always distorted:

[attached image: MIS_1_photo_1]

Now, I do not really know where the problem is. Is it a problem with an incorrect format? Does FFmpeg need to include an H264 library? Or is this a problem with the actual decoding?

Is it possible to use custom IO context?

Hello,

The FFmpeg library can use a custom IO context in order to accept memory buffers as input instead of files. Does this library have the ability to do so?
Examples from official ffmpeg doc:
https://ffmpeg.org/doxygen/trunk/avio_reading_8c-example.html#a20
https://ffmpeg.org/doxygen/trunk/structAVFormatContext.html#a1e7324262b6b78522e52064daaa7bc87
I've tried to port that functionality into your library, but unfortunately a System.AccessViolationException is thrown when I use custom IO.
Here https://stackoverflow.com/questions/41734219/avformat-open-input-fails-only-with-a-custom-io-context I found that this could be due to a missing configuration flag when compiling FFmpeg. Have you thought about supporting a custom IO context in your library?

Any help will be appreciated.

Saving as image file from SIPSorceryMedia.Abstractions.RawImage

How do I save an image file from SIPSorceryMedia.Abstractions.RawImage?

Is it possible to create a MemoryStream from SIPSorceryMedia.Abstractions.RawImage?

Actually, I want to convert SIPSorceryMedia.Abstractions.RawImage to Avalonia.Media.Imaging.Bitmap.

Any help in this regard would be appreciated.
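
A sketch of one way to do this, assuming the RawImage holds a 24bpp BGR frame as in the Bitmap conversions elsewhere on this page (and that System.Drawing is available on the target platform):

// Wrap the unmanaged sample, encode it with System.Drawing, and capture the bytes in a stream.
var bmp = new System.Drawing.Bitmap(rawImage.Width, rawImage.Height, rawImage.Stride,
    System.Drawing.Imaging.PixelFormat.Format24bppRgb, rawImage.Sample);
using var ms = new System.IO.MemoryStream();
bmp.Save(ms, System.Drawing.Imaging.ImageFormat.Png); // or save to a file path directly
ms.Position = 0;
// Avalonia's Bitmap can load straight from the stream:
var avaloniaBmp = new Avalonia.Media.Imaging.Bitmap(ms);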

FFMpeg binaries are not embedded anymore

The documentation states:

No additional steps are required for an x64 build. The nuget package includes the FFmpeg x64 binaries.

This does not seem to be true since v1.0.

I get that embedding FFmpeg in the package is heavy and that there is no point in embedding it, but this sentence should be removed and replaced with instructions for downloading FFmpeg.

Unable to decode APP fields: Invalid data found when processing input Input #0, dshow, from 'video=HD Webcam':

dshow passing through packet of type video size 246597 timestamp 711091400000 orig timestamp 711091375007 graph timestamp 711091400000 diff 24993 HD Webcam
[mjpeg @ 0000015e257e4380] unable to decode APP fields: Invalid data found when processing input
Input #0, dshow, from 'video=HD Webcam':
Duration: N/A, start: 71109.140000, bitrate: N/A
Stream #0:0: Video: mjpeg (Baseline), 1 reference frame (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown, center), 1280x720, 30 fps, 30 tbr, 10000k tbn, 10000k tbc
dshow passing through packet of type video size 246506 timestamp 711091710000 orig timestamp 711091695233 graph timestamp 711091710000 diff 14767 HD Webcam
dshow passing through packet of type video size 243761 timestamp 711092030000 orig timestamp 711092016403 graph timestamp 711092030000 diff 13597 HD Webcam

Code:

_videoSource = new FFmpegCameraSource(WebCamName);
_videoSink = new FFmpegVideoEndPoint();

VideoCodecsEnum VideoCodec = VideoCodecsEnum.VP8;
_videoSink.RestrictFormats(format => format.Codec == VideoCodec);
_videoSource.RestrictFormats(x => x.Codec == VideoCodec);

MediaStreamTrack videoTrack = new MediaStreamTrack(_videoSource.GetVideoSourceFormats(),
    MediaStreamStatusEnum.SendRecv);
_peerConnection.addTrack(videoTrack);

_videoSource.OnVideoSourceEncodedSample += _peerConnection.SendVideo;
_peerConnection.OnVideoFrameReceived += _videoSink.GotVideoFrame;

_peerConnection.OnVideoFormatsNegotiated += (formats) =>
{
    _videoSource.SetVideoSourceFormat(formats.First());
    _videoSink.SetVideoSinkFormat(formats.First());
};

_videoSource.OnVideoSourceRawSampleFaster += _videoSource_OnVideoSourceRawSampleFaster;

_videoSink.OnVideoSinkDecodedSampleFaster += (RawImage rawImage) =>
{
    Bitmap bmpImage = new Bitmap(rawImage.Width, rawImage.Height, rawImage.Stride, PixelFormat.Format24bppRgb, rawImage.Sample);
};

private async void _webRTC_OnPeerConnectionConnected(object sender, EventArgs e)
{
    await _videoSource.StartVideo();
    await _videoSink.StartVideoSink();

    await _audioSource.StartAudio();
    await _audioSink.StartAudioSink();
}

Cannot access file path on MacOS

Hi, this is a great library for use with macOS and WebRTC.

I am trying to run the example project FFmpegFileAndDevicesTest.

It works fine when pointing to the external video source file. However, when I switch the source to use my Mac camera, it cannot get the path. It does get the list of cameras and selects the correct one with the correct name, but the path is returned as "0:".

Is there a way to resolve this?

I have installed all the relevant FFmpeg libraries and dependencies on my Mac via homebrew.


After that, the application crashes when hitting the InitialiseDecoder() method in the FFmpegCameraSource class, specifically on
if (ffmpeg.avformat_open_input(&pFormatContext, _sourceUrl, _inputFormat, &options) < 0) in the FFmpegVideoDecoder class.

New releases?

The latest version of this package available on NuGet is 0.0.10-pre, which was published over a year ago. Are there any plans to publish a new version soon?

How to initialize FFmpeg in .NET MAUI-Windows

I have installed the binaries using winget install "FFmpeg (Shared)" on Windows 11 22631.3810. The install succeeds, but when running FFmpegInit.Initialise() I get System.NotSupportedException: 'Specified method is not supported.' I have also tried setting the logLevel to FfmpegLogLevelEnum.AV_LOG_FATAL and got the same exception. I tried setting the libPath to C:\Users\UserName\AppData\Local\Microsoft\WinGet\Packages\Gyan.FFmpeg.Shared_Microsoft.Winget.Source_8wekyb3d8bbwe\ffmpeg-7.0.1-full_build-shared\bin and got the same exception.

UWP / .NETStandard compatibility

Hey,

first of all, thank you so much for this awesome library!
I am trying to use SIPSorcery in a UWP project. Unfortunately, the SIPSorceryMedia.Windows package does not work on UWP (I get System.TypeLoadException: "Could not load type 'NAudio.Wave.WaveInEvent' from assembly 'NAudio, Version=1.10.0.0, Culture=neutral, PublicKeyToken=null'."). I kind of expected this to happen. I wanted to try out the FFmpeg implementation before implementing my own AudioEndpoint. The SIPSorceryMedia.FFmpeg package does install in a UWP project just fine; however, the namespace is not available. I figure this has to do with it targeting .NET Core. Is it possible to make it a .NET Standard library?

Best regards
Richard

Unable to load DLL

avpriv_mpeg4audio_get_config2 not found in DLL.
Unable to load DLL 'avformat.58 under The specified module could not be found
In FileSourceDecoder.InitialiseSource, line 70: _fmtCtx = ffmpeg.avformat_alloc_context();

SIPSorceryMedia.FFmpeg.FileSourceDecoder.InitialiseSource() in FileSourceDecoder.cs
SIPSorceryMedia.FFmpeg.FileSourceDecoder.StartDecode() in FileSourceDecoder.cs
SIPSorceryMedia.FFmpeg.FFmpegFileSource.Start() in FFmpegFileSource.cs
SIPSorceryMedia.FFmpeg.FFmpegFileSource.StartVideo() in FFmpegFileSource.cs
VideofileStreamingConsole.Program.CreatePeerConnectionNew.AnonymousMethod__1(SIPSorcery.Net.RTCPeerConnectionState) in Program.cs

My code:

private static async Task<RTCPeerConnection> CreatePeerConnection()
{
    RTCConfiguration config = new RTCConfiguration
    {
        iceServers = new List<RTCIceServer> { new RTCIceServer { urls = STUN_URL } }
    };
    var pc = new RTCPeerConnection(config);
    FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_VERBOSE);

    var testPattern = new FFmpegFileSource("C:\\Users\\rcher\\Videos\\big-buck-bunny_trailer.webm", false, new AudioEncoder());

    MediaStreamTrack videoTrack = new MediaStreamTrack(testPattern.GetVideoSourceFormats(), MediaStreamStatusEnum.SendRecv);
    pc.addTrack(videoTrack);

    testPattern.OnVideoSourceEncodedSample += pc.SendVideo;
    pc.OnVideoFormatsNegotiated += (videoFormats) => testPattern.SetVideoSourceFormat(videoFormats.First());

    pc.onconnectionstatechange += async (state) =>
    {
        logger.LogDebug($"Peer connection state change to {state}.");

        if (state == RTCPeerConnectionState.connected)
        {
            await testPattern.StartVideo();
        }
        else if (state == RTCPeerConnectionState.failed)
        {
            pc.Close("ice disconnection");
        }
        else if (state == RTCPeerConnectionState.closed)
        {
            await testPattern.CloseVideo();
        }
    };
    return pc;
}

I don't know what I've done wrong; can you please help? I tried adding all the av DLLs into bin\Debug\netcoreapp3.1\FFmpeg\bin\x64.

RTCPeerConnection.OnRTPDataReceived Invalid argument

Hi, I'm working on a WebRTC video call with SIPSorcery.

I made some code from the SIP examples and FFmpeg.
There are no problems with the Janus server connection and the peer connection,
but FFmpeg decoding always fails with RTCPeerConnection.OnRTPDataReceived Invalid argument on macOS.
I installed the libraries with:
brew install libvpx ffmpeg mono-libgdiplus

I put some logging code in FFmpeg and realized that ffmpeg.av_image_copy_to_buffer always returns -22 (EINVAL):

ffmpeg.av_image_copy_to_buffer(pOutData, outputBufferSize, _dstData, _dstLinesize, _dstPixelFormat, _dstWidth, _dstHeight, 1)

This is my log and code:

[Debug] 0 Successfully initialised ffmpeg based image converted for 640:480:AV_PIX_FMT_YUV420P->640:480:AV_PIX_FMT_BGR24.
[Debug] 0 ConvertFrame result: -22 outputBufferSize: 921600 lineSizes: 1920,0,0,0 _dstPixelFormat: AV_PIX_FMT_BGR24 _dstWidth: 640 _dstHeight: 480
[Error] 0 Exception RTCPeerConnection.OnRTPDataReceived Invalid argument
            FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_VERBOSE);
            
            RTCPeerConnection pc = new RTCPeerConnection(null);

            var encoder = new FFmpegVideoEncoder();
            var videoSource = new VideoTestPatternSource(encoder);
            videoSource.RestrictFormats(format => format.Codec == VideoCodecsEnum.H264);
            var videoSink = new FFmpegVideoEndPoint();
            videoSink.RestrictFormats(format => format.Codec == VideoCodecsEnum.H264);

            MediaStreamTrack videoTrack = new MediaStreamTrack(videoSink.GetVideoSourceFormats(), MediaStreamStatusEnum.SendRecv);
            pc.addTrack(videoTrack);
            // pc.OnVideoFrameReceived += (endpoint, duration, buffer, format) => {
            //     Debug.Log($"OnVideoFrameReceived endpoint: {endpoint} buffer: {buffer.Length} format: {format.FormatName}");
            // };
            pc.OnVideoFrameReceived += videoSink.GotVideoFrame;
            videoSource.OnVideoSourceEncodedSample += pc.SendVideo;
            // videoSource.OnVideoSourceError += (errorMessage) => {
            //     Debug.Log($"=== OnVideoSourceError {errorMessage}");
            // };

            pc.OnVideoFormatsNegotiated += (formats) => {
                // foreach (var format in formats) {
                //     Debug.Log($"=== OnVideoFormatsNegotiated {format.FormatName}");
                // }
                // Debug.Log($"=== OnVideoFormatsNegotiated {formats.First().FormatName}");
                videoSink.SetVideoSourceFormat(formats.First());
                videoSource.SetVideoSourceFormat(formats.First());
            };
            pc.OnTimeout += (mediaType) => Debug.LogFormat($"[WebRTC] Peer connection timeout on media {mediaType}.");
            pc.oniceconnectionstatechange += (state) => Debug.LogFormat($"[WebRTC] ICE connection state changed to {state}.");
            pc.onconnectionstatechange += async (state) => {
                Debug.LogFormat($"[WebRTC] Peer connection connected changed to {state}.");
                switch (state) {
                    case RTCPeerConnectionState.connected:
                    // Debug.Log($"=== pc.HasAudio: {pc.HasAudio} pc.HasVideo: {pc.HasVideo}");
                    // pc.GetRtpChannel().OnRTPDataReceived += (localPort, remoteEP, buffer) => {
                    //     Debug.Log($"localPort: {localPort} remoteEP: {remoteEP} buffer: {buffer.Length} buffer[0]: {buffer[0]}");
                    // };
                    break;
                    case RTCPeerConnectionState.closed:
                    case RTCPeerConnectionState.failed:
                    await videoSource.CloseVideo().ConfigureAwait(false);
                    videoSource.Dispose();
                    break;
                }
            };

            // pc.OnReceiveReport += (endpoint, mediatypes, packet) => {
            //     Debug.Log($"=== OnReceiveReport: {endpoint} {mediatypes}");
            // };

            videoSource.OnVideoSourceRawSample += (uint durationMiliseconds, int width, int height, byte[] sample, VideoPixelFormatsEnum pixelFormat) => {
                RunOnMainThread.Enqueue(() => {
                    try {
                        if (LocalVideoTexture == null) {
                            LocalVideoTexture = new Texture2D(width, height, TextureFormat.RGB24, false);
                        }
                        LocalVideoTexture.LoadRawTextureData(sample);
                        LocalVideoTexture.Apply();
                    } catch (Exception ex) {
                        Debug.LogError(ex);
                    }
                });
            };
            // videoSource.OnVideoSourceRawSample += videoSink.ExternalVideoSourceRawSample;
            // videoSource.OnVideoSourceEncodedSample += (uint durationMiliseconds, byte[] sample) => {
            //     Debug.Log($"OnVideoSourceEncodedSample {durationMiliseconds}");
            // };

            videoSink.OnVideoSinkDecodedSample += (byte[] bmp, uint width, uint height, int stride, VideoPixelFormatsEnum pixelFormat) => {
                Debug.Log($"videoSink.OnVideoSinkDecodedSample {width} {height}");
                RunOnMainThread.Enqueue(() => {
                    try {
                        if (RemoteVideoTexture == null) {
                            RemoteVideoTexture = new Texture2D((int)width, (int)height, TextureFormat.RGB24, false);
                        }
                        RemoteVideoTexture.LoadRawTextureData(bmp);
                        RemoteVideoTexture.Apply();
                    } catch (Exception ex) {
                        Debug.LogError(ex);
                    }
                });
            };

            var offer = pc.CreateOffer(null);
            await pc.setLocalDescription(new RTCSessionDescriptionInit { type = RTCSdpType.offer, sdp = offer.ToString() }).ConfigureAwait(false);
            Debug.LogFormat($"[WebRTC] SDP Offer: {pc.localDescription.sdp}");

            VideoSource = videoSource;
            await VideoSource.StartVideo().ConfigureAwait(false);
            Debug.Log("[WebRTC] Video Started");

            WebRTCCancellationTokenSource = new CancellationTokenSource();
            var janusClient = new JanusRestClient(
                JANUS_BASE_URL,
                SIPSorcery.LogFactory.CreateLogger("[WebRTC]"),
                WebRTCCancellationTokenSource.Token
            );

            janusClient.OnJanusEvent += async (resp) => {
                if (resp.jsep != null) {
                    Debug.LogFormat($"[Janus] get event jsep={resp.jsep.type}.");

                    Debug.LogFormat($"[Janus] SDP Answer: {resp.jsep.sdp}");
                    var result = pc.setRemoteDescription(new RTCSessionDescriptionInit { type = RTCSdpType.answer, sdp = resp.jsep.sdp });
                    Debug.LogFormat($"[Janus] SDP Answer: {pc.remoteDescription.sdp}");

                    if (result == SetDescriptionResultEnum.OK) {
                        Debug.LogFormat($"[Janus] Starting peer connection.");
                        await pc.Start().ConfigureAwait(false);
                    } else {
                        Debug.LogFormat($"[Janus] Error setting remote SDP description {result}.");
                    }
                }
            };

            await janusClient.StartSession().ConfigureAwait(false);
            Debug.Log("[WebRTC] StartSession Started");
            await janusClient.StartEcho(offer.ToString()).ConfigureAwait(false);
            Debug.Log("[WebRTC] StartEcho Started");

I'm new to video encoding/decoding. Can anyone give me advice on fixing this issue?

Issue with FFmpegFileSource OnVideoSourceRawSampleFaster

First, thank you for this great set of libraries! I have been trying to run the WebRTC sample shown at https://github.com/sipsorcery-org/sipsorcery and have had limited success with the library as-is.

First off, I could not find good FFmpeg libraries for version 4.4, so I have tried it out using version 6.0, and that seems to be mostly OK. My real problem is that, no matter what version I use, when I get the video going from the web page connecting to the socket, the server side keeps crashing. By doing some tweaks to this library I found that by commenting out line 60 in FFmpegFileSource.cs (see code below) the crash would not happen and the MP4 file would always be played. Oddly, very infrequently the existing library as-is would work, but rarely: probably less than 1 in 10 times.

I am going to submit a PR for the FFmpeg library upgrade, but I don't really understand what is happening with this not working. It could be related to the BGR24 conversion that happens in FFmpegVideoSource when the OnVideoSourceRawSampleFaster handler is not null, but again it is hard to tell, and I don't know what the purpose of the "raw sample faster" path is.

Please let me know if you have any thoughts; I am happy to provide more information or answer questions.

Where to get FFmpeg binaries for arm64 Linux build? Readme outdated

The readme says to use "apt install ffmpeg" on Linux, but that version does not come with all the necessary components. For a Windows build, using the binaries that are present in the repository works, but on Linux obviously not, because it's a different platform. Is there a place where I can download an FFmpeg build that would work on Linux?

Everything I have tried so far gave me either "Unable to load DLL avutil.56" or "Unable to load DLL avdevice.58".

[Problem] Unable to load DLL 'avformat.58', 'swscale.5' in Windows 11

Hi, I have problems with my test sample:

  • dotnet 6
  • SIPSorceryMedia.FFmpeg 1.2.0 (latest)
  • default compile options (without single-file, etc.)
  • Debug (Any CPU/x64)

I'm using WebRTC to receive the traffic and convert it with the FFmpegVideoEndPoint class.

System.DllNotFoundException: Unable to load DLL 'avcodec.58 under ..\\BroadcastStreamServer\\bin\\Debug\\net6.0\\': The specified module could not be found.
   at FFmpeg.AutoGen.ffmpeg.LoadLibrary(String libraryName, Boolean throwException)
   at FFmpeg.AutoGen.ffmpeg.<>c.<.cctor>b__7_59()
   at SIPSorceryMedia.FFmpeg.FFmpegVideoEncoder.DecodeFaster(AVCodecID codecID, Byte[] buffer, Int32& width, Int32& height)

The problem with 'avformat.58' has a solution: download FFmpeg and paste it into the solution folder. But I cannot find swscale.5.dll.
Update: FFmpeg contains swscale.5.dll; I didn't set "copy to output".

The readme says:

For Windows
No additional steps are required for an x64 build. The nuget package includes the [FFmpeg](https://www.ffmpeg.org/) x64 binaries.

but I cannot find the native platform libraries in the build folder.

OnVideoSourceRawSampleFaster event is not firing on Linux (Ubuntu 22.04)

ffmpeg -> ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers

v4l2-ctl --list-devices:

WebCam C170: WebCam C170 (usb-0000:00:14.0-10):

/dev/video0
/dev/video1
/dev/media0

FFmpegInit.Initialise(FfmpegLogLevelEnum.AV_LOG_VERBOSE, linux_path) works fine; the path is /usr/lib/x86_64-linux-gnu. There is no error initializing FFmpeg.

private FFmpegCameraSource _videoSource;

_videoSource = new FFmpegCameraSource(camPath); //here camPath is /dev/video1
_videoSource.OnVideoSourceRawSampleFaster += _videoSource_OnVideoSourceRawSampleFaster;
_videoSource.OnVideoSourceError += _videoSource_OnVideoSourceError;
_videoSource.StartVideo().Wait();

There is no error when the above code executes.

BUT the OnVideoSourceRawSampleFaster event is not firing, although it works on Windows.

