membrane_rtc_engine's People

Contributors

balins, bblaszkow06, daniel-jodlos, dependabot[bot], dmorn, dominikwolek, evadne, felonekonom, jasonwc, karolk99, lvala, mat-hek, mickel8, ostinelli, philipgiuliani, pkrucz00, qizot, rados13, roznawsk, sax, sgfn, wojciechbarczynski


membrane_rtc_engine's Issues

Sometimes simulcast doesn't properly switch encoding

While working on #134 I ran into strange behavior: sometimes, even though the peer received a callback that the encoding had switched, the width and height of incoming frames do not change. The SSRC in SimulcastTee changes correctly after the encoding change. Moreover, in this situation the WebRTC Stats API doesn't report any qualityLimitationReason.

Example logs from the failed test:

test User disable and then enable medium encoding (TestVideoroom.Integration.SimulcastTest)
     test/integration/simulcast_test.exs:27
     Failed on stage: disable should be encoding: h,
                     receiver stats are: [%{"callbackEncoding" => "m", "framesPerSecond" => 19, "framesReceived" => 356, "height" => 360, "width" => 640}, %{"callbackEncoding" => "m", "framesPerSecond" => 12, "framesReceived" => 389, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 15, "framesReceived" => 404, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 20, "framesReceived" => 441, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 20, "framesReceived" => 481, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 15, "framesReceived" => 519, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 20, "framesReceived" => 559, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 20, "framesReceived" => 595, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 15, "framesReceived" => 632, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 19, "framesReceived" => 674, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 20, "framesReceived" => 710, "height" => 360, "width" => 640}, %{"callbackEncoding" => "h", "framesPerSecond" => 17, "framesReceived" => 747, "height" => 360, "width" => 640}]
                     sender stats are: [%{"h" => %{"framesPerSecond" => 20, "framesSent" => 279, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 279, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 20, "framesSent" => 279, "height" => 360, "qualityLimitationReason" => "none", "width" => 640}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 321, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 321, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 19, "framesSent" => 320, "height" => 360, "qualityLimitationReason" => "none", "width" => 640}}, %{"h" => %{"framesPerSecond" => 21, "framesSent" => 362, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 21, "framesSent" => 362, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "height" => 360, "qualityLimitationReason" => "none", "width" => 640}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 401, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 401, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "height" => 360, "qualityLimitationReason" => "none", "width" => 640}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 442, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 442, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 482, "height" => 720, 
"qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 482, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 522, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 522, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 21, "framesSent" => 563, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 21, "framesSent" => 563, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 603, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 603, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 21, "framesSent" => 644, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 21, "framesSent" => 644, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 683, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 683, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, 
"qualityLimitationReason" => "none"}}, %{"h" => %{"framesPerSecond" => 20, "framesSent" => 724, "height" => 720, "qualityLimitationReason" => "none", "width" => 1280}, "l" => %{"framesPerSecond" => 20, "framesSent" => 724, "height" => 180, "qualityLimitationReason" => "none", "width" => 320}, "m" => %{"framesPerSecond" => 0, "framesSent" => 320, "qualityLimitationReason" => "none"}}]
       

Add ability to update peer, track and remove track

Just pushed the change to require the old track id when replacing a track.

It seems to me that updating the metadata will require a new MediaEvent type, no? Right now metadata and tracksMetadata only come in on the join media event.

Could the feature to update metadata be split out into a separate issue? If there was a more generic ability to change metadata and tracks metadata, publicly exposed on the client, then passing optional track metadata could just call that function.

Originally posted by @sax in #13 (comment)

Fly.io support?

Hi guys, it appears that ex_dtls uses a feature of Erlang that cannot be used on IPv6 networks, and because fly.io uses IPv6 internally (a questionable decision, I know) we cannot use the Membrane framework at all there. With so many people running Elixir on Fly.io, I wonder if this limits this excellent framework's potential. I'm interested in helping if I can, but I'm really not sure I understand what ex_dtls is doing 😂

Limit audio and video sending

We should limit broadcasting media of non-speaking participants as much as possible. Without that, bigger rooms use lots of resources on both the server and the client side.

Project license

Hi there 👋,
membrane and jellyfish are really awesome projects 🚀. I would love to use membrane_rtc_engine in my own project, but there is no open-source license for it yet. Is this intentional, or was it just overlooked?

Implement simulcast

In simulcast, the client (in our case, a browser) can send multiple encodings of the same source. Each encoding can have a different resolution, bitrate, etc.

The SFU's responsibility is to forward to each peer the version of the source encoding that this peer is able to process.

The task is to implement the SFU side of simulcast.
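The selection the SFU makes per receiver can be sketched roughly as follows. This is an illustrative sketch only; `EncodingPicker`, the layer names, and the bitrate table are hypothetical and not part of membrane_rtc_engine:

```elixir
# Illustrative sketch: pick the highest-bitrate simulcast layer that fits
# a receiver's estimated bandwidth. EncodingPicker and the per-layer
# bitrate values are hypothetical, not the engine's actual API.
defmodule EncodingPicker do
  # Assumed per-layer bitrates in bits per second.
  @bitrates %{"l" => 150_000, "m" => 500_000, "h" => 1_500_000}

  def pick(available_layers, estimated_bandwidth) do
    available_layers
    |> Enum.filter(fn layer -> @bitrates[layer] <= estimated_bandwidth end)
    # Fall back to the lowest layer when nothing fits the estimate.
    |> Enum.max_by(&@bitrates[&1], fn -> "l" end)
  end
end

EncodingPicker.pick(["l", "m", "h"], 600_000)
# returns "m"
```

In practice the estimate would come from congestion control (e.g. TWCC feedback) rather than being passed in directly.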

Resources that might be useful:

Crash when removing endpoint

When a child is removed from the RTC engine, we're hit by this crash:

{"time":"2023-07-18T09:52:51.928Z","severity":"error","message":"GenServer #PID<0.1295.0> terminating\n** (Membrane.CallbackError) Callback handle_child_pad_removed/4 is not implemented in Membrane.RTC.Engine\n\n    (membrane_rtc_engine 0.15.0) Membrane.RTC.Engine.handle_child_pad_removed({:endpoint, \"streaming-endpoint\"}, {Membrane.Pad, :input, \"65d69082-5598-40c4-83ed-83f0d00e8486:0d97df5c-e8e1-4c19-b463-26e9cbf44386\"}, ...

The child is removed using RTC.Engine.remove_endpoint(state.rtc_engine, peer_id), and once the remove_tracks notification is received we remove the children of the endpoint.

The crash happens on any v0.12 version of membrane_core, with RTC engine v0.15.

Implementing an empty handle_child_pad_removed/4 in the Membrane.RTC.Engine module solves the problem, but isn't that callback supposed to be optional?
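The reported workaround can be sketched like this. It is shown as a standalone module purely for illustration; in the actual fix it would be defined inside Membrane.RTC.Engine (with @impl true), and whether returning no actions is sufficient should be checked against the membrane_core docs:

```elixir
# Standalone illustration of the reported workaround: a no-op
# handle_child_pad_removed/4 that emits no actions and keeps state as-is.
# In the real fix this would live inside Membrane.RTC.Engine.
defmodule EngineWorkaroundSketch do
  def handle_child_pad_removed(_child, _pad, _ctx, state) do
    {[], state}
  end
end

EngineWorkaroundSketch.handle_child_pad_removed({:endpoint, "id"}, :pad, %{}, %{foo: 1})
# returns {[], %{foo: 1}}
```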

WebRTC Data Channels

I think it'd be cool if this project (or a related spin-off project) were able to support data channels, so I figured I'd create an issue. 😄

This came up in the Discord a few months ago:

@DominikWolek 04/12/2023 10:07 UTC
In case you'd want to create WebRTC-based app:

This would be especially nice for standalone usage like multiplayer game networking in the browser!

Properly remove elements when they finish their work

After merging #82 we simply use the :remove_children action to remove the tee for a specific track. This is not ideal, because the element can still have buffers to process. We should wait until its mailbox is empty and only then remove it.
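One way to observe whether a process has drained its mailbox is Process.info/2. The following is a hypothetical polling sketch; a real fix would hook into Membrane's own end-of-stream handling rather than poll:

```elixir
# Hypothetical sketch: poll a process's message-queue length and only
# proceed (e.g. with :remove_children) once the mailbox is empty.
defmodule MailboxDrain do
  def wait_until_empty(pid, interval_ms \\ 10) do
    case Process.info(pid, :message_queue_len) do
      {:message_queue_len, 0} -> :ok
      {:message_queue_len, _n} ->
        Process.sleep(interval_ms)
        wait_until_empty(pid, interval_ms)
      # The process already exited, so there is nothing left to drain.
      nil -> :ok
    end
  end
end

idle = spawn(fn -> Process.sleep(:infinity) end)
MailboxDrain.wait_until_empty(idle)
# returns :ok (the spawned process has an empty mailbox)
```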

Make benchmarks

Though we have already made some benchmarks, we should consolidate what we have, add more benchmarks, and create a report.

TODO:

  • create benchmark scenarios
  • implement the scenarios in some engine (stampede/testRTC?) and add them to this repo
  • create a script to collect performance data from the server and publish it to GH
  • perform benchmarks and publish the report (in readme?)

In the future, we may want to use something more efficient than a browser to generate the load (another Membrane pipeline?)

Cannot read properties of undefined (reading `id` / `trackIdToMetadata`) (TS client bug)

index.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'id')
    at b.receiveMediaEvent (index.js:1)
    at Object.callback (rtcConnection.ts:95)
    at Channel.trigger (phoenix.esm.js:315)
    at phoenix.esm.js:1051
    at Object.decode (phoenix.esm.js:658)
    at Socket.onConnMessage (phoenix.esm.js:1037)
    at WebSocket.conn.onmessage [as __zone_symbol__ON_PROPERTYmessage] (phoenix.esm.js:840)
    at WebSocket.I (geometrics.js:2)
    at u.invokeTask (geometrics.js:2)
    at s.runTask (geometrics.js:2)
            case "peerLeft":
                if (a = this.idToPeer.get(t.data.peerId),
                a.id === this.getPeerId())
                    return;
                Array.from(a.trackIdToMetadata.keys()).forEach(T=>{
                    var f, p;
                    return (p = (f = this.callbacks).onTrackRemoved) == null ? void 0 : p.call(f, this.trackIdToTrack.get(T))
                }
                ),
                this.erasePeer(a),
                (E = (m = this.callbacks).onPeerLeft) == null || E.call(m, a);
                break;

and a similar bug

index.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'trackIdToMetadata')
    at b.receiveMediaEvent (index.js:1)
    at Object.callback (rtcConnection.ts:95)
    at Channel.trigger (phoenix.esm.js:315)
    at phoenix.esm.js:1051
    at Object.decode (phoenix.esm.js:658)
    at Socket.onConnMessage (phoenix.esm.js:1037)
    at WebSocket.conn.onmessage [as __zone_symbol__ON_PROPERTYmessage] (phoenix.esm.js:840)
    at WebSocket.I (geometrics.js:2)
    at u.invokeTask (geometrics.js:2)
    at s.runTask (geometrics.js:2)

I have an issue with a fast_tls compile error

  • I used Homebrew to install OpenSSL 1.1, but compilation still fails with this error:
  • fatal error: 'openssl/err.h' file not found
    #include <openssl/err.h>

What OpenSSL version is required for membrane?

Firefox: ICE failed, your TURN server appears to be broken

Running the webrtc_videoroom example in this repository, I am able to connect to a room in Chrome with no issue; however, attempting to do the same in Firefox always fails, and the console logs the error "WebRTC: ICE failed, your TURN server appears to be broken, see about:webrtc for more details".

Running on Arch Linux, Firefox Developer Edition 122.0b9.

Attached is the about:webrtc dump (saved as .txt because GitHub won't let me upload .html):
aboutWebrtc.txt

Let me know if there is any more information I can provide!

Custom Endpoint support removed in v0.14.0

Hi, I've encountered an issue when upgrading my application from 0.13.0 to 0.14.0.

My application defines its own endpoints for some custom media processing work, and it appears that support for custom endpoints has been entirely removed from v0.14.0 with no mention in the upgrade guide -- endpoints now appear to be restricted to those defined by membrane_rtc_engine (WebRTC, HLS, and RTMP)

The problem seems to arise from the function Membrane.RTC.Engine.Endpoint.WebRTC.MediaEvent.to_type_string/1, defined here.

Was this an intentional decision? Or is this a bug? The decision to use membrane_rtc_engine at our company was driven by the promise of being able to use the wider membrane ecosystem for custom media processing, and this change seems to completely remove that capability.

Any assistance or workarounds would be much appreciated.

Improve SDP offer-answer exchange

This is needed to

  • allow clients to declare the streams they want to send
  • support simulcast #3

The current approach is to save ourselves from SDP weirdness by limiting its usage to the bare minimum and doing the actual negotiation with our custom protocol.

Investigate video artifacts

In some environments (probably with low network bandwidth or slow CPUs) there are a lot of video artifacts.

The task is to investigate the problem.

Useful info:

  • this problem seems not to be present in Zoom or Google Meet
  • we use VP8 codec by default. Check if H.264 behaves in the same manner

Crash in WebRTC endpoint when removing tracks

https://github.com/membraneframework/membrane_rtc_engine/blob/master/lib/membrane_rtc_engine/endpoints/webrtc_endpoint.ex#L164-L170

It looks like Membrane.RTC.Engine.Endpoint.WebRTC.update_tracks/2 returns a map, but in the handle_notification for :removed_tracks, it matches on a tuple.

It also looks like it's updating :outbound_tracks on state, even though it's working with inbound_tracks. Is this intentional, or maybe a carry-over from some previous version of the code?

Implement logging mechanism in TS client

The task is to discuss and implement a logging mechanism in the TS client.

One option would be to save logs to a file, so that it can be sent to us for debugging purposes.

Sending engine data to a pipeline

I set up an engine which receives data from the browser; everything seems to be working properly.

I would like to send the data to a Membrane pipeline for post-processing before saving it to a file.

Is there a simple way to send the data between the engine and the pipeline?

Currently the solution I have is creating a custom endpoint which sends the data to the pipeline.

Make RTC engine modular

Currently, to add new functionality to the RTC engine (like stream recording), it's necessary to modify the engine itself. This is problematic due to the following reasons:

  • it's hard to add custom/proprietary functionality to the engine
  • adding new functionalities in parallel results in many conflicts
  • the size of the RTC engine grows quickly

An idea of how to solve this is to reduce the RTC engine's responsibility to routing the traffic between externally delivered endpoints. Such endpoints could handle various protocols and serve different purposes. Apart from the WebRTC endpoint, we could have

  • Broadcasting endpoint (HLS/DASH/RTMP)
  • Recording endpoint (mp4/WebM)
  • An endpoint reading a recorded file and sending it to the RTC engine
  • SIP endpoint
  • RTP dump endpoint
  • ...

The RTC Engine could be only aware of the routing stuff - peers, tracks (their metadata, possibly also codecs), distribution, etc.

Implementation-wise, RTC endpoints could be implementations of an RTC.Endpoint behaviour. Each RTC endpoint could provide a piece of pipeline to be linked into the engine. Possibly it could be a Membrane.Bin exposing compatible pads and handling messages defined in an API. For example, to make the WebRTC.Endpoint work with the RTC engine, we would possibly have to wrap it into the RTC.Endpoint implementation - that way the WebRTC.Endpoint wouldn't rely on the RTC Engine nor the other way around. Then, when adding/accepting a new peer, the user could choose which RTC.Endpoint implementation to use for that peer.

Since the recording endpoint, for example, can be spawned per user or per entire room, possibly both the RTC endpoint implementation and the user choosing it should be able to decide which streams the endpoint instance should receive (or where its streams should be sent).

To get there from the current state, we should remove all the WebRTC-specific stuff from the RTC engine and move it either to the WebRTC.Endpoint or the new RTC.Engine implementation for the WebRTC.Endpoint. We should remember that the WebRTC.Endpoint should remain usable without the RTC.Engine.

Let me know what you think ;)

Could not put/update key `:inbound_tracks` on a nil value

[error] GenServer #PID<0.1671.0> terminating
** (ArgumentError) could not put/update key :inbound_tracks on a nil value
    (elixir 1.12.3) lib/access.ex:365: Access.get_and_update/3
    (elixir 1.12.3) lib/map.ex:854: Map.get_and_update/3
    (elixir 1.12.3) lib/kernel.ex:2652: Kernel.update_in/3
    (membrane_rtc_engine 0.1.0-alpha.2) lib/membrane_rtc_engine/engine.ex:502: Membrane.RTC.Engine.handle_notification/4
    (membrane_core 0.7.0) lib/membrane/core/callback_handler.ex:126: Membrane.Core.CallbackHandler.exec_callback/4
    (membrane_core 0.7.0) lib/membrane/core/callback_handler.ex:71: Membrane.Core.CallbackHandler.exec_and_handle_callback/5
    (membrane_core 0.7.0) lib/membrane/core/parent/message_dispatcher.ex:29: Membrane.Core.Parent.MessageDispatcher.handle_message/2
    (stdlib 3.16) gen_server.erl:695: :gen_server.try_dispatch/4
    (stdlib 3.16) gen_server.erl:771: :gen_server.handle_msg/6
    (stdlib 3.16) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message: {Membrane.Core.Message, :notification, [endpoint: "r-1e93cc:05b3a2f6-7720-4b04-b71d-88ab1c51f036", new_tracks: [%Membrane.WebRTC.Track{encoding: :OPUS, fmtp: %ExSDP.Attribute.FMTP{apt: nil, cbr: nil, level_asymmetry_allowed: nil, max_br: nil, max_dpb: nil, max_fr: nil, max_fs: nil, max_mbps: nil, max_smbps: nil, maxaveragebitrate: nil, maxplaybackrate: nil, minptime: 10, packetization_mode: nil, profile_id: nil, profile_level_id: nil, pt: 111, range: nil, repair_window: nil, stereo: nil, usedtx: nil, useinbandfec: true}, id: "r-1e93cc:05b3a2f6-7720-4b04-b71d-88ab1c51f036:c8b53c2a-5a51-4461-92c2-fecebf3f3e17 ", mid: "0", name: "r-1e93cc:05b3a2f6-7720-4b04-b71d-88ab1c51f036:c8b53c2a-5a51-4461-92c2-fecebf3f3e17 -audio-ea320f84-9580-487e-8718-a2292c175841", rtp_mapping: %ExSDP.Attribute.RTPMapping{clock_rate: 48000, encoding: "opus", params: 2, payload_type: 111}, ssrc: 2278908833, status: :ready, stream_id: "ea320f84-9580-487e-8718-a2292c175841", type: :audio}, %Membrane.WebRTC.Track{encoding: :VP8, fmtp: nil, id: "r-1e93cc:05b3a2f6-7720-4b04-b71d-88ab1c51f036:1f454507-f9e5-4af0-b072-cc69ae012510 ", mid: "1", name: "r-1e93cc:05b3a2f6-7720-4b04-b71d-88ab1c51f036:1f454507-f9e5-4af0-b072-cc69ae012510 -video-ea320f84-9580-487e-8718-a2292c175841", rtp_mapping: %ExSDP.Attribute.RTPMapping{clock_rate: 90000, encoding: "VP8", params: nil, payload_type: 96}, ssrc: 801945448, status: :ready, stream_id: "ea320f84-9580-487e-8718-a2292c175841", type: :video}]], []}
[info] Spawned room process #PID<0.1725.0> f

Fix joining without sending media

The task is to fix the following scenarios:

  • joining without sending any media
  • joining without sending and receiving any media
  • joining only with microphone or camera

Fail to compile fast_tls in webrtc_videoroom example (and probably others)

OSX Sonoma 14.0

openssl --version
OpenSSL 3.2.1 30 Jan 2024 (Library: OpenSSL 3.2.1 30 Jan 2024)
export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CFLAGS="-I/usr/local/opt/openssl/include/"
export CPPFLAGS="-I/usr/local/opt/openssl/include/"
iex -S mix phx.server
Erlang/OTP 26 [erts-14.0.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [jit] [dtrace]

===> Compiling /Users/xxx/Development/membrane_rtc_engine/examples/webrtc_videoroom/deps/fast_tls/c_src/fast_tls.c
===> /Users/xxx/Development/membrane_rtc_engine/examples/webrtc_videoroom/deps/fast_tls/c_src/fast_tls.c:21:10: fatal error: 'openssl/err.h' file not found
#include <openssl/err.h>
         ^~~~~~~~~~~~~~~
1 error generated.

** (Mix) Could not compile dependency :fast_tls, "/Users/xxx/.asdf/installs/elixir/1.14.1-otp-25/.mix/elixir/1-14/rebar3 bare compile --paths /Users/xxx/Development/membrane_rtc_engine/examples/webrtc_videoroom/_build/dev/lib/*/ebin" command failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile fast_tls", update it with "mix deps.update fast_tls" or clean it with "mix deps.clean fast_tls"

Prioritizing streams

The goal is to implement a mechanism that limits the streams being sent to those with the highest priority. The solution will be the Last N algorithm, or some modification of it. Using it, we will reduce the number of sent video streams to a declared constant N (passed as a WebRTCEndpoint option).

To do list:

  • Element detecting active speakers in the room (SpeakersDetector) - should be done with the use of the VAD notifications from the EndpointBins.
  • Mechanism for turning off not prioritized streams - should be done with the use of track_filters in each EndpointBin

Description of how this will work

WebRTCEndpoint will forward all VAD notifications to SpeakersDetector. These notifications will probably have to include peer_id and timestamps. If the group of active speakers changes, SpeakersDetector will send a notification with a list of the new active speakers to the RTC Engine. The RTC Engine will map this list of peers to a list of video tracks. Then it will publish that list of video tracks to all other endpoints as a new notification, e.g. {:speakers_video, tracks_list}. WebRTCEndpoint will forward these notifications to EndpointBin, which will disable all video tracks outside this list. This requires activating outbound RTCP reports with proper intervals; otherwise the errors described in the issue will occur.
Each EndpointBin can also have a list of prioritized tracks, based on input from the client library and sent in a custom media event. When the {:speakers_video, tracks_list} notification arrives, the bin will first take tracks from the client priority list and then those from the notification.
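The Last N selection itself can be sketched as a simple sort-and-take. The track shape and field names below are hypothetical, not the engine's data model:

```elixir
# Hypothetical sketch of Last N: given tracks annotated with the time
# their owner last spoke, keep only the N most recently active ones.
defmodule LastN do
  def select(tracks, n) do
    tracks
    |> Enum.sort_by(& &1.last_spoke_at, :desc)
    |> Enum.take(n)
    |> Enum.map(& &1.id)
  end
end

tracks = [
  %{id: "a", last_spoke_at: 100},
  %{id: "b", last_spoke_at: 300},
  %{id: "c", last_spoke_at: 200}
]

LastN.select(tracks, 2)
# returns ["b", "c"]
```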

add_peer function mismatched with handle_other callback

I just noticed the message sent from Membrane.RTC.Engine.add_peer/3 does not match its handle_other callback. It looks as if the function expects a peer_id and some metadata, whereas the callback expects a Peer struct.

Expose MediaEvents in API to enable implementing client library in other languages.

I just wanted to add a note here based on our conversation today.

I'm going forward with the code as-is, but I do think that MediaEvent as a complete black box will break down, especially when someone wants to implement a client in a new language. I was considering writing a client in Swift, for example. At the very least, the various MediaEvent types will need to be documented, even if the transport dictates that those data structures be serialized to JSON before sending.

PROTOBUF is a case where it might make sense to expose the underlying type. That said, we're not planning on using PROTOBUF any time soon.

Originally posted by @sax in #2 (comment)

Rethink detecting peer disconnection

At the moment it is the user code's responsibility to properly detect that one of the peers has disconnected, e.g. by closing the web browser.
In such a situation, the message {:remove_peer, peerId} should be sent to the SFU engine.

This detection can be problematic.
We can track the signalling connection, e.g. whether the WS is alive, and if it is not, send the :remove_peer message to the SFU engine. The problem is that there can be situations in which the signalling connection state does not correspond to the media connection state: for some reason the WS connection is broken and needs to reconnect while the media connection is still valid.

One possible solution would be to create a new type of MediaEvent, e.g. ping, and send it periodically.
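The server side of such a heartbeat can be sketched with a plain receive timeout. PingWatch and the :ping message are hypothetical names, not an existing engine API:

```elixir
# Hypothetical sketch: expect a :ping within some interval and report
# the peer as disconnected when it stops arriving.
defmodule PingWatch do
  # Returns :alive if a :ping arrives within timeout_ms, else :disconnected.
  def await_ping(timeout_ms) do
    receive do
      :ping -> :alive
    after
      timeout_ms -> :disconnected
    end
  end
end

send(self(), :ping)
PingWatch.await_ping(50)
# returns :alive
```

A real implementation would run this in a dedicated process per peer and trigger the {:remove_peer, peerId} message on :disconnected.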

cc @mat-hek @sax @geometerio

An error occurred while parsing RTP packet

Hi. I get an error while waiting for a response from a participant. What could be the reason for this?
The error repeats during the connection.

I am using 2 PCs on a local network for testing.

[warn] [{:srtp_decryptor, #Reference<0.3821044555.2247622657.249818>}] Couldn't unprotect rtp packet:
<<1, 1, 0, 56, 33, 18, 164, 66, 144, 45, 192, 226, 17, 180, 172, 130, 190, 188, 15, 235, 0, 32, 0, 20, 0, 2, 178, 244, 1, 19, 172, 226, 98, 74, 99, 227, 220, 80, 35, 19, 24, 238, 8, 211, 0, 8, 0, 20, 188, 185, 121, 86, 29, 16, 198, 89, 62, 60, 53, 232, 105, 183, 93, 143, 98, 99, 126, 242, 128, 40, 0, 4, 70, 146, 114, 110>>
Reason: :auth_fail. Ignoring packet.

[warn] [:depayloader] An error occurred while parsing RTP packet.
Reason: invalid_first_packet
Packet: %Membrane.Buffer{metadata: %{rtp: %{csrcs: [], extension: nil, marker: true, payload_type: 98, sequence_number: 12961, ssrc: 2106802986, timestamp: 3516596918}, timestamp: 102400000000}, payload: <<128, 128, 163, 233, 49, 252, 80, 35, 146, 207, 180, 254, 6, 141, 123, 75, 61, 247, 31, 62, 240, 243, 174, 108, 164, 75, 78, 54, 109, 181, 223, 199, 133, 140, 191, 93, 61, 172, 235, 58, 30, 94, 31, 191, 7, 119, 47, 64, 16, 4, 86, 34, 112, 124, 17, 247, 165, 194, 146, 18, 2, 205, 59, 255, 240, 168, 176, 100, 231, 17, 77, 232, 163, 14, 51, 78, 241, 198, 10, 191, 115, 57, 37, 217, 47, 181, 105, 166, 42, 171, 24, 147, 135, 181, 250, 90, 62, 30, 254, 169, 199, 231, 47, 179, 177, 176, 142, 55, 80, 133, 188, 254, 44, 63, 52, 92, 226, 234, 165, 64, 80, 111, 32, 16, 39, 118, 239, 96, 189, 242, 60, 158, 188, 243, 120, 138, 202, 10, 215, 230, 74, 17, 234, 220, 32, 143, 139, 111, 106, 32, 223, 214, 213, 2, 91, 113, 29, 206, 189, 157, 53, 198, 67, 198, 130, 193, 92, 43, 236, 1, 58, 160, 229, 36, 174, 50, 125, 185, 220, 248, 21, 217, 100, 91, 102, 30, 116, 22, 113, 98, 136, 28, 127, 200, 32, 213, 16, 185, 121, 40, 204, 236, 180, 166, 92, 101, 253, 182, 164, 146, 191, 43, 27, 92, 100, 247, 227, 11, 241, 125, 101, 18, 7, 11, 165, 232, 16, 185, 178, 157, 186, 189, 15, 197, 147, 137, 158, 111, 44, 208, 179, 32, 120, 96, 166, 10, 11, 36, 199, 136, 215, 240, 96, 105, 228, 128, 99, 127, 163, 149, 107, 217, 113, 217, 253, 59, 220, 137, 72, 173, 5, 186, 162, 18, 103, 11, 183, 68, 27, 19, 128, 174, 67, 107, 219, 145, 7, 214, 6, 158, 0, 56, 223, 42, 137, 108, 18, 156, 12, 54, 139, 128, 168, 71, 84, 206, 172, 171, 214, 137, 55, 49, 13, 157, 2, 89, 119, 215, 157, 54, 209, 125, 87, 169, 173, 52, 166, 118, 99, 255, 19, 119, 73, 149, 149, 196, 127, 146, 46, 193, 103, 238, 82, 90, 229, 7, 120, 100, 42, 166, 143, 29, 197, 118, 194, 200, 69, 254, 58, 36, 230, 2, 212, 221, 134, 242, 84, 47, 65, 173, 168, 95, 250, 191, 73, 245, 86, 124, 176, 107, 137, 123, 228, 150, 146, 78, 214, 242, 102, 157, 
245, 235, 225, 59, 14, 41, 63, 138, 212, 177, 27, 13, 19, 1, 131, 219, 14, 26, 108, 148, 19, 223, 34, 24, 154, 49, 171, 171, 20, 161, 245, 35, 42, 176, 213, 10, 13, 213, 65, 55, 160, 19, 46, 235, 7, 140, 228, 144, 248, 245, 43, 49, 154, 212, 204, 21, 236, 56, 133, 23, 34, 216, 211, 191, 172, 110, 233, 112, 193, 136, 172, 10, 81, 155, 142, 2, 192, 90, 80, 236, 28, 191, 231, 53, 181, 34, 238, 206, 142, 46, 117, 138, 104, 40, 168, 164, 218, 6, 111, 142, 173, 70, 44, 104, 206, 235, 17, 104, 231, 190, 162, 106, 146, 173, 65, 215, 251, 253, 178, 141, 182, 175, 228, 65, 158, 138, 53, 171, 91, 146, 39, 210, 229, 43, 234, 223, 51, 86, 215, 35, 109, 225, 70, 101, 238, 52, 86, 201, 245, 97, 190, 227, 83, 206, 101, 118, 182, 41, 71, 165, 67, 131, 228, 156, 33, 214, 52, 129, 238, 42, 123, 9, 160, 234, 111, 254, 133, 209, 34, 252, 5, 211, 155, 210, 1, 12, 196, 223, 202, 104, 240, 79, 208, 113, 2, 139, 145, 46, 220, 114, 251, 146, 150, 173, 116, 113, 70, 79, 41, 212, 50, 0>>}

Sometimes
[warn] [{:rtp_parser, #Reference<0.3949617156.2087714817.7805>}] Received packet loss indicator RTCP packet

Operating System: Ubuntu 20.04.2 LTS
Kernel: Linux 5.11.0-25-generic
Architecture: x86-64

Replace webpack with esbuild

We should probably go in the direction that the Phoenix framework went: removing bloated webpack and replacing it with the lightweight esbuild.

Failed to execute setRemoteDescription on RTCPeerConnection (TS client bug)

We very occasionally see this error in Chrome:

Uncaught (in promise) DOMException: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Error processing ICE candidate

DOMException: Failed to execute 'setRemoteDescription' on 'RTCPeerConnection': Failed to set remote answer sdp: Called in wrong state: stable

This happens in the following (pretty-printed) code:

this.onAnswer = e => m(this, null, function* () {
    this.connection.ontrack = this.onTrack();
    try {
        yield this.connection.setRemoteDescription(e)
    } catch (t) {
        console.log(t)
    }
});

Engine.get_endpoints/1 crashes when Tees are present

Hey folks, think I found a bug.

Engine.get_endpoints/1 fails with a MatchError if it is called while any tee children exist.

Looks like it expects all child names to match {:endpoint, id}, but tee children are named as {:tee, id}:

@impl true
def handle_call(:get_endpoints, ctx, state) do
  endpoints =
    ctx.children
    |> Map.values()
    |> Enum.map(fn endpoint ->
      {:endpoint, id} = endpoint.name
      %{id: id, type: endpoint.module}
    end)

  {[reply: endpoints], state}
end
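One possible fix (a sketch, not a tested patch) is to match only the {:endpoint, id} children instead of asserting the match, e.g. with a comprehension whose generator pattern silently skips tees:

```elixir
# Sketch of a possible fix: a comprehension's generator pattern skips
# non-matching entries, so {:tee, id} children no longer raise MatchError.
# Child entries are simplified to plain maps for illustration.
children = %{
  {:endpoint, "peer-1"} => %{name: {:endpoint, "peer-1"}, module: SomeEndpoint},
  {:tee, "track-1"} => %{name: {:tee, "track-1"}, module: SomeTee}
}

endpoints =
  for {{:endpoint, id}, child} <- children do
    %{id: id, type: child.module}
  end

endpoints
# returns [%{id: "peer-1", type: SomeEndpoint}]
```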
