Comments (51)

paniksystem avatar paniksystem commented on August 31, 2024 1

There are 3 buffering points. When you send a lot of data, depending on the loss rate and the RTT, usrsctp will buffer the packets while waiting for a SACK from the other end. If there is too much data "in flight", usrsctp will refuse further packets for sending (depending on the configured buffer size).
Then, when usrsctp refuses more data, libdatachannel will buffer it and increase bufferedAmount, so you will have to buffer on your side until bufferedAmountLow is triggered, indicating that usrsctp has cleared its buffer and libdatachannel has cleared its buffer as well.
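For illustration, a minimal sketch of wiring up that application-side signal with libdatachannel's C++ API (the 1 MiB threshold and the variable name dc are arbitrary examples; the exact API may differ between versions):

#include <memory>
#include "rtc/rtc.hpp"

// Sketch: ask libdatachannel to fire onBufferedAmountLow once its internal
// buffer drains below the chosen threshold, so the application can resume sending.
void setupBufferSignal(std::shared_ptr<rtc::DataChannel> dc) {
    dc->setBufferedAmountLowThreshold(1024 * 1024); // 1 MiB, example value
    dc->onBufferedAmountLow([]() {
        // Safe to hand more data to send() again; wake the sending loop here.
    });
}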

murat-dogan avatar murat-dogan commented on August 31, 2024 1

Oops. Never mind. I was blocking onDataChannel. Fixed now.

murat-dogan avatar murat-dogan commented on August 31, 2024 1

Actually, the test duration is not calculated; it is just printed out for info.

murat-dogan avatar murat-dogan commented on August 31, 2024 1

Do you remember this #76 (comment)

I will definitely write a note to myself:
Do NOT use libnice versions < 0.1.17.

I will test and write results tomorrow.

murat-dogan avatar murat-dogan commented on August 31, 2024 1

Thanks for everything. Great job.
I am closing this issue.

paullouisageneau avatar paullouisageneau commented on August 31, 2024

How is sending/receiving implemented in both peers?

murat-dogan avatar murat-dogan commented on August 31, 2024

By using tun interface handlers:

  • Read from the tun interface & send over the data channel (roughly as sketched below)
  • Read from the data channel & write it to the tun interface
  • No delays
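A rough sketch of the sending side of such a handler, assuming an already-opened TUN file descriptor (tunFd) and an open data channel (dc); both names are placeholders. As discussed below, this naive loop applies no backpressure:

#include <unistd.h>
#include <memory>
#include <vector>
#include "rtc/rtc.hpp"

// Sketch: forward IP packets read from a TUN device into the data channel.
// Each read() returns one packet; send() is non-blocking (no backpressure here).
void tunToDataChannel(int tunFd, std::shared_ptr<rtc::DataChannel> dc) {
    std::vector<std::byte> buf(2048); // comfortably above the 1500-byte MTU
    while (true) {
        ssize_t len = read(tunFd, buf.data(), buf.size());
        if (len <= 0)
            break;
        dc->send(buf.data(), static_cast<size_t>(len)); // one packet per message
    }
}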

paniksystem avatar paniksystem commented on August 31, 2024

The performance issues come from usrsctp; I use libdatachannel and libjuice with a custom SCTP stack because of this.

paniksystem avatar paniksystem commented on August 31, 2024

sctplab/usrsctp#245

murat-dogan avatar murat-dogan commented on August 31, 2024

@paniksystem Is it publicly available? I can test & compare.

paniksystem avatar paniksystem commented on August 31, 2024

No, because this is a really custom and incomplete implementation (no init, external congestion control, aggressive retransmission policy) and is in use only for a specific edge case. Sorry :(

paniksystem avatar paniksystem commented on August 31, 2024

And because it's not compatible with this libdatachannel implementation (we removed the SctpTransport layer of the lib to use only the ICE/DTLS part).

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Even if usrsctp is kind of slow, you should get more than 10 Mbit/s. Did you disable debug/verbose logging?

* Read from tun interface & send with datachannel

If you just read and send in a loop, it won't work efficiently; it's basically an infinite loop since it lacks backpressure: DataChannel::send is non-blocking and always returns immediately. If sending immediately is not possible due to flow or congestion control, it buffers the messages.

You should stop sending when send() returns false (or when bufferedAmount() is high enough) and wait for the onBufferedAmountLow callback to be called before continuing to send.
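A minimal sketch of that pattern (the threshold, mutex, and function names are illustrative, not part of the library; adapt to your own sending loop):

#include <condition_variable>
#include <memory>
#include <mutex>
#include "rtc/rtc.hpp"

// Sketch: hold the producer back while libdatachannel's buffer is too full,
// and resume when onBufferedAmountLow fires. MAX_BUFFERED is an example value.
constexpr size_t MAX_BUFFERED = 1024 * 1024; // 1 MiB

std::mutex mtx;
std::condition_variable cv;

void setupChannel(std::shared_ptr<rtc::DataChannel> dc) {
    dc->setBufferedAmountLowThreshold(MAX_BUFFERED / 2);
    dc->onBufferedAmountLow([]() { cv.notify_one(); }); // wake the sender
}

bool sendBlocking(std::shared_ptr<rtc::DataChannel> dc,
                  const std::byte *data, size_t size) {
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [&]() { return dc->bufferedAmount() <= MAX_BUFFERED; });
    return dc->send(data, size); // also check the return value, as noted above
}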

murat-dogan avatar murat-dogan commented on August 31, 2024

In this case I will buffer the data, and when the onBufferedAmountLow callback is received I will send the buffered data. Right?

In both cases there is a waiting buffer. What is the difference?

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Actually it's not exactly "when received"; it's more like when there is free space in the SCTP buffer.

The difference is that if you never stop sending to wait for the callback, you will max out your processor filling the buffer with an essentially unbounded amount of data (with the JavaScript API this results in the browser killing the data channel after a while), whereas if you control your own buffer you can choose what to do.

If you get too much data from your tun interface, you should have a way to either drop packets or give some feedback to the sender to signal that you don't have enough bandwidth downstream.
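One simple policy is a bounded application-side queue that drops the oldest packet when it overflows (a sketch with illustrative names; TCP running inside the tunnel will retransmit whatever is dropped):

#include <deque>
#include <mutex>
#include <vector>

// Sketch: cap how much tun data waits for the data channel; drop on overflow.
// maxPackets is chosen by the application, e.g. a few hundred packets.
class BoundedPacketQueue {
public:
    explicit BoundedPacketQueue(size_t maxPackets) : mMax(maxPackets) {}

    void push(std::vector<std::byte> packet) {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mQueue.size() >= mMax)
            mQueue.pop_front(); // drop the oldest packet instead of growing forever
        mQueue.push_back(std::move(packet));
    }

    bool pop(std::vector<std::byte> &out) {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mQueue.empty())
            return false;
        out = std::move(mQueue.front());
        mQueue.pop_front();
        return true;
    }

private:
    const size_t mMax;
    std::mutex mMutex;
    std::deque<std::vector<std::byte>> mQueue;
};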

paullouisageneau avatar paullouisageneau commented on August 31, 2024

@paniksystem Good explanation, just one detail: bufferedAmountLow only indicates that the buffer in libdatachannel has been emptied, not the one in usrsctp. You actually want to keep the usrsctp buffer full enough to allow good performance.

murat-dogan avatar murat-dogan commented on August 31, 2024

I intend to implement a buffering function. To better understand our needs, I watched bufferedAmount: it is always 0, so it can send all the data directly.

So for this test case, I don't think we need another buffer.

murat-dogan avatar murat-dogan commented on August 31, 2024

Also, I ran a test over a mobile network. The connection speed is ~3 Mbit/s.

bufferedAmount is always 0.

By the way, I am reading bufferedAmount by using DataChannel::bufferedAmount().

paullouisageneau avatar paullouisageneau commented on August 31, 2024

bufferedAmount being always zero would mean the bottleneck is actually reading from the tun interface. Could you check that send() always returns true?

murat-dogan avatar murat-dogan commented on August 31, 2024

It could be.
To test this I ran iPerf3 with the -P 5 parameter, which means sending with 5 parallel clients.
Now I can see the buffer increasing.
As you mentioned, I waited for the buffer to empty before sending new data, but there is no significant change.
The connection speed is still 10-13 Mbit/s (total of the 5 clients).

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Do you mean that each client reaches 10 Mbit/s or that the sum is always 10 Mbit/s no matter the number of parallel clients?

murat-dogan avatar murat-dogan commented on August 31, 2024

I mean the sum (each is ~2-3 Mbit/s).

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Hum, that's quite weird. What are the message size and network RTT?

murat-dogan avatar murat-dogan commented on August 31, 2024

Network RTT was ~10 ms (the ping time and the RTT reported by SCTP were close).

Regarding message size: the network MTU is 1500 and all the buffers used are 64 KB. For iperf3 I am using the default values.
https://iperf.fr/iperf-doc.php#3doc

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Also, could you give more details about how you achieved the binding with iperf clients/server? It's not clear to me.

murat-dogan avatar murat-dogan commented on August 31, 2024

The program is structured like this:

TUN Device<-->libdatachannel<----------------------->libdatachannel<-->TUN Device
   peer1                                                   peer2

After the connection is established:

  • On peer1 start shell & start iperf3 -s (Server)
  • On peer2 start shell & start iperf3 -c <IP> (Client)

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Oh OK, so you run TCP over SCTP? This will never lead to good performance since it boils down to running one congestion control algorithm on top of another.

There might still be room for improvement though: how much new data from the tun interface do you buffer?

murat-dogan avatar murat-dogan commented on August 31, 2024

So for better performance you suggest (for this scenario) disabling the SCTP transport, right?

paullouisageneau avatar paullouisageneau commented on August 31, 2024

If you remove SCTP you will only measure the overhead added by DTLS, which is not very useful if you want to measure the data channel throughput. What do you want to measure exactly?

murat-dogan avatar murat-dogan commented on August 31, 2024

Actually, at the beginning the purpose of this test was to understand data channel performance. In my project I have the TUN logic implemented, so for me it was easy to test with iperf3.
I missed two things:

  • If I want to use iperf, then I would be running TCP over SCTP, which will not give the actual performance, as you mentioned.
  • The SCTP performance issues @paniksystem mentioned.

I want to test:

  • libdatachannel throughput (pass-through) performance: for this I will need logic inside the test program that generates random data & sends it (roughly as sketched below). There will be no iperf, TCP, or other tools. I will prepare this as a performance test & maybe we can add it to the repo.
  • My app's throughput (pass-through) performance: for this I need the TUN interface + iperf, and for better performance I should disable SCTP.
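For the first case, a rough sketch of such a generator loop (illustrative only; the actual benchmark that ended up in the repository may look different):

#include <chrono>
#include <memory>
#include <random>
#include <thread>
#include <vector>
#include "rtc/rtc.hpp"

// Sketch: push random 64 KB messages through the channel for a fixed duration;
// the receiving side counts bytes to compute goodput. Sizes are example values.
void benchmarkSend(std::shared_ptr<rtc::DataChannel> dc, std::chrono::seconds duration) {
    std::vector<std::byte> message(64 * 1024);
    std::mt19937 rng(std::random_device{}());
    for (auto &b : message)
        b = static_cast<std::byte>(rng() & 0xFF);

    auto end = std::chrono::steady_clock::now() + duration;
    while (std::chrono::steady_clock::now() < end) {
        if (dc->bufferedAmount() > 1024 * 1024) {       // crude backpressure
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            continue;
        }
        dc->send(message.data(), message.size());
    }
}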

Anything to add?

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Indeed, to measure the actual throughput you should do it directly in a test program. I added a small local benchmark example in tests, if it can be useful for you: #86 (I get 500 Mbit/s with a local connection, the limit being processor usage).

For your second case, you should disable SCTP for better performance, but you'll then lose compatibility with other WebRTC implementations. Also, I think that by buffering only the minimum amount of data at the entrance of the data channel and dropping the rest, you can still achieve better performance even with SCTP.

murat-dogan avatar murat-dogan commented on August 31, 2024

I am trying to write a benchmark test to use between two different computers. It is basically a mix of your benchmark & client apps.

But there is a problem: the offerer cannot send data. Could you please check the code below & tell me what you think?

int main(int argc, char **argv)
{
   ...
    return 0;
}

Edit: I deleted code. It was too long. :)

murat-dogan avatar murat-dogan commented on August 31, 2024

Here are my test results:

Test environment

  • 2 Linux PC
  • Connected with wired cable
  • Network is 100 MBit

Direct Communication

  • Start iPerf3 & note results (One PC is server & the other is client)
  • Client Command: iperf3 -c 10.0.0.10 -t 30
  • Server Command: iperf3 -s
  • Test Results
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec   337 MBytes  94.1 Mbits/sec  8457             sender
[  5]   0.00-30.00  sec   336 MBytes  94.0 Mbits/sec                  receiver

My Comment: It is good & near the network limit.

Testing Tun Interface Implementation

  • I prepared a test app
  • The logic of the app is like this:
    • [TUN <-->UDP]<---------------------->[UDP<->TUN]
    • [TUN <-->UDP] is test app
  • Apps are running on different PCs
  • Test Results
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00   sec  305  MBytes  85.3 Mbits/sec  9120            sender
[  5]   0.00-30.00   sec  303  MBytes  84.8 Mbits/sec                  receiver

My Comment: Not bad. It is OK.

Testing libdatachannel with 2 separate peers

  • Output of offerer
WebSocket URL:localhost:8000
I am offerer
Starting Benchmark test...
Time needed to open data-channel:495ms
  • Output of answerer
WebSocket URL:localhost:8000
I am answerer
Test duration: 10000 ms
Goodput: 4.255 MB/s (34.04 Mbit/s)

Local benchmark

Description 1: v=0
o=- 1738059102 0 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE data
m=application 9 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 0.0.0.0
a=mid:data
a=sctp-port:5000
a=max-message-size:262144
a=ice-options:trickle
a=ice-ufrag:vYS/
a=ice-pwd:fuUw+M0DAApMKra3Jf4MvK
a=setup:actpass
a=dtls-id:1
a=fingerprint:sha-256 2B:A3:69:86:9B:34:CD:C3:79:D0:B4:76:8C:9E:B5:A0:77:C1:18:93:3A:73:AC:4F:6E:C4:E3:AF:DB:35:44:6C

Description 2: v=0
o=- 3773458902 0 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE data
m=application 9 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 0.0.0.0
a=mid:data
a=sctp-port:5000
a=max-message-size:262144
a=ice-options:trickle
a=ice-ufrag:ztbO
a=ice-pwd:3odQ9yVhUDbg3lo47G9Zok
a=setup:active
a=dtls-id:1
a=fingerprint:sha-256 2B:A3:69:86:9B:34:CD:C3:79:D0:B4:76:8C:9E:B5:A0:77:C1:18:93:3A:73:AC:4F:6E:C4:E3:AF:DB:35:44:6C

Gathering state 2: in_progress
Candidate 2: a=candidate:1 1 UDP 2013266431 fe80::4837:6bcd:138a:2da 12395 typ host
Candidate 2: a=candidate:2 1 UDP 2013266430 10.0.0.42 28781 typ host
Gathering state 2: complete
Gathering state 1: in_progress
State 1: connecting
Candidate 1: a=candidate:1 1 UDP 2013266431 fe80::4837:6bcd:138a:2da 32941 typ host
State 2: connecting
Candidate 1: a=candidate:2 1 UDP 2013266430 10.0.0.42 15625 typ host
Gathering state 1: complete
State 2: connected
State 1: connected
DataChannel open, sending data...
Received: 182252 KB
Received: 351988 KB
Received: 533127 KB
Received: 705811 KB
Received: 879938 KB
Received: 1065140 KB
Received: 1240839 KB
Received: 1394519 KB
Received: 1568580 KB
Received: 1736939 KB
DataChannel closed.
Test duration: 30000 ms
Connect duration: 50 ms
Goodput: 56.116 MB/s (448.928 Mbit/s)
State 1: closed
State 2: closed

murat-dogan avatar murat-dogan commented on August 31, 2024

I am testing the connection speed under these circumstances:

  • Both peers on the same network
  • Using iPerf3 to test the speed
  • Running iPerf3 for 60 sec
  • Average speed is ~10 Mbit/s

Actually it is lower than I expected. If I run the iPerf test with direct IPs, I get ~75 Mbit/s.

Please note that this previous test was made over a WiFi connection.

My latest results are with a wired Ethernet connection.

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Thanks for the results. I think you should get twice that in the test with 2 separate peers. Do you get the same result if you run it for 30 seconds?

Also, have you tried without setting a STUN server? It could be that for some reason it wrongly establishes the connection through the NAT instead of directly.

murat-dogan avatar murat-dogan commented on August 31, 2024

I agree, we should get a result near the network limit.
This is the result for 30 sec & without a STUN server.

offerer

WebSocket URL: localhost:8000
I am offerer
Starting Benchmark test...
Time needed to open data-channel:862ms

answerer

WebSocket URL: 10.0.0.42:8000
I am answerer
Selected Candidate Info:
Local: fe80::2e0:29ff:fe37:7c98:64335 Host Udp
Remote: fe80::4837:6bcd:138a:2da:10536 Host Udp

Received: 5439 KB
Received: 15597 KB
Received: 27065 KB
Received: 40762 KB
Received: 53673 KB
Received: 68090 KB
Received: 82639 KB
Received: 96205 KB
Received: 108198 KB
Received: 121305 KB
Test duration: 30000 ms
Transfer duration: 30045 ms
Goodput: 4.037 MB/s (32.296 Mbit/s)

And I can confirm that the IPv6 addresses are local PC addresses.

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Ha ha, I think there is a little timing mistake since the transfer duration is somehow longer than the entire test duration.

Still, there is clearly a performance issue here. I have a suggestion: try disabling the NODELAY option here:

int nodelay = 1;
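For reference, that flag ends up in a usrsctp socket option inside SctpTransport; disabling it would presumably look roughly like this (a hedged sketch, where mSock stands for the usrsctp socket handle and is an assumed name, not necessarily the actual member):

// Sketch: with nodelay = 0, usrsctp may coalesce small messages (Nagle-like
// behaviour) instead of sending each one immediately.
int nodelay = 0;
usrsctp_setsockopt(mSock, IPPROTO_SCTP, SCTP_NODELAY, &nodelay, sizeof(nodelay));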

paullouisageneau avatar paullouisageneau commented on August 31, 2024

You can also try with these SCTP parameters, it performs better on my side: #88

murat-dogan avatar murat-dogan commented on August 31, 2024

For me it seems worse.

WebSocket URL: 10.0.0.42:8000
I am answerer
Selected Candidate Info:
Local: fe80::2e0:29ff:fe37:7c98:35092 Host Udp
Remote: fe80::4837:6bcd:138a:2da:14484 Host Udp

Received: 10223 KB
Received: 21167 KB
Received: 32636 KB
Received: 43449 KB
Received: 52493 KB
Received: 63306 KB
Received: 74972 KB
Received: 86637 KB
Received: 98171 KB
Received: 109771 KB
Test duration: 30000 ms
Transfer duration: 30023 ms
Goodput: 3.656 MB/s (29.248 Mbit/s)

I tried couple of times.

murat-dogan avatar murat-dogan commented on August 31, 2024

Ha ha, I think there is a little timing mistake since the transfer duration is somehow longer than the entire test duration.

Still, there is clearly a performance issue here. I have a suggestion: try disabling the NODELAY option here:

int nodelay = 1;

WebSocket URL: 10.0.0.42:8000
I am answerer
Selected Candidate Info:
Local: fe80::2e0:29ff:fe37:7c98:27807 Host Udp
Remote: fe80::4837:6bcd:138a:2da:9236 Host Udp

Received: 10288 KB
Received: 21888 KB
Received: 33619 KB
Received: 44498 KB
Received: 54721 KB
Received: 64420 KB
Received: 74644 KB
Received: 84081 KB
Received: 95418 KB
Received: 106953 KB
Test duration: 30000 ms
Transfer duration: 30055 ms
Goodput: 3.558 MB/s (28.464 Mbit/s)

In short, neither the SCTP tuning params nor the nodelay param makes a difference.

paullouisageneau avatar paullouisageneau commented on August 31, 2024

That's weird, I can reach 500 Mbit/s on a gigabit LAN with libnice + OpenSSL and the example client on the following branch compiled in Release mode: https://github.com/paullouisageneau/libdatachannel/tree/benchmark
What is different about your setup?

murat-dogan avatar murat-dogan commented on August 31, 2024

This is the output from the example client (both compiled as Release).

The local ID is: riZP
Waiting for signaling to be connected...
WebSocket connected, signaling ready
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Answering to Z8KV
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Gathering State: in_progress
State: connecting
State: connected
DataChannel from Z8KV received with label "benchmark"
Gathering State: complete
Goodput: 3.211 MB/s (25.688 Mbit/s)
Goodput: 4.325 MB/s (34.6 Mbit/s)
Goodput: 4.364 MB/s (34.912 Mbit/s)
Goodput: 4.548 MB/s (36.384 Mbit/s)
Goodput: 4.168 MB/s (33.344 Mbit/s)
Goodput: 4.325 MB/s (34.6 Mbit/s)
Goodput: 4.403 MB/s (35.224 Mbit/s)
Goodput: 4.744 MB/s (37.952 Mbit/s)
Goodput: 4.692 MB/s (37.536 Mbit/s)
Goodput: 3.696 MB/s (29.568 Mbit/s)
Goodput: 3.866 MB/s (30.928 Mbit/s)
Goodput: 4.338 MB/s (34.704 Mbit/s)
Goodput: 4.377 MB/s (35.016 Mbit/s)
Goodput: 4.01 MB/s (32.08 Mbit/s)

paullouisageneau avatar paullouisageneau commented on August 31, 2024

OK so definitely not a client thing. Do you test on Windows?

murat-dogan avatar murat-dogan commented on August 31, 2024

No. I am on Linux.

murat-dogan avatar murat-dogan commented on August 31, 2024

Compiling on Windows is failing because of the new TCP transport.
I have not had time to fix it yet.

murat-dogan avatar murat-dogan commented on August 31, 2024

This test is on a 1 Gbit network

The local ID is: Jjhl
Waiting for signaling to be connected...
WebSocket connected, signaling ready
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Goodput: 0 MB/s (0 Mbit/s)
Answering to 4EEL
Gathering State: in_progress
State: connecting
State: connected
DataChannel from 4EEL received with label "benchmark"
Goodput: 5.518 MB/s (44.144 Mbit/s)
Goodput: 13.932 MB/s (111.456 Mbit/s)
Goodput: 20.171 MB/s (161.368 Mbit/s)
Goodput: 22.334 MB/s (178.672 Mbit/s)
Goodput: 20.512 MB/s (164.096 Mbit/s)
Gathering State: complete
Goodput: 22.347 MB/s (178.776 Mbit/s)
Goodput: 18.1 MB/s (144.8 Mbit/s)
Goodput: 19.293 MB/s (154.344 Mbit/s)
Goodput: 22.609 MB/s (180.872 Mbit/s)
Goodput: 20.42 MB/s (163.36 Mbit/s)
Goodput: 21.102 MB/s (168.816 Mbit/s)
Goodput: 18.664 MB/s (149.312 Mbit/s)
Goodput: 19.149 MB/s (153.192 Mbit/s)
Goodput: 19.673 MB/s (157.384 Mbit/s)
Goodput: 16.75 MB/s (134 Mbit/s)
Goodput: 19.922 MB/s (159.376 Mbit/s)
Goodput: 21.6 MB/s (172.8 Mbit/s)
Goodput: 20.342 MB/s (162.736 Mbit/s)
Goodput: 19.319 MB/s (154.552 Mbit/s)
Goodput: 18.48 MB/s (147.84 Mbit/s)
Goodput: 20.263 MB/s (162.104 Mbit/s)
Goodput: 16.37 MB/s (130.96 Mbit/s)
Goodput: 15.767 MB/s (126.136 Mbit/s)
Goodput: 21.102 MB/s (168.816 Mbit/s)
Goodput: 22.596 MB/s (180.768 Mbit/s)
Goodput: 19.031 MB/s (152.248 Mbit/s)

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Compiling on Windows is failing because of the new TCP transport.

Oh, I indeed forgot to check that; I fixed it in #89.

I'm really puzzled. It's obviously not a processing issue since it can reach higher throughputs on the 1 Gbit network. What version of libnice do you use?

murat-dogan avatar murat-dogan commented on August 31, 2024

What version of libnice do you use?

For Linux: 0.1.16 (<-- used for the above tests, Ubuntu 20.04)
For Windows: 0.1.17 (not used in the test cases)

I don't have any idea either.
I will try disabling SCTP to see what happens.
Could you show me where to start in order to bypass the SCTP transport?

paullouisageneau avatar paullouisageneau commented on August 31, 2024

OK, I was using libnice 0.1.17, so I just tested with libnice 0.1.16 and, surprise, I can reproduce the issue.
Since it did not appear with libjuice, it seems very probable to me that this is a performance issue in libnice 0.1.16 that has since been solved; I guess the UDP buffers were too small or something like that. Actually, the changelog for 0.1.17 states "Many ICE compatibility and performance improvements".

State: connected
DataChannel from mJUs received with label "benchmark"
Goodput: 21.77 MB/s (174.16 Mbit/s)
Goodput: 28.062 MB/s (224.496 Mbit/s)
Goodput: 24.116 MB/s (192.928 Mbit/s)

murat-dogan avatar murat-dogan commented on August 31, 2024

Here are my test results with libnice v0.1.17 + the 100 Mbit network:

WebSocket URL: 10.0.0.42:8000
I am answerer
Selected Candidate Info:
Local: fe80::2e0:29ff:fe37:7c98:43341 Host Udp
Remote: fe80::4837:6bcd:138a:2da:33383 Host Udp

Received: 20119 KB
Received: 37158 KB
Received: 55770 KB
Received: 73399 KB
Received: 92535 KB
Received: 111147 KB
Received: 129759 KB
Received: 147584 KB
Received: 177337 KB
Received: 200143 KB
Test duration: 30000 ms
Transfer duration: 29989 ms
Goodput: 6.658 MB/s (53.264 Mbit/s)

It was ~30 Mbit/s in the previous tests.

paullouisageneau avatar paullouisageneau commented on August 31, 2024

Great, that's more like what you should expect given the protocol stack.
I did a bit more tuning in #90
