rperf

rperf is a Rust-based iperf alternative developed by 3D-P. It aims to avoid some reliability and consistency issues found in iperf3, while simultaneously providing richer metrics data, with a focus on operation in loss-tolerant, more IoT-like environments. While it can be used as a near-drop-in replacement for iperf, and there may be benefits to doing so, its focus is on periodic data-collection in a monitoring capacity within a closed network, meaning it is not suitable for every domain that iperf can serve.

development

rperf is an independent implementation, referencing the algorithms of iperf3 and zapwireless to assess correctness and derive suitable corrections, but copying no code from either.

In particular, the most significant issues addressed relative to iperf3 are the following:

  • Multiple concurrent clients are supported by any given server.

  • rperf's implementation of RFC 1889 for streaming jitter calculation seeds its estimate with the delta between the first and second packets in a sequence, and gaps in the sequence trigger a reset of the calculation. By comparison, iperf3 begins at 0, which produces artificially low values, and in case of a gap it just continues naively, which produces artificially high values.

  • Duplicate packets are accounted for in UDP exchanges and out-of-order packets are counted as independent events.

  • All traffic can be emitted proportionally at regular sub-second intervals, allowing for configurations that more accurately reflect real data transmission and sending algorithms.

    • This addresses a commonly seen case in embedded-like systems: a piece of equipment has a very small send- or receive-buffer that the OS does not know about, so when a huge mass of data arrives in a single burst, packets are simply dropped, incorrectly under-reporting network capacity.
  • Stream-configuration and results are exchanged via a dedicated connection and every data-path has clearly defined timeout, completion and failure semantics, so execution doesn't hang indefinitely on either side of a test when key packets are lost.

  • rperf's JSON output is structurally legal: no unquoted strings, repeated keys, or dangling commas, all of which would require pre-processing before consumption or cause unexpected errors.
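The jitter behaviour described above (seeding from the first observed delta rather than from zero, and resetting on sequence gaps) can be sketched as follows; the structure and names here are illustrative, not rperf's actual implementation:

```rust
/// Minimal sketch of RFC 1889-style interarrival-jitter tracking.
/// `transit` is receive-time minus send-time, in seconds.
struct JitterTracker {
    previous_seq: Option<u64>,
    previous_transit: Option<f64>,
    jitter: Option<f64>,
}

impl JitterTracker {
    fn new() -> Self {
        JitterTracker { previous_seq: None, previous_transit: None, jitter: None }
    }

    fn record(&mut self, seq: u64, transit: f64) {
        // A gap in the sequence resets the calculation rather than letting
        // the missing packets inflate the estimate.
        if let Some(prev) = self.previous_seq {
            if seq != prev + 1 {
                self.previous_transit = None;
                self.jitter = None;
            }
        }
        if let Some(previous_transit) = self.previous_transit {
            let d = (transit - previous_transit).abs();
            self.jitter = Some(match self.jitter {
                // RFC 1889's smoothing step: J += (|D| - J) / 16
                Some(j) => j + (d - j) / 16.0,
                // Seed from the first observed delta instead of from zero.
                None => d,
            });
        }
        self.previous_seq = Some(seq);
        self.previous_transit = Some(transit);
    }
}
```

Seeding from the first delta avoids iperf3's artificially low early values, and the reset-on-gap keeps a burst of loss from being misread as a huge transit-time swing.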

In contrast to zapwireless, the following improvements are realised:

  • rperf uses a classic client-server architecture, so there's no need to maintain a running process on devices that waits for a test-execution request.

  • Jitter is calculated.

  • IPv6 is supported.

  • Multiple streams may be run in parallel as part of a test.

  • An omit option is available to discard TCP ramp-up time from results.

  • Output is available in JSON for easier telemetry-harvesting.

platforms

rperf should build and work on all major platforms, though its development and usage focus is on Linux-based systems, so that is where it will be most feature-complete.

Pull-requests for implementations of equivalent features for other systems are welcome.

usage

Everything is outlined in the output of --help, and most users familiar with similar tools should feel comfortable immediately.

rperf works much like iperf3, sharing a lot of concepts and even command-line flags. One key area where it differs is that the client drives all of the configuration process while the server just complies to the best of its ability and provides a stream of results. This means that the server will not present test-results directly via its interface and also that TCP and UDP tests can be run against the same instance, potentially by many clients simultaneously.

In its normal mode of operation, the client will upload data to the server; when the reverse flag is set, the client will receive data.

Unlike iperf3, rperf does not make use of a reserved port-range by default. This is so it can support an arbitrary number of clients in parallel without resource contention on what can only practically be a small number of contiguous ports. In its intended capacity, this shouldn't be a problem, but where non-permissive firewalls and NAT setups are concerned, the --tcp[6]-port-pool and --udp[6]-port-pool options may be used to allocate non-contiguous ports to the set that will be used to receive traffic.

There also isn't a concept of testing throughput relative to a fixed quantity of data. Rather, the sole focus is on measuring throughput over a roughly known period of time.

Also of relevance is that, if the server is running in IPv6 mode and its host supports IPv4-mapping in a dual-stack configuration, both IPv4 and IPv6 clients can connect to the same instance.

building

rperf uses cargo. The typical process will simply be cargo build --release.

cargo-deb is also supported and will produce a usable Debian package that installs a disabled-by-default rperf systemd service. When started, it runs as nobody:nogroup, assuming IPv6 support by default.

theory of operation

Like its contemporaries, rperf's core concept is firing a stream of TCP or UDP data at an IP target at a pre-arranged target speed. The amount of data actually received is observed and used to gauge the capacity of a network link.

Within those domains, additional data about the quality of the exchange is gathered and made available for review.

Architecturally, rperf has clients establish a TCP connection to the server, after which the client sends details about the test to be performed and the server obliges, reporting observation results to the client during the entire testing process.

The client may request that multiple parallel streams be used for testing. This is facilitated by establishing multiple TCP connections or UDP sockets, each with its own dedicated thread on either side; these threads may further be pinned to a single logical CPU core to reduce the impact of page-faults on the data-exchange.
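The per-stream fan-out described above can be sketched with standard-library threads and a channel; rperf's optional CPU pinning (done via the core_affinity crate) is only noted in a comment here, and the names and placeholder workload are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn one worker thread per stream and collect each stream's byte-count.
/// Illustrative sketch only; not rperf's actual code.
fn run_streams(stream_count: usize) -> Vec<u64> {
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for stream_idx in 0..stream_count {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            // In rperf, each such thread owns one TCP connection or UDP
            // socket, and may be pinned to a logical core here.
            let bytes_moved = 1024 * (stream_idx as u64 + 1); // placeholder work
            tx.send((stream_idx, bytes_moved)).unwrap();
        }));
    }
    drop(tx); // the receive loop below ends when all workers are done

    let mut totals = vec![0u64; stream_count];
    for (idx, bytes) in rx {
        totals[idx] = bytes;
    }
    for handle in handles {
        handle.join().unwrap();
    }
    totals
}
```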

implementation details

The client-server relationship is treated as a very central aspect of this design, in contrast to iperf3, where they're more like peers, and zapwireless, where each participant runs its own daemon and a third process orchestrates communication.

Notably, all data-gathering, calculation, and display happens client-side, with the server simply returning what it observed. This can lead to some drift in recordings, particularly where time is concerned (server intervals being a handful of milliseconds longer than their corresponding client values is not at all uncommon). Assuming the connection wasn't lost, however, totals for data observed will match up in all modes of operation.

The server uses three layers of threading: the main thread, one thread for each client being served, and one more for each stream that communicates with a client. On the client side, the main thread is used to communicate with the server, and it spawns an additional thread for each stream that communicates with the server.

When the server receives a request from a client, it spawns a thread that handles that client's specific request; internally, each stream for the test produces an iterator-like handler on either side. Both the client and server run these iterator-analogues against each other asynchronously until the test period ends, at which point the sender indicates completion within its stream.

To reliably handle the possibility of disconnects at the stream level, the client-server stream, over which test-results are sent from the server at regular intervals, doubles as a keepalive mechanism: outstanding connections are terminated after a few seconds of inactivity.
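The timeout semantics described above can be sketched with the standard library's socket read-timeouts; the three-second figure and the function name here are illustrative, not rperf's actual implementation:

```rust
use std::io::Read;
use std::net::TcpStream;
use std::time::Duration;

/// Read the next message (results or keepalive) from the control connection,
/// giving up after a few seconds of silence instead of blocking forever.
fn read_with_inactivity_limit(stream: &mut TcpStream) -> std::io::Result<Vec<u8>> {
    // If nothing arrives within the window, read() returns an error
    // (WouldBlock or TimedOut, depending on platform) rather than hanging.
    stream.set_read_timeout(Some(Duration::from_secs(3)))?;
    let mut buffer = [0u8; 1024];
    let count = stream.read(&mut buffer)?;
    Ok(buffer[..count].to_vec())
}
```

Because results arrive on this connection at regular intervals anyway, the timeout doubles as the keepalive check: a healthy test never comes close to tripping it.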

The host OS's TCP and UDP mechanisms are used for all actual traffic exchanged, with some tuning parameters exposed. This approach was chosen over a userspace implementation on top of layer-2 or layer-3 because it most accurately represents the way real-world applications will behave.

considerations

The "timestamp" values visible in JSON-serialised interval data are host-relative, so unless your environment has very high system-clock accuracy, send-timestamps should only be compared to other send-timestamps and likewise for receive-timestamps. In general, this data is not useful outside of correctness-validation, however.

During each exchange interval, an attempt is made to send length bytes at a time until the amount written to the stream meets or exceeds the bandwidth target, at which point the sender goes silent until the start of the next interval; the data sent within an interval should be uniformly distributed over the period.
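The pacing rule in the previous paragraph amounts to simple ceiling arithmetic; this sketch (with illustrative names, not rperf's actual code) shows why the amount sent per interval can sit slightly above the configured target:

```rust
/// Number of `length`-byte writes needed before the bytes written in one
/// interval meet or exceed that interval's bandwidth target (ceiling division).
fn writes_needed(bandwidth_target_bytes: u64, length: u64) -> u64 {
    (bandwidth_target_bytes + length - 1) / length
}

/// Bytes actually emitted in the interval: the final write may overshoot the
/// target slightly, matching the "meets or exceeds" behaviour described above.
fn bytes_sent_in_interval(bandwidth_target_bytes: u64, length: u64) -> u64 {
    writes_needed(bandwidth_target_bytes, length) * length
}
```

For example, with a 1,250,000-byte target and 32,768-byte writes, 39 writes are needed and 1,277,952 bytes go out, a small overshoot of the nominal figure.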

Stream indexes start at 0, not 1. This probably won't surprise anyone, but seeing "stream 0" in a report is not cause for concern.

copyright and distribution

rperf is distributed by Evtech Solutions, Ltd., dba 3D-P, under the GNU GPL version 3, the text of which may be found in COPYING.

Authorship details, copyright specifics, and transferability notes are present within the source code itself.

rperf's Issues

error when compiling for mips-unknown-linux-musl (to run on an OpenWRT device)

hey 👋

looks like not all Atomic... types are available for this target

$ cargo build --release --target mips-unknown-linux-musl
(...)
error[E0432]: unresolved import `std::sync::atomic::AtomicU64`
  --> src/client.rs:22:37
   |
22 | use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
   |                                     ^^^^^^^^^
   |                                     |
   |                                     no `AtomicU64` in `sync::atomic`
   |                                     help: a similar name exists in the module: `AtomicU8`

running a multi-stream test: the summary's total bytes-per-second value seems off

Test output:

----------
TCP send result over 1.00s | stream: 0
bytes: 1250056 | per second: 1249955.375 | megabytes/second: 1.250
----------
TCP send result over 1.00s | stream: 1
bytes: 1277712 | per second: 1277307.250 | megabytes/second: 1.277
----------
TCP receive result over 1.00s | stream: 0
bytes: 1252952 | per second: 1252644.250 | megabytes/second: 1.253
----------
TCP receive result over 1.00s | stream: 1
bytes: 1279160 | per second: 1279021.250 | megabytes/second: 1.279


----------
TCP send result over 1.00s | stream: 0
bytes: 1270472 | per second: 1270333.750 | megabytes/second: 1.270
----------
TCP send result over 1.00s | stream: 1
bytes: 1280608 | per second: 1280497.750 | megabytes/second: 1.280
----------
TCP receive result over 1.00s | stream: 0
bytes: 1271920 | per second: 1271532.250 | megabytes/second: 1.272
----------
TCP receive result over 1.00s | stream: 1
bytes: 1283504 | per second: 1282977.375 | megabytes/second: 1.283


----------
TCP send result over 1.00s | stream: 0
bytes: 1250056 | per second: 1249949.000 | megabytes/second: 1.250
----------
TCP send result over 1.00s | stream: 1
bytes: 1250056 | per second: 1249945.750 | megabytes/second: 1.250
----------
TCP receive result over 1.05s | stream: 1
bytes: 1247160 | per second: 1188014.000 | megabytes/second: 1.188
----------
TCP receive result over 1.05s | stream: 0
bytes: 1247160 | per second: 1187317.250 | megabytes/second: 1.187


----------
TCP send result over 1.00s | stream: 0
bytes: 1270184 | per second: 1270075.625 | megabytes/second: 1.270
----------
TCP send result over 1.00s | stream: 1
bytes: 1270184 | per second: 1270009.000 | megabytes/second: 1.270
----------
TCP receive result over 1.00s | stream: 0
bytes: 1342872 | per second: 1342705.750 | megabytes/second: 1.343
----------
TCP receive result over 1.00s | stream: 1
bytes: 1334184 | per second: 1329288.125 | megabytes/second: 1.329



----------
TCP send result over 1.00s | stream: 0
bytes: 1274960 | per second: 1274860.000 | megabytes/second: 1.275
----------
TCP send result over 1.00s | stream: 1
bytes: 1269168 | per second: 1269085.375 | megabytes/second: 1.269
----------

----------
TCP send result over 0.05s | stream: 0
bytes: 64000 | per second: 1277847.000 | megabytes/second: 1.278
----------

TCP receive result over 0.97s | stream: 1
bytes: 1203720 | per second: 1240504.750 | megabytes/second: 1.241
[2021-06-18T22:18:24Z INFO  rperf::client] server reported completion of stream 1
----------
TCP receive result over 1.00s | stream: 0
bytes: 1264824 | per second: 1267207.125 | megabytes/second: 1.267
[2021-06-18T22:18:24Z INFO  rperf::client] server reported completion of stream 0


==========
TCP send result over 10.05s | streams: 2
stream-average bytes per second: 1266260.199 | megabytes/second: 1.266
total bytes: 12727456 | per second: 2532520.398 | megabytes/second: 2.533
==========
TCP receive result over 10.07s | streams: 2
stream-average bytes per second: 1263455.403 | megabytes/second: 1.263
total bytes: 12727456 | per second: 2526910.805 | megabytes/second: 2.527

I noticed that the summary's total bytes-per-second value is exactly double the stream-average bytes-per-second value: 2532520.398 vs 1266260.199. I also calculated the non-rounded time for each send interval and added them together to get a total time, then divided the total bytes by that total time; the result is very close to the stream-average bytes per second. So I believe the stream-average bytes per second is correct, but I don't think the total bytes per second is. For example, "per second: 2532520.398" does not look right in the line "total bytes: 12727456 | per second: 2532520.398 | megabytes/second: 2.533".

error when compiling for x86_64-pc-windows-gnu

hi and thanks for rperf.
it is mentioned in the readme that rperf should build and work on all major platforms, though its development and usage focus is on Linux-based systems, so that is where it will be most feature-complete.
trying to switch from iperf to rperf, I need some binaries for Windows and Linux aarch64.

compiling for Windows fails with:

โฏ cargo build --release --target x86_64-pc-windows-gnu
  Downloaded winapi-build v0.1.1
  Downloaded miow v0.2.2
  Downloaded kernel32-sys v0.2.2
  Downloaded ws2_32-sys v0.2.1
  Downloaded winapi-util v0.1.5
  Downloaded windows-targets v0.48.5
  Downloaded winapi v0.2.8
  Downloaded winapi v0.3.9
  Downloaded windows-sys v0.48.0
  Downloaded winapi-x86_64-pc-windows-gnu v0.4.0
  Downloaded windows_x86_64_gnu v0.48.5
  Downloaded windows v0.48.0
  Downloaded 12 crates (20.0 MB) in 2.60s (largest was `windows` at 11.9 MB)
   Compiling winapi-x86_64-pc-windows-gnu v0.4.0
   Compiling winapi v0.3.9
   Compiling winapi-build v0.1.1
   Compiling windows_x86_64_gnu v0.48.5
   Compiling winapi v0.2.8
   Compiling libc v0.2.147
   Compiling memchr v2.6.3
   Compiling bitflags v1.2.1
   Compiling kernel32-sys v0.2.2
   Compiling ws2_32-sys v0.2.1
   Compiling cfg-if v0.1.10
   Compiling cfg-if v1.0.0
   Compiling regex-syntax v0.7.5
   Compiling windows-targets v0.48.5
   Compiling aho-corasick v1.0.5
   Compiling serde v1.0.188
   Compiling num-traits v0.2.16
   Compiling memoffset v0.6.5
   Compiling slab v0.4.9
   Compiling regex-automata v0.3.8
   Compiling unicode-width v0.1.10
   Compiling log v0.4.20
   Compiling serde_json v1.0.105
   Compiling textwrap v0.11.0
   Compiling windows-sys v0.48.0
   Compiling getrandom v0.2.10
   Compiling num_cpus v1.16.0
   Compiling vec_map v0.8.2
   Compiling ryu v1.0.15
   Compiling regex v1.9.5
   Compiling net2 v0.2.39
   Compiling atty v0.2.14
   Compiling winapi-util v0.1.5
   Compiling termcolor v1.2.0
   Compiling time v0.1.45
   Compiling strsim v0.8.0
   Compiling itoa v1.0.9
   Compiling humantime v2.1.0
   Compiling iovec v0.1.4
   Compiling env_logger v0.8.4
   Compiling clap v2.33.4
   Compiling chrono v0.4.28
   Compiling miow v0.2.2
   Compiling mio v0.6.23
   Compiling core_affinity v0.5.10
   Compiling ctrlc v3.4.1
   Compiling uuid v0.8.2
   Compiling nix v0.20.2
   Compiling simple-error v0.2.3
   Compiling rperf v0.1.8 (/Users/nitzan/Downloads/rperf)
error[E0433]: failed to resolve: could not find `sys` in `nix`
  --> src/stream/tcp.rs:23:10
   |
23 | use nix::sys::socket::{setsockopt, sockopt::RcvBuf, sockopt::SndBuf};
   |          ^^^ could not find `sys` in `nix`

error[E0433]: failed to resolve: could not find `unix` in `os`
  --> src/stream/tcp.rs:70:18
   |
70 |     use std::os::unix::io::AsRawFd;
   |                  ^^^^ could not find `unix` in `os`

error[E0433]: failed to resolve: could not find `unix` in `os`
   --> src/stream/tcp.rs:416:18
    |
416 |     use std::os::unix::io::AsRawFd;
    |                  ^^^^ could not find `unix` in `os`

error[E0433]: failed to resolve: could not find `sys` in `nix`
  --> src/stream/udp.rs:26:10
   |
26 | use nix::sys::socket::{setsockopt, sockopt::RcvBuf, sockopt::SndBuf};
   |          ^^^ could not find `sys` in `nix`

error[E0433]: failed to resolve: could not find `unix` in `os`
  --> src/stream/udp.rs:72:18
   |
72 |     use std::os::unix::io::AsRawFd;
   |                  ^^^^ could not find `unix` in `os`

error[E0433]: failed to resolve: could not find `unix` in `os`
   --> src/stream/udp.rs:439:18
    |
439 |     use std::os::unix::io::AsRawFd;
    |                  ^^^^ could not find `unix` in `os`

error[E0599]: no method named `as_raw_fd` found for struct `mio::net::TcpStream` in the current scope
   --> src/stream/tcp.rs:261:90
    |
261 | ...                   super::setsockopt(stream.as_raw_fd(), super::RcvBuf, &self.receive_buffer)?;
    |                                                ^^^^^^^^^ method not found in `TcpStream`

error[E0599]: no method named `as_raw_fd` found for struct `mio::net::TcpStream` in the current scope
   --> src/stream/tcp.rs:492:46
    |
492 |                     super::setsockopt(stream.as_raw_fd(), super::SndBuf, &self.send_buffer)?;
    |                                              ^^^^^^^^^ method not found in `TcpStream`

error[E0599]: no method named `as_raw_fd` found for struct `std::net::UdpSocket` in the current scope
   --> src/stream/udp.rs:209:46
    |
209 |                     super::setsockopt(socket.as_raw_fd(), super::RcvBuf, receive_buffer)?;
    |                                              ^^^^^^^^^ method not found in `UdpSocket`

warning: use of deprecated associated function `chrono::NaiveDateTime::from_timestamp`: use `from_timestamp_opt()` instead
   --> src/stream/udp.rs:258:52
    |
258 |             let current_timestamp = NaiveDateTime::from_timestamp(now.as_secs() as i64, now.subsec_nanos());
    |                                                    ^^^^^^^^^^^^^^
    |
    = note: `#[warn(deprecated)]` on by default

warning: use of deprecated associated function `chrono::NaiveDateTime::from_timestamp`: use `from_timestamp_opt()` instead
   --> src/stream/udp.rs:302:55
    |
302 |                 let source_timestamp = NaiveDateTime::from_timestamp(origin_seconds, origin_nanoseconds);
    |                                                       ^^^^^^^^^^^^^^

error[E0599]: no method named `as_raw_fd` found for struct `std::net::UdpSocket` in the current scope
   --> src/stream/udp.rs:475:46
    |
475 |                     super::setsockopt(socket.as_raw_fd(), super::SndBuf, send_buffer)?;
    |                                              ^^^^^^^^^ method not found in `UdpSocket`

Some errors have detailed explanations: E0433, E0599.
For more information about an error, try `rustc --explain E0433`.
warning: `rperf` (bin "rperf") generated 2 warnings
error: could not compile `rperf` (bin "rperf") due to 10 previous errors; 2 warnings emitted

it seems that the TCP/UDP stream code depends on some *nix-only APIs

doesn't work over single SSH reverse-forward port

what i did

  1. locally, run rperf -s -d

  2. connect to a remote via SSH while setting up a reverse-port forwarding to rperf's default port:

    ssh $REMOTE -R 127.0.0.1:5199:127.0.0.1:5199
    
  3. on the remote, run rperf -c 127.0.0.1

  4. observe it fail (see all the logs)


client log

$ rperf -c 127.0.0.1 -d
[2024-04-05T10:15:34Z DEBUG rperf] registering SIGINT handler...
[2024-04-05T10:15:34Z DEBUG rperf] connecting to server...
[2024-04-05T10:15:34Z DEBUG rperf::stream::tcp::receiver] using OS assignment for IPv4 TCP ports
[2024-04-05T10:15:34Z DEBUG rperf::stream::tcp::receiver] using OS assignment for IPv6 TCP ports
[2024-04-05T10:15:34Z DEBUG rperf::stream::udp::receiver] using OS assignment for IPv4 UDP ports
[2024-04-05T10:15:34Z DEBUG rperf::stream::udp::receiver] using OS assignment for IPv6 UDP ports
[2024-04-05T10:15:34Z DEBUG rperf::utils::cpu_affinity] enumerated CPU cores: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]
[2024-04-05T10:15:34Z DEBUG rperf::utils::cpu_affinity] not applying CPU core affinity
[2024-04-05T10:15:34Z DEBUG rperf::protocol::messaging] preparing TCP upload config...
[2024-04-05T10:15:34Z DEBUG rperf::protocol::messaging] preparing TCP download config...
[2024-04-05T10:15:34Z INFO  rperf::client] connecting to server at 127.0.0.1:5199...
[2024-04-05T10:15:34Z INFO  rperf::client] connected to server
[2024-04-05T10:15:34Z DEBUG rperf::client] running in forward-mode: server will be receiving data
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] sending message of length 172, Object {"family": String("tcp"), "kind": String("configuration"), "length": Number(32768), "receive_buffer": Number(0), "role": String("download"), "streams": Number(1), "test_id": Array [Number(108), Number(25), Number(198), Number(168), Number(186), Number(122), Number(65), Number(153), Number(144), Number(67), Number(89), Number(88), Number(42), Number(211), Number(201), Number(16)]}, to 127.0.0.1:5199...
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] awaiting length-value from 127.0.0.1:5199...
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] received length-spec of 41 from 127.0.0.1:5199
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] awaiting payload from 127.0.0.1:5199...
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] received Object {"kind": String("connect"), "stream_ports": Array [Number(43503)]} from 127.0.0.1:5199
[2024-04-05T10:15:34Z INFO  rperf::client] preparing for TCP test with 1 streams...
[2024-04-05T10:15:34Z DEBUG rperf::client] preparing TCP-sender for stream 0...
[2024-04-05T10:15:34Z INFO  rperf::client] informing server that testing can begin...
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] sending message of length 16, Object {"kind": String("begin")}, to 127.0.0.1:5199...
[2024-04-05T10:15:34Z DEBUG rperf::client] spawning stream-threads
[2024-04-05T10:15:34Z INFO  rperf::client] beginning execution of stream 0...
[2024-04-05T10:15:34Z DEBUG rperf::protocol::communication] awaiting length-value from 127.0.0.1:5199...
[2024-04-05T10:15:34Z DEBUG rperf::utils::cpu_affinity] CPU affinity is not configured; not doing anything
[2024-04-05T10:15:34Z DEBUG rperf::client] beginning test-interval for stream 0
[2024-04-05T10:15:34Z DEBUG rperf::stream::tcp::sender] preparing to connect TCP stream 0...
[2024-04-05T10:15:34Z ERROR rperf::client] unable to process stream: unable to connect stream 0: Connection refused (os error 111)
----------
Failure in client stream | stream: 0
[2024-04-05T10:15:34Z WARN  rperf::client] stream 0 failed
[2024-04-05T10:15:34Z INFO  rperf::client] giving the server a few seconds to report results...
[2024-04-05T10:15:37Z DEBUG rperf::protocol::communication] received length-spec of 50 from 127.0.0.1:5199
[2024-04-05T10:15:37Z DEBUG rperf::protocol::communication] awaiting payload from 127.0.0.1:5199...
[2024-04-05T10:15:37Z DEBUG rperf::protocol::communication] received Object {"kind": String("failed"), "origin": String("server"), "stream_idx": Number(0)} from 127.0.0.1:5199
[2024-04-05T10:15:37Z WARN  rperf::client] server reported failure with stream 0
[2024-04-05T10:15:37Z DEBUG rperf::protocol::communication] sending message of length 14, Object {"kind": String("end")}, to 127.0.0.1:5199...
[2024-04-05T10:15:37Z DEBUG rperf::client] stopping any still-in-progress streams
[2024-04-05T10:15:37Z DEBUG rperf::client] waiting for all streams to end
[2024-04-05T10:15:37Z DEBUG rperf::client] displaying test results
==========
TCP send result over 0.00s | streams: 1
stream-average bytes per second: 0.000 | megabits/second: 0.000
total bytes: 0 | per second: 0.000 | megabits/second: 0.000
==========
TCP receive result over 0.00s | streams: 1
stream-average bytes per second: 0.000 | megabits/second: 0.000
total bytes: 0 | per second: 0.000 | megabits/second: 0.000
TESTING DID NOT COMPLETE SUCCESSFULLY

nmap sees the port as open on the client

$ nix run nixpkgs#nmap -- 127.0.0.1 -p5199
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-05 10:14 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00023s latency).

PORT     STATE SERVICE
5199/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds

server log

[2024-04-05T10:15:34Z INFO  rperf::server] connection from 127.0.0.1:60504
[2024-04-05T10:15:34Z INFO  rperf::server] [127.0.0.1:60504] running in forward-mode: server will be receiving data
[2024-04-05T10:15:34Z INFO  rperf::server] [127.0.0.1:60504] preparing for TCP test with 1 streams...
[2024-04-05T10:15:34Z INFO  rperf::server] [127.0.0.1:60504] beginning execution of stream 0...
[2024-04-05T10:15:37Z ERROR rperf::server] [127.0.0.1:60504] unable to process stream: TCP listening for stream 0 timed out
[2024-04-05T10:15:37Z INFO  rperf::server] [127.0.0.1:60504] end of testing signaled
[2024-04-05T10:15:37Z INFO  rperf::server] 127.0.0.1:60504 disconnected
