oha's Introduction

oha (おはよう)

oha is a tiny program that puts load on a web application and shows a realtime TUI, inspired by rakyll/hey.

This program is written in Rust, powered by tokio, with a beautiful TUI built on ratatui.

demo

Installation

This program builds on stable Rust; both make and cmake are required to install it via cargo.

cargo install oha

You can optionally build oha against native-tls instead of rustls (the default).

cargo install --no-default-features --features native-tls oha

You can enable VSOCK support by enabling the `vsock` feature.

cargo install --features vsock oha
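
With the feature enabled, you can point oha at a VSOCK endpoint via the --vsock-addr option described under Usage below; the CID and port here are placeholders:

oha --vsock-addr 3:8000 http://localhost/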

On Arch Linux

pacman -S oha

On macOS (Homebrew)

brew install oha

On Windows (winget)

winget install hatoo.oha

On Debian (Azlux's repository)

echo "deb [signed-by=/usr/share/keyrings/azlux-archive-keyring.gpg] http://packages.azlux.fr/debian/ stable main" | sudo tee /etc/apt/sources.list.d/azlux.list
sudo wget -O /usr/share/keyrings/azlux-archive-keyring.gpg https://azlux.fr/repo.gpg
sudo apt update
sudo apt install oha

Containerized

You can also build a container image that includes oha:

docker build . -t example.com/hatoo/oha:latest

Then you can run oha directly through the container:

docker run -it example.com/hatoo/oha:latest https://example.com:3000

Profile-Guided Optimization (PGO)

You can build oha with PGO using the following command:

bun run pgo.js

And the binary will be available at target/[target-triple]/pgo/oha.

Platform

  • Linux - Tested on Ubuntu 18.04 gnome-terminal
  • Windows 10 - Tested on Windows PowerShell
  • macOS - Tested on iTerm2

Usage

The -q option works differently from rakyll/hey: it sets the overall queries per second, not a per-worker rate.
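
For example, the invocation below (the numbers are illustrative) targets 100 requests per second in total across all 10 connections, not 100 per connection:

oha -c 10 -q 100 -z 30s http://localhost:3000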

Ohayou(おはよう), HTTP load generator, inspired by rakyll/hey with tui animation.

Usage: oha [FLAGS] [OPTIONS] <url>

Arguments:
  <URL>  Target URL.

Options:
  -n <N_REQUESTS>                     Number of requests to run. [default: 200]
  -c <N_CONNECTIONS>                  Number of connections to run concurrently. You may need to raise the open-file limit for larger `-c`. [default: 50]
  -p <N_HTTP2_PARALLEL>               Number of parallel requests to send on HTTP/2. `oha` will run c * p concurrent workers in total. [default: 1]
  -z <DURATION>                       Duration of application to send requests. If duration is specified, n is ignored.
                                      When the duration is reached, ongoing requests are aborted and counted as "aborted due to deadline"
                                      Examples: -z 10s -z 3m.
  -q <QUERY_PER_SECOND>               Rate limit for all, in queries per second (QPS)
      --burst-delay <BURST_DURATION>  Introduce delay between a predefined number of requests.
                                      Note: If qps is specified, burst will be ignored
      --burst-rate <BURST_REQUESTS>   Rates of requests for burst. Default is 1
                                      Note: If qps is specified, burst will be ignored
      --rand-regex-url                Generate URLs with the rand_regex crate; the regex dot is disabled for each query, e.g. http://127.0.0.1/[a-z][a-z][0-9]. Currently a dynamic scheme, host, and port do not work well with keep-alive. See https://docs.rs/rand_regex/latest/rand_regex/struct.Regex.html for details of the syntax.
      --max-repeat <MAX_REPEAT>       A parameter for the '--rand-regex-url'. The max_repeat parameter gives the maximum extra repeat counts the x*, x+ and x{n,} operators will become. [default: 4]
      --latency-correction            Correct latency to avoid coordinated omission problem. It's ignored if -q is not set.
      --no-tui                        No realtime tui
  -j, --json                          Print results as JSON
      --fps <FPS>                     Frame per second for tui. [default: 16]
  -m, --method <METHOD>               HTTP method [default: GET]
  -H <HEADERS>                        Custom HTTP header. Examples: -H "foo: bar"
  -t <TIMEOUT>                        Timeout for each request. Default to infinite.
  -A <ACCEPT_HEADER>                  HTTP Accept Header.
  -d <BODY_STRING>                    HTTP request body.
  -D <BODY_PATH>                      HTTP request body from file.
  -T <CONTENT_TYPE>                   Content-Type.
  -a <BASIC_AUTH>                     Basic authentication, username:password
      --http-version <HTTP_VERSION>   HTTP version. Available values 0.9, 1.0, 1.1.
      --http2                         Use HTTP/2. Shorthand for --http-version=2
      --host <HOST>                   HTTP Host header
      --disable-compression           Disable compression.
  -r, --redirect <REDIRECT>           Limit for the number of redirects. Set 0 for no redirection. Redirection isn't supported for HTTP/2. [default: 10]
      --disable-keepalive             Disable keep-alive, preventing re-use of TCP connections between different HTTP requests. This isn't supported for HTTP/2.
      --no-pre-lookup                 Do *not* perform a DNS lookup at the beginning to cache it
      --ipv6                          Lookup only ipv6.
      --ipv4                          Lookup only ipv4.
      --insecure                      Accept invalid certs.
      --connect-to <CONNECT_TO>       Override DNS resolution and default port numbers with strings like 'example.org:443:localhost:8443'
      --disable-color                 Disable the color scheme.
      --unix-socket <UNIX_SOCKET>     Connect to a unix socket instead of the domain in the URL. Only for non-HTTPS URLs.
      --vsock-addr <VSOCK_ADDR>       Connect to a VSOCK socket using 'cid:port' instead of the domain in the URL. Only for non-HTTPS URLs.
      --stats-success-breakdown       Include a successful/unsuccessful response status code breakdown for the time histogram and distribution statistics
  -h, --help                          Print help
  -V, --version                       Print version
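
For example, --connect-to overrides DNS resolution in the same spirit as curl's option of the same name; the host names and ports below are illustrative and follow the 'host:port:target_host:target_port' format shown above:

oha --connect-to example.org:443:localhost:8443 https://example.org/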

Benchmark

Performance Comparison

We used hyperfine to benchmark oha against rakyll/hey on a local server written with Node.js. Copy the file below, run it with node, and then run the benchmark with hyperfine:

  1. Copy the contents below into a new JavaScript file called app.js
const http = require("http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });

  res.end("Hello World\n");
});

server.listen(3000, () => {
  console.log("Server running at http://localhost:3000/");
});
  2. Run node app.js
  3. Run hyperfine 'oha --no-tui http://localhost:3000' 'hey http://localhost:3000' in a different terminal tab

Benchmark Results

Benchmark 1: oha --no-tui http://localhost:3000

  • Time (mean ± σ): 10.8 ms ± 1.8 ms [User: 5.7 ms, System: 11.7 ms]
  • Range (min … max): 8.7 ms … 24.8 ms (107 runs)

Benchmark 2: hey http://localhost:3000

  • Time (mean ± σ): 14.3 ms ± 4.6 ms [User: 12.2 ms, System: 19.4 ms]
  • Range (min … max): 11.1 ms … 48.3 ms (88 runs)

Summary

In this benchmark, oha --no-tui http://localhost:3000 ran approximately 1.32 ± 0.48 times faster than hey http://localhost:3000.

Tips

Stress test under more realistic conditions

oha inherits its default options from rakyll/hey, but you may need to change them to stress test under more realistic conditions.

I suggest running oha with the following options (a concrete example follows the notes below).

oha <-z or -n> -c <number of concurrent connections> -q <query per seconds> --latency-correction --disable-keepalive <target-address>
  • --disable-keepalive

    In reality, a user doesn't repeatedly query the same URL over a kept-alive connection, so you may want to run without keep-alive.

  • --latency-correction

    You can avoid the coordinated omission problem by using --latency-correction.
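
For example (the duration, connection count, and rate below are illustrative, and the target is the local test server from the benchmark section):

oha -z 60s -c 100 -q 1000 --latency-correction --disable-keepalive http://localhost:3000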

Burst feature

You can use --burst-delay along with the --burst-rate option to introduce a delay between bursts of a defined number of requests.

oha -n 10 --burst-delay 2s --burst-rate 4

In this particular scenario, 4 requests are processed every 2 seconds, so the full 10 requests complete after 6 seconds. NOTE: if you don't set the --burst-rate option, it defaults to 1.

Dynamic url feature

You can use the --rand-regex-url option to generate a random URL for each connection.

oha --rand-regex-url http://127.0.0.1/[a-z][a-z][0-9]

Each URL is generated by the rand_regex crate, but the regex dot operator is disabled, since it isn't useful for this purpose and it would be very inconvenient if the dots in URLs were interpreted as regex dots.

Optionally, you can set the --max-repeat option to limit the maximum repeat count for each regex, e.g. http://127.0.0.1/[a-z]* with --max-repeat 4 will generate URLs like http://127.0.0.1/[a-z]{0,4}.

Currently, a dynamic scheme, host, and port do not work well with keep-alive.

Contribution

Feel free to help us!

Here are some areas to improve:

  • Write tests
  • Improve tui design.
    • Show more information?
    • There is no color in the realtime TUI right now. I'd welcome help from someone with a good sense of color.
  • Improve speed
    • I'm new to tokio, and I think there's room to optimize query scheduling.

oha's People

Contributors

akiradeveloper, alexanderankin, chipsenkbeil, chocobo1, dependabot[bot], dmitry-j-mikhin, equal-l2, fasterthanlime, hatoo, huntharo, jalil-salame, jtk18, kianmeng, kngwyu, kyrias, lukehsiao, mechanicalbot, meronogbai, messense, mrjackwills, senden9, shirshak55, stefankreutz, svenstaro, togatoga, tonyskapunk, wjhoward, yiblet

oha's Issues

Provide option similar to curl's `--connect-to`

This is probably a rare use case (but it's ours!).

curl provides a connect-to option to override DNS resolution and always connect directly to an IP address for a given hostname+port.

Since oha uses hyper, and hyper allows passing a Resolver, it seems this would be reasonably easy to implement. I can try and give it a shot if you'd accept a PR for that in oha.

Let me know what you think!

IPv6 address support for `--connect-to`

I wrote the code for this, so I'm to blame 🙈 but it just splits on : and expects 4 tokens, which obviously doesn't work for IPv6 addresses.

Since curl supports it, and oha's feature is modelled after curl's, I think it should support IPv6 syntax with brackets:

$ curl -I https://example.org --connect-to 'example.org:443:[2606:2800:220:1:248:1893:25c8:1946]:443'
HTTP/2 200
content-encoding: gzip
accept-ranges: bytes
age: 295951
cache-control: max-age=604800
content-type: text/html; charset=UTF-8
date: Fri, 27 May 2022 09:52:13 GMT
etag: "3147526947+gzip"
expires: Fri, 03 Jun 2022 09:52:13 GMT
last-modified: Thu, 17 Oct 2019 07:18:26 GMT
server: ECS (bsa/EB23)
x-cache: HIT
content-length: 648
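
By analogy, the requested oha invocation would look something like the following (hypothetical syntax mirroring curl's bracketed form; as described above, oha does not accept it yet):

oha --connect-to 'example.org:443:[2606:2800:220:1:248:1893:25c8:1946]:443' https://example.org/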

Releasing a binary

Is it possible to set up a GitHub Action that publishes a release once you tag one? If you think it's acceptable, I can set it up and open a PR.

Headers not working

This command should not result in this error. I also tried without the header, and the URL is parsed correctly; only when the header is added does the command stop working.

> oha -H "foo: bar" http://localhost:8080/
error: The following required arguments were not provided:
    <url>

USAGE:
    oha [FLAGS] [OPTIONS] <url>

And the -H header option resides in the OPTIONS section, so this should be fine.
-H "foo: bar" was taken from the example in the help.

oha version 0.4.7

Not able to run -H with content body

oha -z 20s -H "Authorization: Bearer $TOKEN" -T application/json -d to=+25078xxxxxx text="Hello" sender="Hello" 'https://api.pindo.io/v1/sms/' 
error: Found argument 'sender=Hello' which wasn't expected, or isn't valid in this context

USAGE:
    oha [FLAGS] [OPTIONS] <url>

For more information try --help
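
A likely cause: the shell splits the unquoted body into separate arguments, so sender="Hello" reaches oha as a stray positional argument. Quoting the whole body as a single -d value (kept exactly as written above, only wrapped in single quotes) avoids the argument-parsing error:

oha -z 20s -H "Authorization: Bearer $TOKEN" -T application/json -d 'to=+25078xxxxxx text="Hello" sender="Hello"' 'https://api.pindo.io/v1/sms/'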

Running without arguments prints an overly verbose usage message

❯ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/oha`
error: The following required arguments were not provided:
    <url>

USAGE:
    oha <url> --fps <fps> --method <method> -n <n-requests> -c <n-workers> --redirect <redirect>

For more information try --help

It would be better to print:

USAGE:
    oha <url>

Error reporting?

It seems oha does not report errors at the end of a run.

# Start Nginx
$ docker run --name nginx -d -p 8080:80 nginx

# Set the limit for file descriptors to its max value.
$ ulimit -n
1024
$ ulimit -H -n
524288
$ ulimit -n $(ulimit -H -n)
$ ulimit -n
524288

# Run Oha with a large number of workers.
$ ./target/release/oha -n 100000 -c 2000 'http://[::1]:8080'
Summary:
  ...

Response time histogram:
  0.102 [314]    |■■■■■■■■■■■■■■■■■
  0.129 [582]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.155 [97]     |■■■■■
  0.182 [0]      |
  0.209 [0]      |
  0.235 [3]      |
  0.262 [10]     |
  0.289 [23]     |■
  0.316 [24]     |■
  0.342 [35]     |■
  0.369 [14]     |

Latency distribution:
...

Status code distribution:
  [200] 1102 responses

As you can see, only 1,102 responses were received for 100,000 requests. There may have been 98,898 errors, but none were reported.

FYI, hey reports errors like the following.

$ ulimit -n
524288

$ ./hey -n 100000 -c 2000 'http://[::1]:8080'  
...
Status code distribution:
  [200]	99669 responses

Error distribution:
  [137]	Get http://[::1]:8080: EOF
  [194]	Get http://[::1]:8080: http: server closed idle connection

99,669 + 137 + 194 = 100,000

Environment

  • Fedora 31 (x86_64)
  • Rust 1.41.1

Use hyper's low-level API

https://docs.rs/hyper/0.13.4/hyper/client/conn/index.html

example

use anyhow::Context;
use futures_util::stream::*;
use tokio::prelude::*;

use std::str::FromStr;

trait AsyncRW: AsyncRead + AsyncWrite {}
impl<T: AsyncRead + AsyncWrite> AsyncRW for T {}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let url = url::Url::from_str("https://www.google.com")?;
    let addr = (
        url.host_str().context("get host")?,
        url.port_or_known_default().context("get port")?,
    );

    let addr = tokio::net::lookup_host(addr)
        .await?
        .next()
        .context("get addr")?;

    let stream: Box<dyn AsyncRW + Unpin + Send + 'static> = if url.scheme() == "https" {
        let stream = tokio::net::TcpStream::connect(addr).await?;
        let connector = native_tls::TlsConnector::new()?;
        let connector = tokio_tls::TlsConnector::from(connector);
        Box::new(
            connector
                .connect(url.domain().context("get domain")?, stream)
                .await?,
        )
    } else {
        Box::new(tokio::net::TcpStream::connect(addr).await?)
    };

    let (mut send, conn) = hyper::client::conn::handshake(stream).await?;

    // I don't know why this line is needed
    let join = tokio::spawn(conn);

    // keep_alive
    for _ in 0..2 {
        let request = http::Request::builder()
            .version(http::Version::HTTP_11)
            .uri("/")
            .body(hyper::Body::empty())?;
        let res = send.send_request(request).await?;
        dbg!(
            res.into_body()
                .map(|bytes| bytes.unwrap().len())
                .collect::<Vec<_>>()
                .await
        );
    }
    Ok(())
}

Add a LICENSE file

Since I'm currently packaging this for Arch, I noticed that you're missing a LICENSE file. You should probably add one for MIT.

proxy support

Hello,

It would be nice to have proxy (SOCKS/HTTP) support, as hey does.

Thank you.

Rate limiting significantly impacts results

Here I set the rate limit to 10 qps, and my response times are crazy high, up to 18s. This service is absolutely not that slow.

# oha -q 10 -c 100 "${DESTINATION}"
Summary:
  Success rate: 1.0000
  Total:        19.9057 secs
  Slowest:      19.4002 secs
  Fastest:      0.1017 secs
  Average:      7.4746 secs
  Requests/sec: 10.0473

  Total data:   200.00 KiB
  Size/request: 1024 B
  Size/sec:     10.05 KiB

Response time histogram:
  1.754 [19] |■■■■■■■■■■■■■■■■
  3.509 [23] |■■■■■■■■■■■■■■■■■■■■
  5.263 [32] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  7.018 [36] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  8.772 [33] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.526 [7]  |■■■■■■
  12.281 [9]  |■■■■■■■■
  14.035 [13] |■■■■■■■■■■■
  15.790 [14] |■■■■■■■■■■■■
  17.544 [9]  |■■■■■■■■
  19.298 [5]  |■■■■

Latency distribution:
  10% in 1.9994 secs
  25% in 4.3002 secs
  50% in 6.6000 secs
  75% in 10.6993 secs
  90% in 14.7995 secs
  95% in 16.6002 secs
  99% in 18.6991 secs

Details (average, fastest, slowest):
  DNS+dialup:   5.5760 secs, 0.0336 secs, 16.1002 secs
  DNS-lookup:   5.4315 secs, 0.0298 secs, 16.0029 secs

Status code distribution:
  [200] 200 responses

With a much higher rate limit (effectively unthrottled), this drops down to more reasonable levels:

# oha -q 10000 -c 100 "${DESTINATION}"
Summary:
  Success rate: 1.0000
  Total:        0.0714 secs
  Slowest:      0.0631 secs
  Fastest:      0.0010 secs
  Average:      0.0235 secs
  Requests/sec: 2801.0961

  Total data:   200.00 KiB
  Size/request: 1024 B
  Size/sec:     2.74 MiB

Response time histogram:
  0.006 [100] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.011 [0]   |
  0.017 [0]   |
  0.023 [0]   |
  0.028 [0]   |
  0.034 [0]   |
  0.039 [6]   |■
  0.045 [51]  |■■■■■■■■■■■■■■■■
  0.051 [42]  |■■■■■■■■■■■■■
  0.056 [0]   |
  0.062 [1]   |

Latency distribution:
  10% in 0.0012 secs
  25% in 0.0014 secs
  50% in 0.0365 secs
  75% in 0.0456 secs
  90% in 0.0489 secs
  95% in 0.0497 secs
  99% in 0.0512 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0398 secs, 0.0309 secs, 0.0624 secs
  DNS-lookup:   0.0370 secs, 0.0272 secs, 0.0622 secs

Status code distribution:
  [200] 200 responses

One other note: in hey, the effective rate is -q * -c:

# hey -q 10 -c 100 "${DESTINATION}"

Summary:
  Total:        0.2059 secs
  Slowest:      0.0145 secs
  Fastest:      0.0023 secs
  Average:      0.0082 secs
  Requests/sec: 971.3312

Error: Error parsing resolv.conf: InvalidOption(17)

$ ./oha --version
oha 0.4.3

$ ./oha http://127.0.0.1:8888 
Error: Error parsing resolv.conf: InvalidOption(17)

$ cat /etc/resolv.conf 
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search onsec.ru lan

Use more colors

I think that for quickly scanning the output of a load test, it would be cool if oha used some colors with sane defaults for how long something takes (maybe green 0-0.3s, orange 0.3-0.8s, red >=0.8s?).

I suggest using colors in the TUI view and in the final output. Of course, make it an option, but I think it should be "auto" by default, so that colors are used whenever the terminal is detected to support them.

-H option before url leads to no url being specified

When trying oha with a custom header before the URL, it throws an error that no URL is provided:

oha -n 100 -H "Authorization: Bearer $TOKEN_VALUE"  'https://github.com/llala'
error: The following required arguments were not provided:
    <url>

USAGE:
    oha [FLAGS] [OPTIONS] <url>

For more information try --help

Since headers are collected into a Vec, the URL is parsed as a header, which is unexpected. The following orderings do work:

oha  -H "Authorization: Bearer $TOKEN_VALUE"  -n 100 'https://github.com/llala'
oha  -n 100 'https://github.com/llala'  -H "Authorization: Bearer $TOKEN_VALUE"  

The simple "fix" would be to make -H not take a vec but a Option and let users append -H multiple times to set multiple headers. But this will break existing users workflow, as alternative maybe allow users to provide the url as parameter?
Or structopt could verify if the provided argument was an valid HTTP header, but that might add more complexity.

oha doesn't shut down properly with Ctrl-C when stdout is piped

oha on  master is 📦 v0.1.4 via 🦀 v1.43.0-nightly
❯ cargo run --release -- --no-tui -z 6m http://192.168.10.201 | grep Requests/sec:
    Finished release [optimized] target(s) in 0.09s
     Running `target/release/oha --no-tui -z 6m 'http://192.168.10.201'`
^Cthread 'tokio-runtime-worker' panicked at 'failed printing to stdout: Broken pipe (os error 32)', src/libstd/io/stdio.rs:805:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Show TCP bandwidth

Currently, oha shows the total size of the HTTP bodies.
But showing in/out TCP bandwidth would be better.

I think it's hard to implement because all communication is abstracted by hyper.

Support -q 0 for unlimited qps

# oha -q 0 http://server:8080
panicked at 'divide by zero error when dividing duration by scalar', src/libcore/time.rs:794:9
# hey -q 0 http://server:8080

Summary:
  Total:        0.0168 secs
  Slowest:      0.0127 secs
  Fastest:      0.0003 secs
  Average:      0.0036 secs
  Requests/sec: 11938.9656

It would be nice to support this. Not a big issue, but it's pretty common for load clients to behave this way, especially hey, which has the same command-line parameters.
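
As a practical workaround until -q 0 is supported, omitting -q entirely runs oha with no rate limit at all:

oha http://server:8080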

Show errors on realtime tui

Currently, errors such as connection errors are shown in the summary but not in realtime.
They should be shown in the realtime view.

no record found for name but curl is perfectly happy

I'm trying to load test an internal host that is only resolvable via a special internal DNS server, and only over IPv4. However, I can resolve that name just fine via nslookup, host, curl, and even hey, but oha tells me:

[69] no record found for name: internal.host type: A class: IN

I'm calling

oha --ipv4 -q 200 -n 2000 https://internal.host/test

However:

$ nslookup internal.host
Server:		127.0.0.1
Address:	127.0.0.1#53

internal.host	canonical name = redacted.
Name:	redacted
Address: 10.1.241.80

Provide immutable source archives

Hi,

In order to make life a bit easier for us package maintainers, I wonder if it would be possible to also release properly rolled, immutable source archives along with the binary files.

The way it is done now relies on github-created archives that can, under some circumstances, change, causing checksums to also change (among other things).

What do you think about it?
Thanks.

Publish arm64 binaries

Currently, only amd64 binaries are published. It would be useful to additionally build arm64 binaries.

Bad numbers?

oha: oha-linux-amd64 -j -z 10s -m GET http://localhost:3000
[screenshot of oha output]
Why are the average, slowest, and fastest values so small?

bombardier: bombardier -p r -o j -l -c 40 -d 10s -m GET http://localhost:3000
[screenshot of bombardier output]

Could you implement min, max, and average requests/sec properties? Also pXX requests/sec.

Packaged for Arch Linux

Hey, not really an issue but I think this is a cool tool and I packaged it for Arch Linux here. Once you make a release including the LICENSE file, I'll add that as well. If it's popular it has a good chance to become part of the [community] repository.

Configure timescale

Currently, oha plots at a 1-second timescale.
It would be good to make the scale configurable via a keyboard shortcut.

Response time histogram collapses when there are many responses

Summary:
  Success rate: 1.0000
  Total:        50.0131 secs
  Slowest:      0.0768 secs
  Fastest:      0.0006 secs
  Average:      0.0053 secs
  Requests/sec: 18847.3182

  Total data:   10.79 MiB
  Size/request: 12.00 B
  Size/sec:     220.87 KiB

Response time histogram:
  0.001 [18563] |■■
  0.002 [118922]        |■■■■■■■■■■■■■
  0.003 [284620]        |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.005 [275058]        |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.006 [127936]        |■■■■■■■■■■■■■■
  0.008 [62017] |■■■■■■
  0.009 [30270] |■■■
  0.010 [12834] |■
  0.012 [5407]  |
  0.013 [2621]  |
  0.015 [4365]  |

...

Sometimes oha produces contradictory results

Summary:
  Success rate:	1.0000
  Total:	199.9830 secs
  Slowest:	0.1966 secs
  Fastest:	0.0112 secs
  Average:	0.0165 secs
  Requests/sec:	50.0042

  Total data:	40.45 MiB
  Size/request:	4.14 KiB
  Size/sec:	207.10 KiB

Response time histogram:
  0.003 [4391] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.007 [3777] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.010 [1398] |■■■■■■■■■■
  0.014 [178]  |■
  0.017 [52]   |
  0.021 [18]   |
  0.024 [37]   |
  0.028 [39]   |
  0.031 [25]   |
  0.035 [7]    |
  0.038 [78]   |

Latency distribution:
  10% in 0.0128 secs
  25% in 0.0136 secs
  50% in 0.0150 secs
  75% in 0.0173 secs
  90% in 0.0196 secs
  95% in 0.0212 secs
  99% in 0.0401 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0006 secs, 0.0001 secs, 0.0019 secs
  DNS-lookup:	0.0000 secs, 0.0000 secs, 0.0002 secs

Status code distribution:
  [200] 10000 responses

It says the fastest is 0.0112, but the response time histogram has lower bucket values (0.003, 0.007, 0.010), all of which are below the reported fastest of 0.0112. How could this happen?
P.S. oha was executed with the following params:

$ ./oha-linux-amd64 -V
oha 0.5.0
$ ./oha-linux-amd64 $'http://127.0.0.1:8080/api/graphql?orgId=77129' \
  -H $'ACCEPT: */*' \
  -H $'ACCEPT-ENCODING: gzip, deflate, br' \
  -H $'ACCEPT-LANGUAGE: en-US,en;q=0.9' \
  -H $'HOST: experiment.amplitude.com' \
  -H $'REFERER: https://experiment.amplitude.com/ford/296855/config/3818/overview' \
  -H $'USER-AGENT: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36' \
  -H $'CONTENT-TYPE: application/json' \
  -H $'X-FORWARDED-FOR: 94.192.0.171' \
  -H $'X-FORWARDED-PROTO: https' \
  -H $'X-FORWARDED-PORT: 443' \
  -H $'X-AMZN-TRACE-ID: Root=1-623b20da-1a78a5ad4133d446797ec370' \
  -H $'SEC-CH-UA: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"' \
  -H $'SEC-CH-UA-MOBILE: ?0' \
  -H $'SEC-CH-UA-PLATFORM: "macOS"' \
  -H $'ORIGIN: https://experiment.amplitude.com' \
  -H $'SEC-FETCH-SITE: same-origin' \
  -H $'SEC-FETCH-MODE: cors' \
  -H $'SEC-FETCH-DEST: empty' \
  -H $'COOKIE: corp_utm={%22utm_source%22:%22adwordsb%22%2C%22utm_medium%22:%22ppc%22%2C%22utm_campaign%22:%22Search_EMEA_UK_EN_Brand%22%2C%22utm_content%22:%22Brand_Exact%22%2C%22utm_term%22:%22amplitude%22%2C%22gclid%22:%22Cj0KCQiA3rKQBhCNARIsACUEW_bybYh1mI2zCPPjvvkkGlgLLvjFCQ0b9eoqpZmNu8CZN9PnOn9RgOUaAgXmEALw_wcB%22%2C%22blaid%22:%22%22%2C%22referrer%22:%22https://www.google.com/%22%2C%22referring_domain%22:%22www.google.com%22}; amp_9ff40c=Jy-igGuAUSYA6WsRm7kQS8...1fs1kjeqs.1fs1kjerb.0.2.2; __utmzz=utmcsr=google|utmcmd=organic|utmccn=(not set)|utmctr=(not provided); __utmzzses=1; amp_e3e918=...0.0.0.0.0; CookieControl={"necessaryCookies":["__utmzz","__utmzzses","corp_utm","membership_token_*","wordpress_pricing_page_uuid","wordpress_pricing_page_variant","wordpress_pricing_page_uuid_http_only","wordpress_pricing_page_variant_http_only"],"optionalCookies":{"performance":"accepted","functional":"accepted","advertising":"accepted"},"statement":{"shown":true,"updated":"25/04/2018"},"consentDate":1645027639419,"consentExpiry":90,"interactedWith":true,"user":"782D5A6E-935A-4FC6-A1FF-CDFFBA494BD7"}; amp_e3e918_amplitude.com=Jy-igGuAUSYA6WsRm7kQS8...1fs1kjg38.1fs1kjg44.7.3.a; _mkto_trk=id:138-CDN-550&token:_mch-amplitude.com-1645027639544-70830; _rdt_uuid=1645027639614.88316fc4-a3d7-4116-81d8-d13d928da49d; _ga=GA1.2.1995760543.1645027639; _biz_uid=4c1a4c78f8924bc1d9f8d406bb6cb30f; _biz_nA=3; _biz_flagsA=%7B%22Version%22%3A1%2C%22Mkto%22%3A%221%22%2C%22ViewThrough%22%3A%221%22%2C%22XDomain%22%3A%221%22%7D; _biz_pendingA=%5B%5D; _ga_2FY44PPV92=GS1.1.1645087713.2.0.1645087713.60; org_login_production="2|1:8|10:1647855916|20:org_login_production|116:eyJlbWFpbCI6ImxtY2dyYXQ4QGZvcmQuY29tIiwibG9naW5zIjpbeyJvcmdfaWQiOjc3MTI5LCJ0aW1lIjoxNjQ3ODU1OTE2LjIxNTQ1MTQ3OX1dfQ==|b3de52d364947420d9a6e2255ed6f6c6e5181b4c16c82ed6f6497c9cf095f715"; access_token_production=2|1:8|10:1647855916|23:access_token_production|48:NWU2ODNiNjUtZDNlNC00OTQ5LTk3MGItYjYzYzYxMjRmYmMz|48363b3f62e6cf4d49d38f8f3b2cd179b65d21b540fc59c421fb28003c67d5fe; amp_e5a2c9=ZP09Aj69GRhOvJDBIGm41h...1furej3on.1furej3os.l.i.17; amp_7f21dc=OdeEULvqqrw1AiAEjRT9x6...1furekh1d.1furekh1d.0.0.0; intercom-session-gjvo8fgi=d0dWcUluVENXbE5IVm5LMWpRQVpzRUd2azlreWxKRXZtSHFKSUhPRjZBWXhCK3BCSHRqUndiMzN1N1gwdytLYi0tL2E4SThrTEpiNVQ1dnBOMFBudm5BZz09--a648b8f55ebb74baa798dabe54fbd5a1531cce86; amp_e5a2c9_amplitude.com=UbutnSW8aK-u0AkJodB6LK.bG1jZ3JhdDhAZm9yZC5jb20=..1furej3p5.1furev6kd.1sf.jo.2g7; amp_6d2283=uqpetFBJEpQYUMDcOQ8kqZ.bG1jZ3JhdDhAZm9yZC5jb20=..1furej3p3.1furev6kh.6l.g1.mm; amp_fb0efa=DOctVsiLXCSJa1RIc-_XG2.bG1jZ3JhdDhAZm9yZC5jb20=..1furej3p0.1furff3d5.55q.gb.5m5; amp_7f21dc_amplitude.com=OdeEULvqqrw1AiAEjRT9x6.bG1jZ3JhdDhAZm9yZC5jb20=..1furekh1p.1furfgl02.di.8r.md; amp_99dd8b=X8LeGQH6I5XD5G0FNPUDqF.bG1jZ3JhdDhAZm9yZC5jb20=..1furekh1h.1furfgl12.4i.8r.dd' \
  -d $'{"operationName":"flagKeysInEnv","variables":{"withDeleted":true,"projectId":"296855"},"query":"query flagKeysInEnv($projectId: ID!, $deploymentId: ID, $withDeleted: Boolean = false) {\\n  configs: flagConfigsInEnv(\\n    projectId: $projectId\\n    deploymentId: $deploymentId\\n    withDeleted: $withDeleted\\n  ) {\\n    key\\n    __typename\\n  }\\n}\\n"}' \
  -n 10000 \
  -q 50
