
app-servers's People

Contributors

aurimasniekis, costajob, kblok, nateberkopec, nono, willie, zanderso


app-servers's Issues

Vibe.d running debug build + possible bug in threadsPerCpu detection

Finally found some time at the PC. I took a quick look at your configuration and found it odd that the memory usage was so high for D. After checking the command that you used to run it, it dawned on me that you're running in debug mode.

Try:

dub build --build=release --compiler=ldc2 

The compiler switch may not be needed if you only have LDC installed. But if you have both compilers installed, it's possible dub falls back to DMD (slower at runtime, though fast to compile) and, again, builds in debug mode.
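As a quick sanity check (a hedged sketch; adjust to your setup), confirm LDC is actually on the PATH before relying on it:

# confirm LDC is installed; if DMD is also present, always pass
# --compiler=ldc2 explicitly so dub cannot silently fall back to DMD
ldc2 --version
dub build --build=release --compiler=ldc2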


Next, the multithreading issue:

Try:

import vibe.vibe;

void main()
{
	auto settings = new HTTPServerSettings;
	settings.port = 9292;
	settings.bindAddresses = ["::1", "0.0.0.0"];

	// spread accepted connections across all worker threads
	settings.options |= HTTPServerOption.distribute;
	// force 4 workers instead of relying on the per-CPU auto-detection
	setupWorkerThreads(4);

	listenHTTP(settings, &hello); // hello is your existing request handler
	runApplication();
}

Try this code. I suspect that your worker thread count is not being set to 4 but only to 1. I think there is a bug with the threads-per-CPU detection on your system, because distribute really needs to use all 4 CPU cores and not just one. You can sanity-check this as shown below.
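A rough way to check (a sketch; the process name is a placeholder for whatever your server binary is called, and the ps/pgrep usage is Linux-flavoured):

# cores the threads-per-CPU detection should be seeing
nproc
# thread count of the running server process
ps -o nlwp= -p "$(pgrep -f hello_vibed)"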

Wrk on localhost

To keep wrk's own load from influencing your servers, you should not run it on the same system.
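For example (the host name is hypothetical; any second machine on the same network will do):

# run wrk from a separate box so the load generator and the server
# do not compete for the same cores
wrk -t4 -c100 -d30s http://bench-server.local:9292/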

Maybe you can also include D + Vibe.d

import vibe.vibe;

void main()
{
	listenHTTP(":8080", &handleRequest);
	runApplication();
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
{
	if (req.path == "/")
		res.writeBody("Hello, World!");
}

The default DMD compiler produces the slowest binaries; LDC is fast.

Alternative NodeJS HTTP Server

I stumbled upon this project, which serves as a replacement for the standard library's http module. It claims high performance and efficiency.

Also, could you add a second benchmark for low-end specs (1-core CPU + 512MB RAM)? I guess that would say a lot, since most individuals/devs (except maybe startups) won't go for a full-blown 16GB server until they see significant traffic.

ASP.NET Core

Would you accept a PR with an ASP.NET Core server?

Rust ?

Hi!

It would be nice to maybe include Rust in this little benchmark.

The standard library does not ship with an HTTP server, but the folks behind the Iron framework are doing a very decent job. 😄

Multi-threaded httpbeast

The Nim compiler does not enable multi-threading support by default, but that doesn't mean Nim or httpbeast is designed to be single-threaded. It is very easy to turn multi-threading on by adding a single option, --threads:on.

nim c -d:release --threads:on servers/httpbeast_server.nim

Just like most of the languages/frameworks here are benchmarked with concurrency enabled, httpbeast should be too.

Node Version

Hello!

I like the idea behind this project. I've seen some people requesting this recently; the last comparison I saw Node in was pre-io.js (but, admittedly, I haven't looked at performance reviews a lot).

However, Node v4 is the current LTS version. It would be beneficial and fair to also run the latest of Node's v6 branch: that's what the majority of software is running, aside from a smaller group of enterprise solutions that need the extreme stability of the LTS release. A possible way to do that is sketched below.

If you have any questions, I'd be happy to answer or try to pull in appropriate people to answer them!
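For instance (a sketch assuming nvm is installed; the server path is a placeholder for the repo's actual Node entry point):

# run the same server under the v4 LTS line and the v6 line,
# benchmarking and stopping the server between the two runs
nvm install 4 && nvm exec 4 node servers/node_server.js
nvm install 6 && nvm exec 6 node servers/node_server.js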

I got 1285049 requests in 30.05s for Plug server

Running 30s test @ http://127.0.0.1:9292
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.34ms    1.99ms  67.24ms   90.90%
    Req/Sec    10.75k     2.04k   17.82k    69.92%
  1285049 requests in 30.05s, 190.19MB read
Requests/sec:  42761.97
Transfer/sec:      6.33MB

I think there is something wrong with the results for Plug.
I'm running this test on a 2011 i7 MacBook Pro.

Test results qualification

Hi, thank you for the detailed description of the benchmark. I've reproduced it step by step, but got a different ratio between the results. As I understand it, you ran the tests on different days, so maybe you could rerun all of them at once?
Besides, I noticed that the response lengths, headers included, aren't the same, and some of the servers don't even compute the Date header.

Here are my results:

App Server              Throughput (req/s)   Latency in ms (avg/stdev/max)   Response in bytes
Plug with Cowboy        43564.50             2.24/0.41/ 15.84                172
Rack with Puma          32947.19             0.39/0.11/ 17.38                 72
Nim asynchttpserver     60916.85             1.64/0.28/ 26.79                 47
Node Cluster            44437.09             2.27/0.92/ 58.59                139
Ring with Jetty         45946.47             2.39/3.81/129.38                159
Rust Hyper              55010.47             1.81/0.30/  4.90                 84
Gunicorn with Meinheld  53362.82             1.87/0.30/  5.36                154
Servlet3 with Jetty     50938.14             2.37/5.43/147.65                154
Colossus                59850.19             1.67/0.28/  7.36                 72
GO ServeMux             49734.60             1.99/0.42/  9.39                123
Crystal HTTP            67656.15             1.48/0.19/  4.56                 95

Add launcher scripts and benchmark logging

It would be nice to have launcher scripts for each of the servers (e.g. server.sh and server.bat) that would (a rough sketch follows the list):

  • start server
  • benchmark with wrk
  • add benchmark results to common log
  • stop server
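Something along these lines for server.sh, perhaps (a sketch only; the binary path, port, and wrk parameters are placeholders, not the repo's actual layout):

#!/bin/sh
# start the server in the background and remember its pid
./servers/my_server &
SERVER_PID=$!
sleep 2  # give the server time to boot

# benchmark with wrk and append the results to a common log
echo "== my_server $(date) ==" >> results.log
wrk -t4 -c100 -d30s http://127.0.0.1:9292/ >> results.log

# stop the server
kill "$SERVER_PID"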

Things to improve

Hello,

I think there are a few things that would improve this benchmark:

  • First, the payload for the Elixir version is Hello world! (12 bytes), but the payload for the other tests is Hello world (11 bytes). Not a big deal ;-)
  • Then, on the same topic, looking at Transfer/sec is interesting. It seems Node and Elixir are sending heavier HTTP headers than Ruby and Go, for example.
  • For Elixir, I think it would be fairer to use the Hello World example from Plug instead of a router. The others don't have a router.
  • Last but not least, HTTP pipelining should be disabled for all tests; see wg/wrk#197 for some details (and the sketch after this list).
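For what it's worth, stock wrk never pipelines on its own; pipelining only happens when a Lua script drives it, so the fix is simply to benchmark without one (the parameters below are illustrative):

# pipelined run, i.e. what to avoid for this comparison:
#   wrk -t4 -c100 -d30s -s scripts/pipeline.lua http://127.0.0.1:9292/
# plain run; without a script each connection sends one request at a time:
wrk -t4 -c100 -d30s http://127.0.0.1:9292/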

update dartlang to v2

Thanks for this. I haven't looked at everything yet, but can you update the Dart benchmark, especially, to Dart 2? :)

Dart2Native benchmark

Dart2Native has been released. Please update your benchmark to use Dart2Native in place of the Dart VM.
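Presumably something like this (the source path is a placeholder for the repo's actual Dart server file):

# compile ahead-of-time to a self-contained native binary, then run it
dart2native servers/dart_server.dart -o servers/dart_server
./servers/dart_server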

Bootstrap of plug

Can you please clarify how you ran the HTTP server using Elixir?

I started Elixir using the iex interactive console, as described in the Plug README.

Do I understand correctly that you did this?

$ iex -S mix
iex> c "path/to/file.ex"
[MyPlug]
iex> {:ok, _} = Plug.Adapters.Cowboy.http MyPlug, []
{:ok, #PID<...>}
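If so, a less interactive alternative (assuming the server is set up as a regular Mix project with Plug started under its supervision tree) would be:

# start the application supervised, without an interactive shell,
# and keep it running in the foreground
mix run --no-halt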

Clojure analysis

Concurrency and parallelism

Clojure leverages the JVM to deliver parallelism: indeed, it parallelizes better than Java, since it uses all of the available cores.

I don't think you can say that at all, actually. It's using more of the CPU, but possibly for additional garbage collection, or as a property of the differing Jetty configurations. Ring Jetty is built on top of Jetty + Servlets: it's functionally equivalent, but adds more work (e.g. allocations) on top of the raw Java/Jetty/Servlet3 test, as far as this simple "Hello World" goes.

The configuration of Jetty also differs between the two tests:

https://github.com/ring-clojure/ring/blob/master/ring-jetty-adapter/src/ring/adapter/jetty.clj#L132-L142

Ring supplies its own defaults for Jetty, whereas your Jetty test leaves Jetty's default configuration unchanged. For reference, Jetty's default configuration creates a thread pool with a maximum of 200 threads (vs. Ring's 50). At the default thread stack size (1 MB on most platforms), an additional 150 threads (that is, if the pool actually creates that many; it creates them as necessary) is an additional 150 MB of memory, which can account for the extra memory (e.g. if Java/Jetty/Servlets is using 85 threads at steady state and Clojure/Ring is using 50). In general you shouldn't really expect a stable or representative RSS from Java, as it will hold onto memory it claims, up to the max heap size. Run your benchmarks for longer and the footprint may well increase unless you cap the heap size.

Fundamentally, with the correct rigor, you're just benchmarking the Ring library. But the analysis presented is incorrect.
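To make that concrete (a hedged sketch; the PIDs and the jar name are placeholders), you could compare the live thread counts of the two JVMs and cap the heap so the footprints are comparable:

# count the thread headers in a thread dump of each running JVM
jstack "$JETTY_PID" | grep -c '^"'
jstack "$RING_PID" | grep -c '^"'

# cap the heap so RSS cannot keep growing toward the default maximum
java -Xmx256m -jar servlet3-jetty.jar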

Test accuracy could be improved

Running the client threads and server threads on the same machine is costing you valuable time during context switches, especially for platforms like OTP.

In other words (as @evadne puts it):

  1. The observer effect.
  2. NUMA.
  3. Context switching.
  4. Lack of core affinity.

... are all affecting your results, probably generating poor performance across all benchmarks, not just OTP!
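On Linux, points 3 and 4 can at least be mitigated by pinning the two processes to disjoint cores (a sketch; the server binary is a placeholder):

# pin the server to cores 0-1 and the load generator to cores 2-3
taskset -c 0,1 ./servers/my_server &
taskset -c 2,3 wrk -t2 -c100 -d30s http://127.0.0.1:9292/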
