costajob / app-servers
App Servers benchmarked for: Ruby, Python, JavaScript, Dart, Elixir, Java, Crystal, Nim, GO, Rust
Finally some time at the PC. I took a quick look at your configuration and found it odd that the memory usage was so high for D. After checking the command you used to run it, it dawned on me that you're running in debug mode.
Try:
dub build --build=release --compiler=ldc2
The compiler switch may not be needed if you only have LDC installed. But if you have both installed, it's possible the build falls back to DMD (which compiles quickly but produces slower code), and again in debug mode.
Next issue with the multithreading:
Try:
import vibe.vibe;

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 9292;
    settings.bindAddresses = ["::1", "0.0.0.0"];
    // Spread accepted connections across the worker threads.
    settings.options |= HTTPServerOption.distribute;
    setupWorkerThreads(4);
    listenHTTP(settings, &hello);
    runApplication();
}
Try this code. I suspect that your worker thread count is not being set to 4 but only to 1. I think there is a bug with the thread-per-CPU detection on your system, because distribute really needs to use all 4 CPU cores, not just one.
Please could you add PHP support? I develop PHP applications all the time with the Slim Framework, but right now I am rewriting an application in Dart using the Angel framework, and I would like to see more performance comparisons between PHP and Dart.
There are several server implementations in PHP, but these are the best known:
https://reactphp.org/
https://github.com/youngj/httpserver
https://github.com/amphp/http-server
To keep wrk's load from influencing your servers, you should not run it on the same system.
Hi,
The default for dotnet run is to build in Debug, as per https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-run?tabs=netcore21. Would you consider doing a dotnet run --configuration Release and seeing if it gives better results?
Thank you.
import vibe.vibe;

void main()
{
    listenHTTP(":8080", &handleRequest);
    runApplication();
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
{
    if (req.path == "/")
        res.writeBody("Hello, World!");
}
The default dmd compiler is the slowest. LDC is fast.
I stumbled upon this, which serves as a replacement for the standard library's http module. It claims high performance and efficiency.
Also, could you add a second benchmark for low-end specs (1-core CPU + 512MB)? I guess that would say a lot, since most individual devs (except maybe startups) won't go for a full-blown 16GB server until they see significant traffic.
Would you accept a PR with an ASP.NET Core server?
Could you upgrade to Dart 2.8 and compare the JIT vs AOT versions?
Hi!
It would be nice to maybe include Rust in this little benchmark.
The standard lib does not ship with an HTTP server, but the folks of the Iron framework are doing a very decent job.
The Nim compiler does not turn on multi-threaded support by default, but that doesn't mean Nim or httpbeast is designed to be single-threaded. It is very easy to turn on multi-threading by adding one option, --threads:on:
nim c -d:release --threads:on servers/httpbeast_server.nim
Just like most of the other languages/frameworks here are concurrent, httpbeast should be too.
Please add results for production Dart.
Hello!
I like the idea behind this project - I've seen some people requesting this recently, and the last benchmark I saw Node in was pre-io.js (though, admittedly, I haven't looked at performance reviews a lot).
However, Node v4 - the version you're using - is LTS. It would be beneficial and fair to also run the latest of Node's v6 branch - that's what the majority of software is running, save for a smaller group of enterprise solutions that need the extreme stability of the LTS version.
If you have any questions, I'd be happy to answer or try to pull in appropriate people to answer them!
Running 30s test @ http://127.0.0.1:9292
4 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.34ms 1.99ms 67.24ms 90.90%
Req/Sec 10.75k 2.04k 17.82k 69.92%
1285049 requests in 30.05s, 190.19MB read
Requests/sec: 42761.97
Transfer/sec: 6.33MB
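As a sanity check on the summary above: wrk's Requests/sec line is essentially the total request count divided by the elapsed time it reports (the exact wrk invocation isn't shown in the thread; judging from the header it was something like wrk -t4 -c100 -d30s http://127.0.0.1:9292, but that's an assumption).

```python
# Recompute wrk's Requests/sec from the totals it printed above.
requests = 1285049
elapsed_s = 30.05
print(requests / elapsed_s)  # ~42763.7; wrk reports 42761.97 from a more precise elapsed time
```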
I think there is something wrong with the results of Plug.
I'm running this test on a 2011 i7 MacBook Pro.
Hi, thank you for the detailed description of the benchmark. I've reproduced it step by step, but got a different ratio of results. As I understand it, you ran the tests on different days, so maybe you could rerun them all at once?
Besides, I noticed that the response length, headers included, isn't the same across servers. And some of the servers didn't even set the Date header.
Here are my results:
App Server | Throughput (req/s) | Latency in ms (avg/stdev/max) | Response in bytes |
---|---|---|---|
Plug with Cowboy | 43564.50 | 2.24/0.41/ 15.84 | 172 |
Rack with Puma | 32947.19 | 0.39/0.11/ 17.38 | 72 |
Nim asynchttpserver | 60916.85 | 1.64/0.28/ 26.79 | 47 |
Node Cluster | 44437.09 | 2.27/0.92/ 58.59 | 139 |
Ring with Jetty | 45946.47 | 2.39/3.81/129.38 | 159 |
Rust Hyper | 55010.47 | 1.81/0.30/ 4.90 | 84 |
Gunicorn with Meinheld | 53362.82 | 1.87/0.30/ 5.36 | 154 |
Servlet3 with Jetty | 50938.14 | 2.37/5.43/147.65 | 154 |
Colossus | 59850.19 | 1.67/0.28/ 7.36 | 72 |
GO ServeMux | 49734.60 | 1.99/0.42/ 9.39 | 123 |
Crystal HTTP | 67656.15 | 1.48/0.19/ 4.56 | 95 |
It would be nice to have launcher scripts for each of the servers (e.g. server.sh and server.bat) that would:
Hello,
I think there are some things to improve this benchmark:
- One of the servers responds with Hello world! (12 bytes), but the payload for the other tests is Hello world (11 bytes). Not a big deal ;-)
- Take a look at Pony (http://ponylang.org) - they would be very interested if some toolstack can beat them.
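The one-byte payload difference is easy to confirm:

```python
# "Hello world!" vs "Hello world" -- the trailing '!' adds one byte.
print(len(b"Hello world!"))  # 12
print(len(b"Hello world"))   # 11
```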
Thanks for this! I have not looked at it all, but can you update, especially Dart, to v2? :)
Dart2Native is released. Please update your benchmark and use Dart2Native in place of the Dart VM.
Nim's httpbeast is compiled without --threads:on. Using this flag would surely improve its performance.
Please add Bjoern, which has a lot of benchmarks (on the Internet) suggesting it's faster than Meinheld by a wide margin.
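For reference, Bjoern serves plain WSGI callables, so the hello-world app for this benchmark could look like the sketch below. The bjoern.run() call is shown only in a comment (it assumes bjoern is installed via pip, and the host/port are placeholders); the rest is self-contained and just exercises the WSGI callable directly.

```python
# Minimal WSGI app -- Bjoern (and Meinheld via Gunicorn) can serve a callable like this.
def app(environ, start_response):
    body = b"Hello world"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve it with Bjoern (assumed: pip install bjoern):
#   import bjoern
#   bjoern.run(app, "0.0.0.0", 8080)

# Sanity check without starting any server: invoke the WSGI callable directly.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

chunks = app({}, fake_start_response)
print(captured["status"], b"".join(chunks))  # 200 OK b'Hello world'
```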
Can you please clarify how you ran the HTTP server using Elixir?
I started Elixir using the iex interactive console, as described in the Plug readme.
Do I understand correctly that you did this?
$ iex -S mix
iex> c "path/to/file.ex"
[MyPlug]
iex> {:ok, _} = Plug.Adapters.Cowboy.http MyPlug, []
{:ok, #PID<...>}
Concurrency and parallelism
> Clojure leverages on the JVM to deliver parallelism: indeed it parallelizes better than Java, since it uses all of the available cores.
I don't think you can say that at all, actually. It's using more of the CPU, but possibly for additional garbage collection or a property of the differing Jetty configurations. Ring Jetty is built on top of Jetty+Servlets. It's functionally equivalent but adds more work (e.g. allocations) on top of the raw Java/Jetty/Servlet3 test as far as this simple "Hello World" goes.
The Jetty configuration also differs between the two tests: Ring's Jetty adapter caps its threadpool at 50 threads, whereas your Jetty test keeps Jetty's default configuration, which creates a threadpool with a max of 200 threads (vs. 50). At the default thread stack size (1 MB on most platforms), an additional 150 threads (that is, if the threadpool creates that many threads -- it creates them as necessary) means an additional 150 MB of memory, which can account for the extra memory (e.g. if Java/Jetty/Servlets is using 85 threads at steady state and Clojure/Ring is using 50). In general you shouldn't expect a stable or representative RSS from Java, as it holds onto the memory it claims, up to the max heap size. Run your benchmarks for longer and the footprint may well increase unless you cap the heap size.
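The threadpool arithmetic above, as a quick back-of-the-envelope (assuming the 1 MB default stack size and the 200 vs. 50 max-thread figures mentioned):

```python
# Extra stack reservation if Jetty's default 200-thread pool fills up,
# compared with a 50-thread pool.
stack_mb_per_thread = 1      # typical default thread stack size
extra_threads = 200 - 50
print(extra_threads * stack_mb_per_thread)  # 150 (MB)
```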
Fundamentally, with the correct rigor you're just benchmarking the Ring library. But the analysis presented is incorrect.
Running the client threads and server threads on the same machine is costing you valuable time during context switches, especially for platforms like OTP.
In other words (as @evadne puts it):
... are all affecting your results, probably generating poor performance across all benchmarks, not just OTP!