Scalable, asynchronous IO coroutine-based handling (aka MIO COroutines).

Using mioco you can handle scalable, asynchronous mio-based IO with a set of synchronous-IO handling functions. Based on asynchronous mio events, mioco will cooperatively schedule your handlers.

You can think of mioco as Node.js for Rust, or as green threads on top of mio.
mioco is still very experimental, but already usable. For a real-life project using mioco, see colerr.
Read the Documentation for details.

If you need help, try asking on #mioco on gitter.im. If you still have no luck, try the Rust user forum. To report a bug or request a feature, use GitHub issues.
Note: you must be using a nightly Rust release. If you're using multirust, which is highly recommended, switch with the `multirust default nightly` command.

```
cargo build --release
make echo
```
Beware: this is a very naive comparison! I tried to run it fairly, but I might have missed something. Also, no effort was spent on optimizing either mioco or the other tested TCP echo implementations.
In thousands of requests per second:
|       | bench1 | bench2 |
|-------|--------|--------|
| libev | 183    | 225    |
| node  | 37     | 42     |
| mio   | TBD    | TBD    |
| mioco | 157    | 177    |
Server implementations tested:

- libev - https://github.com/dpc/benchmark-echo/blob/master/server_libev.c ; Note: this implementation "cheats" by waiting only for read events, which works in this particular scenario.
- node - https://github.com/dpc/node-tcp-echo-server
- mio - TBD. See: hjr3/mob#1
- mioco - https://github.com/dpc/mioco/blob/master/examples/echo.rs
Benchmarks used:

- bench1 - https://github.com/dpc/benchmark-echo ; `PARAMS='-t64 -c10 -e10000 -fdata.json'`
- bench2 - https://gist.github.com/dpc/8cacd3b6fa5273ffdcce ; `GOMAXPROCS=64 ./tcp_bench -c=128 -t=30 -a=""`
Machine used:
- i7-3770K CPU @ 3.50GHz, 32GB DDR3 1800MHz, some basic overclocking, Fedora 21