
grpc-rust's Introduction

grpc-rust

The project is dead; long live the new project

The gRPC team is developing a shiny new gRPC implementation in Rust; the grpc crate name has been transferred to them.

This code is no longer maintained.

Original README.md


Rust implementation of the gRPC protocol, under development.

Some development questions are answered in the FAQ.

Current status

It basically works, but is not suitable for production use.

See grpc-examples/src/bin/greeter_{client,server}.rs. It can be tested, for example, against the Go implementation:

# start greeter server implemented in rust
$ cargo run --bin greeter_server

# ... or start greeter server implemented in go
$ go get -u google.golang.org/grpc/examples/helloworld/greeter_server
$ greeter_server

# start greeter client implemented in rust
$ cargo run --bin greeter_client rust
> message: "Hello rust"

# ... or start greeter client implemented in go
$ go get -u google.golang.org/grpc/examples/helloworld/greeter_client
$ greeter_client rust
> 2016/08/19 05:44:45 Greeting: Hello rust

Route guide

The route guide example implementation for grpc-rust is in the grpc-examples folder.

How to generate rust code

There are two ways to generate Rust code from .proto files:

Invoke protoc programmatically with protoc-rust-grpc crate

(Recommended)

Have a look at the readme of the protoc-rust-grpc crate.

With protoc command and protoc-gen-rust-grpc plugin

See the readme for the protoc-gen-rust-grpc plugin.

Use generated protos in your project

In Cargo.toml:

[dependencies]
grpc            = "~0.8"
protobuf        = "2.23"
futures         = "~0.3"

[build-dependencies]
protoc-rust-grpc = "~0.8"
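With those dependencies in place, code generation is typically driven from a build.rs. A minimal sketch, assuming the Codegen builder described in the protoc-rust-grpc readme; the proto paths are placeholders, and protoc must be on PATH:

```rust
// build.rs — sketch only; check the protoc-rust-grpc readme for the exact API.
fn main() {
    protoc_rust_grpc::Codegen::new()
        .out_dir("src")                       // where generated .rs files go
        .input("proto/helloworld.proto")      // placeholder .proto path
        .include("proto")                     // placeholder include dir
        .rust_protobuf(true)                  // also generate message code, not only gRPC stubs
        .run()
        .expect("protoc-rust-grpc");
}
```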

TODO

  • Fix performance
  • More tests
  • In particular, add more compatibility tests; they live in the interop directory
  • Fix all TODO in sources

Related projects

  • grpc-rs — alternative implementation of gRPC in Rust, a wrapper around the C++ implementation
  • httpbis — implementation of HTTP/2, which is used by this implementation of gRPC
  • rust-protobuf — implementation of Google Protocol Buffers


grpc-rust's Issues

Client doesn't return flow after error

I found a deadlock in the sync client when I send an invalid message (one without a required field).
I got the error protobuf error: not all message fields set here in the code.
It seems I am deadlocked here, because no trace is printed below.

Almost the same happens when I try to connect to a server which hasn't started yet and then call something (it locks in call). I don't know the correct gRPC behaviour, but in my opinion the client should reconnect itself and return an error after a few attempts.

Please add a license

Hello! Would it be possible to add a license? I would recommend MIT/Apache 2.0 like the rest of the Rust ecosystem, but please choose whatever you think is appropriate :)

Start Http2Server manually

Hello @stepancheg

I saw that Http2Server::new() creates an Http2Server and starts it immediately.

impl Http2Server {
    pub fn new<S>(port: u16, service: S) -> Http2Server
        where S : HttpService
    {
        let listen_addr = ("::", port).to_socket_addrs().unwrap().next().unwrap();

        let (get_from_loop_tx, get_from_loop_rx) = mpsc::channel();

        let state: Arc<Mutex<HttpServerState>> = Default::default();

        let state_copy = state.clone();

        let join_handle = thread::spawn(move || { // Http2Server starts here.
            run_server_event_loop(listen_addr, state_copy, service, get_from_loop_tx);
        });

        let loop_to_server = get_from_loop_rx.recv().unwrap();

        Http2Server {
            state: state,
            loop_to_server: loop_to_server,
            thread_join_handle: Some(join_handle),
        }
    }
    // ...
}

Could you separate run_server_event_loop from new? Then code like the following
would be less confusing, I think.

Confusing 😕:

fn main() {
    let _server = LongTestsAsyncServer::new(23432, LongTestsServerImpl {});
    // Why has the server already started?
    loop {
        thread::park();
    }
}

Much clearer 😀:

fn main() {
    let server = LongTestsAsyncServer::new(23432, LongTestsServerImpl {});
    server.run()
}

Thanks!
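The split being requested can be sketched with std types only. Http2Server here is a stand-in, not the real type, and it binds "127.0.0.1" for portability where the real code binds "::":

```rust
use std::net::{TcpListener, ToSocketAddrs};

// Illustrative only: `new` just binds the listener and builds the value;
// nothing runs until `run` is called explicitly.
struct Http2Server {
    listener: TcpListener,
}

impl Http2Server {
    fn new(port: u16) -> std::io::Result<Http2Server> {
        let addr = ("127.0.0.1", port).to_socket_addrs()?.next().unwrap();
        Ok(Http2Server { listener: TcpListener::bind(addr)? })
    }

    fn port(&self) -> u16 {
        self.listener.local_addr().unwrap().port()
    }

    // The explicit start step the issue asks for; blocks the calling thread.
    fn run(self) -> std::io::Result<()> {
        for stream in self.listener.incoming() {
            let _connection = stream?;
            // hand the connection to the event loop...
        }
        Ok(())
    }
}

fn main() {
    // Port 0 asks the OS for any free port.
    let server = Http2Server::new(0).unwrap();
    // Constructed but not serving yet; `server.run()` would start it.
    assert!(server.port() > 0);
}
```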

mutable implementation of service trait

It seems to me that structs implementing a service trait often need to be mutable, for example in the route-guide protobuf example, yet the trait requires a non-mutable reference to self. It's an interesting question when the reference should be mutable — has any thought been put into this? What does the timeline look like?
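One way to square a &self trait with mutable state is interior mutability. A minimal sketch with a hypothetical route-guide-like trait (none of these names come from the generated code):

```rust
use std::sync::Mutex;

// Hypothetical trait taking `&self`, as the generated service traits do;
// a handler that must mutate state can use interior mutability instead.
trait RouteGuide {
    fn record_note(&self, note: &str) -> usize;
}

struct RouteGuideImpl {
    notes: Mutex<Vec<String>>,
}

impl RouteGuide for RouteGuideImpl {
    fn record_note(&self, note: &str) -> usize {
        // Lock, mutate, and report how many notes we have recorded.
        let mut notes = self.notes.lock().unwrap();
        notes.push(note.to_string());
        notes.len()
    }
}

fn main() {
    let svc = RouteGuideImpl { notes: Mutex::new(Vec::new()) };
    assert_eq!(svc.record_note("first"), 1);
    assert_eq!(svc.record_note("second"), 2);
}
```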

Check in Cargo.lock

I'm having trouble building - would you mind checking in your Cargo.lock file so I can use the same versions of libraries? Specifically I'm currently getting the following error when building:

src/futures_misc/task_data.rs:5:5: 5:28 error: unresolved import `futures::task::TaskData`. There is no `TaskData` in `futures::task` [E0432]
src/futures_misc/task_data.rs:5 use futures::task::TaskData;

Pass context to rpc handlers

It would be nice to be able to pass some arbitrary type to the rpc handlers from the server, in order to gain access to some global state across the entire application. I've been looking through the code, but I'm not seeing any way to do this yet.

If it isn't a feature yet, it seems like it would require a modification to the MethodHandlers in server.rs?
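Until the framework passes a context, state can be carried on the struct that implements the service. A sketch with hypothetical names (AppState and GreeterImpl are not from the crate):

```rust
use std::sync::Arc;

// Hypothetical global state shared across the application.
struct AppState {
    greeting: String,
}

// The service implementation owns an Arc to the shared state, so every
// handler method can reach it through `self`.
struct GreeterImpl {
    state: Arc<AppState>,
}

impl GreeterImpl {
    fn say_hello(&self, name: &str) -> String {
        format!("{} {}", self.state.greeting, name)
    }
}

fn main() {
    let state = Arc::new(AppState { greeting: "Hello".to_string() });
    let svc = GreeterImpl { state: state.clone() };
    assert_eq!(svc.say_hello("rust"), "Hello rust");
}
```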

serve web content and grpc?

It is cool that this library now builds on httpbis. Is it possible to serve both web content and gRPC content on the same port? This is currently possible only in Go. grpc/grpc#8043

I want to make a single page application that uses gRPC for communication. The single page app just needs to serve an index.html page and some css and image files.
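A common approach to this is routing on the request's content-type, since gRPC requests over HTTP/2 carry a content-type starting with "application/grpc". A toy sketch of just that dispatch decision:

```rust
// Toy sketch: route gRPC requests by content-type and send everything else
// to a plain web handler on the same port.
fn route(content_type: &str) -> &'static str {
    if content_type.starts_with("application/grpc") {
        "grpc"
    } else {
        "web"
    }
}

fn main() {
    assert_eq!(route("application/grpc"), "grpc");
    assert_eq!(route("application/grpc+proto"), "grpc");
    assert_eq!(route("text/html"), "web"); // index.html, css, images
}
```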

Support for additional headers

At the moment it looks like call_impl supports a fixed set of headers when starting a request. The wire format outlines a few others: http://www.grpc.io/docs/guides/wire.html. In particular, I'm wondering whether there is a way to start a request with the authorization header, to authenticate requests. If not, would you be open to a pull request adding that support?
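Conceptually the change would extend the fixed header list with caller-supplied metadata. A sketch with hypothetical names (request_headers is not the real call_impl API):

```rust
// Hypothetical shape only: the fixed pseudo-headers stay first, and
// caller-supplied metadata such as `authorization` is appended after.
fn request_headers(path: &str, extra: &[(&str, &str)]) -> Vec<(String, String)> {
    let mut headers = vec![
        (":method".to_string(), "POST".to_string()),
        (":path".to_string(), path.to_string()),
        ("content-type".to_string(), "application/grpc".to_string()),
    ];
    headers.extend(extra.iter().map(|(k, v)| (k.to_string(), v.to_string())));
    headers
}

fn main() {
    let headers = request_headers(
        "/helloworld.Greeter/SayHello",
        &[("authorization", "Bearer token123")], // token is a placeholder
    );
    assert!(headers
        .iter()
        .any(|(k, v)| k == "authorization" && v == "Bearer token123"));
}
```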

Document how to use grpc-compiler

It is not immediately clear how to compile gRPC definitions to Rust; the docs should be updated. I can glean some of it from grpc-examples/gen-rs.sh, but it could be documented better.

I may be able to submit a pull request once I figure this out (disclaimer: I am a noob in Rust).

PS: can this be done via cargo?

Make openssl an optional dependency for real

Hi! Currently the openssl dependency is specified as openssl = { version = "0.8", optional = true }, but in fact it is not optional, because httpbis is not optional and it depends on native-tls :)

Very excited to see this!

Due to a lack of options we started with C++ at work, but it will be great to be able to put some Rust-implemented services into production at some point.

Once the project is far enough along to have some TODO items, I'd be happy to contribute.

Java client hangs after receiving certain amount of data from streaming response

Hi! I've written a simple client/server pair in both Kotlin and Rust (four binaries in total), and all combinations work fine except for Rust server + Kotlin client: the client just hangs after receiving about 70k of data. There must be a bug somewhere, but I don't know if it is within my code, rust grpc, or java grpc :(

Here's the code: https://github.com/matklad/kt-rs-grpc.

There are two rpc requests:

The client then subscribes for updates and sends 100 events, expecting to receive 100 updates: https://github.com/matklad/kt-rs-grpc/blob/master/rs/client.rs#L14-L24

It works except for the case when the server is in Rust and the client is in Kotlin. The client then does not receive all the messages:

λ ./build/install/kt/bin/client
recv 0 size 14336
recv 1 size 28672
recv 2 size 43008
recv 3 size 57344
recv 4 size 71680

If I vary the message size, the client receives a different number of updates, but the total encoded size always comes to about 70 kilobytes.

Could not find `bytesx` in `httpbis`.

$ cargo build
   Compiling grpc v0.1.7
error[E0432]: unresolved import `httpbis::bytesx::bytes_extend_with`
  --> /Users/adodd/.cargo/registry/src/github.com-1ecc6299db9ec823/grpc-0.1.7/src/grpc_http_to_response.rs:16:5
   |
16 | use httpbis::bytesx::bytes_extend_with;
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Could not find `bytesx` in `httpbis`

error: aborting due to previous error

error: Could not compile `grpc`.

To learn more, run the command again with --verbose.

Looks like some very recent changes to httpbis may have caused this. I can't seem to figure out how to get Cargo to use an older version of the dependency.

interop: unimplemented_service fails

/Users/stevej% ./go-grpc-interop-client -use_tls=false  -server_port=60011 -test_case=unimplemented_service
2017/01/25 12:59:27 transport: http2Client.notifyError got notified that the client transport was broken EOF.
2017/01/25 12:59:27 &{0xc420180500}.UnimplementedCall() = _, Internal, want _, Unimplemented

Server log

thread '<unnamed>' panicked at 'unknown method: /grpc.testing.UnimplementedService/UnimplementedCall', ../src/libcore/option.rs:705
stack backtrace:
   1:        0x10b4e0f4a - std::sys::imp::backtrace::tracing::imp::write::h917062bce4ff48c3
   2:        0x10b4e42bf - std::panicking::default_hook::{{closure}}::h0bacac31b5ed1870
   3:        0x10b4e2b4f - std::panicking::default_hook::h5897799da33ece67
   4:        0x10b4e3186 - std::panicking::rust_panic_with_hook::h109e116a3a861224
   5:        0x10b4e3024 - std::panicking::begin_panic::hbb38be1379e09df0
   6:        0x10b4e2f42 - std::panicking::begin_panic_fmt::h26713cea9bce3ab0
   7:        0x10b4e2ea7 - rust_begin_unwind
   8:        0x10b50ee10 - core::panicking::panic_fmt::hcfbb59eeb7f27f75
   9:        0x10b50ee7d - core::option::expect_failed::h530ede3d41450938
  10:        0x10b15f393 - <core::option::Option<T>>::expect::hca386619a6598654
  11:        0x10b16dc82 - grpc::server::ServerServiceDefinition::find_method::h9946d6be813fe143
  12:        0x10b16dd2f - grpc::server::ServerServiceDefinition::handle_method::hfff63ef4b38e0c98
  13:        0x10b16e58d - <grpc::server::GrpcHttpServerHandlerFactory as httpbis::http_common::HttpService>::new_request::h686d3a0bcf9a8216
  14:        0x10b08dac7 - <httpbis::server_conn::ServerInner<F>>::new_request::h1352ddd81d84d1bb
  15:        0x10b08d51a - <httpbis::server_conn::ServerInner<F>>::new_stream::h8b12a51798f4a8c6
  16:        0x10b08e020 - <httpbis::server_conn::ServerInner<F>>::get_or_create_stream::h74f6195dda1b1983
  17:        0x10b128b78 - <httpbis::server_conn::ServerInner<F> as httpbis::http_common::LoopInner>::process_headers_frame::hc8a12f83d8eebbb5
  18:        0x10b0e9f06 - httpbis::http_common::LoopInner::process_stream_frame::h4f0563f1432aae44
  19:        0x10b0e8f90 - httpbis::http_common::LoopInner::process_raw_frame::h72512b2fdbea3350
  20:        0x10b15c7c9 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::{{closure}}::h8ac4e6a9a470fc80
  21:        0x10b15bc42 - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::{{closure}}::hd8418ff3a647b2e0
  22:        0x10b15bb15 - <futures::task_impl::task_rc::TaskRc<A>>::with::{{closure}}::h5957640af0198563
  23:        0x10b0e78ee - futures::task_impl::with::h4d500208dd9714d1
  24:        0x10b091ef8 - <futures::task_impl::task_rc::TaskRc<A>>::with::h5788a6804680fb11
  25:        0x10b09dbde - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::h58f5fe718ee762a6
  26:        0x10b09e248 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::hdcbf98d7fedea675
  27:        0x10b14ca30 - <httpbis::http_common::ReadLoopData<I, N>>::read_process_frame::{{closure}}::hdead651237ee6cdc
  28:        0x10b14c8d9 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::{{closure}}::ha5d3b6079c0ac19d
  29:        0x10b03601c - <core::result::Result<T, E>>::map::haa30dcaec57e0afb
  30:        0x10b14c765 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::h5aff83becb316430
  31:        0x10b0af579 - <futures::future::chain::Chain<A, B, C>>::poll::h934fd2c32af9196a
  32:        0x10afad69b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h624afae44e01cb9d
  33:        0x10b097360 - <Box<F> as futures::future::Future>::poll::h5663703284cfc5b0
  34:        0x10b0fa68f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h60a02ff1edd682f7
  35:        0x10b12628e - <futures::future::loop_fn::LoopFn<A, F> as futures::future::Future>::poll::h9d909abadeaa99b6
  36:        0x10b0974e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  37:        0x10b0883cf - <futures::future::join::MaybeDone<A>>::poll::h8c350fc7330571eb
  38:        0x10b0ff23b - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h257a1f5590ca58d7
  39:        0x10b087a6f - <futures::future::join::MaybeDone<A>>::poll::h3be9cb5a48e44920
  40:        0x10b0ff61e - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h7b93ebbe7fc432e4
  41:        0x10b0f9e3f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h0b7182e2ef4cbfe9
  42:        0x10b0b2158 - <futures::future::chain::Chain<A, B, C>>::poll::hba6b3744f00ea1a3
  43:        0x10afad81b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h8559062ece4a4531
  44:        0x10b0ae3de - <futures::future::chain::Chain<A, B, C>>::poll::h8f1d4633982e819c
  45:        0x10b12ee8b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::hb174db5a4c6c7c90
  46:        0x10b0974e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  47:        0x10b0b0c8e - <futures::future::chain::Chain<A, B, C>>::poll::haff940f1c66c0cd6
  48:        0x10b12ee4b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::h590b8b9950fe08d8
  49:        0x10b126c17 - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::h215309a7cb9600ab
  50:        0x10b382641 - <Box<F> as futures::future::Future>::poll::h0bf7c3a7376c0fb7
  51:        0x10b39924c - <futures::task_impl::Spawn<F>>::poll_future::{{closure}}::h25b21ab6d3e3c43c
  52:        0x10b39934e - <futures::task_impl::Spawn<T>>::enter::{{closure}}::hbd6ae331b59e3928
  53:        0x10b3995d5 - futures::task_impl::set::{{closure}}::h24e11cc8086d0707
  54:        0x10b3723db - <std::thread::local::LocalKey<T>>::with::hf28f9eabb1a35cb7
  55:        0x10b3879e7 - futures::task_impl::set::h37e185db962d9e2c
  56:        0x10b3702b3 - <futures::task_impl::Spawn<T>>::enter::hef2312ed513d9ba9
  57:        0x10b36fe12 - <futures::task_impl::Spawn<F>>::poll_future::hf33a49ae53a9d9f4
  58:        0x10b398cc6 - tokio_core::reactor::Core::dispatch_task::{{closure}}::h8206a940250cd144
  59:        0x10b36c285 - <scoped_tls::ScopedKey<T>>::set::hc614969f549da2f2
  60:        0x10b393829 - tokio_core::reactor::Core::dispatch_task::h0611009a070b78ad
  61:        0x10b392a68 - tokio_core::reactor::Core::dispatch::hb52174544f2ea540
  62:        0x10b39267e - tokio_core::reactor::Core::poll::h43f4b499158d6eeb
  63:        0x10afbc8a0 - tokio_core::reactor::Core::run::hff9aa04c61170002
  64:        0x10b0eec24 - httpbis::server::run_server_event_loop::h2c4616fe62e930bc
  65:        0x10b15d6da - httpbis::server::HttpServer::new::{{closure}}::h676f8b24bb44f4b0
  66:        0x10b11e00b - <std::panic::AssertUnwindSafe<F> as core::ops::FnOnce<()>>::call_once::h39220ce80d5b6b28
  67:        0x10b01fe96 - std::panicking::try::do_call::h8c72af1898eb6da3
  68:        0x10b4e487a - __rust_maybe_catch_panic
  69:        0x10b01f110 - std::panicking::try::h80e0693523141e0b
  70:        0x10b01d695 - std::panic::catch_unwind::ha44a07ca6ac143a1
  71:        0x10b1577b8 - std::thread::Builder::spawn::{{closure}}::hef856fb921630796
  72:        0x10b086590 - <F as alloc::boxed::FnBox<A>>::call_box::h4ce7df1bead467ca
  73:        0x10b4e2634 - std::sys::imp::thread::Thread::new::thread_start::ha102a6120fc52763
  74:     0x7fffbd9d7aaa - _pthread_body
  75:     0x7fffbd9d79f6 - _pthread_start
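The panic comes from an expect on the method lookup (ServerServiceDefinition::find_method in the trace above). The non-panicking shape would map a failed lookup to the Unimplemented status the interop client expects. A std-only sketch with illustrative types, not grpc-rust's actual definitions:

```rust
// Illustrative types only; the point is the shape of the fix: method lookup
// returns a status instead of panicking on an unknown path.
#[derive(Debug, PartialEq)]
enum GrpcStatus {
    Unimplemented,
}

// Look a method path up in the service definition; an unknown path maps to
// the Unimplemented status rather than a server-side panic.
fn find_method(known: &[&str], path: &str) -> Result<usize, GrpcStatus> {
    known
        .iter()
        .position(|m| *m == path)
        .ok_or(GrpcStatus::Unimplemented)
}

fn main() {
    let methods = ["/grpc.testing.TestService/EmptyCall"];
    assert_eq!(find_method(&methods, methods[0]), Ok(0));
    assert_eq!(
        find_method(&methods, "/grpc.testing.UnimplementedService/UnimplementedCall"),
        Err(GrpcStatus::Unimplemented)
    );
}
```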

cargo build fails with compiler error

While attempting to build from a clean branch, I've run into a compile-time error complaining about a duplicate import.

/Users/stevej/src/grpc-rust% cargo clean                                                       (git)-[master]
/Users/stevej/src/grpc-rust% cargo version                                                     (git)-[master]
cargo 0.13.0-nightly (eca9e15 2016-11-01)
/Users/stevej/src/grpc-rust% cargo build                                                       (git)-[master]
   Compiling cfg-if v0.1.0
   Compiling crossbeam v0.2.10
   Compiling void v1.0.2
   Compiling semver v0.1.20
   Compiling protobuf v1.0.24 (http://github.com/stepancheg/rust-protobuf#dd1b66ae)
   Compiling log v0.3.6
   Compiling futures v0.1.6
   Compiling hpack v0.3.0
   Compiling core-foundation-sys v0.2.2
   Compiling libc v0.2.18
   Compiling bitflags v0.4.0
   Compiling lazycell v0.4.0
   Compiling solicit-fork v0.4.4 (http://github.com/stepancheg/solicit.git#b8a5030e)
   Compiling net2 v0.2.26
   Compiling rustc_version v0.1.7
   Compiling core-foundation v0.2.2
   Compiling security-framework-sys v0.1.9
   Compiling nix v0.7.0
   Compiling security-framework v0.1.9
   Compiling slab v0.3.0
   Compiling scoped-tls v0.1.0
   Compiling num_cpus v1.2.0
   Compiling futures-cpupool v0.1.2
   Compiling mio v0.6.1
   Compiling tokio-core v0.1.1
   Compiling tokio-tls v0.1.0 (https://github.com/tokio-rs/tokio-tls/#3d49e52e)
   Compiling grpc v0.0.2 (file:///Users/stevej/src/grpc-rust)
error[E0252]: a trait named `Stream` has already been imported in this module
 --> src/futures_misc/stream_merge2.rs:2:5
  |
1 | use futures::*;
  |     ----------- previous import of `Stream` here
2 | use futures::stream::Stream;
  |     ^^^^^^^^^^^^^^^^^^^^^^^ already imported

error: aborting due to previous error

error: Could not compile `grpc`.

Multiple services on one server/port

I don't see a way to serve multiple gRPC services (for different 'concerns') on one server/port. Is there a way?
In official libs for Go, at least, it's generatedpackagename.RegisterServiceNameServer(srv, impl).

Can it be done like this?
https://github.com/stepancheg/grpc-rust/blob/master/grpc-examples/src/bin/greeter_server_multi_server.rs#L30-L31

Can you please explain that example — why are two instances of the same service started, when one is enough to serve multiple clients?

Similarly, if one client executable wants to use multiple services, can it reuse the same connection among the different instantiated clients? (In Go it'd be NewServiceNameClient(clientconn).)
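Conceptually, serving several services on one port only requires the server's method dispatch table to hold paths from more than one service, since gRPC method paths are "/package.Service/Method". A std-only sketch (none of these names come from grpc-rust):

```rust
use std::collections::HashMap;

// Illustrative only: one dispatch table keyed by full method path can hold
// the methods of any number of services.
type Handler = fn(&[u8]) -> Vec<u8>;

struct Server {
    methods: HashMap<String, Handler>,
}

impl Server {
    fn new() -> Server {
        Server { methods: HashMap::new() }
    }

    // Registering a second service just adds more paths to the same table.
    fn register(&mut self, path: &str, handler: Handler) {
        self.methods.insert(path.to_string(), handler);
    }

    fn dispatch(&self, path: &str, request: &[u8]) -> Option<Vec<u8>> {
        self.methods.get(path).map(|h| h(request))
    }
}

fn echo(request: &[u8]) -> Vec<u8> {
    request.to_vec()
}

fn main() {
    let mut server = Server::new();
    server.register("/helloworld.Greeter/SayHello", echo);
    server.register("/routeguide.RouteGuide/GetFeature", echo);
    assert!(server.dispatch("/helloworld.Greeter/SayHello", b"hi").is_some());
    assert!(server.dispatch("/unknown.Service/Call", b"hi").is_none());
}
```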

interop: `unimplemented_method` fails

unimplemented_method

/Users/stevej% ./go-grpc-interop-client -use_tls=false  -server_port=60011 -test_case=unimplemented_method
2017/01/25 12:58:13 transport: http2Client.notifyError got notified that the client transport was broken EOF.
2017/01/25 12:58:13 grpc.Invoke(_, _, _, _, _) = rpc error: code = 13 desc = transport is closing, want error code Unimplemented

Server error:

/Users/stevej/src/grpc-rust/interop% RUST_BACKTRACE=1 ../target/debug/grpc-interop                                                     (git)-[stevej/interop_improvements]
thread '<unnamed>' panicked at 'unknown method: /grpc.testing.TestService/UnimplementedCall', ../src/libcore/option.rs:705
stack backtrace:
   1:        0x10aaacf4a - std::sys::imp::backtrace::tracing::imp::write::h917062bce4ff48c3
   2:        0x10aab02bf - std::panicking::default_hook::{{closure}}::h0bacac31b5ed1870
   3:        0x10aaaeb4f - std::panicking::default_hook::h5897799da33ece67
   4:        0x10aaaf186 - std::panicking::rust_panic_with_hook::h109e116a3a861224
   5:        0x10aaaf024 - std::panicking::begin_panic::hbb38be1379e09df0
   6:        0x10aaaef42 - std::panicking::begin_panic_fmt::h26713cea9bce3ab0
   7:        0x10aaaeea7 - rust_begin_unwind
   8:        0x10aadae10 - core::panicking::panic_fmt::hcfbb59eeb7f27f75
   9:        0x10aadae7d - core::option::expect_failed::h530ede3d41450938
  10:        0x10a72b393 - <core::option::Option<T>>::expect::hca386619a6598654
  11:        0x10a739c82 - grpc::server::ServerServiceDefinition::find_method::h9946d6be813fe143
  12:        0x10a739d2f - grpc::server::ServerServiceDefinition::handle_method::hfff63ef4b38e0c98
  13:        0x10a73a58d - <grpc::server::GrpcHttpServerHandlerFactory as httpbis::http_common::HttpService>::new_request::h686d3a0bcf9a8216
  14:        0x10a659ac7 - <httpbis::server_conn::ServerInner<F>>::new_request::h1352ddd81d84d1bb
  15:        0x10a65951a - <httpbis::server_conn::ServerInner<F>>::new_stream::h8b12a51798f4a8c6
  16:        0x10a65a020 - <httpbis::server_conn::ServerInner<F>>::get_or_create_stream::h74f6195dda1b1983
  17:        0x10a6f4b78 - <httpbis::server_conn::ServerInner<F> as httpbis::http_common::LoopInner>::process_headers_frame::hc8a12f83d8eebbb5
  18:        0x10a6b5f06 - httpbis::http_common::LoopInner::process_stream_frame::h4f0563f1432aae44
  19:        0x10a6b4f90 - httpbis::http_common::LoopInner::process_raw_frame::h72512b2fdbea3350
  20:        0x10a7287c9 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::{{closure}}::h8ac4e6a9a470fc80
  21:        0x10a727c42 - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::{{closure}}::hd8418ff3a647b2e0
  22:        0x10a727b15 - <futures::task_impl::task_rc::TaskRc<A>>::with::{{closure}}::h5957640af0198563
  23:        0x10a6b38ee - futures::task_impl::with::h4d500208dd9714d1
  24:        0x10a65def8 - <futures::task_impl::task_rc::TaskRc<A>>::with::h5788a6804680fb11
  25:        0x10a669bde - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::h58f5fe718ee762a6
  26:        0x10a66a248 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::hdcbf98d7fedea675
  27:        0x10a718a30 - <httpbis::http_common::ReadLoopData<I, N>>::read_process_frame::{{closure}}::hdead651237ee6cdc
  28:        0x10a7188d9 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::{{closure}}::ha5d3b6079c0ac19d
  29:        0x10a60201c - <core::result::Result<T, E>>::map::haa30dcaec57e0afb
  30:        0x10a718765 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::h5aff83becb316430
  31:        0x10a67b579 - <futures::future::chain::Chain<A, B, C>>::poll::h934fd2c32af9196a
  32:        0x10a57969b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h624afae44e01cb9d
  33:        0x10a663360 - <Box<F> as futures::future::Future>::poll::h5663703284cfc5b0
  34:        0x10a6c668f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h60a02ff1edd682f7
  35:        0x10a6f228e - <futures::future::loop_fn::LoopFn<A, F> as futures::future::Future>::poll::h9d909abadeaa99b6
  36:        0x10a6634e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  37:        0x10a6543cf - <futures::future::join::MaybeDone<A>>::poll::h8c350fc7330571eb
  38:        0x10a6cb23b - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h257a1f5590ca58d7
  39:        0x10a653a6f - <futures::future::join::MaybeDone<A>>::poll::h3be9cb5a48e44920
  40:        0x10a6cb61e - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h7b93ebbe7fc432e4
  41:        0x10a6c5e3f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h0b7182e2ef4cbfe9
  42:        0x10a67e158 - <futures::future::chain::Chain<A, B, C>>::poll::hba6b3744f00ea1a3
  43:        0x10a57981b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h8559062ece4a4531
  44:        0x10a67a3de - <futures::future::chain::Chain<A, B, C>>::poll::h8f1d4633982e819c
  45:        0x10a6fae8b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::hb174db5a4c6c7c90
  46:        0x10a6634e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  47:        0x10a67cc8e - <futures::future::chain::Chain<A, B, C>>::poll::haff940f1c66c0cd6
  48:        0x10a6fae4b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::h590b8b9950fe08d8
  49:        0x10a6f2c17 - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::h215309a7cb9600ab
  50:        0x10a94e641 - <Box<F> as futures::future::Future>::poll::h0bf7c3a7376c0fb7
  51:        0x10a96524c - <futures::task_impl::Spawn<F>>::poll_future::{{closure}}::h25b21ab6d3e3c43c
  52:        0x10a96534e - <futures::task_impl::Spawn<T>>::enter::{{closure}}::hbd6ae331b59e3928
  53:        0x10a9655d5 - futures::task_impl::set::{{closure}}::h24e11cc8086d0707
  54:        0x10a93e3db - <std::thread::local::LocalKey<T>>::with::hf28f9eabb1a35cb7
  55:        0x10a9539e7 - futures::task_impl::set::h37e185db962d9e2c
  56:        0x10a93c2b3 - <futures::task_impl::Spawn<T>>::enter::hef2312ed513d9ba9
  57:        0x10a93be12 - <futures::task_impl::Spawn<F>>::poll_future::hf33a49ae53a9d9f4
  58:        0x10a964cc6 - tokio_core::reactor::Core::dispatch_task::{{closure}}::h8206a940250cd144
  59:        0x10a938285 - <scoped_tls::ScopedKey<T>>::set::hc614969f549da2f2
  60:        0x10a95f829 - tokio_core::reactor::Core::dispatch_task::h0611009a070b78ad
  61:        0x10a95ea68 - tokio_core::reactor::Core::dispatch::hb52174544f2ea540
  62:        0x10a95e67e - tokio_core::reactor::Core::poll::h43f4b499158d6eeb
  63:        0x10a5888a0 - tokio_core::reactor::Core::run::hff9aa04c61170002
  64:        0x10a6bac24 - httpbis::server::run_server_event_loop::h2c4616fe62e930bc
  65:        0x10a7296da - httpbis::server::HttpServer::new::{{closure}}::h676f8b24bb44f4b0
  66:        0x10a6ea00b - <std::panic::AssertUnwindSafe<F> as core::ops::FnOnce<()>>::call_once::h39220ce80d5b6b28
  67:        0x10a5ebe96 - std::panicking::try::do_call::h8c72af1898eb6da3
  68:        0x10aab087a - __rust_maybe_catch_panic
  69:        0x10a5eb110 - std::panicking::try::h80e0693523141e0b
  70:        0x10a5e9695 - std::panic::catch_unwind::ha44a07ca6ac143a1
  71:        0x10a7237b8 - std::thread::Builder::spawn::{{closure}}::hef856fb921630796
  72:        0x10a652590 - <F as alloc::boxed::FnBox<A>>::call_box::h4ce7df1bead467ca
  73:        0x10aaae634 - std::sys::imp::thread::Thread::new::thread_start::ha102a6120fc52763
  74:     0x7fffbd9d7aaa - _pthread_body
  75:     0x7fffbd9d79f6 - _pthread_start

WINDOW_UPDATE spec is not followed

(I have a half-finished patch that I'll post for review so we can discuss whether you like the API or think it's placed at the right layer)

As we write bytes to a peer we don't properly account for the required changes to the window size. This results in FLOW_CONTROL_ERROR errors from peers in my interop testing with grpc-go (and also the occasional panic from grpc-go).
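The accounting the spec requires can be sketched as a send window that outgoing DATA frames consume and incoming WINDOW_UPDATE frames replenish (illustrative only; HTTP/2 keeps one such window per connection and one per stream):

```rust
// Illustrative only: a signed send window per the HTTP/2 spec.
struct SendWindow {
    size: i64,
}

impl SendWindow {
    fn new() -> SendWindow {
        SendWindow { size: 65_535 } // the HTTP/2 default initial window
    }

    // Returns false if sending `n` bytes now would violate flow control;
    // a real implementation queues the bytes until the window reopens.
    fn consume(&mut self, n: i64) -> bool {
        if n > self.size {
            return false;
        }
        self.size -= n;
        true
    }

    fn window_update(&mut self, increment: i64) {
        self.size += increment;
    }
}

fn main() {
    let mut w = SendWindow::new();
    assert!(w.consume(65_535));
    assert!(!w.consume(1)); // window exhausted: must wait for WINDOW_UPDATE
    w.window_update(1_024);
    assert!(w.consume(1_024));
}
```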

use a thread name for the event loop

Hi @stepancheg

Right now thread::spawn is used to start the event loop thread, so it is not easy to tell how much CPU that thread takes with top -H. It would be better to accept a customized thread name in the config, or to use grpc-server-loop or grpc-client-loop by default.

If you think it is fine, we can send you a PR.
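A std-only sketch of the suggestion, using thread::Builder to set the name ("grpc-server-loop" being the default proposed above):

```rust
use std::thread;

// Spawn the event loop thread with an explicit name so it is identifiable
// in tools like `top -H`.
fn spawn_event_loop(name: &str) -> Option<String> {
    let handle = thread::Builder::new()
        .name(name.to_string())
        .spawn(|| {
            // A real server would run the event loop here; this stub just
            // reports the thread's own name back to the caller.
            thread::current().name().map(|n| n.to_string())
        })
        .expect("failed to spawn event loop thread");
    handle.join().expect("event loop thread panicked")
}

fn main() {
    assert_eq!(
        spawn_event_loop("grpc-server-loop").as_deref(),
        Some("grpc-server-loop")
    );
}
```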

Single client to call multiple services on same host/port

Something like this:

let grpc_client = GrpcClient::new();
let service1_client = Service1Client::with_client(grpc_client);
let service2_client = Service2Client::with_client(grpc_client);

What should the signature of with_client be? Should it take the parameter

  • by borrowed reference,
  • as Arc<GrpcClient>,
  • or should GrpcClient itself be Clone, with an Arc inside?
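The last option (Clone with an Arc inside) can be sketched with std only; all type names here are hypothetical stand-ins:

```rust
use std::sync::Arc;

// Hypothetical stand-in for the core client that owns the HTTP/2 connection.
struct GrpcClient { /* connection state */ }

// Make the client cheaply cloneable by putting an Arc inside;
// clones share one underlying connection.
#[derive(Clone)]
struct SharedClient(Arc<GrpcClient>);

struct Service1Client { client: SharedClient }
struct Service2Client { client: SharedClient }

impl Service1Client {
    fn with_client(client: SharedClient) -> Service1Client {
        Service1Client { client }
    }
}

impl Service2Client {
    fn with_client(client: SharedClient) -> Service2Client {
        Service2Client { client }
    }
}

fn main() {
    let core = SharedClient(Arc::new(GrpcClient {}));
    let s1 = Service1Client::with_client(core.clone());
    let s2 = Service2Client::with_client(core.clone());
    // Both service clients share the same connection (core + s1 + s2).
    assert_eq!(Arc::strong_count(&core.0), 3);
    let _ = (&s1.client, &s2.client);
}
```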

import "google/protobuf/empty.proto";

It seems like importing "google/protobuf/*.proto" files is not supported out of the box. I get the following error when I try to use google.protobuf.Empty:

24 |     fn GetId(&self, p: super::empty::Empty) -> ::grpc::result::GrpcResult<super::m19g::PlayerId>;
   |                        ^^^^^^^^^^^^^^^^^^^ Maybe a missing `extern crate empty;`?

Closing the socket from a streaming client early results in a panic

(I have a fix for this in a branch)

If you exit a client without sending an end-of-stream indicator on the stream, the server panics. This is technically out of spec, but grpc-rust should be resilient to it.

DEBUG:tokio_core::reactor: consuming notification queue
DEBUG:tokio_core::reactor: scheduling direction for: 31
DEBUG:tokio_core::reactor: consuming notification queue
DEBUG:tokio_core::reactor: scheduling direction for: 30
DEBUG:tokio_core::reactor: consuming notification queue
DEBUG:tokio_core::reactor: scheduling direction for: 3
DEBUG:grpc::http_common: received frame: Stream(WindowUpdate(WindowUpdateFrame { stream_id: 1, increment: 16402, flags: 0 }))
thread '<unnamed>' panicked at 'stream not found', ../src/libcore/option.rs:705
stack backtrace:
   1:        0x109911d88 - std::sys::backtrace::tracing::imp::write::h6f1d53a70916b90d
   2:        0x10991474f - std::panicking::default_hook::{{closure}}::h137e876f7d3b5850
   3:        0x1099136c5 - std::panicking::default_hook::h0ac3811ec7cee78c
   4:        0x109913cd6 - std::panicking::rust_panic_with_hook::hc303199e04562edf
   5:        0x109913b74 - std::panicking::begin_panic::h6ed03353807cf54d
   6:        0x109913a92 - std::panicking::begin_panic_fmt::hc321cece241bb2f5
   7:        0x1099139f7 - rust_begin_unwind
   8:        0x10993dfa0 - core::panicking::panic_fmt::h27224b181f9f037f
   9:        0x10993e00d - core::option::expect_failed::h8606bc228cd3f504
  10:        0x109503a83 - <core::option::Option<T>>::expect::hd1c224735b6d5b7e
  11:        0x1095b4251 - grpc::http_common::HttpReadLoopInner::process_stream_window_update_frame::h4a77850c9c5e9bad
  12:        0x1095b5032 - grpc::http_common::HttpReadLoopInner::process_stream_frame::h71cda37509159b2b
  13:        0x1095b5589 - grpc::http_common::HttpReadLoopInner::process_raw_frame::h3d9300ed22391085
  14:        0x1095ca559 - <grpc::http_common::HttpReadLoopData<I, N>>::process_raw_frame::{{closure}}::h9edcd3c2c8b4639f
  15:        0x1095c84ca - <grpc::futures_misc::task_data::TaskRcMut<A>>::with::{{closure}}::hdd8f2fce5f681796
  16:        0x1095c8389 - <futures::task_impl::task_rc::TaskRc<A>>::with::{{closure}}::h23e6222f832c3e39
  17:        0x10959820d - futures::task_impl::with::h1a77c05e9abdca7e
  18:        0x10956be18 - <futures::task_impl::task_rc::TaskRc<A>>::with::h92bcf6250cca322a
  19:        0x1095b667e - <grpc::futures_misc::task_data::TaskRcMut<A>>::with::ha244e3141b1cb783
  20:        0x1095b627e - <grpc::http_common::HttpReadLoopData<I, N>>::process_raw_frame::h0abde819d55bdc45
  21:        0x1095c2070 - <grpc::http_common::HttpReadLoopData<I, N>>::read_process_frame::{{closure}}::hbe5e6b2bc994a733
  22:        0x1095c1f09 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::{{closure}}::h523e5700c98cd748
  23:        0x10951cddc - <core::result::Result<T, E>>::map::h87edfc063f082977
  24:        0x1095c1d95 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::h337bf07b06bb4ebd
  25:        0x109575c47 - <futures::future::chain::Chain<A, B, C>>::poll::h0ac3825a08b979bd
  26:        0x1094ee2cb - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h6e39038c1485d2c6
  27:        0x10956cab0 - <Box<F> as futures::future::Future>::poll::h566b91994cc8de79
  28:        0x1094f0179 - <futures::stream::fold::Fold<S, F, Fut, T> as futures::future::Future>::poll::hf356629c8aabac77
  29:        0x10959d942 - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h3ab6139582a3a55f
  30:        0x10956ca70 - <Box<F> as futures::future::Future>::poll::h31d9da5fdf92f3a3
  31:        0x109560cec - <futures::future::join::MaybeDone<A>>::poll::h2737411e5c64adb5
  32:        0x1095a054c - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h378db279c54d5c85
  33:        0x10959f7c2 - <futures::future::map::Map<A, F> as futures::future::Future>::poll::hea98821f2a71dbb9
  34:        0x109578c84 - <futures::future::chain::Chain<A, B, C>>::poll::h4afc42c83afc2f84
  35:        0x1094ee38b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::hdb826d97350693a6
  36:        0x10957edc7 - <futures::future::chain::Chain<A, B, C>>::poll::hfc3e86a1153e5dc5
  37:        0x1095a907b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::h446cf0aba072d6e8
  38:        0x10956ca70 - <Box<F> as futures::future::Future>::poll::h31d9da5fdf92f3a3
  39:        0x1095a50aa - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::hc64e08dbe707cd36
  40:        0x10978d981 - <Box<F> as futures::future::Future>::poll::hdcde478ef4e8da40
  41:        0x1097a47fc - <futures::task_impl::Spawn<F>>::poll_future::{{closure}}::h4593987aece2335f
  42:        0x1097a484e - <futures::task_impl::Spawn<T>>::enter::{{closure}}::h96233ed4af8f7a98
  43:        0x1097a4965 - futures::task_impl::set::{{closure}}::h06a53b1d516ec9fe
  44:        0x109780694 - <std::thread::local::LocalKey<T>>::with::hafd993f9e3933e97
  45:        0x1097935d7 - futures::task_impl::set::hd0fa5ec781774660
  46:        0x10977fd4c - <futures::task_impl::Spawn<T>>::enter::haa5fa2f5d202f777
  47:        0x10977fb92 - <futures::task_impl::Spawn<F>>::poll_future::h0ccf585be6916a7e
  48:        0x1097a4446 - tokio_core::reactor::Core::dispatch_task::{{closure}}::h80049e866b2a20c1
  49:        0x10977ce0e - <scoped_tls::ScopedKey<T>>::set::h4537c6755a99f191
  50:        0x10979f09f - tokio_core::reactor::Core::dispatch_task::he97843a353c5112d
  51:        0x10979e308 - tokio_core::reactor::Core::dispatch::h756d49fd8a309683
  52:        0x10979df1b - tokio_core::reactor::Core::poll::hb80b0fedd628e39d
  53:        0x1094f3437 - tokio_core::reactor::Core::run::h7f6313bcbc91d331
  54:        0x1095ae40b - grpc::server::run_server_event_loop::h76e03d25a0f0cd66
  55:        0x1095c8d68 - grpc::server::GrpcServer::new::{{closure}}::hcd41d52da0bae7d4
  56:        0x1095a2dbb - <std::panic::AssertUnwindSafe<F> as core::ops::FnOnce<()>>::call_once::h5c9d794b2db5435f
  57:        0x10950ed16 - std::panicking::try::do_call::h39ac82e24b244e58
  58:        0x109914d0a - __rust_maybe_catch_panic
  59:        0x10950e9a0 - std::panicking::try::he0cef71b38587938
  60:        0x10950dc05 - std::panic::catch_unwind::h29077e42a5630281
  61:        0x1095c537d - std::thread::Builder::spawn::{{closure}}::hd0e2341b25be272b
  62:        0x10955f850 - <F as alloc::boxed::FnBox<A>>::call_box::h4a3944545481bfc9
  63:        0x109912f34 - std::sys::thread::Thread::new::thread_start::h759e10bc4abc7e72
  64:     0x7fffce7f7aba - _pthread_body
  65:     0x7fffce7f7a06 - _pthread_start

Flow control

Currently the client and server essentially offer an unlimited flow-control window to the peer. So if the client or server is slow at processing messages, it may run out of memory.

So the gRPC implementation should slow down when the user-provided callback cannot process messages right away.

Internally this could probably be done by replacing `UnboundedSender` with some kind of bounded sender.

Client reconnect and GOAWAY

Currently the client becomes unusable after a network failure. It should transparently reconnect.

Reconnect is also needed when the server returns a GOAWAY frame.

interop: meta-issue

In order to test how well grpc-rust interoperates with other grpc stacks, we
implement the standard interop service.

These test names are standard, defined in the gRPC interoperability test specification.

Go client, Rust server

  • empty_unary empty (zero bytes) request and response.
  • cacheable_unary #77
  • large_unary single request and (large) response. #39
  • client_compressed_unary
  • server_compressed_unary
  • client_streaming request streaming with single response.
  • client_compressed_streaming
  • server_streaming single request with response streaming. #41
  • server_compressed_streaming
  • ping_pong full-duplex streaming.
  • empty_stream full-duplex streaming with zero message.
  • compute_engine_creds large_unary with compute engine auth. (needs TLS)
  • service_account_creds large_unary with service account auth. (needs TLS)
  • jwt_token_creds large_unary with jwt token auth. (needs TLS)
  • oauth2_auth_token large_unary with oauth2 token auth. (needs TLS)
  • per_rpc_creds large_unary with per rpc token. (needs TLS)
  • custom_metadata server will echo custom metadata. #43
  • status_code_and_message status code propagated back to client. #42
  • unimplemented_method client attempts to call unimplemented method. #44
  • unimplemented_service client attempts to call unimplemented service. #45
  • cancel_after_begin cancellation after metadata has been sent but before payloads are sent.
  • cancel_after_first_response cancellation after receiving 1st message from the server.
  • timeout_on_sleeping_server full-duplex streaming on a sleeping server.

Rust client, Go server

  • empty_unary empty (zero bytes) request and response.
  • cacheable_unary
  • large_unary single request and (large) response.
  • client_compressed_unary
  • server_compressed_unary
  • client_streaming request streaming with single response.
  • client_compressed_streaming
  • server_streaming single request with response streaming.
  • server_compressed_streaming
  • ping_pong full-duplex streaming.
  • empty_stream full-duplex streaming with zero message.
  • compute_engine_creds large_unary with compute engine auth. (needs TLS)
  • service_account_creds large_unary with service account auth. (needs TLS)
  • jwt_token_creds large_unary with jwt token auth. (needs TLS)
  • oauth2_auth_token large_unary with oauth2 token auth. (needs TLS)
  • per_rpc_creds large_unary with per rpc token. (needs TLS)
  • custom_metadata server will echo custom metadata.
  • status_code_and_message status code propagated back to client.
  • unimplemented_method client attempts to call unimplemented method.
  • unimplemented_service client attempts to call unimplemented service.
  • cancel_after_begin cancellation after metadata has been sent but before payloads are sent.
  • cancel_after_first_response cancellation after receiving 1st message from the server.
  • timeout_on_sleeping_server full-duplex streaming on a sleeping server.

Supporting both sync & async calls.

It looks like the async client can only make async requests and the sync client can only make sync requests?

Shouldn't we be able to select sync/async request per call?

Improve performance

Hi @stepancheg, thank you for the excellent work.

I've found that the current performance is not very good (about 10x slower than Go).
I know it is on the TODO list, but I wonder if there are any plans for optimizing performance.

Benchmark

  • Git HEAD @ 7e2ec88

  • Based on long-tests.

  • Rust server and client were built in release mode:

cargo build --manifest-path=long-tests/with-rust/Cargo.toml --release
  • Machine:

    • Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz x 8
    • 4GB DDR3 @ 1600 MHz x 4

Streaming tests are skipped because the streaming implementation is incomplete.

Unary request: Echo

I have tweaked the Go client a little bit; my version runs 40 goroutines.

Client sends 100000 echo requests.

time ./long_tests_client echo 100000
            Rust server   Go server
time (s)    26.732        2.902

FlameGraph

I also recorded a flame graph, hope it helps.

https://gist.github.com/overvenus/018e19ccc23555a7768e15774819f3af#file-kernel-7e2ec88-svg

Thank you! :)

interop: `server_streaming` fails intermittently

/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=server_streaming -server_port=60011
2017/01/25 12:52:52 ServerStreaming done
/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=server_streaming -server_port=60011
2017/01/25 12:52:54 Failed to finish the server streaming rpc: <nil>
/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=server_streaming -server_port=60011
2017/01/25 12:52:55 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:60011: getsockopt: connection refused"; Reconnecting to {127.0.0.1:60011 <nil>}
2017/01/25 12:52:55 &{0xc420096b40}.StreamingOutputCall(_) = _, rpc error: code = 14 desc = grpc: the connection is unavailable

Server log

/Users/stevej/src/grpc-rust/interop% RUST_BACKTRACE=1 ../target/debug/grpc-interop                                                     (git)-[stevej/interop_improvements]
thread '<unnamed>' panicked at 'stream not found: 1', ../src/libcore/option.rs:705
stack backtrace:
   1:        0x101154f4a - std::sys::imp::backtrace::tracing::imp::write::h917062bce4ff48c3
   2:        0x1011582bf - std::panicking::default_hook::{{closure}}::h0bacac31b5ed1870
   3:        0x101156b4f - std::panicking::default_hook::h5897799da33ece67
   4:        0x101157186 - std::panicking::rust_panic_with_hook::h109e116a3a861224
   5:        0x101157024 - std::panicking::begin_panic::hbb38be1379e09df0
   6:        0x101156f42 - std::panicking::begin_panic_fmt::h26713cea9bce3ab0
   7:        0x101156ea7 - rust_begin_unwind
   8:        0x101182e10 - core::panicking::panic_fmt::hcfbb59eeb7f27f75
   9:        0x101182e7d - core::option::expect_failed::h530ede3d41450938
  10:        0x100c73ff3 - <core::option::Option<T>>::expect::h989efd4584df5caf
  11:        0x100d5ca3e - httpbis::http_common::LoopInner::close_remote::h4897c08ea604bcab
  12:        0x100d5dfb9 - httpbis::http_common::LoopInner::process_stream_frame::h4f0563f1432aae44
  13:        0x100d5cf90 - httpbis::http_common::LoopInner::process_raw_frame::h72512b2fdbea3350
  14:        0x100dd07c9 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::{{closure}}::h8ac4e6a9a470fc80
  15:        0x100dcfc42 - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::{{closure}}::hd8418ff3a647b2e0
  16:        0x100dcfb15 - <futures::task_impl::task_rc::TaskRc<A>>::with::{{closure}}::h5957640af0198563
  17:        0x100d5b8ee - futures::task_impl::with::h4d500208dd9714d1
  18:        0x100d05ef8 - <futures::task_impl::task_rc::TaskRc<A>>::with::h5788a6804680fb11
  19:        0x100d11bde - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::h58f5fe718ee762a6
  20:        0x100d12248 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::hdcbf98d7fedea675
  21:        0x100dc0a30 - <httpbis::http_common::ReadLoopData<I, N>>::read_process_frame::{{closure}}::hdead651237ee6cdc
  22:        0x100dc08d9 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::{{closure}}::ha5d3b6079c0ac19d
  23:        0x100caa01c - <core::result::Result<T, E>>::map::haa30dcaec57e0afb
  24:        0x100dc0765 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::h5aff83becb316430
  25:        0x100d23579 - <futures::future::chain::Chain<A, B, C>>::poll::h934fd2c32af9196a
  26:        0x100c2169b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h624afae44e01cb9d
  27:        0x100d0b360 - <Box<F> as futures::future::Future>::poll::h5663703284cfc5b0
  28:        0x100d6e68f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h60a02ff1edd682f7
  29:        0x100d9a28e - <futures::future::loop_fn::LoopFn<A, F> as futures::future::Future>::poll::h9d909abadeaa99b6
  30:        0x100d0b4e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  31:        0x100cfc3cf - <futures::future::join::MaybeDone<A>>::poll::h8c350fc7330571eb
  32:        0x100d7323b - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h257a1f5590ca58d7
  33:        0x100cfba6f - <futures::future::join::MaybeDone<A>>::poll::h3be9cb5a48e44920
  34:        0x100d7361e - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h7b93ebbe7fc432e4
  35:        0x100d6de3f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h0b7182e2ef4cbfe9
  36:        0x100d25a9b - <futures::future::chain::Chain<A, B, C>>::poll::hba6b3744f00ea1a3
  37:        0x100c2181b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h8559062ece4a4531
  38:        0x100d223de - <futures::future::chain::Chain<A, B, C>>::poll::h8f1d4633982e819c
  39:        0x100da2e8b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::hb174db5a4c6c7c90
  40:        0x100d0b4e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  41:        0x100d24c8e - <futures::future::chain::Chain<A, B, C>>::poll::haff940f1c66c0cd6
  42:        0x100da2e4b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::h590b8b9950fe08d8
  43:        0x100d9ac17 - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::h215309a7cb9600ab
  44:        0x100ff6641 - <Box<F> as futures::future::Future>::poll::h0bf7c3a7376c0fb7
  45:        0x10100d24c - <futures::task_impl::Spawn<F>>::poll_future::{{closure}}::h25b21ab6d3e3c43c
  46:        0x10100d34e - <futures::task_impl::Spawn<T>>::enter::{{closure}}::hbd6ae331b59e3928
  47:        0x10100d5d5 - futures::task_impl::set::{{closure}}::h24e11cc8086d0707
  48:        0x100fe63db - <std::thread::local::LocalKey<T>>::with::hf28f9eabb1a35cb7
  49:        0x100ffb9e7 - futures::task_impl::set::h37e185db962d9e2c
  50:        0x100fe42b3 - <futures::task_impl::Spawn<T>>::enter::hef2312ed513d9ba9
  51:        0x100fe3e12 - <futures::task_impl::Spawn<F>>::poll_future::hf33a49ae53a9d9f4
  52:        0x10100ccc6 - tokio_core::reactor::Core::dispatch_task::{{closure}}::h8206a940250cd144
  53:        0x100fe0285 - <scoped_tls::ScopedKey<T>>::set::hc614969f549da2f2
  54:        0x101007829 - tokio_core::reactor::Core::dispatch_task::h0611009a070b78ad
  55:        0x101006a68 - tokio_core::reactor::Core::dispatch::hb52174544f2ea540
  56:        0x10100667e - tokio_core::reactor::Core::poll::h43f4b499158d6eeb
  57:        0x100c308a0 - tokio_core::reactor::Core::run::hff9aa04c61170002
  58:        0x100d62c24 - httpbis::server::run_server_event_loop::h2c4616fe62e930bc
  59:        0x100dd16da - httpbis::server::HttpServer::new::{{closure}}::h676f8b24bb44f4b0
  60:        0x100d9200b - <std::panic::AssertUnwindSafe<F> as core::ops::FnOnce<()>>::call_once::h39220ce80d5b6b28
  61:        0x100c93e96 - std::panicking::try::do_call::h8c72af1898eb6da3
  62:        0x10115887a - __rust_maybe_catch_panic
  63:        0x100c93110 - std::panicking::try::h80e0693523141e0b
  64:        0x100c91695 - std::panic::catch_unwind::ha44a07ca6ac143a1
  65:        0x100dcb7b8 - std::thread::Builder::spawn::{{closure}}::hef856fb921630796
  66:        0x100cfa590 - <F as alloc::boxed::FnBox<A>>::call_box::h4ce7df1bead467ca
  67:        0x101156634 - std::sys::imp::thread::Thread::new::thread_start::ha102a6120fc52763
  68:     0x7fffbd9d7aaa - _pthread_body
  69:     0x7fffbd9d79f6 - _pthread_start

TLS support in client

I have a server (implemented in golang) that requires clients to connect with TLS.
So I've tried to implement TLS support in grpc-rust, which, in theory, should be relatively easy: gRPC works over standard HTTP/2 + TLS. For now, I don't need any cert validation; I just want some security over the wire.

However, I got stuck and I think I need some help/explaining.

I was hoping I could just propagate a different value of http_scheme to solicit and it would do everything for me. So far I've managed to add an http_scheme param to GrpcClient::new and propagate it to HttpClientConnectionAsync. But then it looks like you are doing the handshake manually in solicit_async.rs (it's just a static string), and doing most of the HTTP-related stuff semi-manually.

Can you give me some advice here? Should I try to replace client_handshake with something provided by solicit? Or, maybe I should look at tokio_tls and pass it instead of TcpStream?

interop `status_code_and_message` fails consistently

/Users/stevej% ./go-grpc-interop-client -use_tls=false  -server_port=60011 -test_case=status_code_and_message
2017/01/25 12:55:50 &{0xc420186500}.UnaryCall(_, response_status:<code:2 message:"test status message" > ) = _, <nil>, want _, rpc error: code = 2 desc = test status message

No errors in the server log

propose: export GrpcServer local_addr functions

Hi @stepancheg

I see that GrpcServer has some public functions (local_addr, is_alive), but users can't reach them from the generated server.

I think it would be better to export these functions, or make GrpcServer itself public for outside usage. E.g. in our tests we want to bind to "127.0.0.1:0" to avoid duplicate listening-port problems with a fixed port, but then we can't know the real listening address unless local_addr is exported.

interop: large_unary test does not consistently succeed

(meta-note: would it be possible to have an interop label for issues? Also, steps for building and running the interop tests are coming up in a PR)

When testing grpc-rust interop with the grpc-go client, the large_unary test does not succeed consistently, sometimes failing with a server-side panic: 'stream not found'.

/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=large_unary -server_port=60011
2017/01/25 12:49:58 LargeUnaryCall done
/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=large_unary -server_port=60011
2017/01/25 12:49:59 LargeUnaryCall done
/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=large_unary -server_port=60011
2017/01/25 12:49:59 /TestService/UnaryCall RPC failed: rpc error: code = 2 desc = unexpected EOF
/Users/stevej% ./go-grpc-interop-client -use_tls=false  -test_case=large_unary -server_port=60011
2017/01/25 12:50:00 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:60011: getsockopt: connection refused"; Reconnecting to {127.0.0.1:60011 <nil>}
2017/01/25 12:50:00 /TestService/UnaryCall RPC failed: rpc error: code = 14 desc = grpc: the connection is unavailable

Here is the server log:

/Users/stevej/src/grpc-rust/interop% RUST_BACKTRACE=1 ../target/debug/grpc-interop                                                     (git)-[stevej/interop_improvements]
thread '<unnamed>' panicked at 'stream not found: 1', ../src/libcore/option.rs:705
stack backtrace:
   1:        0x10c45bf4a - std::sys::imp::backtrace::tracing::imp::write::h917062bce4ff48c3
   2:        0x10c45f2bf - std::panicking::default_hook::{{closure}}::h0bacac31b5ed1870
   3:        0x10c45db4f - std::panicking::default_hook::h5897799da33ece67
   4:        0x10c45e186 - std::panicking::rust_panic_with_hook::h109e116a3a861224
   5:        0x10c45e024 - std::panicking::begin_panic::hbb38be1379e09df0
   6:        0x10c45df42 - std::panicking::begin_panic_fmt::h26713cea9bce3ab0
   7:        0x10c45dea7 - rust_begin_unwind
   8:        0x10c489e10 - core::panicking::panic_fmt::hcfbb59eeb7f27f75
   9:        0x10c489e7d - core::option::expect_failed::h530ede3d41450938
  10:        0x10bf7aff3 - <core::option::Option<T>>::expect::h989efd4584df5caf
  11:        0x10c063a3e - httpbis::http_common::LoopInner::close_remote::h4897c08ea604bcab
  12:        0x10c064fb9 - httpbis::http_common::LoopInner::process_stream_frame::h4f0563f1432aae44
  13:        0x10c063f90 - httpbis::http_common::LoopInner::process_raw_frame::h72512b2fdbea3350
  14:        0x10c0d77c9 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::{{closure}}::h8ac4e6a9a470fc80
  15:        0x10c0d6c42 - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::{{closure}}::hd8418ff3a647b2e0
  16:        0x10c0d6b15 - <futures::task_impl::task_rc::TaskRc<A>>::with::{{closure}}::h5957640af0198563
  17:        0x10c0628ee - futures::task_impl::with::h4d500208dd9714d1
  18:        0x10c00cef8 - <futures::task_impl::task_rc::TaskRc<A>>::with::h5788a6804680fb11
  19:        0x10c018bde - <httpbis::futures_misc::task_data::TaskRcMut<A>>::with::h58f5fe718ee762a6
  20:        0x10c019248 - <httpbis::http_common::ReadLoopData<I, N>>::process_raw_frame::hdcbf98d7fedea675
  21:        0x10c0c7a30 - <httpbis::http_common::ReadLoopData<I, N>>::read_process_frame::{{closure}}::hdead651237ee6cdc
  22:        0x10c0c78d9 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::{{closure}}::ha5d3b6079c0ac19d
  23:        0x10bfb101c - <core::result::Result<T, E>>::map::haa30dcaec57e0afb
  24:        0x10c0c7765 - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::{{closure}}::h5aff83becb316430
  25:        0x10c02a579 - <futures::future::chain::Chain<A, B, C>>::poll::h934fd2c32af9196a
  26:        0x10bf2869b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h624afae44e01cb9d
  27:        0x10c012360 - <Box<F> as futures::future::Future>::poll::h5663703284cfc5b0
  28:        0x10c07568f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h60a02ff1edd682f7
  29:        0x10c0a128e - <futures::future::loop_fn::LoopFn<A, F> as futures::future::Future>::poll::h9d909abadeaa99b6
  30:        0x10c0124e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  31:        0x10c0033cf - <futures::future::join::MaybeDone<A>>::poll::h8c350fc7330571eb
  32:        0x10c07a23b - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h257a1f5590ca58d7
  33:        0x10c002a6f - <futures::future::join::MaybeDone<A>>::poll::h3be9cb5a48e44920
  34:        0x10c07a61e - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h7b93ebbe7fc432e4
  35:        0x10c074e3f - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h0b7182e2ef4cbfe9
  36:        0x10c02ca9b - <futures::future::chain::Chain<A, B, C>>::poll::hba6b3744f00ea1a3
  37:        0x10bf2881b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h8559062ece4a4531
  38:        0x10c0293de - <futures::future::chain::Chain<A, B, C>>::poll::h8f1d4633982e819c
  39:        0x10c0a9e8b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::hb174db5a4c6c7c90
  40:        0x10c0124e0 - <Box<F> as futures::future::Future>::poll::hbe74747cdbd82b2d
  41:        0x10c02bc8e - <futures::future::chain::Chain<A, B, C>>::poll::haff940f1c66c0cd6
  42:        0x10c0a9e4b - <futures::future::then::Then<A, B, F> as futures::future::Future>::poll::h590b8b9950fe08d8
  43:        0x10c0a1c17 - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::h215309a7cb9600ab
  44:        0x10c2fd641 - <Box<F> as futures::future::Future>::poll::h0bf7c3a7376c0fb7
  45:        0x10c31424c - <futures::task_impl::Spawn<F>>::poll_future::{{closure}}::h25b21ab6d3e3c43c
  46:        0x10c31434e - <futures::task_impl::Spawn<T>>::enter::{{closure}}::hbd6ae331b59e3928
  47:        0x10c3145d5 - futures::task_impl::set::{{closure}}::h24e11cc8086d0707
  48:        0x10c2ed3db - <std::thread::local::LocalKey<T>>::with::hf28f9eabb1a35cb7
  49:        0x10c3029e7 - futures::task_impl::set::h37e185db962d9e2c
  50:        0x10c2eb2b3 - <futures::task_impl::Spawn<T>>::enter::hef2312ed513d9ba9
  51:        0x10c2eae12 - <futures::task_impl::Spawn<F>>::poll_future::hf33a49ae53a9d9f4
  52:        0x10c313cc6 - tokio_core::reactor::Core::dispatch_task::{{closure}}::h8206a940250cd144
  53:        0x10c2e7285 - <scoped_tls::ScopedKey<T>>::set::hc614969f549da2f2
  54:        0x10c30e829 - tokio_core::reactor::Core::dispatch_task::h0611009a070b78ad
  55:        0x10c30da68 - tokio_core::reactor::Core::dispatch::hb52174544f2ea540
  56:        0x10c30d67e - tokio_core::reactor::Core::poll::h43f4b499158d6eeb
  57:        0x10bf378a0 - tokio_core::reactor::Core::run::hff9aa04c61170002
  58:        0x10c069c24 - httpbis::server::run_server_event_loop::h2c4616fe62e930bc
  59:        0x10c0d86da - httpbis::server::HttpServer::new::{{closure}}::h676f8b24bb44f4b0
  60:        0x10c09900b - <std::panic::AssertUnwindSafe<F> as core::ops::FnOnce<()>>::call_once::h39220ce80d5b6b28
  61:        0x10bf9ae96 - std::panicking::try::do_call::h8c72af1898eb6da3
  62:        0x10c45f87a - __rust_maybe_catch_panic
  63:        0x10bf9a110 - std::panicking::try::h80e0693523141e0b
  64:        0x10bf98695 - std::panic::catch_unwind::ha44a07ca6ac143a1
  65:        0x10c0d27b8 - std::thread::Builder::spawn::{{closure}}::hef856fb921630796
  66:        0x10c001590 - <F as alloc::boxed::FnBox<A>>::call_box::h4ce7df1bead467ca
  67:        0x10c45d634 - std::sys::imp::thread::Thread::new::thread_start::ha102a6120fc52763
  68:     0x7fffbd9d7aaa - _pthread_body
  69:     0x7fffbd9d79f6 - _pthread_start

Changing value in service struct

Hi Stepan!

Is there a way to change some internal value in a service struct?
I tried adding mut to the fn Set(&mut self, req: Value) -> GrpcResult<Value> signature, but of course it doesn't compile because it no longer matches the generated trait.

error[E0053]: method `Set` has an incompatible type for trait
  --> src/test/bin/main.rs:22:5
   |
22 |     fn Set(&mut self, req: Value) -> GrpcResult<Value> {
   |     ^ types differ in mutability
   |
   = note: expected type `fn(&MutableServiceImpl, deqalib::test::Value) -> std::result::Result<deqalib::test::Value, grpc::error::GrpcError>`
   = note:    found type `fn(&mut MutableServiceImpl, deqalib::test::Value) -> std::result::Result<deqalib::test::Value, grpc::error::GrpcError>`

error: aborting due to previous error

Example proto:

package test;

service MutableService {
    rpc Set (Value) returns (Value) {}
}

message Value {
    optional string value = 1;
}

Example source:

extern crate grpc;
extern crate deqalib;

use grpc::result::GrpcResult;
use deqalib::test::*;
use deqalib::test_grpc::*;
use std::thread;

struct MutableServiceImpl {
    value: String,
}

impl MutableServiceImpl {
    fn new() -> MutableServiceImpl {
        MutableServiceImpl { value: String::new() }
    }
}

impl MutableService for MutableServiceImpl {
    fn Set(&mut self, req: Value) -> GrpcResult<Value> {
        self.value = req.get_value().to_owned();
        Ok(Value::new())
    }
}

fn main() {
    let _server = MutableServiceServer::new(50051, MutableServiceImpl::new());
    loop {
        thread::park();
    }
}

EOF from streaming client causes panic

(I have a fix for this in a branch I'll put up for review shortly)

How to reproduce: implement a gRPC method that takes a stream and returns a stream, then SIGKILL the client while the server is processing the response stream.

DEBUG:grpc::http_common: received frame: Stream(Data(DataFrame { data: [0, 0, 0, 0, 43, 8, 1, 18, 5, 8, 244, 3, 16, 10, 18, 5, 8, 232, 7, 16, 100, 18, 4, 8, 0, 16, 0, 26, 5, 8, 244, 3, 16, 100, 26, 6, 8, 232, 7, 16, 232, 7, 26, 4, 8, 0, 16, 0], flags: 0, stream_id: 1, padding_len: None }))
DEBUG:grpc::http_common: received frame: Stream(Data(DataFrame { data: [0, 0, 0, 0, 43, 8, 1, 18, 5, 8, 244, 3, 16, 10, 18], flags: 0, stream_id: 1, padding_len: None }))
DEBUG:tokio_core::reactor: consuming notification queue
DEBUG:tokio_core::reactor: dropping I/O source: 19
DEBUG:tokio_core::reactor: consuming notification queue
DEBUG:tokio_core::reactor: dropping I/O source: 23
INFO:grpc::http_server: connection end: Err(IoError(Error { repr: Os { code: 54, message: "Connection reset by peer" } }))
WARN:grpc::server: connection end: IoError(Error { repr: Os { code: 54, message: "Connection reset by peer" } })
DEBUG:tokio_core::reactor: loop process - 48 events, Duration { secs: 1, nanos: 127582556 }
DEBUG:tokio_core::reactor: loop poll - Duration { secs: 0, nanos: 212791 }
DEBUG:tokio_core::reactor: loop time - Instant { t: 642707366147923 }
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
DEBUG:tokio_core::reactor: notifying a task handle
WARN:grpc::http_server: failed to write to channel, probably connection is closed: channel has been disconnected
WARN:grpc::http_server: failed to write to channel, probably connection is closed: channel has been disconnected
[..]
a flood of these warn messages occurs, followed by a panic
[...]

thread '<unnamed>' panicked at 'poll after eof', ../src/libcore/option.rs:705
stack backtrace:
   1:        0x1078e6d88 - std::sys::backtrace::tracing::imp::write::h6f1d53a70916b90d
   2:        0x1078e974f - std::panicking::default_hook::{{closure}}::h137e876f7d3b5850
   3:        0x1078e86c5 - std::panicking::default_hook::h0ac3811ec7cee78c
   4:        0x1078e8cd6 - std::panicking::rust_panic_with_hook::hc303199e04562edf
   5:        0x1078e8b74 - std::panicking::begin_panic::h6ed03353807cf54d
   6:        0x1078e8a92 - std::panicking::begin_panic_fmt::hc321cece241bb2f5
   7:        0x1078e89f7 - rust_begin_unwind
   8:        0x107912fa0 - core::panicking::panic_fmt::h27224b181f9f037f
   9:        0x10791300d - core::option::expect_failed::h8606bc228cd3f504
  10:        0x1074d74f5 - <core::option::Option<T>>::expect::h50f1038f976109fa
  11:        0x10758d064 - <grpc::futures_misc::stream_with_eof_and_error::StreamWithEofAndError<S, F> as futures::stream::Stream>::poll::h1138dbf5b6f1045b
  12:        0x107541cb0 - <Box<S> as futures::stream::Stream>::poll::h99583395fd082fa9
  13:        0x107584384 - <grpc::grpc_frame::GrpcFrameFromHttpFramesStreamRequest as futures::stream::Stream>::poll::h4df6615273bee0c6
  14:        0x10748c820 - <Box<S> as futures::stream::Stream>::poll::hbe60631b02667eb8
  15:        0x107454582 - <futures::stream::and_then::AndThen<S, F, U> as futures::stream::Stream>::poll::h8c0ac950d7328286
  16:        0x10748c860 - <Box<S> as futures::stream::Stream>::poll::hfa402fefaf618f38
  17:        0x1074aa73b - <futures::stream::map::Map<S, F> as futures::stream::Stream>::poll::h6d118c382818bb98
  18:        0x1074a98f3 - <futures::stream::flatten::Flatten<S> as futures::stream::Stream>::poll::h35c231ddc73bb119
  19:        0x10748c7e0 - <Box<S> as futures::stream::Stream>::poll::h768c6509e58ee796
  20:        0x10748c7e0 - <Box<S> as futures::stream::Stream>::poll::h768c6509e58ee796
  21:        0x107453521 - <futures::stream::and_then::AndThen<S, F, U> as futures::stream::Stream>::poll::h6dbbe69f89e6466d
  22:        0x107541d30 - <Box<S> as futures::stream::Stream>::poll::hbe60631b02667eb8
  23:        0x107574b6b - <futures::stream::map::Map<S, F> as futures::stream::Stream>::poll::h1d05ccecc34a947d
  24:        0x10757e15d - <futures::stream::then::Then<S, F, U> as futures::stream::Stream>::poll::h66c0dc49ca0cc677
  25:        0x10758edd5 - <grpc::futures_misc::stream_concat::Concat<S1, S2> as futures::stream::Stream>::poll::hf21c768e0339297a
  26:        0x10758dd01 - <grpc::futures_misc::stream_concat::Concat<S1, S2> as futures::stream::Stream>::poll::h50449c6c191ee722
  27:        0x10759014b - <grpc::futures_misc::stream_concat3::Concat3<S1, S2, S3> as futures::stream::Stream>::poll::heb8ae2092ce37e85
  28:        0x107541cb0 - <Box<S> as futures::stream::Stream>::poll::h99583395fd082fa9
  29:        0x10757bd5a - <futures::stream::for_each::ForEach<S, F> as futures::future::Future>::poll::h4df34ab063428f94
  30:        0x107551317 - <futures::future::chain::Chain<A, B, C>>::poll::hd96363bfc990cb52
  31:        0x1074c320b - <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll::h3841d17e232c31d3
  32:        0x10757a78a - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::he880d6081e7a3dac
  33:        0x107762981 - <Box<F> as futures::future::Future>::poll::hdcde478ef4e8da40
  34:        0x1077797fc - <futures::task_impl::Spawn<F>>::poll_future::{{closure}}::h4593987aece2335f
  35:        0x10777984e - <futures::task_impl::Spawn<T>>::enter::{{closure}}::h96233ed4af8f7a98
  36:        0x107779965 - futures::task_impl::set::{{closure}}::h06a53b1d516ec9fe
  37:        0x107755694 - <std::thread::local::LocalKey<T>>::with::hafd993f9e3933e97
  38:        0x1077685d7 - futures::task_impl::set::hd0fa5ec781774660
  39:        0x107754d4c - <futures::task_impl::Spawn<T>>::enter::haa5fa2f5d202f777
  40:        0x107754b92 - <futures::task_impl::Spawn<F>>::poll_future::h0ccf585be6916a7e
  41:        0x107779446 - tokio_core::reactor::Core::dispatch_task::{{closure}}::h80049e866b2a20c1
  42:        0x107751e0e - <scoped_tls::ScopedKey<T>>::set::h4537c6755a99f191
  43:        0x10777409f - tokio_core::reactor::Core::dispatch_task::he97843a353c5112d
  44:        0x107773308 - tokio_core::reactor::Core::dispatch::h756d49fd8a309683
  45:        0x107772f1b - tokio_core::reactor::Core::poll::hb80b0fedd628e39d
  46:        0x1074c8437 - tokio_core::reactor::Core::run::h7f6313bcbc91d331
  47:        0x10758340b - grpc::server::run_server_event_loop::h76e03d25a0f0cd66
  48:        0x10759dd68 - grpc::server::GrpcServer::new::{{closure}}::hcd41d52da0bae7d4
  49:        0x107577dbb - <std::panic::AssertUnwindSafe<F> as core::ops::FnOnce<()>>::call_once::h5c9d794b2db5435f
  50:        0x1074e3d16 - std::panicking::try::do_call::h39ac82e24b244e58
  51:        0x1078e9d0a - __rust_maybe_catch_panic
  52:        0x1074e39a0 - std::panicking::try::he0cef71b38587938
  53:        0x1074e2c05 - std::panic::catch_unwind::h29077e42a5630281
  54:        0x10759a37d - std::thread::Builder::spawn::{{closure}}::hd0e2341b25be272b
  55:        0x107534850 - <F as alloc::boxed::FnBox<A>>::call_box::h4a3944545481bfc9
  56:        0x1078e7f34 - std::sys::thread::Thread::new::thread_start::h759e10bc4abc7e72
  57:     0x7fffce7f7aba - _pthread_body
  58:     0x7fffce7f7a06 - _pthread_start

tags and releases

Currently the version of grpc-rust on crates.io is 4 months out of date. While that's fine on its own, releases are also not tagged in this repository, which makes it non-trivial to go back in time and figure out which version of the examples should be used with a particular version of the grpc library.

Is it possible for you to tag the commits that correspond to the code released on crates.io? Also, would you mind cutting a new release?

How to register multiple services?

Hello @stepancheg,

I'm curious to know how to register multiple services in one gRPC server.
I found that the current compiler generates servers that serve only one service.

E.g. LongTestsAsyncServer

    // new() creates a gRPC server and a LongTestsServer.
    let _server = LongTestsAsyncServer::new(23432, LongTestsServerImpl{});

LongTestsAsyncServer listens on 23432 and only responds to RPCs that are
defined in long_tests_pb.proto. There seems to be no API that allows adding
other services.

I think the problem is that LongTestsAsyncServer is both a service server and
the gRPC server itself.

Maybe the generated code should follow the Go or Java pattern instead, so that
we can register more services later.

In Go:

    s := grpc.NewServer()
    pb.RegisterLongTestsServer(s, &server{})
    // pb.RegisterOtherServer(s, &otherserver{})
    // and more ...

In Java:

    server = ServerBuilder.forPort(port)
        .addService(new GreeterImpl())
        // .addService(new OtherImpl())
        // and more ...
        .build()
        .start();

Thanks and happy new year 🎉!

server: support multi threads

Hi @stepancheg

In #27 we discussed that supporting multiple threads could improve server performance. I did a little work to verify the feasibility. Concretely:

  1. Create the listener (https://github.com/stepancheg/grpc-rust/blob/master/http2/src/server.rs#L83) the way tokio-proto does (https://github.com/tokio-rs/tokio-proto/blob/e42b3bfbf5f35402feb595363f42fcba4ef14079/src/tcp_server.rs#L152). On Unix we can use SO_REUSEPORT to let multiple threads bind the same address.
  2. Add a worker-count configuration to GrpcServerConf. If it is greater than 1, spawn that many threads, clone the HttpServerConf and service_definition, and move them into each new thread to create its HttpServer. The only argument that needs care is addr, which is bounded by ToSocketAddrs, so we must use <A: ToSocketAddrs + Clone + Send + 'static>.

It works fine. If you think this change is OK, I can send you a PR.
