
Comments (16)

BurntSushi commented on May 30, 2024

Good points. I'll explain my rationale.

The first design decision I made was that I wanted to expose exactly one send method. I didn't want to get into a situation where you had to pick between two variations of send where one possibly deadlocked and the other returned an error. I think having two choices adds too much complexity to what should be a very simple API.

The second design decision I made was, "sending a value that can never be received is either a bug or an intentional leak." This may be a bad decision, but it is one that has a large body of evidence suggesting it may not be a horrible idea. (e.g., Go.) The idea here is to encourage users of chan to write code that doesn't enter that state. I was motivated to do things this way because so many uses of the std::sync::mpsc channel are tx.send(...).unwrap(). In other words, returning a Result makes the common case verbose.
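For reference, this is the std::sync::mpsc pattern being described: send returns a Result, and in the common case the receiver is alive, so call sites end up littered with unwrap(). (A minimal sketch using the standard library, not chan.)

```rust
use std::sync::mpsc;
use std::thread;

// The common std::sync::mpsc pattern: send returns a Result, and since
// the receiver is almost always alive, call sites just unwrap() it.
fn send_and_receive() -> i32 {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        tx.send(42).unwrap(); // receiver is alive, so this is Ok
    });
    let v = rx.recv().unwrap();
    handle.join().unwrap();
    v
}

fn main() {
    assert_eq!(send_and_receive(), 42);
}
```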

OK, that was my thinking. It may be bad thinking, but there it is. Now I'll actually respond. :-)

Working on a memory constrained system (~32MB total RAM), being able to leak threads is bad, as even their stack space alone is super expensive. If a server does contain a bug and a receiving thread panics, leaving a sender in limbo is a very expensive cost for something that would otherwise be recoverable.

This is interesting. Is your server built to be resistant against bugs in the code? My initial instinct here is, well, if it's a bug then you should fix that instead of ignoring it and pressing on.

I use rendezvous channels in pipe to back a Read/Write interface that sends Vec buffers across threads. Imagine a case where a Read consumer doesn't finish reading to EOF (an error is encountered when processing the data), and drops the Read/Receiver end. The writer would normally encounter a broken pipe error, but deadlocking would occur with chan, and leak resources from the writing thread:
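The scenario can be sketched with std's rendezvous channel (sync_channel(0)) standing in for chan: once the Receiver is dropped, a Result-returning send reports the broken pipe instead of blocking forever, which is exactly the behaviour the writer thread needs here.

```rust
use std::sync::mpsc;

// Sketch of the dropped-reader scenario using std's rendezvous channel.
// With chan's current semantics this send would deadlock; with a
// Result-returning send the writer detects the "broken pipe" instead.
fn send_after_reader_drop() -> bool {
    let (tx, rx) = mpsc::sync_channel::<Vec<u8>>(0);
    drop(rx); // reader hit an error mid-stream and dropped its end
    tx.send(vec![1, 2, 3]).is_err()
}

fn main() {
    assert!(send_after_reader_drop());
}
```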

I was actually thinking about this the other day, and I think this is a pretty compelling use case for figuring out how to do error handling in a concurrent program. I think you're suggesting the path of least resistance: if enough errors occur such that all receivers hang up, then the senders should return an error too, which gives the senders an opportunity to wind down gracefully.

Another way to approach this same problem is to add an error channel that inverts control. When your readers encounter an error, then they send the error back to the sender. So your senders would have to look something like this:

loop {
  chan_select! {
    send.send(...) => {},
    recv_error.recv() -> err => { /* handle error */ },
  }
}

But this suffers from a couple problems that I can't think of an immediate solution to:

  • How does a sender know when to quit? If there are multiple receivers, then multiple errors could be reported. A single error might not necessarily indicate that the senders should all shut down.
  • Each error will be received by exactly one sender.
  • The receiver has to explicitly send the error back to the sender, which is much less convenient than using try! and letting the receiving end of the channel get dropped.
  • The caller must wire the extra channel through their code.

I guess those are pretty damning reasons not to take that path. Hmm.

In sum, my concerns:

  1. Having two distinct send methods is distracting and too complex IMO.
  2. Having a send return a Result means most uses of send will call unwrap.
  3. Having all senders simply hangup when all receivers quit is a very binary message. What if the senders need the error information that caused the receivers to die?

I think my current position is to make send return a Result, but I find the aforementioned concerns very troubling.

from chan.

BurntSushi commented on May 30, 2024

With respect to performance: no, almost no decisions were made based on performance. I'm glad you don't see it as too much of an issue. I would obviously like to optimize the channels (and select), but I think channels will only ever be good for coarse grained concurrency. (I'm not sure that lock free channels can provide the same semantics provided by the channels in this library.)

arcnmx commented on May 30, 2024

Thanks for the detailed design rationale!

This is interesting. Is your server built to be resistant against bugs in the code? My initial instinct here is, well, if it's a bug then you should fix that instead of ignoring it and pressing on.

It's somewhat of a weak argument, but being able to detect these bugs (and handle them gracefully or panic) seems better than deadlocking to me. Deadlocking leaks resources, whereas a panic or the ability to detect the bug via Result at least lets the thread clean up after itself. Perhaps your receiving thread panics due to a bug in a dependency - the only way to be resistant to these bugs would be to have the ability to detect the unexpected receiver drop on the sender's side (and vice-versa).

I was motivated to do things this way because so many uses of the std::sync::mpsc channel are tx.send(...).unwrap(). In other words, it makes the common case verbose.

Makes sense, though I personally disagree that it's a burden. Rust tends to take a very manual approach to error handling - if it's not a bug that the compiler can statically determine won't happen, providing a way to detect and handle it seems fair. A resilient program should use try! or panic!. I suppose I'd make my case like this:

  • A deadlock makes the least sense of the options to me. Fail fast if the library can detect it, with a potential caveat / leniency for checks that would affect performance.
  • A panic would at least point out the cause of the bug very obviously. A deadlock is more subtle and confusing to a user trying to debug their program.
  • Rust has a history of providing interfaces that allow the user to detect these kinds of contract violation bugs. For example the slice interface, where we have...
    • Index syntax that panics on invalid use.
    • get() that returns None on invalid use.
    • unsafe get_unchecked() that allows you to pinky swear promise you know what you're doing.
  • Given the above, I actually kind of like the idea of having, say, panicking send and then send_checked. However, a single send with Result and unwrap accomplishes the same thing, it's just more verbose on the calling side for your common case.
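The slice analogy above maps directly onto std's actual API:

```rust
fn main() {
    let xs = [10, 20, 30];

    // Index syntax: panics on an out-of-bounds index.
    assert_eq!(xs[1], 20);

    // get(): returns None on invalid use instead of panicking.
    assert_eq!(xs.get(1), Some(&20));
    assert_eq!(xs.get(99), None);

    // get_unchecked(): unsafe, no bounds check; the caller promises
    // the index is valid.
    let y = unsafe { *xs.get_unchecked(2) };
    assert_eq!(y, 30);
}
```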

Another way to approach this same problem is to add an error channel that inverts control. When your readers encounter an error, then they send the error back to the sender. So your senders would have to look something like this:

This is also fair, but the complexity involved with that solution suggests that it may be worth supporting in some fashion rather than expecting users to implement their own. It does cover my use case, which is designed around spsc (unix pipe behaviour).

Having all senders simply hangup when all receivers quit is a very binary message. What if the senders need the error information that caused the receivers to die?

It is, but this message is already used the other way around - a channel is considered closed when all senders drop. As a mpmc library, I don't see any reason not to support channel closure the other way around too.
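The existing direction of that "binary message" is easy to demonstrate with std::sync::mpsc (used here as a stand-in for chan): once every Sender is dropped, recv reports the channel as closed via Err. Supporting closure the other way around would simply mirror this for send.

```rust
use std::sync::mpsc;

// When all Senders are dropped, recv returns Err(RecvError):
// the channel is considered closed from the receiving side.
fn recv_after_sender_drop() -> bool {
    let (tx, rx) = mpsc::channel::<i32>();
    drop(tx); // all senders gone
    rx.recv().is_err()
}

fn main() {
    assert!(recv_after_sender_drop());
}
```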

BurntSushi commented on May 30, 2024

Frankly, those are some pretty compelling points. I was hoping to find a way to justify the current function signature, but that seems like folly.

I'll branch and release a 0.2.x with a new send that returns a Result, hopefully tomorrow. (The other big win with exposing this functionality is that functions like tick_ms can clean up after themselves if the receivers hang up. Similarly for the chan_signal crate!)

arcnmx commented on May 30, 2024

Heh, sorry if it seems like I'm forcing my opinions on you... I'm just a big fan of giving the library user some control over situations like this (and am especially concerned due to working on constrained resource systems). I look forward to trying that branch out!

BurntSushi commented on May 30, 2024

@arcnmx Sorry I haven't got to this yet. I started thinking about an implementation, and I got a little stuck with chan_select!. How should it handle a channel send? Right now, a send doesn't return a value. If it changes to return Result<(), TrySendError>, then how does that interact with chan_select!?

We could do something similar to recv and allow users to write channel.send(val) or channel.send(val) -> return_val => { ... } where return_val is the Result. But if we do that, then users might be encouraged to just write channel.send(val) and completely ignore the Result. If the Result is ignored, then bad things happen---you might drop into an infinite loop if your select is inside a loop.

The other problem here is that the things we can do inside of chan_select! are pretty limited because of how the macro syntax stuff works.

arcnmx commented on May 30, 2024

No worries!

If the Result is ignored, then bad things happen---you might drop into an infinite loop if your select is inside a loop.

To be fair, is this any worse than the current behaviour of deadlocking? You'd presumably also get a lint warning about the unused result, no different than calling send() manually without handling it. The short channel.send(val) form could simply be disallowed, or it could unwrap/panic. Maybe the alternatives should be channel.send(val).unwrap() vs channel.send(val) -> result instead? I'm not too familiar with the syntax, or what the macro is capable of...

EDIT: I tackled that from the wrong angle, didn't I? How does it currently work?

arcnmx commented on May 30, 2024

Breaking it down...

Current version:

  • channel.send(val) => {} if the receiver is not ready: does nothing, waits, hits some other branch
  • channel.send(val) => {} if the receiver is ready: runs body
  • channel.send(val) => {} if the receiver is dropped: never runs body

Result version:

  • channel.send(val) => {} if the receiver isn't ready: does nothing, as above
  • channel.send(val) => {} if the receiver is ready: runs body
  • channel.send(val) => {} if the receiver is dropped: never runs body
  • channel.send(val) -> result => {} if the receiver is dropped: runs body with Err(..)
  • channel.send(val) -> result => {} if the receiver isn't ready: does nothing
  • channel.send(val) -> result => {} if the receiver is ready: runs body with Ok(())

So... I'm actually not completely against just silently allowing the send to be ignored, as the send simply may not happen. I guess alternatives are:

  1. Panic instead.
  2. Panic as above, but adjust the syntax to say unwrap or otherwise indicate that you're unwrapping.
  3. Just require the channel.send(val) -> result syntax. _ could be used to truly ignore the result? This seems strange, though, since a loop would just call your body repeatedly.

BurntSushi commented on May 30, 2024

One tricky part here is the channel.send(val) => {}, if the receiver is dropped: never runs body case. While accurate for both the current and result versions, it kind of misses the key difference: in the current version, the send will block forever while in the result version, the send will be triggered immediately at each invocation of chan_select!. To your point, I guess neither is strictly better, although I feel like deadlock is probably easier to observe.

I wonder if we could be more creative. If select determines that all receivers for a sender have hung up, then it could unsubscribe from that channel after passing the initial error. This will require some gymnastics in the current select impl, but I think it's doable.

With all that said, it is going to be difficult to get the syntax right in the macro. There's really no obvious "do this if it's a receive" and "do this if it's a send." It's all pretty implicit. Blech.

arcnmx commented on May 30, 2024

Hm, isn't the select macro magical, treating both the same way (as in, it doesn't actually send unless the receiver is ready / it knows the send wouldn't block)?

EDIT: Or are you only referring to where the explicit result handle case where you'd get Err(..) repeatedly? That is awkward, yeah...

BurntSushi commented on May 30, 2024

The implementation, which is Select in the code, can do whatever it pleases. I'm pretty confident any kind of semantics can be encoded there.

The issue is the syntax in the macro. At the macro level, there's really no distinction between a send and a receive. I don't have much more to say than that because the macro is effectively write-once code. I'll have to dig into it, play with it and re-discover its limitations. :-)

arcnmx commented on May 30, 2024

Aha, gotcha! It's still important to determine the semantics we want, though, since that drives what's required from the macro syntax.

insanitybit commented on May 30, 2024

Finally getting around to writing this. I think that panicking is the right way to go, or exposing a try_send(). The reason to deadlock is that it exposes a bug, but I feel that panicking or exposing the Result does the same thing. In a deadlock I have no way, in any part of my program, to handle the problem. I can catch panics, restart the service, or handle it explicitly in the case of a Result.
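The contrast being drawn can be sketched with std::sync::mpsc (as a stand-in for chan): a Result can be handled explicitly, and a panic can be caught at a supervision boundary, while a deadlock offers no such hook anywhere in the program.

```rust
use std::panic;
use std::sync::mpsc;

fn main() {
    // Handle the failure explicitly via the Result:
    let (tx, rx) = mpsc::channel::<i32>();
    drop(rx); // receiver is gone
    assert!(tx.send(1).is_err());

    // Or let the unwrap panic and catch it at a supervision boundary:
    let outcome = panic::catch_unwind(|| {
        let (tx, rx) = mpsc::channel::<i32>();
        drop(rx);
        tx.send(2).unwrap(); // panics with SendError
    });
    assert!(outcome.is_err());
}
```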

Even in the case of Go my understanding (from our conversation on IRC) is that the runtime will try to panic on deadlock, so really it seems that the desired behavior is already panic.

I think the most dangerous thing a program can do is deadlock/hang - it eats up a thread's resources, a single bug can lead to a completely exhausted resource pool, and the entire service can 'die' without crashing, leading to a very confusing time when everything's on fire but all of your services are up and running.

I personally think that the solution is to panic in send, or to panic in send and also expose a Result-returning variant. Panicking seems like the natural thing to do because, as you said, if there's no one to receive a message there is a bug. I think that by choosing to panic by default the macro issue is also moot, correct?

BurntSushi commented on May 30, 2024

I think that by choosing to panic by default the macro issue is also moot, correct?

I guess so. But what happens if you want to use chan_select! without panicking? Do we just not offer that? Maybe that's OK.

insanitybit commented on May 30, 2024

Yeah, that's pretty much what I was thinking.

BurntSushi commented on May 30, 2024

This crate is now deprecated. See: https://users.rust-lang.org/t/ann-chan-is-deprecated-use-crossbeam-channel-instead/19251
