
beetle's People

Contributors

amtoine, andreivolt, arqu, b5, dependabot[bot], dignifiedquire, faassen, fabricedesre, flub, frando, huitseeker, matheus23, mishmosh, onsagerhe, ppodolsky, ramfox, rklaehn


beetle's Issues

Add timeouts to rpc calls

Currently we don't set any timeouts on RPC calls. Hyper / tonic had a default timeout for all calls (10 seconds?).

I don't think a blanket default is a good idea, but some timeout might be necessary. Otherwise the iroh gateway will sometimes hang waiting for a response.

Timeouts should be set explicitly in the handler functions, on a per-call basis.
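
A minimal sketch of a per-call timeout in a handler, assuming tokio and anyhow; do_rpc_call is a placeholder for whatever the handler actually awaits:

    use std::time::Duration;
    use tokio::time::timeout;

    // Placeholder for the actual RPC future awaited inside a handler.
    async fn do_rpc_call() -> anyhow::Result<String> {
        Ok("response".to_string())
    }

    // The timeout is chosen explicitly here, per call, instead of a transport-wide default.
    async fn handler() -> anyhow::Result<String> {
        let response = timeout(Duration::from_secs(10), do_rpc_call())
            .await
            .map_err(|_| anyhow::anyhow!("rpc call timed out after 10s"))??;
        Ok(response)
    }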

Errors from p2p when receiving local content

With the latest main, when receiving local content you get lots of errors from the p2p service.

I think this is related to the resolver racing retrieval from p2p against the store, and then just dropping the client-side future when the store resolves first.

The server-side rpc endpoint remains functional; you can still do things like status and p2p peers. So I think this might just be a matter of too much logging for events that are expected. E.g. stream error received: stream no longer needed should probably not be logged at error level.
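
A sketch of what downgrading these expected errors could look like; the string-based classification is purely illustrative, and the real change would live in quic-rpc's http2 transport:

    use tracing::{debug, error};

    // Hypothetical classifier: errors that occur because the client dropped the
    // stream on purpose (e.g. the race was won by the local store).
    fn is_expected_teardown(msg: &str) -> bool {
        msg.contains("stream no longer needed") || msg.contains("Flume receiver dropped")
    }

    fn log_transport_error(msg: &str) {
        if is_expected_teardown(msg) {
            debug!("client dropped the stream: {msg}");
        } else {
            error!("Network error: {msg}");
        }
    }

For reference, a local run that triggers the noise: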

❯ cargo run -p iroh-p2p
   Compiling iroh-p2p v0.1.3 (/Users/rklaehn/projects_git/iroh/iroh-p2p)
    Finished dev [unoptimized + debuginfo] target(s) in 6.75s
     Running `target/debug/iroh-p2p`
Starting iroh-p2p, version 0.1.3
/ip4/0.0.0.0/tcp/4444
/ip4/0.0.0.0/udp/4445/quic-v1
  2022-12-21T12:07:25.189962Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
    at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302

  2022-12-21T12:07:25.190057Z ERROR quic_rpc::transport::http2: Flume receiver dropped
    at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308

  [... the same two errors repeat dozens of times within a few milliseconds ...]

  2022-12-21T12:07:25.209372Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: connection reset
    at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302

  2022-12-21T12:07:25.210634Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: connection reset

Telemetry for iroh-embed

iroh-embed avoids doing anything with tracing and metrics for now. It should provide convenient ways to hook these up to the caller's tracing and metrics systems, along with a good description of how to do so.
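
As a sketch of the intended direction (not an existing iroh-embed API): the host application installs its own tracing subscriber before starting the embedded services, and everything iroh-embed emits via tracing flows into it. Only the tracing-subscriber calls below are real; start_iroh_embed is a placeholder, and the env-filter feature of tracing-subscriber is assumed.

    use tracing_subscriber::EnvFilter;

    fn main() -> anyhow::Result<()> {
        // The caller owns the subscriber; iroh-embed should not install one itself.
        tracing_subscriber::fmt()
            .with_env_filter(EnvFilter::from_default_env())
            .init();

        // start_iroh_embed()?; // hypothetical entry point for the embedded services

        Ok(())
    }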

example config file ?

There's nothing in the docs, and there's no GitHub wiki.

My laptop or VPS doesn't seed to other peers by default.

Things that should work, but we're not sure

This is a summary of things @ppodolsky encountered which in theory should work, but which seem to exhibit odd behavior; we should check/investigate them and write tests.

  • iroh <> kubo swarm connect: iroh sees kubo as a peer, but not the other way around.
  • Errors happened between iroh@main(7a11aa8) and kubo 0.17. On kubo 0.18 the errors didn't show, but kubo still doesn't count iroh as a peer.
  • Error msg: [/ip4/127.0.0.1/tcp/4401] failed to negotiate security protocol: message did not have trailing newline
  • Test case: run kubo & iroh and do a manual ipfs swarm connect to iroh. ipfs swarm peers should then show the iroh node.

embedded usage should not print anything to stdout

Currently when using iroh-embed some listening addresses get printed to stdout:

/ip4/0.0.0.0/tcp/0
/ip4/0.0.0.0/udp/0/quic-v1

(plus, currently, a `got rid of channel!` message due to quic-rpc).

Nothing should be printed to stdout.
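
A sketch of the alternative, assuming the listening addresses are reported through tracing so the embedding application decides where (and whether) they appear:

    use tracing::info;

    // Instead of `println!("{addr}")` in the p2p service:
    fn report_listen_addrs(addrs: &[String]) {
        for addr in addrs {
            info!(%addr, "listening");
        }
    }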

Ipfs add broken on main

I was doing some checks on n0-computer/iroh#603 to see if a hamt directory would appear on the gateway. It does not. But it seems that directory adding on main is also broken.

I created a directory and tried to add it on main.

iroh on  main [$?] via 🦀 v1.65.0 
❯ cargo run -p iroh -- add --recursive --offline testdir_many_files/nohamt1 
    Finished dev [unoptimized + debuginfo] target(s) in 0.21s
     Running `target/debug/iroh add --recursive --offline testdir_many_files/nohamt1`
[1/2] Calculating size...
[2/2] Importing content 0 B...
/ipfs/bafybeieopdpzqhyyxeuuipsdxnxlgo5q5ontpwe2j7xdojql3zk6du3jzy

Requesting this from the gateway does not give a directory as expected:
[screenshot of the gateway response omitted]

Requesting this via iroh get also gives an error:

iroh on  main [$?] via 🦀 v1.65.0 
❯ cargo run -p iroh -- get /ipfs/bafybeieopdpzqhyyxeuuipsdxnxlgo5q5ontpwe2j7xdojql3zk6du3jzy
    Finished dev [unoptimized + debuginfo] target(s) in 0.21s
     Running `target/debug/iroh get /ipfs/bafybeieopdpzqhyyxeuuipsdxnxlgo5q5ontpwe2j7xdojql3zk6du3jzy`
Error: path contains non-relative component

iroh-gateway.log hits 140 GB and fills up my disk space

My iroh instance (downloaded from the site) ran all night, and I ended up with 140 GB of disk space taken up by iroh-gateway.log. The log kept writing the following in a loop:

2023-01-31T07:34:44.209230Z ERROR iroh_gateway::rpc: gateway rpc accept error: AcceptBiError(A(RemoteDropped))
at iroh-gateway/src/rpc.rs:63

I stopped iroh and deleted the log. I wanted to surface this in case it is helpful.

feat: metadata-only / outbound in-mem store

Rather than adding data to the store when we iroh add, we can remove duplication by storing only the graph metadata, the filestore or database location, and the block offsets in our store.

This helps the DeltaChat iroh-share use case because the DeltaChat data already exists on the local device, so we don't need to store it again internally. This removes duplication and speeds up add time (since we don't have to write the block data to disk).

This implementation can be ephemeral, since in the iroh-share use case the graph/encoded data only needs to exist for as long as it takes to transfer the data.
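
A hypothetical shape for such a metadata-only entry (names are illustrative, not the actual iroh store schema):

    use std::path::PathBuf;

    /// Where the block bytes already live on the local device.
    struct BlockLocation {
        /// File that contains the data (e.g. the DeltaChat blob on disk).
        path: PathBuf,
        /// Byte offset of this block inside that file.
        offset: u64,
        /// Length of the block in bytes.
        len: u64,
    }

    /// One entry of the outbound, metadata-only store.
    struct OutboundEntry {
        /// Encoded CID of the block.
        cid: Vec<u8>,
        /// Where to read the bytes when a peer requests them.
        location: BlockLocation,
        /// CIDs of child links, so the graph can be traversed without the data.
        links: Vec<Vec<u8>>,
    }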

RPC client side should handle server shutdown gracefully

On termination the following traceback has been reported. I'm speculating this is because the rpc client side does not handle a shutting-down server very well. The server now shuts down when dropped, so we need to handle this interaction better in iroh.

2022-12-21T09:01:08.965023Z  WARN  tokio-runtime-workers-6 iroh_resolver::resolver: failed to stop session ContextId(4): Open(A(Hyper(hyper::Error(Canceled, "request has been canceled"))))

Stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.67/src/backtrace/libunwind.rs:93:5
      backtrace::backtrace::trace_unsynchronized
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.67/src/backtrace/mod.rs:66:5
   1: backtrace::backtrace::trace
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.67/src/backtrace/mod.rs:53:14
   2: anyhow::backtrace::capture::Backtrace::create
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.68/src/backtrace.rs:216:13
   3: anyhow::backtrace::capture::Backtrace::capture
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.68/src/backtrace.rs:204:17
   4: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.68/src/error.rs:547:25
   5: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/result.rs:2105:27
   6: iroh_rpc_client::network::P2pClient::stop_session_bitswap::{{closure}}::{{closure}}
             at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-rpc-client/src/network.rs:72:9
   7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
   8: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
             at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.37/src/instrument.rs:272:9
   9: iroh_rpc_client::network::P2pClient::stop_session_bitswap::{{closure}}
             at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-rpc-client/src/network.rs:70:5
  10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
  11: <iroh_unixfs::content_loader::FullLoader as iroh_unixfs::content_loader::ContentLoader>::stop_session::{{closure}}
             at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-unixfs/src/content_loader.rs:232:13
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
  13: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/future.rs:124:9
  14: iroh_resolver::resolver::Resolver<T>::with_dns_resolver::{{closure}}::{{closure}}
             at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-resolver/src/resolver.rs:612:67
  15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
  16-66: tokio runtime task polling, std panic/catch_unwind and thread-spawn machinery (frames elided; no iroh code)
  67: __pthread_start
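
A sketch of the client-side behaviour we probably want instead, where a failed stop_session during shutdown is logged quietly rather than surfaced as a WARN with a backtrace; stop_session here is a stand-in for the real P2pClient call:

    use tracing::debug;

    // Stand-in for the real RPC call; during shutdown the server may already be gone.
    async fn stop_session(_session: u64) -> anyhow::Result<()> {
        Err(anyhow::anyhow!("request has been canceled"))
    }

    async fn stop_session_best_effort(session: u64) {
        if let Err(err) = stop_session(session).await {
            // Expected while shutting down: don't warn, don't capture a backtrace.
            debug!("ignoring failure to stop session {session}: {err}");
        }
    }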

Rework put_many to use a stream

Now that we have quic-rpc, we can finally do what I wanted to do before the ipfs camp: turn put_many into a client streaming request.

That gets rid of an issue in the current put_many where it produces chunks that exceed the frame size, and it is the right thing to do anyway.
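
A generic sketch of the client-streaming shape (not the actual quic-rpc API): blocks are sent one at a time, so no single message has to fit a whole batch into one frame. send_block is a placeholder for writing one item onto the streaming request.

    use futures::stream::{self, StreamExt};

    // Placeholder for writing one item onto the client-streaming request.
    async fn send_block(_block: Vec<u8>) -> anyhow::Result<()> {
        Ok(())
    }

    async fn put_many(blocks: Vec<Vec<u8>>) -> anyhow::Result<()> {
        let mut items = stream::iter(blocks);
        while let Some(block) = items.next().await {
            // Each block travels as its own message instead of one oversized chunk.
            send_block(block).await?;
        }
        Ok(())
    }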

Graceful termination

Currently tokio tasks are brutally cancelled. We'd prefer to gracefully terminate.

See also tools linked in #126 which may or may not be applicable.
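
A sketch of one common pattern, assuming the tokio-util crate: hand each task a CancellationToken so it can finish its current step and clean up, instead of being dropped mid-await.

    use std::time::Duration;
    use tokio_util::sync::CancellationToken;

    async fn do_some_work() {
        tokio::time::sleep(Duration::from_millis(100)).await;
    }

    async fn run_worker(token: CancellationToken) {
        loop {
            tokio::select! {
                _ = token.cancelled() => {
                    // Flush buffers, close sessions, then exit cleanly.
                    break;
                }
                _ = do_some_work() => {}
            }
        }
    }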

service "version" unification

Right now the version and watch RPC calls use the version information pulled in from env!("CARGO_PKG_VERSION").
Our metrics also track the "build version" using git_version.
What do we want to show to folks when they ask for a "version"? This is primarily seen in the iroh status -w CLI call.
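
One possible unification, shown as a sketch assuming the git-version crate: keep both values around and present a single combined string to users.

    use git_version::git_version;

    /// What metrics currently report (git describe output at build time).
    pub const GIT_VERSION: &str = git_version!();
    /// What the version/watch RPC calls currently report.
    pub const CRATE_VERSION: &str = env!("CARGO_PKG_VERSION");

    /// Candidate single user-facing version string, e.g. for `iroh status -w`.
    pub fn display_version() -> String {
        format!("{CRATE_VERSION} ({GIT_VERSION})")
    }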

Design: tags for naming and garbage collection

We have design work that needs to be done around how we plan to do garbage collection. This also includes adding size limits for the store, the ability to delete content, and "named" CIDs.

refactor: `iroh-share` uses `iroh-embed`

iroh-share currently imports the p2p and store crates directly; it should instead be using iroh-embed to spin itself up.

We also need to add gossipsub to the API for use in iroh-share:

  • #16
    - way to get NetworkEvents from p2p node in P2pService
    - add gossipsub_subscribe, gossipsub_publish, etc to iroh-api
  • refactor iroh-share to use iroh-embed

Make iroh-embed not touch the filesystem

While for some use cases it is fine and desirable to use the filesystem, it should also be possible to use iroh-embed without touching the filesystem at all. Places that currently always use it:

  • P2pService stores the cryptographic identity in a file on disk (see the sketch after this list for an in-memory alternative).
  • RocksStoreService stores the entire store on disk in RocksDB.
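
For the identity, a sketch of the in-memory alternative using libp2p's keypair API (the surrounding builder plumbing is hypothetical):

    use libp2p::identity::Keypair;

    /// Generate an ephemeral identity instead of loading/persisting one on disk.
    fn ephemeral_identity() -> Keypair {
        Keypair::generate_ed25519()
    }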

feat(rpc, api): add `network_events`, and `Gossipsub` IPC methods

After chats with @dignifiedquire and @b5, and after considering API notes from @flub on the WIP PR concerning these refactors, we've decided to adjust this goal.

We are going to limit the exposure of some of our lower level APIs and have settled on these changes:

  1. We need a public pub/sub interface (a rough trait sketch follows this list):
  • a subscribe method that returns a stream of GossipsubEvents for that topic
  • a publish method that you can use to send messages on a topic
  • an unsubscribe method that you can use to unsubscribe from a topic
  • an add_peer method that explicitly adds the given peer to your pub/sub network
  2. We need to make adjustments to certain RPC methods to get this to work properly:
  • gossipsub_subscribe should return a stream of GossipsubEvents
  3. We need a way to configure the get process to tell it how you want your data fetched. For now, I'm scoping this down to get_from_peers, which lets you use a list of PeerIds as your providers rather than fetching providers from the DHT or from something like cid.contact:
  • get_from_providers
  • LoaderFromProviders ContextLoader
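
A rough sketch of what the public pub/sub interface could look like; the type names stand in for whatever iroh-api ends up exposing, and the async-trait, futures, and anyhow crates are assumed:

    use futures::stream::BoxStream;

    // Placeholder types; the real ones would come from libp2p / iroh-api.
    pub struct TopicHash(pub String);
    pub struct PeerId(pub String);
    pub struct GossipsubEvent;

    #[async_trait::async_trait]
    pub trait PubSub {
        /// Subscribe and receive a stream of events for the topic.
        async fn subscribe(&self, topic: TopicHash) -> anyhow::Result<BoxStream<'static, GossipsubEvent>>;
        /// Publish a message on a topic.
        async fn publish(&self, topic: TopicHash, data: Vec<u8>) -> anyhow::Result<()>;
        /// Stop receiving events for a topic.
        async fn unsubscribe(&self, topic: TopicHash) -> anyhow::Result<()>;
        /// Explicitly add a peer to the pub/sub network.
        async fn add_peer(&self, peer: PeerId) -> anyhow::Result<()>;
    }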

Previous issue contents, for posterity:

feat(rpc, api): add network_events, and Gossipsub IPC methods

  • need a network_events method that returns a stream of NetworkEvents emitted by the p2p node
  • add p2p/rpc test
  • expose gossipsub_subscribe to the iroh-api
  • expose gossipsub_publish to iroh-api

Exposing/adding these methods to IPC lets us use iroh-embed inside iroh-share (iroh-share uses Gossipsub and expects to be able to inspect Gossipsub network events). We're ignoring the other Gossipsub RPC methods for now and only implementing what is needed for this use case.

Stream returned from resolve_recursive_with_paths hangs for large non-hamt directory

When creating a large directory and then traversing it again, the roundtrip test hangs as soon as the directory has more than 2048 entries.

This seems to be related to some bounded channel in the resolver:

        let (session_closer_s, session_closer_r) = async_channel::bounded(2048);

Setting this to a larger value makes the test pass. Not sure what is happening; does something accumulate in this channel without being pulled out, so that things hang once it is full?
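
A self-contained illustration of the suspected failure mode with async_channel: once a bounded channel holds its capacity and nothing drains it, the next send awaits forever.

    #[tokio::main]
    async fn main() {
        // Keep the receiver alive but never receive from it.
        let (tx, _rx) = async_channel::bounded::<u32>(2);
        tx.send(1).await.unwrap();
        tx.send(2).await.unwrap();
        println!("channel is full; a third send would now hang forever");
        // tx.send(3).await.unwrap(); // uncommenting this makes the program hang
    }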

feat: in memory store implementation

This is the first "pass" at adapting our code to better fit the DeltaChat/iroh-share use case.

Once we have an in-mem store implementation we can remove the RocksDB dependency. This dependency (and how long it takes to build) was a major blocker in our goal to get iroh-share into DeltaChat.
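
A minimal sketch of what an in-memory block store could look like (a hypothetical shape, not the actual iroh Store trait):

    use std::collections::HashMap;
    use std::sync::{Arc, RwLock};

    /// Blocks keyed by their encoded CID, kept entirely in memory.
    #[derive(Default, Clone)]
    struct MemStore {
        blocks: Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>,
    }

    impl MemStore {
        fn put(&self, cid: Vec<u8>, data: Vec<u8>) {
            self.blocks.write().unwrap().insert(cid, data);
        }

        fn get(&self, cid: &[u8]) -> Option<Vec<u8>> {
            self.blocks.read().unwrap().get(cid).cloned()
        }
    }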

This feat should be followed up by #18

Don't race local against network immediately

I think it would be better to give local resolution a chance before firing off an expensive network operation.

Maybe add a small delay before the network operation so local resolution has a chance to complete, or only start the network resolution once local resolution has failed.
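
A sketch of the first option, assuming tokio: race the two futures but delay the network path by a short grace period, so a local hit wins without ever touching the network. fetch_local and fetch_network are placeholders for the resolver's futures.

    use std::time::Duration;
    use tokio::time::sleep;

    async fn fetch_local() -> anyhow::Result<Vec<u8>> {
        Ok(vec![])
    }

    async fn fetch_network() -> anyhow::Result<Vec<u8>> {
        Ok(vec![])
    }

    async fn fetch_with_local_preference() -> anyhow::Result<Vec<u8>> {
        tokio::select! {
            // Local store starts immediately; if it errors, this branch is disabled.
            Ok(data) = fetch_local() => Ok(data),
            // Network path only starts real work after a small head start for local.
            data = async {
                sleep(Duration::from_millis(50)).await;
                fetch_network().await
            } => data,
        }
    }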

Allow connecting to iroh-one from iroh-cli

With quic-rpc we now have the ability to listen on a mem and an http2 socket simultaneously. So it should be relatively easy to make iroh-cli talk to iroh-one, by having iroh-one open http2 sockets in addition to the mem transports.

But I would prefer not to hold up merging quic-rpc for this.

Improve logging of hyper errors

We should look at hyper errors in detail to check whether they are noteworthy or just part of normal operation, and then log them at the appropriate level.

Same for the RpcServerErrors returned from accept_one.

Refine status monitoring

It used to be based on tonic-health, which of course no longer makes sense. We need to do something basic ourselves.

Currently it is just hammering the version endpoint, which needs some refinement.

Basically, the tonic health approach allows a service to announce on its own that it is currently non-functional; that is the biggest difference.

https://github.com/grpc/grpc/blob/master/doc/health-checking.md
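
A minimal sketch of the missing piece, modelled loosely on the gRPC health-checking protocol linked above: each service publishes its own status on a tokio watch channel, and the status endpoint reads or streams it instead of hammering the version endpoint.

    use tokio::sync::watch;

    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    pub enum ServingStatus {
        Serving,
        NotServing,
    }

    pub fn health_channel() -> (watch::Sender<ServingStatus>, watch::Receiver<ServingStatus>) {
        watch::channel(ServingStatus::NotServing)
    }

    // A service announces its own state, e.g. after startup or when a dependency fails:
    //     let (tx, rx) = health_channel();
    //     tx.send(ServingStatus::Serving).ok();
    // The status RPC keeps `rx` and can await changes with `rx.changed().await`.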

Allow for custom behaviours

It would be valuable to be able to add custom behaviours to the embedded mode of iroh. We (Ceramic Network) are exploring embedding iroh into a Rust implementation of our Ceramic node. The ability to write our own protocols (aka behaviours) would let us leverage iroh's features while customizing it for our own network's needs.

Design

By design, behaviours are meant to be composable, so it makes the most sense to extend iroh's network logic via behaviours. The goals of such a change would be:

  • have the ability to compose custom behaviours with the existing NodeBehaviour,
  • have the ability to handle behaviour events in hosting application.

The second point is a bit vague and I will likely need to explore it more before understanding exactly what hosting applications would need from their custom behaviours.

Implementation Ideas

I took a look at the code and I have a rough plan for how we could accomplish this feature.

  • Make the existing behaviour generic over an additional custom behaviour, i.e. NodeBehaviour<B: NetworkBehaviour> (rough sketch below)
  • Add a custom_behaviour field to the existing behaviour via a Toggle behaviour, i.e. custom_behaviour: Toggle<B>
  • Plumb the custom behaviour out through the IrohBuilder API so consumers can specify their own. This should be possible in a backward-compatible way, so that only new code wishing to use the feature needs to adopt the new APIs.
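
A rough sketch of that shape; whether the NetworkBehaviour derive accepts this exact generic form depends on the libp2p version, and ping merely stands in for iroh's existing sub-behaviours:

    use libp2p::swarm::behaviour::toggle::Toggle;
    use libp2p::swarm::NetworkBehaviour;

    #[derive(NetworkBehaviour)]
    struct NodeBehaviour<B>
    where
        B: NetworkBehaviour,
    {
        // Existing sub-behaviours stay as they are today.
        ping: libp2p::ping::Behaviour,
        // Optional, caller-provided behaviour; Toggle keeps it disabled when unset.
        custom: Toggle<B>,
    }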

Those are my ideas; I am willing to submit a PR for this work (in fact I have a hacky version working already), but I wanted to discuss it first before dropping a PR that changes a lot of API surface area.

Remove load balancers

I am reasonably sure that we don't need the load-balanced clients anymore, but we should test the performance and remove them after merging quic-rpc.

We could also move the load balancers into quic-rpc as a building block; we might need them again.

Actyx compatibility

Actyx is now a well-funded research org for resilient software.

It would be nice if they would use iroh-embed instead of ipfs-embed.

Here is a list of things that we would need to support their use case:

  • mdns
  • advanced peer manager
  • private swarms
  • raw block read and write
  • purely local store API, preferably synchronous
  • gossipsub access
  • broadcast to local peers

Not everything has to be exactly like it is in ipfs-embed. @rkuhn will be able to adapt things a bit and maybe even help a little with pieces like the peer manager. But at least there needs to be a functional equivalent for each of these features.

Build each crate separately in CI

Sometimes subtle dependency issues are missed for our various crates: since cargo features are additive, if one crate enables a dependency with a certain feature, another crate can accidentally rely on that feature even though it forgot to enable it. This way we sometimes end up with individual crates that are not buildable on their own.

We should add a cargo check -p xxx for each individual crate to CI so this is caught early.
