n0-computer / beetle Goto Github PK
License: Other
Currently we don't do that. Hyper / tonic had a default timeout for all calls (10 seconds?).
I don't think that is a good idea, but some timeout might be necessary; otherwise iroh gw will hang waiting for a response sometimes.
Timeouts should be done explicitly in the handler functions, on a per-call basis.
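A minimal sketch of what a per-call timeout could look like, assuming tokio and anyhow are available; with_call_timeout and the 10-second figure are illustrative, not existing API:

use std::time::Duration;
use anyhow::Context;
use tokio::time::timeout;

// Hypothetical helper: wrap exactly one RPC call in an explicit timeout,
// chosen at the call site instead of transport-wide.
async fn with_call_timeout<T>(
    call: impl std::future::Future<Output = anyhow::Result<T>>,
    limit: Duration,
) -> anyhow::Result<T> {
    // timeout() resolves to Err(Elapsed) if `call` does not finish in time;
    // context() turns that into an error instead of a hung gateway request.
    timeout(limit, call).await.context("rpc call timed out")?
}

A handler would then write something like with_call_timeout(client.some_call(), Duration::from_secs(10)), where some_call is a placeholder for whatever client future it awaits.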
They might not be faster than TCP / HTTP2 / hyper, but they win in terms of latency.
With the latest main, when receiving local content you get lots of errors from the p2p service.
I think this is related to the resolver racing retrieval from p2p and from the store, and then just dropping the client side future when the store resolves first.
The server side rpc endpoint remains functional. You can still do things like status and p2p peers. So I think this might just be a matter of too much logging when things happen that are expected. E.g. "stream error received: stream no longer needed" should probably not be logged at error level.
❯ cargo run -p iroh-p2p
Compiling iroh-p2p v0.1.3 (/Users/rklaehn/projects_git/iroh/iroh-p2p)
Finished dev [unoptimized + debuginfo] target(s) in 6.75s
Running `target/debug/iroh-p2p`
Starting iroh-p2p, version 0.1.3
/ip4/0.0.0.0/tcp/4444
/ip4/0.0.0.0/udp/4445/quic-v1
2022-12-21T12:07:25.189962Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190057Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190079Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190088Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190009Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190245Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190321Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190341Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190379Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190390Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190451Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190467Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190923Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190939Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.190965Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.190975Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.191110Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191123Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191139Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191159Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191138Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191245Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191248Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: stream error received: stream no longer needed
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.191265Z ERROR quic_rpc::transport::http2: Flume receiver dropped
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:308
2022-12-21T12:07:25.209372Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: connection reset
at /Users/rklaehn/.cargo/registry/src/github.com-1ecc6299db9ec823/quic-rpc-0.3.0/src/transport/http2.rs:302
2022-12-21T12:07:25.210634Z ERROR quic_rpc::transport::http2: Network error: error reading a body from connection: connection reset
iroh-embed avoids doing anything with tracing and metrics for now. It should provide convenient ways to hook those up with the caller's tracing and metrics systems, and include a good description of how to do this.
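For the tracing side, a minimal sketch of the intended division of labor, assuming the embedder uses tracing-subscriber (with its env-filter feature); none of this is existing iroh-embed API:

use tracing_subscriber::{fmt, EnvFilter};

fn main() {
    // The embedding application installs its own global subscriber;
    // iroh-embed should only emit `tracing` events and never install one.
    fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    // ... construct and run the embedded iroh services here ...
}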
there's nothing in the docs, and there's no github wiki
My laptop or VPS doesn't seed to other peers by default.
This is a summary of things @ppodolsky encountered which in theory should work but seem to exhibit odd behavior; we should check/investigate them and write tests.
[/ip4/127.0.0.1/tcp/4401] failed to negotiate security protocol: message did not have trailing newline
Start kubo & iroh and do a manual ipfs swarm connect to iroh. ipfs swarm list should show the iroh node there.

Currently when using iroh-embed some listening addresses get printed to stdout:
/ip4/0.0.0.0/tcp/0
/ip4/0.0.0.0/udp/0/quic-v1
(plus currently a "got rid of channel!" line due to quic-rpc).
Nothing should be printed to stdout.
Explore designs for an alternative CLI API that maps more closely to POSIX-style commands.
I was doing some checks on n0-computer/iroh#603 to see if a HAMT directory would appear on the gateway. It does not. But it seems that directory adding on main is also broken.
I created a directory and tried to add it on main.
iroh on main [$?] via 🦀 v1.65.0
❯ cargo run -p iroh -- add --recursive --offline testdir_many_files/nohamt1
Finished dev [unoptimized + debuginfo] target(s) in 0.21s
Running `target/debug/iroh add --recursive --offline testdir_many_files/nohamt1`
[1/2] Calculating size...
[2/2] Importing content 0 B...
/ipfs/bafybeieopdpzqhyyxeuuipsdxnxlgo5q5ontpwe2j7xdojql3zk6du3jzy
Requesting this from the gateway does not give a directory as expected:
Requesting this via iroh get also gives an error:
iroh on main [$?] via 🦀 v1.65.0
❯ cargo run -p iroh -- get /ipfs/bafybeieopdpzqhyyxeuuipsdxnxlgo5q5ontpwe2j7xdojql3zk6du3jzy
Finished dev [unoptimized + debuginfo] target(s) in 0.21s
Running `target/debug/iroh get /ipfs/bafybeieopdpzqhyyxeuuipsdxnxlgo5q5ontpwe2j7xdojql3zk6du3jzy`
Error: path contains non-relative component
Currently it only works for files.
My iroh instance (downloaded from the site) ran all night, and I ended up with 140 GB of disk space taken up by iroh-gateway.log. The log kept logging the following in a loop:
2023-01-31T07:34:44.209230Z ERROR iroh_gateway::rpc: gateway rpc accept error: AcceptBiError(A(RemoteDropped))
at iroh-gateway/src/rpc.rs:63
I stopped iroh and deleted the log. I wanted to surface this in case it is helpful.
For the 0.2.0 release: https://docs.rs/crate/iroh-embed/0.2.0/builds/706345 (iroh-embed as an example here).
Using non-Send APIs is quite inconvenient, so we should look into making the API Send or, if that is not possible, at least offer a Send alternative.
https://docs.rs/crate/iroh/0.1.3/builds/687794
Looks like the protoc version is an old one, but we cannot change that.
Doing #67 would fix this
Rather than adding data to the store when we iroh add, we can remove duplication by only storing the graph metadata, filestore or database location, and block offsets in our store.
This helps us in the DeltaChat iroh-share use case because we know the DeltaChat data already exists on the local device, so we don't need to store it again internally. This will remove duplication and speed up "add" time (since we don't have to write the block data to disk).
This implementation can be ephemeral, since in the iroh-share use case the graph/encoded data only needs to exist as long as it takes to transfer the data.
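A rough sketch of what such a metadata-only entry could look like; all names, field types, and the helper below are hypothetical, not the existing store schema:

// Instead of copying block bytes into the store, record where they
// already live on disk.
struct BlockRef {
    // CID of the block (placeholder representation for illustration).
    cid: Vec<u8>,
    // Path of the original file the data came from (e.g. the DeltaChat blob).
    path: std::path::PathBuf,
    // Byte offset of this block's data within the file.
    offset: u64,
    // Length of the block's data in bytes.
    len: u64,
}

// Reading a block then becomes a seek + read into the original file
// instead of a store lookup that returns duplicated bytes.
fn read_block(r: &BlockRef) -> std::io::Result<Vec<u8>> {
    use std::io::{Read, Seek, SeekFrom};
    let mut f = std::fs::File::open(&r.path)?;
    f.seek(SeekFrom::Start(r.offset))?;
    let mut buf = vec![0; r.len as usize];
    f.read_exact(&mut buf)?;
    Ok(buf)
}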
On termination the following traceback has been reported. I'm speculating this is because the rpc client side does not handle a shutting-down server very well. The server now shuts down when dropped, so we need to improve this interaction in iroh.
2022-12-21T09:01:08.965023Z WARN tokio-runtime-workers-6 iroh_resolver::resolver: failed to stop session ContextId(4): Open(A(Hyper(hyper::Error(Canceled, "request has been canceled"))))
Stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.67/src/backtrace/libunwind.rs:93:5
backtrace::backtrace::trace_unsynchronized
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.67/src/backtrace/mod.rs:66:5
1: backtrace::backtrace::trace
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.67/src/backtrace/mod.rs:53:14
2: anyhow::backtrace::capture::Backtrace::create
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.68/src/backtrace.rs:216:13
3: anyhow::backtrace::capture::Backtrace::capture
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.68/src/backtrace.rs:204:17
4: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.68/src/error.rs:547:25
5: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/result.rs:2105:27
6: iroh_rpc_client::network::P2pClient::stop_session_bitswap::{{closure}}::{{closure}}
at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-rpc-client/src/network.rs:72:9
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
8: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.37/src/instrument.rs:272:9
9: iroh_rpc_client::network::P2pClient::stop_session_bitswap::{{closure}}
at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-rpc-client/src/network.rs:70:5
10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
11: <iroh_unixfs::content_loader::FullLoader as iroh_unixfs::content_loader::ContentLoader>::stop_session::{{closure}}
at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-unixfs/src/content_loader.rs:232:13
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
13: <core::pin::Pin<P> as core::future::future::Future>::poll
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/future.rs:124:9
14: iroh_resolver::resolver::Resolver<T>::with_dns_resolver::{{closure}}::{{closure}}
at /Users/pasha/.cargo/git/checkouts/iroh-0d305f337f85df22/b4f5c3a/iroh-resolver/src/resolver.rs:612:67
15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/future/mod.rs:91:19
16: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/core.rs:223:17
17: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/loom/std/unsafe_cell.rs:14:9
18: tokio::runtime::task::core::Core<T,S>::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/core.rs:212:13
19: tokio::runtime::task::harness::poll_future::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:481:19
20: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panic/unwind_safe.rs:271:9
21: std::panicking::try::do_call
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:492:40
22: ___rust_try
23: std::panicking::try
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:456:19
24: std::panic::catch_unwind
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panic.rs:137:14
25: tokio::runtime::task::harness::poll_future
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:469:18
26: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:198:27
27: tokio::runtime::task::harness::Harness<T,S>::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:152:15
28: tokio::runtime::task::raw::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/raw.rs:255:5
29: tokio::runtime::task::raw::RawTask::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/raw.rs:200:18
30: tokio::runtime::task::LocalNotified<S>::run
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/mod.rs:459:9
31: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/scheduler/multi_thread/worker.rs:464:13
32: tokio::runtime::coop::with_budget
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/coop.rs:102:5
tokio::runtime::coop::budget
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/coop.rs:68:5
tokio::runtime::scheduler::multi_thread::worker::Context::run_task
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/scheduler/multi_thread/worker.rs:463:9
33: tokio::runtime::scheduler::multi_thread::worker::Context::run
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/scheduler/multi_thread/worker.rs:433:24
34: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/scheduler/multi_thread/worker.rs:406:17
35: tokio::macros::scoped_tls::ScopedKey<T>::set
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/macros/scoped_tls.rs:61:9
36: tokio::runtime::scheduler::multi_thread::worker::run
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/scheduler/multi_thread/worker.rs:403:5
37: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/scheduler/multi_thread/worker.rs:365:45
38: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/blocking/task.rs:42:21
39: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/core.rs:223:17
40: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/loom/std/unsafe_cell.rs:14:9
41: tokio::runtime::task::core::Core<T,S>::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/core.rs:212:13
42: tokio::runtime::task::harness::poll_future::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:481:19
43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panic/unwind_safe.rs:271:9
44: std::panicking::try::do_call
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:492:40
45: ___rust_try
46: std::panicking::try
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:456:19
47: std::panic::catch_unwind
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panic.rs:137:14
48: tokio::runtime::task::harness::poll_future
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:469:18
49: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:198:27
50: tokio::runtime::task::harness::Harness<T,S>::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/harness.rs:152:15
51: tokio::runtime::task::raw::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/raw.rs:255:5
52: tokio::runtime::task::raw::RawTask::poll
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/raw.rs:200:18
53: tokio::runtime::task::UnownedTask<S>::run
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/task/mod.rs:496:9
54: tokio::runtime::blocking::pool::Task::run
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/blocking/pool.rs:159:9
55: tokio::runtime::blocking::pool::Inner::run
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/blocking/pool.rs:510:17
56: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
at /Users/pasha/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.23.0/src/runtime/blocking/pool.rs:468:13
57: std::sys_common::backtrace::__rust_begin_short_backtrace
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:122:18
58: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/thread/mod.rs:514:17
59: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panic/unwind_safe.rs:271:9
60: std::panicking::try::do_call
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:492:40
61: ___rust_try
62: std::panicking::try
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:456:19
63: std::panic::catch_unwind
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panic.rs:137:14
64: std::thread::Builder::spawn_unchecked_::{{closure}}
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/thread/mod.rs:513:30
65: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/ops/function.rs:248:5
66: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/boxed.rs:1940:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/boxed.rs:1940:9
std::sys::unix::thread::Thread::new::thread_start
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys/unix/thread.rs:108:17
67: __pthread_start
Now that we have quic-rpc, we can finally do what I wanted to do before IPFS Camp: turn put_many into a client streaming request.
That gets rid of an issue in the current put_many where it produces chunks that exceed the frame size, and it is the right thing to do anyway.
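On the client side, the streaming version could look roughly like this; MAX_FRAME and put_many_stream are illustrative, and the quic-rpc ClientStreaming plumbing that would send each stream item as one message is elided:

use futures::stream::{self, Stream};

// Illustrative bound; the real limit depends on transport configuration.
const MAX_FRAME: usize = 1024 * 1024;

// Turn a batch of blocks into a stream of messages, one bounded chunk per
// message, so no single message can exceed the frame size the way one
// monolithic put_many payload could.
fn put_many_stream(blocks: Vec<Vec<u8>>) -> impl Stream<Item = Vec<u8>> {
    stream::iter(blocks.into_iter().flat_map(|block| {
        block
            .chunks(MAX_FRAME)
            .map(<[u8]>::to_vec)
            .collect::<Vec<_>>()
    }))
}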
Currently tokio tasks are brutally cancelled; we'd prefer to terminate them gracefully.
See also tools linked in #126 which may or may not be applicable.
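One common pattern (not necessarily what #126 links to) is a shared cancellation token from tokio-util; everything below is a sketch:

use tokio_util::sync::CancellationToken;

// Long-running tasks select on a shared shutdown token, so they get a
// chance to flush and clean up instead of being cancelled mid-await.
async fn run_worker(shutdown: CancellationToken) {
    loop {
        tokio::select! {
            _ = shutdown.cancelled() => {
                // flush buffers, close connections, then return cleanly
                break;
            }
            _ = do_unit_of_work() => {}
        }
    }
}

// Placeholder for the task's actual work.
async fn do_unit_of_work() {}

Whoever drives shutdown calls shutdown.cancel() once, and every worker holding a clone of the token winds down on its own terms.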
We removed a bunch of tests for the iroh CLI in n0-computer/iroh#674. We need to resurrect those tests as end-to-end tests.
This is part of #101.
Issue that just tracks the status of deltachat/deltachat-core-rust#3489.
We can close it (and the DeltaChat milestone) when that PR merges.
libp2p update issue for tracking: libp2p/rust-libp2p#3196
What we need to do in iroh: Ctrl+F in the project code for https://github.com/n0-computer/beetle/issues/47 and follow the comment guidelines.
Right now the version and watch RPC calls use the version information pulled in from env!("CARGO_PKG_VERSION").
Our metrics also track the "build version" using git_version.
What do we want to show to folks when they ask for a "version"? This is primarily seen in the iroh status -w CLI call.
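One option, sketched here, is to report both: env! is what the RPC calls already use, and the git_version crate provides the build hash (the fallback string and the combined format are assumptions):

use git_version::git_version;

// Crate version from Cargo metadata (what the RPC calls report today).
const PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
// Build version from git; falls back when not built from a git checkout.
const GIT_VERSION: &str = git_version!(fallback = "unknown");

// e.g. "0.1.3 (b4f5c3a)" for iroh status -w and the version RPC alike.
fn version_string() -> String {
    format!("{PKG_VERSION} ({GIT_VERSION})")
}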
We have design work that needs to be done around how we plan to do garbage collection. This also includes adding size limits for the store, the ability to delete content, and "named" CIDs.
iroh-share currently imports the p2p and store directly; it should instead use iroh-embed to spin itself up.
We also need to add gossipsub to the API, for use in iroh-share:
- NetworkEvents from the p2p node in P2pService
- gossipsub_subscribe, gossipsub_publish, etc. to iroh-api
- iroh-share to use iroh-embed
While for some use cases it is fine and desirable to use the filesystem, it should also be possible to use iroh-embed without touching the filesystem. Places that currently always use the filesystem:
- P2pService stores the cryptographic identity in a file on disk.
- RocksStoreService stores the entire store on disk in RocksDB.

After chats with @dignifiedquire and @b5, and after considering API notes from @flub on the WIP PR concerning these refactors, we've decided to adjust this goal.
We are going to limit the exposure of some of our lower level APIs and have settled on these changes:
- a subscribe method that returns a stream of GossipsubEvents on that topic
- a publish method that you can use to send messages on a topic
- an unsubscribe method that you can use to unsubscribe from a topic
- an add_peer method that explicitly adds the given peer to your pub/sub network

gossipsub_subscribe should return a stream of GossipsubEvents (see the sketch below).
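A sketch of that narrowed surface as a trait; the method names come from the list above, but the signatures, the GossipsubEvent/PeerId placeholders, and the async-trait/futures usage are assumptions:

use futures::stream::BoxStream;

// Placeholder types for the sketch.
struct GossipsubEvent;
struct PeerId;

#[async_trait::async_trait]
trait PubsubApi {
    // Subscribe to a topic, receiving its events as a stream.
    async fn subscribe(&self, topic: String) -> anyhow::Result<BoxStream<'static, GossipsubEvent>>;
    // Publish a message on a topic.
    async fn publish(&self, topic: String, data: Vec<u8>) -> anyhow::Result<()>;
    // Stop receiving events for a topic.
    async fn unsubscribe(&self, topic: String) -> anyhow::Result<()>;
    // Explicitly add a peer to the pub/sub network.
    async fn add_peer(&self, peer: PeerId) -> anyhow::Result<()>;
}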
We also want a way to hook into the get process to tell it how you want your data fetched. For now, I'm scoping this down to get_from_peers, which allows you to use a list of PeerIds as your providers, rather than fetching the providers off the DHT or getting the providers from something like cid.contact. Related names: get_from_providers, LoaderFromProviders, ContextLoader.

previous issue contents for posterity:
feat(rpc, api): add network_events and Gossipsub IPC methods
- a network_events method that returns a stream of NetworkEvents emitted by the p2p node
- add gossipsub_subscribe to the iroh-api
- add gossipsub_publish to iroh-api

Exposing/adding these methods to IPC so we can use iroh-embed inside iroh-share (iroh-share uses Gossipsub & expects to be able to inspect Gossipsub network events). Ignoring the other Gossipsub RPC methods for now, and only implementing what is needed for the use case.
When creating a large directory and then traversing it again, the roundtrip test hangs as soon as the directory has more than 2048 entries.
This seems to be related to some bounded channel in the resolver:
let (session_closer_s, session_closer_r) = async_channel::bounded(2048);
Setting this to a larger value makes the test pass. Not sure what is happening; does something accumulate in this channel without being pulled out, so that when it is full things hang?
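To check that theory in isolation, here is a minimal, self-contained demonstration of the failure mode (whether the resolver actually gets into this state is the open question):

use async_channel::bounded;

#[tokio::main]
async fn main() {
    // Same capacity as the session_closer channel in the resolver.
    let (tx, _rx) = bounded::<u64>(2048);
    for i in 0..=2048u64 {
        // The first 2048 sends complete immediately; if nothing ever
        // receives from the channel, the 2049th send awaits forever.
        tx.send(i).await.unwrap();
        println!("sent {i}");
    }
}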
Redb looks promising: few dependencies and very lightweight. But it requires some not-so-small changes because it is not a WAL-based database.
It looks like a good candidate for a blob store. Implement a comprehensive standalone blob store, similar to https://github.com/actyx/ipfs-sqlite-block-store, to see how it performs.
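For a feel of the shape of the code, a toy blob put/get against redb; the table and database names are made up, and the exact API has shifted across redb releases, so treat this as a sketch against a recent version:

use redb::{Database, TableDefinition};

// Toy blob table mapping CID bytes to block bytes.
const BLOBS: TableDefinition<&[u8], &[u8]> = TableDefinition::new("blobs");

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = Database::create("blobs.redb")?;

    // Writes go through explicit transactions; redb is a copy-on-write
    // B-tree rather than WAL based, which is where the "not so small
    // changes" to our store code would come from.
    let tx = db.begin_write()?;
    {
        let mut table = tx.open_table(BLOBS)?;
        table.insert(b"some-cid".as_slice(), b"block-bytes".as_slice())?;
    }
    tx.commit()?;

    let read = db.begin_read()?;
    let table = read.open_table(BLOBS)?;
    assert!(table.get(b"some-cid".as_slice())?.is_some());
    Ok(())
}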
This is the first "pass" at adapting our code to better fit the DeltaChat/iroh-share use case.
Once we have an in-mem store implementation we can remove the RocksDB dependency. This dependency (and how long it takes to build) was a major blocker in our goal to get iroh-share into DeltaChat.
This feat should be followed up by #18
I think it would be better to give the local resolution a chance before firing off an expensive network operation.
Maybe have a small delay before the network operation to give local resolution a chance to complete, or only do the network resolution once the local resolution has failed.
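A sketch of the delay variant; the 50 ms figure and both loader functions are placeholders for the resolver's actual racing futures:

use std::time::Duration;
use tokio::time::sleep;

// Give the local store a head start before paying for a network fetch.
async fn load_with_head_start(cid: u64) -> anyhow::Result<Vec<u8>> {
    tokio::select! {
        // Local resolution runs immediately.
        res = load_local(cid) => match res {
            Ok(data) => Ok(data),
            // Local miss: fall back to the network without further delay.
            Err(_) => load_network(cid).await,
        },
        // The network branch only becomes eligible after a small delay,
        // so a fast local hit never triggers p2p work at all.
        res = async { sleep(Duration::from_millis(50)).await; load_network(cid).await } => res,
    }
}

// Stand-ins for the real store and p2p loaders.
async fn load_local(_cid: u64) -> anyhow::Result<Vec<u8>> { Ok(vec![]) }
async fn load_network(_cid: u64) -> anyhow::Result<Vec<u8>> { Ok(vec![]) }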
We now have the ability with quic-rpc to listen simultaneously on a mem and an http2 socket. So it should be relatively easy to make iroh-cli talk to iroh-one, by having iroh-one open http2 sockets in addition to the mem transports.
But I would prefer not to hold up merging quic-rpc for this.
We should look at hyper errors in detail to check whether they are noteworthy or just part of normal operation, and then log them at the appropriate level.
The same goes for RpcServerErrors returned from accept_one.
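A sketch of what that triage could look like for hyper errors; which errors count as expected is exactly what needs investigating, but is_canceled/is_closed are real hyper 0.14 predicates:

use tracing::{debug, error};

// Route transport errors to a level matching their severity instead of
// logging everything at ERROR.
fn log_transport_error(err: &hyper::Error) {
    if err.is_canceled() || err.is_closed() {
        // Expected during normal stream teardown, e.g. when the resolver
        // drops the client future after the store wins the race.
        debug!("transport closed: {err}");
    } else {
        error!("transport error: {err}");
    }
}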
Write a test that:
- bafyRootCID/somefile
- IpfsRequest
It used to be based on tonic-health, which of course no longer makes sense. We need to do something basic ourselves.
Currently it is just hammering the version endpoint, but I guess that needs some refinement.
Basically, the tonic health protocol allows a service to announce that it is currently non-functional; that is the biggest difference.
https://github.com/grpc/grpc/blob/master/doc/health-checking.md
Right now, each service implements almost exactly the same check and watch methods. It would be nice to abstract over them so check and watch are only written once.
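A sketch of such an abstraction; the trait, status enum, and stream shape are assumptions modeled on the grpc health-checking protocol linked above, not existing iroh-rpc types:

use futures::stream::BoxStream;

#[derive(Clone, Copy, Debug)]
enum ServiceStatus {
    Serving,
    NotServing,
}

#[async_trait::async_trait]
trait HealthCheck {
    // One-shot status probe.
    async fn check(&self) -> ServiceStatus;
    // Stream of status changes, mirroring the protocol's Watch call.
    async fn watch(&self) -> BoxStream<'static, ServiceStatus>;
}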
It would be valuable to add custom behaviours to the embedded mode of iroh. We (ceramic network) are exploring embedding iroh into a Rust implementation of our Ceramic node. Having the ability to write our own protocols (aka behaviours) will enable us to leverage iroh's features while customizing it for our own network needs as well.
By design, behaviours are meant to be composable. Therefore it makes the most sense to extend iroh's network logic via behaviours. The goals of such a change would be:
The second point is a bit vague and I will likely need to explore it more before understanding exactly what hosting applications would need from their custom behaviours.
I took a look at the code and I have a rough plan for how we could accomplish this feature.
- Make NodeBehaviour generic over a custom behaviour: NodeBehaviour<B: NetworkBehaviour>
- Add a custom_behaviour field to the existing behaviour via a Toggle behaviour, i.e. custom_behaviour: Toggle<B>
- Extend the IrohBuilder API so consumers can specify their custom behaviour. It should be possible to do this in a backward compatible way, such that only new code wishing to use this feature needs to update to consume these new APIs.

Those are my ideas (a rough sketch follows below); I am willing to submit a PR for this work (in fact I have a hacky version working already), however I wanted to discuss first before dropping a PR that changes lots of API surface area.
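Roughly what the first two bullets could look like; derive support for generic behaviours varies by libp2p version, and the elided fields stand in for iroh's existing behaviours, so this is a sketch rather than a drop-in patch:

use libp2p::swarm::behaviour::toggle::Toggle;
use libp2p::swarm::NetworkBehaviour;

#[derive(NetworkBehaviour)]
struct NodeBehaviour<B: NetworkBehaviour> {
    // ... existing iroh behaviours (kademlia, gossipsub, ...) elided ...

    // The embedder's custom protocol; Toggle lets existing code construct
    // NodeBehaviour with the custom behaviour disabled.
    custom_behaviour: Toggle<B>,
}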
We don't know yet where this will happen (on the cli or on the store side), but to turn the store into a store that optionally just stores metadata, we do need to extend the builders.
I am reasonably sure that we don't need the load balanced clients anymore. But we should test the performance and remove them after merging quic-rpc.
We could also move the load balancers as a building block into quic-rpc. We might need them again.
Actyx is now a well-funded research org for resilient software.
It would be nice if they used iroh-embed instead of ipfs-embed.
Here is a list of things that we would need to support their use case:
Not everything has to be exactly like it is in ipfs-embed. @rkuhn will be able to adapt it a bit and maybe even help a bit with things like the peer manager. But at least there needs to be a functional equivalent for each of the features.
Sometimes subtle dependencies are missed for our various crates: since cargo features are additive, if one crate adds a dependency with a certain feature, another crate can accidentally use this feature even if it forgets to enable it. This way we sometimes end up with individual crates not being buildable.
We should add a cargo check -p xxx run for each individual crate to CI so we can make sure this is caught early.
This triggers abuse-behavior detection from some hosting providers like Hetzner.
Can we add a flag in the p2p config for that, in case we don't want to block these addresses in all situations?