
etcd-rs's Introduction

etcd client for Rust


An etcd (API v3) client for Rust backed by tokio and tonic.

Supported APIs

  • KV
    • Put
    • Range
    • Delete
    • Transaction
    • Compact
  • Lease
    • Grant
    • Revoke
    • KeepAlive
    • TimeToLive
  • Watch
    • WatchCreate
    • WatchCancel
  • Auth
    • Authenticate
    • RoleAdd
    • RoleGrantPermission
    • UserAdd
    • UserGrantRole
    • AuthEnable
    • AuthDisable
  • Cluster
    • MemberAdd
    • MemberRemove
    • MemberUpdate
    • MemberList
  • Maintenance
    • Alarm
    • Status
    • Defragment
    • Hash
    • Snapshot
    • MoveLeader

Usage

Add the following dependency to your project's Cargo.toml:

[dependencies]
etcd-rs = "1.0"

Then connect to the cluster and issue requests:

use etcd_rs::{Client, ClientConfig};

#[tokio::main]
async fn main() {
    let cli = Client::connect(ClientConfig {
        endpoints: [
            "http://127.0.0.1:12379",
            "http://127.0.0.1:22379",
            "http://127.0.0.1:32379",
        ],
        ..Default::default()
    })
    .await
    .expect("connect to etcd cluster");

    cli.put(("foo", "bar")).await.expect("put kv");

    let kvs = cli.get("foo").await.expect("get kv").take_kvs();
    assert_eq!(kvs.len(), 1);
}

Development

Requirements:

  • Makefile
  • docker
  • docker-compose

Start local etcd cluster

make setup-etcd-cluster

Stop the cluster:

make teardown-etcd-cluster

Run tests

make test

For a specific test case:

TEST_CASE=test_put_error make test-one

License

This project is licensed under the MIT license.

etcd-rs's People

Contributors

diggyk, direktor799, dreamacro, dtzxporter, fogti, forsworns, georgehahn, jamesbirtles, jdhoek, kennytm, leshow, mfontanini, protryon, r3v2d0g, returnstring, zarvd, zkonge, znewman01


etcd-rs's Issues

Getting a key takes 2+ seconds

This is a good library for me, thanks.

I try to get a key from etcd with "client.kv().range(RangeRequest::new(KeyRange::key(key))).await", but it takes more than 2 seconds, while the etcdctl tool takes only about 10 ms.

What could be wrong? My configuration uses neither TLS nor auth.
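
A minimal sketch (reusing the same request types as the report above, assuming an already-connected client) to time the range call in isolation, which helps separate connection-setup cost from per-request latency:

use std::time::Instant;

use etcd_rs::{Client, KeyRange, RangeRequest, Result};

async fn timed_get(client: &Client, key: &str) -> Result<()> {
    let started = Instant::now();
    // Same call as in the report, measured on its own.
    let mut resp = client
        .kv()
        .range(RangeRequest::new(KeyRange::key(key)))
        .await?;
    println!("range took {:?}, got {} kvs", started.elapsed(), resp.take_kvs().len());
    Ok(())
}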

Tonic conflict

Hello,

I'm using tonic 0.7.1 for my project, but etcd-rs is using tonic 0.6, and this is causing a conflict. Is there a way to address this? It does look like there are some breaking changes in tonic, at least in module visibility.

Example for transactions

It would be really helpful if the documentation described how to perform operations in a transaction.
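
A sketch of what such an example could look like, following etcd's v3 transaction model (a compare, a success branch, and a failure branch). The TxnRequest/TxnCmp names and builder methods below are assumptions in the style of the crate's other request builders, not confirmed API; check docs.rs for the exact surface of your etcd-rs version.

// Sketch only: names and builder methods are assumptions, not confirmed etcd-rs API.
use etcd_rs::{Client, KeyRange, PutRequest, RangeRequest, Result, TxnCmp, TxnRequest};

async fn compare_and_swap(client: &Client) -> Result<()> {
    let txn = TxnRequest::new()
        // compare: only proceed if "foo" currently equals "bar"
        .when_value(KeyRange::key("foo"), TxnCmp::Equal, "bar")
        // success branch: overwrite the value atomically
        .and_then(PutRequest::new("foo", "baz"))
        // failure branch: read the current value instead
        .or_else(RangeRequest::new(KeyRange::key("foo")));

    // The response reports which branch ran (assumed accessor omitted here).
    let _resp = client.kv().txn(txn).await?;
    Ok(())
}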

"Service was not ready" error when using the client

First, thanks for your work!

I started etcd successfully and its REST endpoint works.

But when I use this client, it reports:

"grpc-status: Unknown, grpc-message: \"Service was not ready: transport error: buffer\\\'s worker closed unexpectedly\""`', src/etcd_client.rs:179:13
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:77
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:59
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1052
   5: std::io::Write::write_fmt
             at /rustc/41f41b2354778375dc72f7ed1d9323626580dc4d/src/libstd/io/mod.rs:1426
   6: std::io::impls::<impl std::io::Write for alloc::boxed::Box<W>>::write_fmt
             at src/libstd/io/impls.rs:156
   7: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:62
   8: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:49
   9: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:204
  10: std::panicking::default_hook
             at src/libstd/panicking.rs:221
  11: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:472
  12: rust_begin_unwind
             at src/libstd/panicking.rs:380
  13: std::panicking::begin_panic_fmt
             at src/libstd/panicking.rs:334
  14: any_client::etcd_client::test_etcd_kv
             at src/etcd_client.rs:179
  15: any_client::etcd_client::test_etcd_kv::{{closure}}
             at src/etcd_client.rs:173
  16: core::ops::function::FnOnce::call_once
             at /rustc/41f41b2354778375dc72f7ed1d9323626580dc4d/src/libcore/ops/function.rs:232
  17: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/41f41b2354778375dc72f7ed1d9323626580dc4d/src/liballoc/boxed.rs:1015
  18: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:86
  19: std::panicking::try
             at /rustc/41f41b2354778375dc72f7ed1d9323626580dc4d/src/libstd/panicking.rs:281
  20: std::panic::catch_unwind
             at /rustc/41f41b2354778375dc72f7ed1d9323626580dc4d/src/libstd/panic.rs:394
  21: test::run_test_in_process
             at src/libtest/lib.rs:539
  22: test::run_test::run_test_inner::{{closure}}
             at src/libtest/lib.rs:452

My code is:

// new client 
 pub fn new(addr: &str) -> EtcdCli {
        match Runtime::new()
            .unwrap()
            .block_on(Client::connect(ClientConfig {
                endpoints: vec![addr.to_owned()],
                auth: None,
            })) {
            Ok(c) => EtcdCli { client: c },
            Err(e) => panic!("conn master server has err:{}", e.to_string()),
        }
    }


//set

pub fn set(&self, key: &str, value: &str, ttl: u64) -> ASResult<KValue> {
        let mut req = PutRequest::new(key, value);
        req.set_lease(ttl);
        match Runtime::new().unwrap().block_on(self.client.kv().put(req)) {
            Ok(mut r) => match r.take_prev_kv() {
                Some(kv) => {
                    return Ok(KValue {
                        key: kv.key_str().to_string(),
                        value: kv.value_str().to_string(),
                    })
                }
                None => return Err(err_generic()),
            },
            Err(e) => {
                return Err(err_code(INTERNAL_ERR, e.to_string()));
            }
        }
    }
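
One likely cause: Runtime::new().unwrap().block_on(...) builds a temporary runtime that is dropped as soon as the call returns, taking the client's background tasks (the tower buffer worker) with it, which matches the "buffer's worker closed unexpectedly" message. A minimal sketch of the same wrapper kept fully async, under the assumption that the caller already runs inside a tokio runtime; field names mirror the issue's code and error handling is simplified:

// Sketch: let one runtime own the client for its whole lifetime instead of
// building and dropping a Runtime per call.
use etcd_rs::{Client, ClientConfig, PutRequest, Result};

pub struct EtcdCli {
    client: Client,
}

impl EtcdCli {
    pub async fn new(addr: &str) -> Result<EtcdCli> {
        let client = Client::connect(ClientConfig {
            endpoints: vec![addr.to_owned()],
            auth: None,
        })
        .await?;
        Ok(EtcdCli { client })
    }

    pub async fn set(&self, key: &str, value: &str) -> Result<()> {
        self.client.kv().put(PutRequest::new(key, value)).await?;
        Ok(())
    }
}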

Upgrade to tokio 0.3

Any plans to support tokio 0.3? I can start a branch with some changes I'm testing.

Panic when an etcd server node shuts down

Panic log:

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Status { code: Unknown, message: "transport error: error trying to connect: tcp connect error: Connection refused (os error 111)" }', .../etcd-rs-0.5.0/src/lease/mod.rs:109:70

Code in the crate:

res = client.lease_keep_alive(request).fuse() => res.unwrap().into_inner()

Service was not ready

Thanks for your work; this is a great project.
When I use it, it throws an exception, the same as issue #16 ("Service was not ready" error when using the client):

Response(
        Status {
            code: Unknown,
            message: "Service was not ready: transport error: buffer\'s worker closed unexpectedly",
        },
    )

My code is essentially the same as in issue #16:

use etcd_rs::{Client, ClientConfig, PutRequest};

use tokio::runtime::Runtime;


pub struct EtcdPersist {
  client: Client,
}

impl EtcdPersist {
  pub async fn new() -> anyhow::Result<Self> {
    // Create the runtime
    let rt = Runtime::new().unwrap();
    // Spawn a future onto the runtime
    let etcd_client = rt.block_on(async {
      Client::connect(ClientConfig {
        endpoints: vec!["http://127.0.0.1:2379".to_owned()],
        // auth: Some(("user".to_owned(), "password".to_owned())),
        auth: None,
        tls: None,
      }).await.unwrap()
    });
    Ok(Self { client: etcd_client })
  }
}

Because I use actix-web, which depends on tokio 0.2 (see the dependency tree below):

├── actix-cors v0.5.4
│   ├── actix-web v3.3.2
│   │   ├── actix-codec v0.3.0
│   │   │   ├── bitflags v1.2.1
│   │   │   ├── bytes v0.5.6
│   │   │   ├── futures-core v0.3.12
│   │   │   ├── futures-sink v0.3.12
│   │   │   ├── log v0.4.14
│   │   │   │   └── cfg-if v1.0.0
│   │   │   ├── pin-project v0.4.27
│   │   │   │   └── pin-project-internal v0.4.27
│   │   │   │       ├── proc-macro2 v1.0.24
│   │   │   │       │   └── unicode-xid v0.2.1
│   │   │   │       ├── quote v1.0.8
│   │   │   │       │   └── proc-macro2 v1.0.24 (*)
│   │   │   │       └── syn v1.0.60
│   │   │   │           ├── proc-macro2 v1.0.24 (*)
│   │   │   │           ├── quote v1.0.8 (*)
│   │   │   │           └── unicode-xid v0.2.1
│   │   │   ├── tokio v0.2.25
│   │   │   │   ├── bytes v0.5.6
│   │   │   │   ├── futures-core v0.3.12
│   │   │   │   ├── iovec v0.1.4
│   │   │   │   ├── lazy_static v1.4.0
│   │   │   │   ├── memchr v2.3.4
│   │   │   │   ├── mio v0.6.23

Your project uses tokio 1.x.

If my project runs under #[actix_web::main], the etcd-rs client can't start:

thread 'main' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', d:/opt/scoop\persist\rustup\.cargo\registry\src\github.com-1ecc6299db9ec823\tower-0.4.4\src\buffer\service.rs:70:9
stack backtrace:

But when I create a new tokio runtime for the client, that runtime is dropped shortly after it is created, and I get the "Service was not ready: transport error: buffer's worker closed unexpectedly" error.

How can I fix this? Do you have any ideas? Thanks.
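
One workaround, sketched below under the assumption that the request types match the issue's etcd-rs version: run etcd-rs on its own tokio 1.x runtime owned by a background thread and hand it commands over a std channel, so the actix-web (tokio 0.2) runtime never drives etcd-rs futures. The Cmd enum and channel layout are illustrative only.

use std::sync::mpsc;
use std::thread;

use etcd_rs::{Client, ClientConfig, PutRequest};
use tokio::runtime::Runtime; // tokio 1.x, the version etcd-rs links against

enum Cmd {
    Put { key: String, value: String },
}

fn spawn_etcd_worker(endpoints: Vec<String>) -> mpsc::Sender<Cmd> {
    let (tx, rx) = mpsc::channel::<Cmd>();
    thread::spawn(move || {
        // The runtime lives as long as this thread, so the background tasks
        // spawned by Client::connect are never torn down prematurely.
        let rt = Runtime::new().expect("build tokio 1.x runtime");
        let client = rt
            .block_on(Client::connect(ClientConfig {
                endpoints,
                auth: None,
                tls: None,
            }))
            .expect("connect to etcd");

        for cmd in rx {
            match cmd {
                Cmd::Put { key, value } => {
                    let req = PutRequest::new(key.as_str(), value.as_str());
                    if let Err(e) = rt.block_on(client.kv().put(req)) {
                        eprintln!("etcd put failed: {}", e);
                    }
                }
            }
        }
    });
    tx
}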

About grant_lease and watcher

Prepared

  • etcd-rs tag: 1.0.1
  • docker: docker-compose cluster [node1, node2, node3] (compose file shown as a screenshot in the original issue)

Notice

I didn't cancel the watch; I only tested that lease TTL expiry triggers the delete event.

Test Code

(shown as a screenshot in the original issue)

Process

1: success (screenshot in the original issue)

2: failure (screenshot in the original issue)

The client cannot work in a three-node etcd cluster if one node is stopped

In a three-node etcd cluster with only two nodes alive, the client instance reports an error:

status: Unknown, message: "Service was not ready: transport error: buffered service failed: load balancer discovery error: error trying to connect: tcp connect error: Connection refused (os error 111)", details: [], metadata: MetadataMap { headers: {} }

Are you considering implementing health checks to automatically add and remove unavailable nodes?

How do I add a cluster?

How do I add a cluster to the client?

let etcd_adress = vec![
    "127.0.0.1:23791".to_owned(),
    "127.0.0.1:23792".to_owned(),
    "127.0.0.1:23793".to_owned(),
];
Client::builder().endpoints(etcd_adress.clone()).build()

returns: RpcFailure(RpcStatus { status: Unavailable, details: Some("Name resolution failure") })

but Client::builder().add_endpoint("127.0.0.1:23791").build() works with each of the above addresses individually.

FYI this is the docker-compose I use to start the cluster:

version: '3'

services:

  etcd1:
    image: quay.io/coreos/etcd:latest
    environment:
            ETCD_NAME: node1

            ETCD_ADVERTISE_CLIENT_URLS: http://etcd1:2379
            ETCD_LISTEN_CLIENT_URLS: http://0.0.0.0:2379

            ETCD_INITIAL_ADVERTISE_PEER_URLS: http://etcd1:2380
            ETCD_LISTEN_PEER_URLS: http://0.0.0.0:2380

            ETCD_DATA_DIR: /etcd-data/etcd1.etcd
            ETCDCTL_API: 3
            ETCD_DEBUG: 1
            
            ETCD_INITIAL_CLUSTER: node3=http://etcd3:2380,node1=http://etcd1:2380,node2=http://etcd2:2380
            ETCD_INITIAL_CLUSTER_STATE: new
            ETCD_INITIAL_CLUSTER_TOKEN: etcd-ftw           
    ports:
      - 23791:2379
      - 23801:2380


  etcd2:
    image: quay.io/coreos/etcd:latest
    environment:
            ETCD_NAME: node2

            ETCD_INITIAL_ADVERTISE_PEER_URLS: http://etcd2:2380
            ETCD_LISTEN_PEER_URLS: http://0.0.0.0:2380

            ETCD_ADVERTISE_CLIENT_URLS: http://etcd2:2379
            ETCD_LISTEN_CLIENT_URLS: http://0.0.0.0:2379
            
            ETCD_DATA_DIR: /etcd-data/etcd2.etcd
            ETCDCTL_API: 3
            ETCD_DEBUG: 1

            ETCD_INITIAL_CLUSTER: node3=http://etcd3:2380,node1=http://etcd1:2380,node2=http://etcd2:2380
            ETCD_INITIAL_CLUSTER_STATE: new
            ETCD_INITIAL_CLUSTER_TOKEN: etcd-ftw
    ports:
      - 23792:2379
      - 23802:2380


  etcd3:
    image: quay.io/coreos/etcd:latest
    environment:
            ETCD_NAME: node3

            ETCD_INITIAL_ADVERTISE_PEER_URLS: http://etcd3:2380
            ETCD_LISTEN_PEER_URLS: http://0.0.0.0:2380
            
            ETCD_ADVERTISE_CLIENT_URLS: http://etcd3:2379
            ETCD_LISTEN_CLIENT_URLS: http://0.0.0.0:2379
            

            ETCD_DATA_DIR: /etcd-data/etcd3.etcd
            ETCDCTL_API: 3
            ETCD_DEBUG: 1
            
            ETCD_INITIAL_CLUSTER: node3=http://etcd3:2380,node1=http://etcd1:2380,node2=http://etcd2:2380
            ETCD_INITIAL_CLUSTER_STATE: new
            ETCD_INITIAL_CLUSTER_TOKEN: etcd-ftw
    ports:
      - 23793:2379
      - 23803:2380

Thanks in Advance!

Watch stream API uses 100% CPU after etcd server shutdown and doesn't support reconnection

Hello,

I've been using the etcd-rs crate in my project and found that if I watch a key and then shut down the etcd server, my application starts using 100% CPU and all other tasks stop being polled. After the etcd server comes back up, there is no reconnection and the application remains unresponsive.

Are there any plans to add a reconnection feature to the etcd-rs crate, like etcdctl has?

client shutdown() panic

let mut client = Client::connect(ClientConfig {
    endpoints: vec![config_clone.etcd.get_http_address()],
    auth: None,
    tls: None,
}).await.unwrap();

let mut stream = client.watch(KeyRange::prefix(prefix_clone.clone())).await.unwrap();

client.shutdown().await.unwrap();

Error message:

tokio-runtime-worker' panicked at 'called Result::unwrap() on an Err value: SendError(Ok(None))', /Users/caiwenhui/.cargo/registry/src/mirrors.ustc.edu.cn-61ef6e0cd06fb9b8/etcd-rs-0.5.0/src/watch/mod.rs:122

connect_with_token doesn't need to be async?

Looking through the code (v1.0.0-alpha.3), it seems that Client::connect_with_token doesn't actually await anything. Maybe there are plans to do some async work here in the future? If not, it would be nice if this could be made sync.

Support auto refresh token

Currently, etcd's auth-token parameter supports two modes: simple (the default) and jwt (recommended for production environments; see https://etcd.io/docs/v3.5/op-guide/configuration/#auth). Both modes require the ability to refresh tokens in certain situations.

For simple, when the etcd server restarts, tonic returns an Unauthenticated status code, and the entire client cannot make any more requests after that.

For jwt, when the token's time limit exceeds the TTL (which is usually not very long), the entire client cannot make any more requests.

I looked at the code for etcd-rs and found that it does not perform any refreshes afterward besides obtaining a token when connecting for the first time. I also looked at the tonic interceptor trait and found that retrieving the response seems impossible.

In the etcd-go client, there is some code for refreshing the token.

https://github.com/etcd-io/etcd/blob/53b48bbd5795210af2620ac757d9529b34a09e48/client/v3/retry_interceptor.go#L273-L281

Therefore, I would like to request that the client automatically refresh the token when it receives an Unauthenticated status code, or at the very least provide a manual refresh method.
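
A sketch of the retry shape being requested, roughly mirroring the etcd-go interceptor linked above. It is written against plain tonic types because etcd-rs does not expose such a hook today; the refresh closure (which would re-run etcd's Authenticate RPC and install the new token) is entirely hypothetical.

use std::future::Future;

use tonic::{Code, Status};

// Sketch: run `op`; if it fails with Unauthenticated, call `refresh` and retry
// once. Both closures are supplied by the caller; etcd-rs has no such hook yet.
async fn with_token_refresh<T, Op, OpFut, Rf, RfFut>(
    mut op: Op,
    mut refresh: Rf,
) -> Result<T, Status>
where
    Op: FnMut() -> OpFut,
    OpFut: Future<Output = Result<T, Status>>,
    Rf: FnMut() -> RfFut,
    RfFut: Future<Output = Result<(), Status>>,
{
    match op().await {
        Err(status) if status.code() == Code::Unauthenticated => {
            refresh().await?; // obtain and install a fresh token (hypothetical)
            op().await        // retry the original request once
        }
        other => other,
    }
}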

Return references instead of clones

Hey 👋!

I saw that you wrote // FIXME perf in the sources, and it seems those are all related to cloning data. May I ask why the methods don't return references, allowing the library's users to clone the returned data if they want? 😃

Thank you ❤️!

Tokio thread panics when `Client` is not connected to cluster

I am trying to catch the case where the Client can't successfully connect to the etcd cluster and exit with a user-friendly error message. However, my current approach leads to a panic in one of Tokio's threads, which is printed to stderr.

Minimal reproduction case:

[dependencies]
etcd-rs = "0.2.0-alpha.5"
tokio = "0.2.0-alpha.6"

use std::process;

use etcd_rs::{Client, ClientConfig, KeyRange, RangeRequest};

async fn get(client: &Client, name: &str) -> Result<(), String> {
    let req = RangeRequest::new(KeyRange::key(name));

    let mut resp = match client.kv().range(req).await {
        Ok(resp) => resp,
        Err(_) => return Err("Failed to connect to Etcd cluster.".to_owned())
    };
    let kvs = resp.take_kvs();

    match kvs.get(0) {
        Some(v) => {
            match std::str::from_utf8(v.value()) {
                Ok(v) => {
                    println!("{}", v);
                    return Ok(())
                },
                Err(e) => panic!("Invalid UTF-8 sequence: {}", e),
            };
        },
        None => {
            eprintln!("No key with name '{}' found.", name);
            process::exit(4);
        }
    }
}

async fn connect(endpoints: Vec<String>) -> Client {
    let client = Client::connect(ClientConfig {
        endpoints,
        auth: None,
    }).await;

    match client {
        Ok(c) => c,
        Err(_) => {
            eprintln!("Failed to create Etcd client.");
            process::exit(2);
        }
    }
}

#[tokio::main]
async fn main() {
    let client: Client = connect(vec!["https://localhost:2379".to_owned()]).await;

    match get(&client, "foo").await {
        Err(e) => {
            eprintln!("{}", e);
            process::exit(2);
        },
        _ => (),
    }
}

When the endpoint is available, this code works (provided a key named foo exists).

When the endpoint is not available, the connect method still succeeds, but client.kv().range(req).await fails with an Err. I attempt to exit gracefully, but in the terminal I see this:

thread 'Failed to connect to Etcd cluster.tokio-runtime-worker-4
' panicked at 'called `Result::unwrap()` on an `Err` value: Status { code: Unknown, message: "Client: buffered service failed: load balancer discovery error: error trying to connect: Connection refused (os error 111)" }', src/libcore/result.rs:1165:5

My own message and the message from the Tokio thread get jumbled together.

Is this panic something that can be caught and handled in etcd-rs, or is my approach wrong?
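
Until the library stops unwrapping internally, one stopgap (a sketch using only the standard library) is a process-wide panic hook that suppresses panics coming from tokio worker threads so they do not interleave with the CLI's own messages; it hides the noise but does not fix the underlying panic:

use std::panic;

fn silence_worker_panics() {
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        // Only suppress panics raised on tokio's worker threads.
        let in_worker = std::thread::current()
            .name()
            .map(|name| name.starts_with("tokio-runtime-worker"))
            .unwrap_or(false);
        if !in_worker {
            default_hook(info); // keep normal reporting for the main thread
        }
    }));
}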

Client watching multiple key ranges will panic

Panic location:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', etcd-rs/src/watch/mod.rs:182:66

watch.tunnel.resp_receiver cannot be taken again:

tunnel.resp_receiver.take()

Panic occurs when I try to watch multiple key ranges.

  • Package Version: etcd-rs = "0.2.2"
  • Error Message: thread 'main' panicked at 'take the unique watch response receiver', src/watch/mod.rs:125:9

Below is the code to reproduce the error.

use tokio::stream::StreamExt;

use etcd_rs::*;

async fn watch(client: &Client) -> Result<()> {
    println!("watch key value modification");

    {
        let mut inbound = client.watch(KeyRange::key("foo")).await;

        // print out all received watch responses
        tokio::spawn(async move {
            while let Some(resp) = inbound.next().await {
                println!("watch response: {:?}", resp);
            }
        });
    }

    {
        let mut inbound = client.watch(KeyRange::key("foo2")).await;

        tokio::spawn(async move {
            while let Some(resp) = inbound.next().await {
                println!("watch response: {:?}", resp);
            }
        });
    }

    let key = "foo";
    client.kv().put(PutRequest::new(key, "bar")).await?;
    client.kv().put(PutRequest::new(key, "baz")).await?;
    client
        .kv()
        .delete(DeleteRequest::new(KeyRange::key(key)))
        .await?;

    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    let client = Client::connect(ClientConfig {
        endpoints: vec!["http://127.0.0.1:2379".to_owned()],
        auth: None,
        tls: None,
    })
    .await?;

    watch(&client).await?;

    client.shutdown().await?;

    Ok(())
}

Change error type to `Send + Sync + 'static`

I'm getting an error when using this library with tokio because the error type is Box<dyn Error> instead of Box<dyn Error + Send + Sync + 'static>.

Here is a minimal reproduction, hacked together from your examples:

        tokio::spawn(async move {
            if let Ok(client) = Client::connect(ClientConfig {
                endpoints: vec!["http://127.0.0.1:2379".to_owned()],
                auth: None,
            })
            .await
            {
                let req = PutRequest::new("key", "fooo");
                if let Err(err) = client.kv().put(req).await {
                    println!("{:?}", err);
                }
            }
        })

tokio::spawn fails with an error saying:

123 |         T: Future + Send + 'static,
    |                     ---- required by this bound in `tokio::task::spawn::spawn`
    |
    = help: the trait `std::marker::Send` is not implemented for `dyn std::error::Error`
note: future is not `Send` as this value is used across an await

I can submit a PR to fix this if you'd like.

Handle edge case where all node(s) may be unavailable.

We're using this in a scenario where we resolve the endpoints from a DNS entry. It would be nice to have something like this baked into the library, so that if all nodes become unavailable, the library can recover by querying the DNS entry again.
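
A minimal sketch of the re-resolution step, using only the standard library; etcd.example.com and the rebuild-on-failure policy are placeholders, and today this would live outside the library:

use std::net::ToSocketAddrs;

// Resolve a DNS name into endpoint URLs suitable for the client config.
fn resolve_endpoints(host: &str, port: u16) -> std::io::Result<Vec<String>> {
    let endpoints = (host, port)
        .to_socket_addrs()?                      // blocking DNS lookup
        .map(|addr| format!("http://{}", addr))  // the client expects URLs
        .collect();
    Ok(endpoints)
}

// Usage idea: if every request starts failing with connection errors, call
// resolve_endpoints("etcd.example.com", 2379) again and reconnect with the
// fresh list.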

How to set revision in watch request?

    {
        let mut inbound = client.watch(KeyRange::key("foo")).await.unwrap();

        // print out all received watch responses
        tokio::spawn(async move {
            while let Some(resp) = inbound.next().await {
                println!("watch response: {:?}", resp);
            }
        });
    }

Refine APIs

Now:

cli.kv().put(PutRequest...)
cli.watch().watch(WatchCreateRequest...)
cli.lease().grant(...)

which are tedious.

Expected:

cli.put(T: Into<PutRequest>) // and implements most commonly used `T` such as `(&str, &str)`
cli.watch(...)
cli.grant_lease(...)
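
A self-contained sketch of the pattern behind the proposed surface, using toy types rather than the crate's real definitions: accept anything convertible into the request type and implement the conversion for common shapes such as (&str, &str).

// Toy types only; they illustrate the Into-based API shape, not etcd-rs itself.
struct PutRequest {
    key: Vec<u8>,
    value: Vec<u8>,
}

impl From<(&str, &str)> for PutRequest {
    fn from((key, value): (&str, &str)) -> Self {
        PutRequest {
            key: key.as_bytes().to_vec(),
            value: value.as_bytes().to_vec(),
        }
    }
}

struct Client;

impl Client {
    fn put(&self, req: impl Into<PutRequest>) {
        let req = req.into();
        // ... a real client would send the PutRequest over gRPC here ...
        let _ = (req.key, req.value);
    }
}

fn main() {
    let cli = Client;
    cli.put(("foo", "bar")); // reads like the proposed API
}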

Memory leak when the connection is interrupted

In debug mode, a lot of debug messages are printed to the log:

tonic-0.4.3/src/codec/decode.rs-> line:231]  decoder inner stream error: Status { code: Unknown, message: "h2 protocol error: broken pipe" }

At the same time, the memory used by tonic rises and stays there; every time an interruption occurs, memory usage grows further.

Client::connect spawns tasks for watch

Hi, thanks for the library! I'm using it in a project, and it's been working well.

I've noticed, however, that Client::connect spawns a task for the "watch" stream ("lease" too).

This has a couple of drawbacks:

  • General overhead: why spawn if you're not going to use it? Also, it means that I need to use #[tokio::test(threaded_scheduler)] in my integration tests.

  • Extra logging noise: when I shut down a runtime, (e.g. in a CLI client and integration tests), there's a panic along the lines of:

    thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Status { code: Unknown, message: "transport error: connection error: broken pipe" }', /home/zjn/.cargo/registry/src/github.com-1ecc6299db9ec823/etcd-rs-0.2.1/src/watch/mod.rs:78:31
    

What would you think about the following?

  1. having an option in ClientConfig to disable these
  2. lazily spawning these tasks when a client uses "watch"
  3. a method to trigger clean shutdown of the task (e.g. using a tokio::sync::oneshot to signal to the watch task that it should wrap up)

In particular, I think (2+3) go well together---you could have a RwLock<Option<Client>>. In most calls, you'd just use the Some(Client) that was there (a cheap read lock), but you might sometimes need to grab the write lock the first time you use one of these clients. The shutdown bit would grab the write lock too.

Let me know what you think (I may be able to follow up with a PR).
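
A minimal sketch of the (2)+(3) idea from the comment above: lazily create the watch backend behind a tokio::sync::RwLock and tear it down explicitly on shutdown. WatchBackend and every other name here is a stand-in, not an etcd-rs API.

use std::sync::Arc;

use tokio::sync::{oneshot, RwLock};

struct WatchBackend {
    shutdown: Option<oneshot::Sender<()>>,
}

#[derive(Clone, Default)]
struct LazyWatch {
    inner: Arc<RwLock<Option<WatchBackend>>>,
}

impl LazyWatch {
    // Fast path: a cheap read lock; slow path: a write lock on first use.
    async fn ensure_started(&self) {
        if self.inner.read().await.is_some() {
            return;
        }
        let mut guard = self.inner.write().await;
        if guard.is_none() {
            let (tx, rx) = oneshot::channel();
            tokio::spawn(async move {
                // stand-in for the real watch task
                let _ = rx.await; // wrap up when told to shut down
            });
            *guard = Some(WatchBackend { shutdown: Some(tx) });
        }
    }

    async fn shutdown(&self) {
        if let Some(mut backend) = self.inner.write().await.take() {
            if let Some(tx) = backend.shutdown.take() {
                let _ = tx.send(()); // signal the task to finish cleanly
            }
        }
    }
}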

About creating directories with the KV API

What is the way to create directories?
I want to create some directories and create some keys inside them.
I tried to use the KV API to implement this, but it didn't work.
Do you have any examples?
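
etcd v3 has a flat keyspace with no real directories; the usual approach is to encode the hierarchy in the key itself ("/dir/subdir/key") and list a "directory" with a prefix range. A sketch using the request types that appear elsewhere on this page (0.x-style API; adjust to your etcd-rs version):

use etcd_rs::{Client, KeyRange, PutRequest, RangeRequest, Result};

async fn demo(client: &Client) -> Result<()> {
    // "Create a directory" simply by putting keys under a common prefix.
    client.kv().put(PutRequest::new("/config/app1/port", "8080")).await?;
    client.kv().put(PutRequest::new("/config/app1/host", "0.0.0.0")).await?;

    // "List the directory" by ranging over the prefix.
    let mut resp = client
        .kv()
        .range(RangeRequest::new(KeyRange::prefix("/config/app1/")))
        .await?;

    for kv in resp.take_kvs() {
        println!("{} = {}", kv.key_str(), kv.value_str());
    }
    Ok(())
}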
