libkv

libkv provides a Go native library to store metadata.

The goal of libkv is to abstract common store operations for multiple distributed and/or local Key/Value store backends.

For example, you can use it to store your metadata or for service discovery to register machines and endpoints inside your cluster.

You can also easily implement a generic Leader Election on top of it (see the docker/leadership repository).

As of now, libkv offers support for Consul, Etcd and Zookeeper (distributed stores) and BoltDB (local store).

Usage

libkv is meant to be used as an abstraction layer over existing distributed Key/Value stores. It is especially useful if you plan to support consul, etcd and zookeeper using the same codebase.

It is ideal if your project is written in Go and needs:

  • A simple metadata storage, distributed or local
  • A lightweight discovery service for your nodes
  • A distributed lock mechanism

You can find usage examples for libkv in docs/examples.go. You can also take a look at the docker/swarm or docker/libnetwork repositories, which use docker/libkv for all of the use cases listed above.
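For quick reference, the sketch below shows the typical flow (it assumes a Consul agent reachable at localhost:8500 and trims error handling to the bare minimum; see docs/examples.go for the authoritative examples):

package main

import (
    "log"
    "time"

    "github.com/docker/libkv"
    "github.com/docker/libkv/store"
    "github.com/docker/libkv/store/consul"
)

func main() {
    // Register the Consul backend with libkv.
    consul.Register()

    // Create a store client pointing at the local Consul agent.
    kv, err := libkv.NewStore(
        store.CONSUL,
        []string{"localhost:8500"},
        &store.Config{ConnectionTimeout: 10 * time.Second},
    )
    if err != nil {
        log.Fatal(err)
    }

    // Basic Put/Get round trip.
    if err := kv.Put("foo", []byte("bar"), nil); err != nil {
        log.Fatal(err)
    }
    pair, err := kv.Get("foo")
    if err != nil {
        log.Fatal(err)
    }
    log.Println("value:", string(pair.Value))
}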

Supported versions

libkv supports:

  • Consul versions >= 0.5.1, because it uses Sessions with Delete behavior to implement TTLs (mimicking Zookeeper's ephemeral node support). If you don't plan to use TTLs, you can use Consul 0.4.0+.
  • Etcd versions >= 2.0, because it uses the new coreos/etcd/client. This might change in the future as support for API v3 comes along and adds more capabilities.
  • Zookeeper versions >= 3.4.5. It might work with earlier versions, but this remains untested as of now.
  • BoltDB, which shouldn't be subject to any version dependency.

Interface

A storage backend in libkv should implement (fully or partially) this interface:

type Store interface {
	Put(key string, value []byte, options *WriteOptions) error
	Get(key string) (*KVPair, error)
	Delete(key string) error
	Exists(key string) (bool, error)
	Watch(key string, stopCh <-chan struct{}) (<-chan *KVPair, error)
	WatchTree(directory string, stopCh <-chan struct{}) (<-chan []*KVPair, error)
	NewLock(key string, options *LockOptions) (Locker, error)
	List(directory string) ([]*KVPair, error)
	DeleteTree(directory string) error
	AtomicPut(key string, value []byte, previous *KVPair, options *WriteOptions) (bool, *KVPair, error)
	AtomicDelete(key string, previous *KVPair) (bool, error)
	Close()
}
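For context, the KVPair type returned by Get, List and the watch channels is roughly the following (check the store package source for the authoritative definition):

// KVPair represents a key/value entry together with the backend's
// last-modified index, which is used by the Atomic* operations.
type KVPair struct {
    Key       string
    Value     []byte
    LastIndex uint64
}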

Compatibility matrix

Backend drivers in libkv are generally divided between local drivers and distributed drivers. Distributed backends offer enhanced capabilities like Watches and/or distributed Locks.

Local drivers are usually used to complement the distributed drivers, storing information that only needs to be available locally.

Calls                  Consul  Etcd  Zookeeper  BoltDB
Put                    X       X     X          X
Get                    X       X     X          X
Delete                 X       X     X          X
Exists                 X       X     X          X
Watch                  X       X     X
WatchTree              X       X     X
NewLock (Lock/Unlock)  X       X     X
List                   X       X     X          X
DeleteTree             X       X     X          X
AtomicPut              X       X     X          X
Close                  X       X     X          X

Limitations

Distributed Key/Value stores often have different concepts for managing and formatting keys and their associated values. Even though libkv tries to abstract those stores and aims for consistency, in some cases this can't be applied easily.

Please refer to docs/compatibility.md for the special cases around cross-backend compatibility.

Other than those special cases, you should expect the same experience for basic operations like Get/Put, etc.

Calls like WatchTree may return a different set (or number) of events depending on the backend. For now, Etcd and Consul will likely return more events than Zookeeper, and you should triage those properly. You should nevertheless be able to use it to watch for events in an interchangeable way (see the docker/leadership repository or the pkg/discovery/kv package in docker/docker).

TLS

Only Consul and etcd support TLS; you should build and provide your own config.TLS object to feed the client. Support is planned for Zookeeper.
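As a rough sketch, building such a TLS configuration and feeding it to the client could look like the following (file paths and the etcd endpoint are placeholders; this assumes the backend honors the TLS field of store.Config):

package main

import (
    "crypto/tls"
    "crypto/x509"
    "io/ioutil"
    "log"

    "github.com/docker/libkv"
    "github.com/docker/libkv/store"
    "github.com/docker/libkv/store/etcd"
)

func main() {
    etcd.Register()

    // Load the client certificate/key pair and the CA bundle (paths are placeholders).
    cert, err := tls.LoadX509KeyPair("client-cert.pem", "client-key.pem")
    if err != nil {
        log.Fatal(err)
    }
    caCert, err := ioutil.ReadFile("ca-cert.pem")
    if err != nil {
        log.Fatal(err)
    }
    caPool := x509.NewCertPool()
    caPool.AppendCertsFromPEM(caCert)

    // Pass the TLS configuration through store.Config.
    kv, err := libkv.NewStore(store.ETCD, []string{"https://127.0.0.1:2379"}, &store.Config{
        TLS: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      caPool,
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = kv
}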

Roadmap

  • Make the API nicer to use (using options)
  • Provide more options (consistency for example)
  • Improve performance (remove extra Get/List operations)
  • Better key formatting
  • New backends?

Contributing

Want to hack on libkv? Docker's contribution guidelines apply.

Copyright and license

Copyright © 2014-2016 Docker, Inc. All rights reserved, except as follows. Code is released under the Apache 2.0 license. The README.md file, and files in the "docs" folder are licensed under the Creative Commons Attribution 4.0 International License under the terms and conditions set forth in the file "LICENSE.docs". You may obtain a duplicate copy of the same license, titled CC-BY-SA-4.0, at http://creativecommons.org/licenses/by/4.0/.

libkv's Issues

Add mapping layer for keys

I think it would be useful to have a simple mapping function for each store to allow access to arbitrary keys. This is especially useful for stores like consul where currently only a subset of keys is accessible (e.g. /kv). There are additional keys available below /catalog/services for example.
I propose some changes to the API of libkv:

  • Add namespace constants to libkv, for example: const ( KV = 0; AGENT = 1; CATALOG = 2; SESSIONS = 3; HEALTHCHECKS = 4; ACLS = 5; EVENTS = 6; STATUS = 7 ). These are Consul namespaces which, except for KV, are not accessible through libkv. There could be more.
  • Add an optional func map(namespace int, key string) (int, string) to each store's configuration object. This function allows mapping namespaces and keys to different values as required by the store implementation.
  • Add a new ...NS() function to Store for each function accepting a key string or directory string. This function should have an additional first parameter namespace int which accepts the constants defined above. It applies the map function to its namespace and key parameters and then calls the appropriate store methods according to the mapped namespace and key.

If sane defaults are provided, this should not require any change to existing code outside of libkv.
It should also make it possible to support more stores and their features.
Besides support for more namespaces in Consul, the mapping function should allow schema changes and migrations from one store to another.
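A rough sketch of what the proposed surface could look like (every name below is illustrative and not part of the current libkv API):

// Hypothetical namespace constants (Consul-oriented; illustrative only).
const (
    KV = iota
    AGENT
    CATALOG
    SESSIONS
    HEALTHCHECKS
    ACLS
    EVENTS
    STATUS
)

// Hypothetical mapping hook carried in each store's configuration.
// It rewrites a (namespace, key) pair into whatever the backend expects.
type MapFunc func(namespace int, key string) (int, string)

// Hypothetical namespace-aware variant of Get; GetNS would apply the
// configured MapFunc and then dispatch to the matching backend endpoint:
// GetNS(namespace int, key string) (*KVPair, error)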

question

Hi,
I have a question regarding the backend KV stores supported by libkv; today it supports etcd, consul and zookeeper. Can it support any other backend which implements the Store interface? Say, for example, that I give the following options to the Docker daemon:
DOCKER_OPTS="--cluster-store=other://192.168.56.10:4001/...

Will this be supported? That is, can a user specify anything running remotely which can act as a remote KV store?
Thanks

question about locker on etcd

here https://github.com/docker/libkv/blob/master/store/etcd/etcd.go#L470

Why not use CompareAndSwap() to update the TTL?

If the leader key is force-removed by mistake, the lock-watcher will be notified soon after and will then create the leader key, becoming the lock owner.

But after defaultUpdateTime seconds, the old lock owner updates the leader key again and Update() succeeds. Now two members hold the lock.

In Swarm 0.3, this issue can cause two Swarm leaders to be elected.

Besides, I think holdLock() should not return when Update() returns an error. Update() is called because the caller wants to hold the lock, so it should have several opportunities to renew the lock before the leader key expires. Right now the lock is lost as soon as a single update fails.

In Swarm 0.3, for example, the leader wants to hold the lock until it exits, and over the course of a day it is quite common for a single key update to fail for some unexpected reason (such as a network blip). I don't think it should lose the lock for that reason.

thanks.

config.TLS object format?

Can someone share an example of what the config.TLS object should look like?
I'm looking at the etcd client code and I don't see anything like that.

Am I missing something here? I'm trying to pass my ca-cert, client-cert and client-key files to the client

cycle error

I just tried compiling the example, and I get a cycle error:

import cycle not allowed
package github.com/odedlaz/hello
    imports github.com/docker/libkv
    imports github.com/docker/libkv/store/boltdb
    imports github.com/docker/libkv

this is the code I used:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/docker/libkv"
    "github.com/docker/libkv/store"
    "github.com/docker/libkv/store/boltdb"
    "github.com/docker/libkv/store/consul"
    "github.com/docker/libkv/store/etcd"
    "github.com/docker/libkv/store/zookeeper"
)

func init() {
    // Register consul store to libkv
    consul.Register()

    // We can register as many backends that are supported by libkv
    etcd.Register()
    zookeeper.Register()
    boltdb.Register()
}

func main() {
    client := "localhost:8500"

    // Initialize a new store with consul
    kv, err := libkv.NewStore(
        store.CONSUL, // or "consul"
        []string{client},
        &store.Config{
            ConnectionTimeout: 10 * time.Second,
        },
    )
    if err != nil {
        log.Fatal("Cannot create store consul")
    }

    key := "foo"
    err = kv.Put(key, []byte("bar"), nil)
    if err != nil {
        fmt.Errorf("Error trying to put value at key: %v", key)
    }

    pair, err := kv.Get(key)
    if err != nil {
        fmt.Errorf("Error trying accessing value at key: %v", key)
    }

    err = kv.Delete(key)
    if err != nil {
        fmt.Errorf("Error trying to delete key %v", key)
    }

    log.Println("value: ", string(pair.Value))
}

health endpoint query

Is there an API in libkv by which we can query the health of the etcd cluster? Also, what about APIs for adding/deleting members to/from the cluster?

DynamoDB backend for libkv

It would be great to have a backend for libkv based on Amazon DynamoDB. DynamoDB doesn't support watches, but this could be implemented with some sort of proxy + SQS and/or by using the DynamoDB Streams feature to process inserts/updates/deletes to the table. The benefit is that it avoids having to manage a K/V store.

Unknown field "KeepAlive".

" Godeps/_workspace/src/github.com/docker/libkv/store/etcd/etcd.go:83: unknown net.Dialer field 'KeepAlive' in struct litera"

The struct net.Dialer doesn't contain any field named with 'KeepAlive'.

Please help.

Normalize values across backends

I noticed that the backend drivers for etcd and consul (haven't looked at ZK or Bolt) behave differently with regard to key values on puts. Since Consul base64 encodes key values by default and etcd doesn't, etcd key values cannot contain many special characters. This defeats the purpose of an abstraction layer IMO, since as it stands now a user needs to write different code to use libkv with different backends.

I propose that libkv should normalize value inputs so that all backends behave identically. If there's extra processing that needs to be done to accommodate different backends, libkv should do that on behalf of the client. Either escaping unusable characters or b64 encoding the whole value on backends that don't do this automatically are my first thoughts, but I'm sure there are plenty of others.

The flip side of my argument is that normalized values for some backends would no longer be plaintext to non-libkv clients, which could cause trouble for some use cases. I still think this is preferable to the raw values that libkv uses now. Perhaps there could be a boolean 'sanitize' toggle in libkv.NewStore() to maintain backwards compatibility?

Just wanted to drop this here to get opinions. If it makes sense and would be accepted, I'll write it up and do a PR.

Fix Lock abstraction across distributed store backends

The Lock abstraction is inconsistent across distributed backends mostly because etcd expects the key not to exist for the mechanism to work.

This is due to the custom Lock implementation using atomic key creation as a fence for lock seekers. etcd API v3 will have its own Lock implementation, so the behavior should be fixed when migrating to the new API.

Related to #103

Libkv should allow to independently vendor store backends

We recently added the BoltDB backend, and now we have to vendor 10K additional LoC regardless of whether we use BoltDB or not (in Swarm for example).

Ultimately we should only vendor the stores that are relevant to someone's usage (vendoring consul/etcd/zookeeper together makes sense, as would vendoring boltdb/leveldb/rocksdb together potentially, etc.).

The only solution I see for now is to remove the libkv.go entrypoint and let the user import and handle the logic of instantiating the Store. The experience is not seamless, but I prefer this to vendoring 10K LoC that we don't use.

CI failure with go-etcd

CI fails on Travis with:

panic: codecgen version mismatch: current: 2, need 3. Re-generate file: /home/travis/gopath/src/github.com/coreos/go-etcd/etcd/response.generated.go

The tests seem to pass fine locally, so it might be something related to the environment of the CI build.

Etcd ignores error code and exits prematurely on Watch

(From @springi99)

Hi,

I am a newbie here; I don't know whether this is the right forum for my problem...
I have a problem using etcd as a KV store:

docker -D -d --kv-store=etcd:10.0.0.105:4001 --label=com.docker.network.driver.overlay.bind_interface=eth1

fails because it cannot find keys in etcd. I tried to find a solution for this:

In vendor/src/github.com/docker/libkv/store/etcd/etcd.go, in Get(), you should add error code 100 as well:

-                       if etcdError.ErrorCode == 102 || etcdError.ErrorCode == 104 {
+                       if etcdError.ErrorCode == 102 || etcdError.ErrorCode == 104 || etcdError.ErrorCode == 100 {

Furthermore, in WatchTree you should not return when List() returns an error:

        current, err := s.List(directory)
        if err != nil {
 -               return nil, err
 +               log.Info("watchtree error: ", directory)
        }

With these modifications I can start Docker with etcd. I am able to create an overlay network, publish a service and attach it to a running container. The problem comes when I try to unpublish the service: in etcd the endpoint top directory is not deleted:

                        {
                            "createdIndex": 130,
                            "dir": true,
                            "key": "/docker/libnetwork/endpoint",
                            "modifiedIndex": 130,
                            "nodes": [
                                {
                                    "createdIndex": 130,
                                    "dir": true,
                                    "key": "/docker/libnetwork/endpoint/be2f0baf7de54e0499ee061c862bb3c405defa2d39e6fd8abfa8cbebbcbe451d",
                                    "modifiedIndex": 130
                                },
                                {
                                    "createdIndex": 144,
                                    "dir": true,
                                    "key": "/docker/libnetwork/endpoint/7e784942ec0c4430425bd6a68423c42d61db7eff249bced95f2a519cfa000dd0",
                                    "modifiedIndex": 144
                                }
                            ]
                        },

and it seems the veth pairs are not deleted either.

I actually did not test whether traffic goes through or not. Have you run into this problem? Thanks for your help in advance,

robert.

Watch/WatchTree error return is useless, should be changed for an error channel

The error return on Watch/WatchTree is virtually useless, as the watch simply returns (without surfacing an error) on Get/List failures for etcd and consul. This should be made more consistent across the 3 stores by returning the error through a channel.

Overall, this is an easier pattern for encouraging Watches to be resilient to failures:

watchCh, errCh := store.Watch("key", nil)

for {
  select {
    case event := <-watchCh:
      // Do something with the event
    case err := <-errCh:
      // store may be down or an intermittent network issue occurred
      log.Error(err)
      // sleep, break or continue
  }
}

Add support for etcd authentication

Starting with version 2.1, etcd supports username/password based authentication. We would really like to use this feature (in order to use a hosted etcd cluster) and I'd like to discuss how to do that with libkv.

Currently, libkv supports etcd >= 2.0. In order to use authentication, we'd have to change that requirement to etcd >= 2.1. Would this be an option for you? If so, I'm happy to provide a PR with the implementation.

Flaky tests on travis

Tests pass consistently locally, but they seem to be flaky on Travis for seemingly random reasons. This might be due to tight timeouts, considering the resources allocated to a worker. This should be investigated and fixed.

Zookeeper AtomicPut doesn't respect store.WriteOptions parameter

In my case, I set a TTL in WriteOptions and expect Zookeeper to create an ephemeral node.
AtomicPut doesn't use this parameter at all, and seems to set ephemeral = false by default:
if err = s.createFullPath(parts, false); err != nil {
...

However, it seems the Put API creates an ephemeral node if a TTL is set:
if opts != nil && opts.TTL > 0 {
s.createFullPath(store.SplitKey(strings.TrimSuffix(key, "/")), true)
} else {
s.createFullPath(store.SplitKey(strings.TrimSuffix(key, "/")), false)
}

Is this a bug in AtomicPut?
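For illustration only, the behavior the reporter seems to expect would amount to mirroring Put's TTL handling inside AtomicPut, roughly like the sketch below (a sketch, not the actual upstream fix):

// Create the path as ephemeral only when a TTL was requested,
// matching what Put already does (sketch, not upstream code).
ephemeral := opts != nil && opts.TTL > 0
if err = s.createFullPath(store.SplitKey(strings.TrimSuffix(key, "/")), ephemeral); err != nil {
    return false, nil, err
}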

libkv should have a way to watch a directory that doesn't exist yet.

We need a way to watch a directory that doesn't exist yet.

  • This seems to work with the Consul backend.
  • On etcd, if the directory doesn't exist, it throws an error.
  • Haven't tested zk yet.

Basically, either all the backends need to handle watching directories that don't exist, or we need a way to create directories so the watcher can create, then watch. I'm guessing the latter is going to be easier.

/cc @mavenugo this problem is blocking libnetwork functioning correctly with etcd backend
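Until this is addressed, a minimal workaround sketch is to ensure the directory exists before watching it (assuming kv is a store.Store created as in the earlier examples; the key names and placeholder value are illustrative):

// Make sure the directory exists before calling WatchTree (workaround sketch).
exists, err := kv.Exists("nodes")
if err != nil {
    log.Fatal(err)
}
if !exists {
    // Writing a placeholder child key makes the subsequent WatchTree succeed
    // on backends (like etcd) that reject watches on missing directories.
    if err := kv.Put("nodes/.init", []byte("init"), nil); err != nil {
        log.Fatal(err)
    }
}
events, err := kv.WatchTree("nodes", nil)
if err != nil {
    log.Fatal(err)
}
_ = events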

incorrect docs license grant

Similar to moby/spdystream#57

Docs released under Creative commons.

is an incorrect grant of license.

Please clearly specify scope of the license (which files), license name (e.g. "CC-BY-SA-4.0") and URL of the license. Also please commit complete text of the license.
Thanks.

Mechanism to abort Lock

Right now, Lock() blocks until it succeeds.

We need a way to abort.

Consul for instance provides a way to pass a stopCh - when closed, Lock exits immediately.
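One way this could surface in libkv, modeled on the Consul behavior described above, is sketched below (a Lock signature taking a stop channel is an assumption for discussion, not the current API):

// Hypothetical abortable variant: closing stopCh makes Lock return
// instead of blocking forever.
stopCh := make(chan struct{})
go func() {
    time.Sleep(10 * time.Second) // give up after ten seconds (illustrative)
    close(stopCh)
}()

lock, err := kv.NewLock("leader", nil)
if err != nil {
    log.Fatal(err)
}
lostCh, err := lock.Lock(stopCh)
if err != nil {
    return // the attempt was aborted or failed
}
defer lock.Unlock()
<-lostCh // the lock was lost (session expired, key removed, etc.)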

Expose recursive parameter of WatchTree through WatchOptions

Watches can be used to watch recursively over a set of keys and their child keys. This should be exposed through libkv using a single Watch call instead of having to rely on an additional specialized WatchTree.

We can do this by adding a WatchOptions struct with a Recursive parameter in libkv/store.go.
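A rough sketch of the proposed option (names are illustrative and not part of the current interface):

// Hypothetical options struct for Watch.
type WatchOptions struct {
    // Recursive watches the key and all of its children,
    // subsuming what WatchTree does today.
    Recursive bool
}

// Hypothetical extended signature:
// Watch(key string, stopCh <-chan struct{}, opts *WatchOptions) (<-chan []*KVPair, error)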

puts that would succeed on consul & zookeeper fail on etcd

Since etcd enforces a distinction between "directory" keys and "file" keys, some sequences of puts that would succeed on the other backends will fail on etcd.

Put("/path/to/", "Hello")
Put("/path/to/new/node", "World!")

will succeed on Consul, but the second call will fail on etcd, since /path/to is a file.

Likewise

Put("/path/to/new/node", "Hello")
Put("/path", "World!")

will fail on etcd since /path is a directory.

I've discussed the next-gen API for etcd with some of their maintainers, and I think they're planning to remove the directory/file distinction for v3. So, this issue may sort itself out on that new API.

I'm not sure there is a satisfactory resolution that doesn't involve a trade off.

One thing that could be done is to automatically change directories to files and vice versa, but this would result in data loss, and so is probably not acceptable.

Another possibility would be for libkv to append a file node suffix to every key when using etcd. E.g.

/path/to/  -> /path/to/__data__
/path/to/new/node -> /path/to/new/node/__data__

This leads to the etcd backend behaving the same way as Consul & Zookeeper; however, it makes it very difficult to interoperate with other clients of the data store that are not libkv-based, since the keys they see are different.

We could also just do nothing, and warn developers that writing interoperable code requires that they plan their key use carefully so they always write to a leaf node in the tree.
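If the suffix approach were chosen, the key mapping could be as small as the sketch below (the __data__ suffix comes from the example above; this is not implemented anywhere):

// etcdKey maps a libkv key to an etcd leaf key so that every libkv key
// becomes a "file" node while all of its parents remain directories.
func etcdKey(key string) string {
    return strings.TrimSuffix(key, "/") + "/__data__"
}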

Add support for ACL

Hello,

I want to use this library as part of our codebase internally; however, while looking at it I noticed there is no ACL support. Doing a bit more research, I noticed that of the 3 supported stores only etcd didn't support ACLs, though they seem to be integrating this feature in 2.1. Internally we use Zookeeper and we rely heavily on ACLs to secure access to sensitive keys.

Are you considering adding support anytime soon?

Thank you,
Cosmin

Redis/RedisCluster store

I'm working on implementing Redis/RedisCluster support here in libkv; I just need to understand whether this is something you would like/need so I can go ahead.
I'd also like to discuss whether to support RedisCluster directly via https://godoc.org/gopkg.in/redis.v3#ClusterClient or just the single-instance client (I'd go with the cluster).
(Even though Redis supports many data types, I think simple key/value would be OK here.)

Add support for local, fast K/V storage

libkv could provide users with more usage patterns by adding support for local K/V stores like BoltDB/LevelDB/RocksDB. They are optimized for local access and can be used as the default storage when running a single Manager instance into a Swarm cluster for example. It can also be used by libnetwork as a LocalScope driver (see moby/libnetwork#461).

So far those backends only support operations like Get, Put and Delete so we should find a way to integrate these into the existing Storage interface.

Possible solutions:

  • We split the Storage interface and go back to multiple fine-grained interfaces (one for simple storage operations like Get/Put/Delete, one for atomic operations like AtomicPut/AtomicDelete, etc.); a sketch of this option follows below.
  • We keep the interface as it is and return a "not supported" error for calls that cannot be implemented by these local backends.
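A sketch of what the first option could look like (interface names are illustrative):

// Hypothetical fine-grained interfaces.
type BasicStore interface {
    Put(key string, value []byte, options *WriteOptions) error
    Get(key string) (*KVPair, error)
    Delete(key string) error
    Exists(key string) (bool, error)
}

type AtomicStore interface {
    AtomicPut(key string, value []byte, previous *KVPair, options *WriteOptions) (bool, *KVPair, error)
    AtomicDelete(key string, previous *KVPair) (bool, error)
}

// A local backend such as BoltDB would implement BasicStore only, while
// distributed backends would implement both, plus watches and locks.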

WatchTree sends back the whole list of the specified directory in every event; how can a watcher know the changes?

When there is a change in the specified directory, WatchTree lists the directory and sends all of its entries back.
I have a watcher which wants to know, under a specified directory, when:
a. a kv pair is added
b. a kv pair is deleted
c. a kv pair is modified

I was using the github.com/coreos/etcd/client watcher and achieved the above by checking response.Node and response.PrevNode.

Now, to support more backends, I found libkv.
It solves my problem well except for this issue, and I have no idea how to solve it yet.

Any comment or suggestion, please?
Thank you.
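As a workaround in the meantime, the watcher can diff consecutive snapshots coming out of the WatchTree channel to recover added/deleted/modified pairs; a minimal sketch (assuming the github.com/docker/libkv/store import and that LastIndex changes whenever a value changes):

// diffSnapshots compares two WatchTree snapshots and reports which
// keys were added, deleted, or modified between them.
func diffSnapshots(prev, curr []*store.KVPair) (added, deleted, modified []*store.KVPair) {
    prevByKey := make(map[string]*store.KVPair, len(prev))
    for _, p := range prev {
        prevByKey[p.Key] = p
    }
    seen := make(map[string]bool, len(curr))
    for _, c := range curr {
        seen[c.Key] = true
        old, ok := prevByKey[c.Key]
        switch {
        case !ok:
            added = append(added, c)
        case old.LastIndex != c.LastIndex:
            modified = append(modified, c)
        }
    }
    for _, p := range prev {
        if !seen[p.Key] {
            deleted = append(deleted, p)
        }
    }
    return added, deleted, modified
}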

Wrong stopCh type in Watch/WatchTree

As defined in the interface https://github.com/docker/libkv/blob/master/store/store.go#L64-L69 the stopCh won't work as intended.

I see from the tests that it's not used at all and I guess you're not using it anywhere else.
https://github.com/docker/libkv/blob/master/testutils/utils.go#L102-L103

This popped up for me when trying to use a watch in my project.

At least in etcd, the stopCh must be a channel that can be sent on in order to stop the watch. That is also the intended usage in libkv: https://github.com/docker/libkv/blob/master/store/etcd/etcd.go#L227

I can take care of this refactor. Note that it may result in breaking API changes.

Request: Non Recursive "directory" listing

While .List(key) is useful if I want everything under the directory/root passed to it, there are some aspects of an ls type action which are very useful but missing:

  1. Non-Recursive Listing or depth specification
  2. What type of children to return

For the first I want to be able to get just the immediate children. For the second I want to be able to get back "directories", "keys", or both. For example something like:

kv.List("foo/bar/","directory",false)

...where false is a flag for recursion, and the return is a list of subdirectories in "foo/bar". Alternatively, making the second argument "keys" would only return keys, and "any" would return both. Passing true for the third argument would indicate I want all of whatever type under that prefix.

Alternatively, instead of a recurse bool we could use a depth parameter, where depth=0 means no recursion and depth=5 means you would get a listing for up to 5 levels down from the given prefix.

Obviously the example would break the existing API, so I'd be open to another name such as ListFilter, or something to identify that it is a bit stronger than a basic List call.
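A possible shape for such a call, kept separate from List so the existing API stays untouched (everything here is illustrative):

// Hypothetical options for a filtered listing.
type ListOptions struct {
    // Depth limits recursion: 0 means immediate children only.
    Depth int
    // Kind selects what to return: "directory", "keys" or "any".
    Kind string
}

// Hypothetical signature:
// ListFilter(directory string, opts *ListOptions) ([]*KVPair, error)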

Migrate etcd backend to APIv3 client

We should migrate the etcd backend to use the new APIv3 client using grpc.

This would solve #16 and #20 and smooth out library usage, especially around key handling.

The only problem for now is the Lock call, which is not yet supported by the new client, so we might have to wait for that first or use our own implementation of Lock.

Support for tokens

I have Consul installed on a public IP, so I have bootstrapped it with an ACL master token.

Is there any way I can make the Docker daemon use a master token, or is there any other way to secure the Consul server and Docker client setup?

generic configuration?

I want to implement a new vault backend.
The issue is that vault needs a lot of extra configuration that conventional key-value stores don't need.

I think we need to move from a base struct that each backend uses to a loosely-typed config JSON that will be unmarshalled by the backend itself rather than the generic store.

I'll happily do this if you think that's the right move.
Thoughts?

WatchTree (zookeeper) doesn't send an event when the value of an existing child key changes

[root@localhost ~]# zookeepercli --servers=localhost:2181 -c ls /nsre/Units GISU-5

And I have a program watching nsre/Units/.

  1. add a new key/value pair
    zookeepercli --servers=localhost:2181 --debug=1 -c create /nsre/Units/GISU-6 test
    zookeepercli --servers=localhost:2181 -c ls /nsre/Units
    GISU-5
    GISU-6
    My program receives an event from the watch channel as expected.
  2. change the value of an existing key
    zookeepercli --servers=localhost:2181 --debug=1 -c set /nsre/Units/GISU-6 changeit
    zookeepercli --servers=localhost:2181 --debug=1 -c get /nsre/Units/GISU-6
    changeit
    My program does not receive any event!
    I think this is not the expected behavior of WatchTree, right?

It works if etcd is used as store backend.

Missing MAINTAINERS file

I'm working on preparing the open source repositories for the new centralized maintainers file, but noticed this repository does not yet have a MAINTAINERS file.

I can create a PR to fix this, but I'm not sure who should be included as maintainer(s).

@abronan are you the only maintainer, or should I add other people here?

for reference, see docker/opensource#35 and moby/moby#18321

Enhancement: Automatic and periodic renew of Key lease for basic Put operation

libkv should allow automatically renewing the lease for a key (using TTLs) as long as the client is still up and running. We may want to return a stopChan to stop the automatic renewal process if needed.

In the same way that we have periodic renewal of sessions with Consul, it can be useful not to explicitly invoke a renewTTL every T interval, but simply to know that the key lives until the client dies and the TTL expires.

Some work has been done for the Lock() call, which automatically renews the lease for a key to keep the lock under custody. If a signal is sent to the stopChan, the Lock process stops renewing the TTL and/or unlocks the key. This work should be extended to the basic Put operation of supported backends.
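Until such an option exists, callers can approximate it on top of the current API with a renewal goroutine; a minimal sketch (assuming the usual store/time/log imports and that WriteOptions carries a TTL field, as it does for the backends that support TTLs):

// keepAlive re-puts the key at two-thirds of its TTL until stopCh is
// closed, approximating automatic lease renewal with today's API.
func keepAlive(kv store.Store, key string, value []byte, ttl time.Duration, stopCh <-chan struct{}) {
    ticker := time.NewTicker(ttl * 2 / 3)
    defer ticker.Stop()
    for {
        select {
        case <-stopCh:
            return
        case <-ticker.C:
            if err := kv.Put(key, value, &store.WriteOptions{TTL: ttl}); err != nil {
                log.Println("renew failed:", err)
            }
        }
    }
}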

Add Irmin as a backend driver

Irmin is a git-like distributed store written in OCaml. It implements pretty much all the calls relevant to libkv (minus Compare-And-Swap, and Lock/Unlock, which can be implemented using CAS; see mirage/irmin#288).

What's interesting is that Irmin provides a way to directly manipulate the data on remote nodes using Git. This way we can write the discovery data using libkv, but we can also manipulate the node list (or any other useful data) directly through Git. It makes even more sense for manipulating specific metadata that impacts cluster behavior (specific labels or feature switches).

Everything is also versioned and still available (useful for cluster analysis and pattern detection). We can also do snapshots of the cluster state.

Design review on redis/rediscluster store

This is a follow-up discussion on the design review of the Redis driver.
The original feature requirements are discussed here: #9
For the Store interface, please refer to: https://github.com/docker/libkv/blob/master/store/store.go#L63

Redis is an in-memory key/value store: a single-threaded server which supports rich data structures and Lua scripting. It can also grant a TTL (time-to-live) to each key and evict expired keys automatically. This makes it straightforward to implement the following functions:

Put(key string, value []byte, options *WriteOptions) error
Get(key string) (*KVPair, error)
Delete(key string) error
Exists(key string) (bool, error)

Redis also provides a scan feature which looks up the whole keyspace with a given pattern. It can be used to implement the List and DeleteTree methods with the following key hierarchy:

set /foo bar
set /dir1/foo bar
set /dir1/dir2/foo bar

So if we call List("/"), we need to scan the whole keyspace and return the keys matching the pattern "/*".
If we call List("/dir1/dir2"), we need to use the pattern "/dir1/dir2/*" instead.
In this case, DeleteTree will be performed in two steps: 1. list the tree, 2. batch-delete all keys in the tree.
But that really depends on how atomic we want this to be. If we need this operation to be atomic, we need to move these two functions into a Lua script (which is discussed later).

For the Lock implementation, Redis provides "set if not exists" and "set if exists" features (http://redis.io/commands/set), so whoever creates the key owns the lock. Releasing the lock simply means deleting the key.
The implementation can be trivial as well (for handling the TTL, we just need a goroutine that refreshes the key's expiration time by calling set):

// Acquire the lock: set the key only if it does not exist (NX), with a TTL.
redisclient.Do("SET", key, value, "NX", "EX", ttlInSeconds)

// Once we hold the key, a dedicated goroutine refreshes the TTL.
ticker := time.NewTicker(ttl / 3)
for range ticker.C {
    redisclient.Do("SET", key, value, "XX", "EX", ttlInSeconds) // update the TTL only while the key still exists
}

The Watch API basically allows a client to receive events regarding changes to a key (or a directory). In Redis, we can borrow the keyspace notification feature (http://redis.io/topics/notifications). Since keyspace notifications deliver events for the entire keyspace, the client needs to filter out the irrelevant ones.

func (r *Redis) WatchXXX(key string, stopCh <-chan struct{}) (<-chan *KVPair, error) {
    psc := redislib.PubSubConn{Conn: r.client}
    // Pattern-subscribe to all keyevent notifications.
    psc.PSubscribe("__keyevent*__:*")
    respChan := make(chan *KVPair)
    go func() {
        for {
            // Watch the stop channel.
            select {
            case <-stopCh:
                // Unsubscribe, close respChan and stop watching.
                return
            default:
            }
            switch n := psc.Receive().(type) {
            case redislib.PMessage:
                // Filter on the key part and enqueue to respChan if it is of interest.
                _ = n
            }
        }
    }()
    return respChan, nil
}

Scripting allows multiple non-blocking commands to be run without being interrupted by any incoming client requests.
Here is a simple script example from Stack Overflow.
That is really useful for implementing DeleteTree, AtomicPut and AtomicDelete.

Feel free to modify this issue, and please let me know what you think about this. Thanks.
