
summitdb's Issues

Kubernetes Statefulset

SummitDB is something I've been looking for for ages: Redis-like, with JSON and secondary-index support.
I've successfully set up a cluster on localhost with various ports and tested HA by killing nodes and/or storage, all being successful.
The real destination for this is a Kubernetes cluster. I've managed to create a StatefulSet and there were some tricks required:

  • listen address should be the POD_IP, not 0.0.0.0 and, of course, not 127.0.0.1.

It clusters beautifully, but there is one more issue left. The first node that initiates the clustering is stored in the cluster database (Raft?) by IP:port.
In a containerized environment IPs are ephemeral, so this will kill the cluster at first node loss.

Now the question is: is it possible to have an "advertise address" parameter that will be used by Raft peering? This way we can have stable Node IDs by using hostnames, which come with a guarantee in a Kubernetes Statefulset.

Can't join cluster: "peer already known"

I tried SummitDB in Docker Swarm. First I created a master service:

docker service create --name summitdb-master --network redis didasy/summitdb

Then I created a slave service:

docker service create --name summitdb-slave --network redis didasy/summitdb -join summitdb-master:7481

The slave service won't go live, and when I check the log in the container it says:

1:M 16 Dec 13:50:08.292 * SummitDB 0.4.0
1:N 16 Dec 13:50:08.303 * Node at :7481 [Follower] entering Follower state (Leader: "")
1:N 16 Dec 13:50:08.305 # failed to join node at summitdb-master:7481: peer already known

And this is from the master:

1:M 16 Dec 13:40:30.375 * SummitDB 0.4.0
1:N 16 Dec 13:40:30.379 * Enable single node
1:N 16 Dec 13:40:30.385 * Node at :7481 [Follower] entering Follower state (Leader: "")
1:N 16 Dec 13:40:31.860 # Heartbeat timeout from "" reached, starting election
1:N 16 Dec 13:40:31.860 * Node at :7481 [Candidate] entering Candidate state
1:N 16 Dec 13:40:31.863 * Election won. Tally: 1
1:N 16 Dec 13:40:31.863 * Node at :7481 [Leader] entering Leader state
1:N 16 Dec 13:41:06.715 * Received add peer request from :7481
1:N 16 Dec 13:41:12.989 * Received add peer request from :7481
1:N 16 Dec 13:41:18.775 * Received add peer request from :7481
1:N 16 Dec 13:41:24.787 * Received add peer request from :7481
... (the same "Received add peer request from :7481" line repeats roughly every 6 seconds through 13:54:12.089)

missing server means new leader complains forever; needs to avoid spamming its logs

checking on the raft fault tolerance functionality, at 56ec060

in terminal0:
 ./summitdb-server                                                                          
                                                                                            
in terminal1:
 ./summitdb-server -p 7482 -dir data2 -join :7481                                           
                                                                                            
in terminal2:
 summitdb-server -p 7483 -dir data3 -join :7482                                             
                                                                                            
kill term0 server. term1 takes over.

of concern: newly elected leader will complain forever about not being able to contact the term0 server on port 7481. This server may be permanently gone. It is pointless to fill up the logs with useless chatter.

even starting a new third server with:
./summitdb-server -p 7484 -dir data4 -join :7482

so that now the leader knows about a full bank of 3 servers, but it still complains about not being able to reach 7481. log space is massively wasted with pages and pages of:

90632:N 18 Jan 23:59:35.457 # Failed to heartbeat to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 18 Jan 23:59:43.644 # Failed to AppendEntries to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 18 Jan 23:59:45.849 # Failed to heartbeat to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 18 Jan 23:59:53.942 # Failed to AppendEntries to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 18 Jan 23:59:56.288 # Failed to heartbeat to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 19 Jan 00:00:04.256 # Failed to AppendEntries to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 19 Jan 00:00:06.715 # Failed to heartbeat to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 19 Jan 00:00:14.556 # Failed to AppendEntries to :7481: dial tcp :7481: getsockopt: connection refused
90632:N 19 Jan 00:00:17.150 # Failed to heartbeat to :7481: dial tcp :7481: getsockopt: connection refused

It seems fine to complain a couple of times. But once the new leader gets the same server count back, it should certainly be quiet about losing an old node.

Q: transactions or pipelined commands?

Does MULTI actually do a transaction, or does it only do pipelining? It appears to only save up and then submit a set of commands at once.

In a multi-statement transaction, I expect to be able to read and then write based on that read. In a simple example with two variables x and y representing bank accounts, I should be able to consistently transfer a balance (x -= 10 and y += 10) such that no reader in the middle sees an in-progress value. Typically the read is protected by locks or an MVCC implementation.

thought: Maybe this is just a phrasing thing; the MULTI/EXEC/DISCARD commands could be described as "PIPELINING" or "PIPELINING A SEQUENCE OF COMMANDS" instead of "TRANSACTION" commands.

Alternatively, are there actual multi-statement transactions available, say via the JavaScript support? I just couldn't figure out how to read the value of a GET after I started a MULTI.

HTTP 307 Code when Follower Command Submit

Hi
first of all thank you for this clean, efficient distributed db !

I was wondering whether a standard redirection could be applied when a follower receives a command submission, like the sample I saw in rqlite: the follower in that case redirects to the leader using a 307 code.

This way even tools like curl and wget would understand it, and the dev experience would improve.

What do you think?

Q: SummitDB as embedded DB?

I have an application that requires the use of a database, and Summit is an awesome DB: just one binary to deploy with my application. But I'm wondering if I can go another step down and have the db be embedded rather than run as a standalone application. Do you have any examples of how to embed it instead of running it as a standalone server? It would also be nice if it could take advantage of cmux so I don't have my application listening on two ports. Thanks!!

Will the raft.db grow without bound?

Hi, Josh! This is an excellent project, but something confuses me.

I know Raft peers in summitdb will truncate the log list (in memory) and take a Raft snapshot automatically at some point.

But the raft.db file seems to store the raft peer's status and all the raft log entries even after a raft snapshot.

How can I prevent the raft.db file from growing without bound?

Thank you, and I look forward to hearing from you.

document consistency guarantees

The leadership changes section made me nervous because it looks like there is some reliance on the node's perceived Raft status (which may not be the actual Raft status). I traced the code and indeed, on proposal, the node's local Raft status is consulted to decide whether that node is the leader. During network partitions, however, a node may believe it is (still) the leader when in fact it isn't. To accommodate that, one usually employs a system of time-based leadership leases (which you then pay for with mandatory downtime in some scenarios such as the above), but I didn't see that here.

I haven't dug deeper, but there are likely issues in this project when reads or writes take place in this state, jeopardizing correctness. If those issues are handled anywhere, I'd appreciate a pointer.

Storage engines

Awesome project! any thought to having a pluggable storage engine? where default is memory. Having a boltdb option would be very nice.

Is it necessary to open buntdb in file mode?

Hi, Tidwall! This is an excellent project, but something confuses me.

In machine.go (line 134, "db, err := buntdb.Open(file)"), buntdb is opened in file mode rather than pure in-memory mode.

It seems that the only usage of the buntdb database file is to generate the Raft snapshot. (Am I correct? Maybe I have missed something)

However, the Raft snapshot could be generated from the Raft peer's log (/data/raft.db) together with its last snapshot files.

It seems redundant to log each command twice, in both the Raft log and buntdb, and it may hurt the performance of summitdb.

Thank you, and I look forward to hearing from you.

Q: blobs

Everything a string - SummitDB stores only strings which are exact binary representations of what the user stores.

I'm not that familiar with the redis protocol... how do I store a binary blob of []byte data that isn't necessarily even utf8? -- would I need to encode it first, for example using base64? Ugh, I'm hoping not!

Thanks for this terrific looking project. The raft and fencing tokens in particular mean I might be able to avoid deploying zookeeper. (happiness!)

Docker Images?

I have made changes and added docker support and published the docker image.
Maybe it would be better if you/I create an organization on Docker Hub and upload the image there, or you upload it under your name, and we can add it to the README.

Here is the link to the repo,
https://hub.docker.com/r/pyros2097/summitdb/

I'll make a pull request for this also. I have started a cli so that people don't need to use redis-cli.

Getting Started - FreeBSD Incorrect Download Path

On the release notes page, in the FreeBSD "getting started" section, it has a curl command for downloading the release. I've found that from the packaging code and the downloads below that, it should have a .zip extension rather than a .tar.gz extension. I looked for where that might be edited so I could send a PR but couldn't find it. Thanks!

What happens when a command is committed but errors occur while applying it?

Hi, Tidwall! This is an excellent project, but something confuses me.

Suppose there is a situation: a command is committed, but some errors happen when applying the command to the database.

SummitDB can proceed normally if all the Raft peers hit the same errors when applying the same command. But what if some Raft peers behave differently from others? For example, some peers apply the command correctly while others encounter errors. (The errors may also differ from peer to peer.)

Is it necessary to use two-phase or three-phase commit to deal with this problem? It seems that SummitDB just ignores it. (Am I correct, or have I missed something?)

Thank you, and I look forward to hearing from you.

Unable to join cluster

Hi, I found this project is really interesting.
However, when I tried to create a cluster of two summitdb servers, the first server was okay. But when I started the second one to join the first, I was unable to get it to work:

X:\>summitdb -p 7482 -join localhost:7481
9896:M 18 Mar 02:51:44.764 * SummitDB 0.4.0
9896:N 18 Mar 02:51:44.834 * Node at :7482 [Follower] entering Follower state (Leader: "")
9896:N 18 Mar 02:51:45.880 # Heartbeat timeout from "" reached, starting election
9896:N 18 Mar 02:51:45.880 * Node at :7482 [Candidate] entering Candidate state
9896:N 18 Mar 02:51:45.927 # Failed to make RequestVote RPC to :7481: dial tcp :7481: connectex: The requested address is not valid in its context.
9896:N 18 Mar 02:51:47.449 # Election timeout reached, restarting election
... and so on

Can someone point out what I might be doing wrong?
Thank You.

FYI: works fine on AArch64...

Hi @tidwall ,

I've been looking at SummitDB and found it ran just fine on AArch64. This eventuality is not normally something I consider comment-worthy. However, in this case an initial run didn't look great so I opened an issue asking if this project was maintained. Given that this project is maintained, and it runs fine on AArch64, I thought it only polite to document it here and conclude this episode. :)

can't create cluster over localhost:7777 tunneled connection

I think the "peer already known" logic needs to take into account the port as well as the host; or perhaps it just needs to treat localhost specially. I set up an SSH tunnel (using ssh -L 7777:localhost:7481 remotehost) between machines in EC2 to run some benchmarks, but I can't seem to make a cluster over the tunnel:

$  summitdb -join localhost:7777
24510:M 23 Jan 06:17:45.894 * summitdb 0.3.2
24510:N 23 Jan 06:17:45.897 * Node at :7481 [Follower] entering Follower state (Leader: "")
24510:N 23 Jan 06:17:45.898 # failed to join node at localhost:7777: peer already known
$ 

hmm... actually, upon further investigation, this error seems to be coming from the vendored raft here: https://github.com/tidwall/summitdb/blob/master/vendor/github.com/hashicorp/raft/raft.go#L1101

I will continue to investigate. Ideas about how to approach this and workaround thoughts welcome.

Is it (already) possible to retrieve a list of all the peers for a cluster?

I know you can get the current state of any one peer, but I'd like to be able to keep a list of all the peers in case the initial host that a client connects to goes down, so I can reconnect to one of the other hosts without having to know that list in advance.

The context is this: https://github.com/thisisaaronland/go-artisanal-integers#summitdb

Also: This project (go-artisanal-integers) is as dumb and silly as it sounds, except for the part where it's been a running gag for going on 5 years now...

https://github.com/thisisaaronland/go-artisanal-integers#see-also

active project?

Hi,

I see a test failure running SummitDB on AArch64, and I am wondering if this project is actively maintained?

Releases?

Is it possible to get releases for different OSes (Linux primarily), as I don't have Go installed? I just want to test it out and don't want to install the entire Go toolchain for that.

Anyway, the project seems great. Now I need to build an ORM on top of it and test it out. In our projects we generally need to have most of our working set in memory, so this seems ideal, but I see that buntdb uses locks, so won't it be slow to do multiple writes on the same key? How does it handle concurrency? And wouldn't having disk persistence reduce the number of writes per second (like Redis AOF)?

Will you support authentication and TLS?

Hi, first of all, thank you for creating this DB. It's the perfect use case for us.

However, I don't see TLS certs and authentication options on summitdb-server. It looks like redcon already supports TLS, so maybe it's just a simple upgrade?

That said, I don't see summitdb supporting the AUTH command yet. It would be super cool if you are willing to consider adding this option. It's difficult for me to sell it without an authentication feature.

support for list data structure

Hi,
I would like to create a message broker/queue on top of summitDB.
I can see that everything in SummitDB is essentially a string. I wanted to know if you have plans to add other structures like lists.
This would make it easier to create a queue via RPUSH and LPOP, or RPOPLPUSH[1].

Alternatively, if you have no plans to add lists, do you have ideas on how I could emulate a queue using strings?
I was thinking:

  1. create a queue as a string
    ./redis-cli -p 7481 SET mykeyQueue "foo bar baz"

  2. get the item from the queue
    ./redis-cli -p 7481 GET mykeyQueue

  3. perform RPUSH in Go

myStr := "foo bar baz"
myList := strings.Fields(myStr)
myList = append(myList, "newValue") // this is equivalent to RPUSH
// convert back to a string
myStr = strings.Join(myList, " ")

  4. set it back into the queue
    ./redis-cli -p 7481 SET mykeyQueue "foo bar baz newValue"

... performing LPOP will be kind of the opposite of the above.

thoughts??
Or should I use the built-in JSON[2] support to create my queue instead?

  1. https://redis.io/commands/rpoplpush#pattern-reliable-queue
  2. https://github.com/tidwall/summitdb#json-documents

Q: read the last FENCE token without incrementing?

To check if I have won an election, I'd like to read the value of the FENCE counter -- without adding to the fence token. Is this possible?

2nd question: is it possible to listen on a FENCE for changes, or does one need to poll?
