mikn / greddis
A high-performance Redis client for Golang
License: MIT License
Randomly got the stacktrace below when running the benchmarks. Not exactly sure how it could happen, other than some magical fuckery by the bufio library.
Keeping it on the bug tracker to see if anyone runs into it.
panic: runtime error: slice bounds out of range [:9995941] with capacity 7453449
goroutine 265 [running]:
internal/poll.(*FD).Write(0xc0000cb180, 0xc003e08fdb, 0x9886a5, 0x71bb09, 0x0, 0x0, 0x0)
/usr/lib/go-1.13/src/internal/poll/fd_unix.go:268 +0x432
net.(*netFD).Write(0xc0000cb180, 0xc003e08fdb, 0x9886a5, 0x71bb09, 0x1000, 0x0, 0x0)
/usr/lib/go-1.13/src/net/fd_unix.go:220 +0x4f
net.(*conn).Write(0xc000010228, 0xc003e08fdb, 0x9886a5, 0x71bb09, 0x0, 0x0, 0x0)
/usr/lib/go-1.13/src/net/net.go:196 +0x68
bufio.(*Writer).Write(0xc000127840, 0xc003e08fdb, 0x9886a5, 0x71bb09, 0xb, 0x0, 0x0)
/usr/lib/go-1.13/src/bufio/bufio.go:625 +0x144
github.com/mikn/greddis.(*comm).writeBulkString(0xc0002d01e0, 0xc003e08000, 0x989680, 0x71cae4)
/home/mikn/devel/greddis/command.go:77 +0x183
github.com/mikn/greddis.(*comm).add(0xc0002d01e0, 0x69f880, 0xc00030a1b0, 0x0)
/home/mikn/devel/greddis/command.go:89 +0xe1
github.com/mikn/greddis.(*comm).addUnsafe(0xc0002d01e0, 0x69f880, 0xc00030a1b0, 0xc0002d01e0, 0x3)
/home/mikn/devel/greddis/command.go:116 +0x48
github.com/mikn/greddis.(*client).Set(0xc0002d00f0, 0x7786a0, 0xc0000160e8, 0x71a68b, 0x7, 0x69f880, 0xc00030a1b0, 0x0, 0x6e32c0, 0x4)
/home/mikn/devel/greddis/client.go:86 +0x516
github.com/mikn/greddis/benchmarks.greddisSet.func1(0xc0000da380)
/home/mikn/devel/greddis/benchmarks/main_test.go:419 +0x289
testing.(*B).runN(0xc0000da380, 0x1)
/usr/lib/go-1.13/src/testing/benchmark.go:190 +0xcc
testing.(*B).run1.func1(0xc0000da380)
/usr/lib/go-1.13/src/testing/benchmark.go:230 +0x64
created by testing.(*B).run1
/usr/lib/go-1.13/src/testing/benchmark.go:223 +0x7d
exit status 2
Currently it results in a timeout panic, which is not very constructive.
Currently I'm using a raw []byte buffer per connection because I hadn't explored the full range of features and utilities in the io, ioutil and bufio packages.
Some core problems that I encountered when trying to use Readers/Writers initially were:
1. Reading up until a delimiter sequence (\r\n)
2. Reading an exact number of bytes
3. Converting ints to a slice of bytes without extra allocations

The last problem was solved by adding a conversion buffer onto the ArrayWriter struct, which is used to convert ints to a slice of bytes before writing them to the bufio.Writer. The second problem can be solved with a combination of io.LimitReader and io.ReadFull. So the only problem remaining is how to read up until we encounter a sequence of bytes (\r\n). This requires a bit more research.
Currently (in the pubsub implementation) only reads from the connection are done to a []byte slice, as the writing has already been replaced by a bufio.Writer, which contains a few optimizations that there was little benefit in replicating in my own implementation.
There are a few problems with the current implementation of reading into a []byte slice first, such as:
- memcpys that could be avoided with a more optimized buffered IO implementation (specifically around discarding consumed information, such as the length of a Redis BulkString)
- Having to pass the []byte buffer as well as the number of consumed bytes internally between functions to make sure that we can do correct truncations of the []byte buffer and don't reallocate it too often
- Having to know whether we are doing a BulkString read or a SimpleString read, which makes the code more complex and is an effect of not wrapping the []byte buffer

I am also currently encountering some bugs in the pubsub implementation which are hard to figure out because of the growing complexity of the []byte read buffer.
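The second and third points above are essentially about the buffer and its consumed-byte count living separately. A minimal sketch of the "wrapping" idea (hypothetical type, not greddis's actual code) bundles the slice with a read position, so tokenizing and truncation become methods instead of (buf, consumed) pairs threaded through every function:

```go
package main

import (
	"bytes"
	"fmt"
)

// readBuf wraps a []byte plus a read position, so callers no longer have
// to pass the slice and a consumed-bytes count around separately.
// This is a hypothetical sketch, not greddis's actual type.
type readBuf struct {
	buf []byte
	pos int
}

// token returns the bytes up to the next \r\n and marks them consumed.
func (r *readBuf) token() ([]byte, bool) {
	i := bytes.Index(r.buf[r.pos:], []byte("\r\n"))
	if i < 0 {
		return nil, false
	}
	tok := r.buf[r.pos : r.pos+i]
	r.pos += i + 2
	return tok, true
}

// truncate drops everything already consumed, reusing the backing array
// instead of reallocating.
func (r *readBuf) truncate() {
	n := copy(r.buf, r.buf[r.pos:])
	r.buf = r.buf[:n]
	r.pos = 0
}

func main() {
	r := &readBuf{buf: []byte("+OK\r\n$5\r\nhello\r\n")}
	tok, _ := r.token()
	fmt.Printf("%s\n", tok) // prints "+OK"
	r.truncate()
	fmt.Printf("%s\n", r.buf) // prints "$5\r\nhello\r\n"
}
```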
The reason for this issue is to leave it as a "research" spot to figure out whether I can effectively replicate the Read loop that I wrote in the UnmarshalSimpleString function without causing too much of a performance hit.
Currently the label is a bit confusing.
This should include concurrency tests to make sure that there are no race conditions, and that high contention doesn't cause failures. It would also be beneficial to have comparative benchmarks under these circumstances, but that's not necessary.
Reader functions String() and Bytes(), and thus Result.Scan(Scanner), currently only support returning values when tokenLen < buf.Size(). The problem, obviously, is that this is a bit obscure. Should this be documented as "expected behaviour" and people steered towards using Result.Scan(io.Writer) instead (with appropriate error messages), or should we implement support for sizes up to the Redis max size?
The first way would encourage "good behaviour", in that it makes people think about how much memory they're allocating (and thus copying) and steers them towards using various buffers (such as strings.Builder{} or bytes.Buffer{}) rather than "blindly" using the primitive types. The second would allow people to shoot themselves in the foot if they want to, and many people may want that.
I'm currently leaning towards throwing errors and encourage good behaviour as I'm a bit of a Good Samaritan in that sense.
Possibly grab something from the mysql driver package.
Reference here: https://blog.golang.org/go1.13-errors
Currently most of the public interfaces are undocumented after an extensive rewrite to the internals of the code. This is of course unfortunate and should be remedied!
With PubSub support this should be relatively easy to add as Redis Sentinel is mostly a group of servers that publishes the current quorum master over a topic.
Documentation: https://redis.io/topics/sentinel
In a nutshell, the Sentinel implementation should connect to all sentinels provided in the connection string, subscribe to the relevant topics, listen for updates, and update the "real master" in the connection. The easiest implementation is probably to provide some kind of wrapper for the Pool object that returns connections that are, according to the sentinels, correct at the time. We may also need to wrap the connection object itself to provide for checks and proper failover behaviour.
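The pool-wrapper idea can be sketched as below. All types and method names are hypothetical; the one concrete fact assumed is that Sentinel announces failovers on its "+switch-master" pub/sub channel, which is where onSwitchMaster would be hooked in:

```go
package main

import (
	"fmt"
	"sync"
)

// sentinelPool is a hypothetical wrapper: it tracks the current master
// address, and a pubsub listener on the Sentinel "+switch-master"
// channel swaps it when a failover is announced.
type sentinelPool struct {
	mu     sync.RWMutex
	master string
}

// onSwitchMaster would be invoked by the pubsub subscription when the
// sentinels announce a new master.
func (p *sentinelPool) onSwitchMaster(newAddr string) {
	p.mu.Lock()
	p.master = newAddr
	p.mu.Unlock()
}

// masterAddr is what a Get() on the wrapped pool would dial for new
// connections, so freshly acquired connections always target the
// currently-correct master.
func (p *sentinelPool) masterAddr() string {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.master
}

func main() {
	p := &sentinelPool{master: "10.0.0.1:6379"}
	p.onSwitchMaster("10.0.0.2:6379") // failover announced by the sentinels
	fmt.Println(p.masterAddr())       // prints "10.0.0.2:6379"
}
```

Wrapping the connection object itself, as the text suggests, would additionally let already-checked-out connections detect that they point at a demoted master.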