go-ratelimit-manager's Issues

Encapsulate the setup process in a single NewRequestLimiter() func

It seems like we should have some initialization method for the ratelimit library which sets this key if it doesn't exist.

Right now, you essentially have to do this to get started:

err = pool.Do(radix.FlatCmd(nil, "HSET",
    "status:"+serverConfig.Host,
    host, serverConfig.Host,
    sustainedRequests, 0,
    burstRequests, 0,
    pendingRequests, 0,
    firstSustainedRequest, 0,
    firstBurstRequest, 0,
))

hostConfig := NewRateLimitConfig(serverConfig.Host,
    serverConfig.SustainedRequestLimit-1,
    serverConfig.SustainedTimePeriod,
    serverConfig.BurstRequestLimit-1,
    serverConfig.BurstTimePeriod)

canMake, sleepTime := requestStatus.CanMakeRequest(pool, requestWeight, hostConfig)

It would be nice to encapsulate that into something like:

config := RequestLimiterConfig{
    host: "api.binance.com",
    ratelimitConfig: RateLimitConfig{
        sustainedLimit:      1200,
        sustainedTimePeriod: 60,
        burstLimit:          20,
        burstTimePeriod:     1,
    },
}
limiter, err := NewRequestLimiter(config)

That would then set the key in redis with all 0s if the key doesn't already exist and you'd be ready to call:

limiter.CanMakeRequest()
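A minimal sketch of what NewRequestLimiter could look like, assuming it is handed a radix pool alongside the config and reuses the package's existing field-name constants (host, sustainedRequests, etc.); the RequestLimiter type and its fields are illustrative, not a final API:

// RequestLimiter bundles the redis pool with the host's rate limit config.
type RequestLimiter struct {
    pool   *radix.Pool
    config RequestLimiterConfig
}

// NewRequestLimiter seeds the status hash with all zeros if the key for this
// host doesn't already exist, so callers can go straight to CanMakeRequest.
func NewRequestLimiter(pool *radix.Pool, config RequestLimiterConfig) (*RequestLimiter, error) {
    key := "status:" + config.host

    // Only seed when the key is missing so we never reset live counters.
    var exists int
    if err := pool.Do(radix.Cmd(&exists, "EXISTS", key)); err != nil {
        return nil, err
    }
    if exists == 0 {
        err := pool.Do(radix.FlatCmd(nil, "HSET", key,
            host, config.host,
            sustainedRequests, 0,
            burstRequests, 0,
            pendingRequests, 0,
            firstSustainedRequest, 0,
            firstBurstRequest, 0,
        ))
        if err != nil {
            return nil, err
        }
    }

    return &RequestLimiter{pool: pool, config: config}, nil
}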

Add integration testing to library

We're currently testing the code to make sure it performs, but there are some scenarios where integration testing is very important:

  • are we able to read and write from the data store without overwriting, getting stale data, or blocking other readers/writers for unacceptable amounts of time?

  • does everything execute in the right order to give us proper wait times?

As such, we can spin up a redis container during integration tests and then run those scenarios against our code.

  • you'll need to launch goroutines which request permission for requests from go-ratelimit-manager so that multiple different readers and writers are working at the same time. I think we should try to have 1,000 goroutines running at the same time to push the limits of the library. Each goroutine should ask for some random number of requests (see the sketch after this list).

  • you'll also want some way to validate this is all working, so you'll have to write a web server which returns 429 when too many requests are being sent and 200 when it's ok, and then try to hammer that web server with more requests than its ratelimits allow. I added a separate issue for this.
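A minimal sketch of the goroutine harness (inside a _test.go file, so sync, math/rand, time, and testing are imported), assuming a shared limiter with the CanMakeRequest-style call proposed above and a sleep time in milliseconds; both of those are assumptions, not the final interface:

// hammerLimiter launches 1,000 goroutines that each repeatedly ask the limiter
// for a random request weight, sleeping whenever the limiter says to wait.
func hammerLimiter(t *testing.T, limiter *RequestLimiter) {
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 50; j++ {
                weight := rand.Intn(5) + 1 // random number of requests per call
                for {
                    canMake, sleepTime := limiter.CanMakeRequest(weight)
                    if canMake {
                        break
                    }
                    time.Sleep(time.Duration(sleepTime) * time.Millisecond)
                }
                // ...make the real request against the test server here and
                // fail the test if it ever returns 429...
            }
        }()
    }
    wg.Wait()
}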

README discussing project

We should start a basic README which gives a short overview of the library's features and how we expect it to be used.

I’d prefer to link to an examples/*.go directory with code samples that actually execute. We can put the test server up at helixstream.com to execute http requests against.

Should also include badges for CircleCI, godoc, and coveralls.io

Create test web server with customizable ratelimits

In order to run integration testing, we will need to be able to send requests to a web server and validate that we never hit the ratelimit using our library (and that we do when not using it).

Go has a ratelimiting package (maintained under golang.org/x/time) that allows for burst and sustained limits:

https://godoc.org/golang.org/x/time/rate

here's a good intro: https://gobyexample.com/rate-limiting

AC:

  • make a web server that listens on a customizable port on all open interfaces
  • add a GET /testRateLimit route which returns 200 when not ratelimited, 429 when ratelimited, and 419 if 10 requests are received after the ratelimit is reached (see the sketch below)
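A minimal sketch of such a server using x/time/rate; the port default, the 20 req/s limit, and the counter that flips to 419 are placeholder choices, not part of the AC:

package main

import (
    "flag"
    "fmt"
    "net/http"
    "sync/atomic"

    "golang.org/x/time/rate"
)

func main() {
    port := flag.Int("port", 8080, "port to listen on (all interfaces)")
    flag.Parse()

    // Example limits only: a sustained rate of 20 req/s with a burst of 20.
    limiter := rate.NewLimiter(rate.Limit(20), 20)
    var overLimit int64 // requests received since the ratelimit was hit

    http.HandleFunc("/testRateLimit", func(w http.ResponseWriter, r *http.Request) {
        if limiter.Allow() {
            atomic.StoreInt64(&overLimit, 0)
            w.WriteHeader(http.StatusOK)
            return
        }
        // 429 while over the limit, switching to 419 once 10 requests have
        // arrived after the ratelimit was reached.
        if atomic.AddInt64(&overLimit, 1) >= 10 {
            w.WriteHeader(419)
            return
        }
        w.WriteHeader(http.StatusTooManyRequests)
    })

    addr := fmt.Sprintf(":%d", *port) // all interfaces
    if err := http.ListenAndServe(addr, nil); err != nil {
        panic(err)
    }
}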

Define host request status struct

host: api.binance.com
sustained requests:
burst requests:
pending:
1st sustained request: unix time stamp
1st burst request: unix time stamp

Not storing the IP because these datastores will be co-located on one IP address.
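A sketch of the struct with the fields above, assuming the timestamps are stored as Unix epoch values; the type and field names are illustrative:

// RequestsStatus tracks how many requests have been made against a host
// within the current sustained and burst windows.
type RequestsStatus struct {
    Host                  string // e.g. "api.binance.com"
    SustainedRequests     int    // requests completed in the current sustained period
    BurstRequests         int    // requests completed in the current burst period
    PendingRequests       int    // requests approved but not yet completed
    FirstSustainedRequest int64  // unix timestamp of the first request in the sustained period
    FirstBurstRequest     int64  // unix timestamp of the first request in the burst period
}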

Build "spike" for saving and retrieving ratelimit request status in K/V store

We don't need to flesh out production-level code for this, but once you've identified one or two K/V stores that seem interesting, it would be useful to build a quick and dirty update to the application to allow these things:

  • check to see if key for this host and IP address exists
  • retrieve key for this IP and host
  • set key for this IP and host

Generally, the easiest way to interact with a key value store is to download and run its Docker image and then connect to "localhost" over the port that Docker forwards to your computer.

Here's a guide to setting up redis (a k/v store) for development using docker-compose:

https://cheesyprogrammer.com/2018/01/04/setting-up-a-redis-test-environment-using-docker-compose/

There are lots of go redis client libraries to pick from: https://redis.io/clients#go (the ones with stars are recommended)
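A rough spike of the three operations using the radix client (any of the linked clients would look similar), keyed with the status:<host> convention used elsewhere in the project:

package main

import (
    "fmt"

    "github.com/mediocregopher/radix/v3"
)

func main() {
    // Connect to the redis container that docker-compose forwards to localhost.
    pool, err := radix.NewPool("tcp", "localhost:6379", 10)
    if err != nil {
        panic(err)
    }

    key := "status:api.binance.com"

    // 1. Check whether the key for this host exists.
    var exists int
    if err := pool.Do(radix.Cmd(&exists, "EXISTS", key)); err != nil {
        panic(err)
    }

    // 2. Set the key for this host (a hash of counters) if it's missing.
    if exists == 0 {
        err := pool.Do(radix.FlatCmd(nil, "HSET", key,
            "sustainedRequests", 0,
            "burstRequests", 0,
        ))
        if err != nil {
            panic(err)
        }
    }

    // 3. Retrieve the key for this host.
    var status map[string]string
    if err := pool.Do(radix.Cmd(&status, "HGETALL", key)); err != nil {
        panic(err)
    }
    fmt.Println(status)
}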

Ensure that viewing current status and updating status are done in a transaction

In order to make sure we don't go over the limits, we need the data to be updated in the database before another process or container is allowed to read those values.

Process 1

  • Ask for current limits
  • Update limits

Process 2

  • Ask for current limits (if process 1 asked first, we shouldn't respond until process 1 has updated the limits)

This should be achievable by wrapping the get and set calls in a transaction and committing once they are complete (or discarding it if no set is needed).
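One way to sketch this against redis with radix, using WATCH/MULTI/EXEC so the write aborts (and can be retried) if another client touched the key between the read and the update; this optimistic-locking approach is just one option, and the HINCRBY on pendingRequests stands in for whatever update is actually needed:

// tryUpdateStatus reads the current counters and queues the update in a redis
// transaction. It returns false if another client modified the key in the
// meantime, in which case the caller should retry.
func tryUpdateStatus(pool *radix.Pool, key string, requestWeight int) (bool, error) {
    committed := false
    err := pool.Do(radix.WithConn(key, func(conn radix.Conn) error {
        // Watch the key so EXEC aborts if anyone else writes to it first.
        if err := conn.Do(radix.Cmd(nil, "WATCH", key)); err != nil {
            return err
        }

        // Ask for the current limits/counters.
        var status map[string]string
        if err := conn.Do(radix.Cmd(&status, "HGETALL", key)); err != nil {
            return err
        }
        // ...decide here whether the request fits under the limits...

        // Queue the update and commit.
        if err := conn.Do(radix.Cmd(nil, "MULTI")); err != nil {
            return err
        }
        if err := conn.Do(radix.FlatCmd(nil, "HINCRBY", key, "pendingRequests", requestWeight)); err != nil {
            return err
        }

        // EXEC returns nil (not an array) when the watched key changed.
        var results []int64
        mn := radix.MaybeNil{Rcv: &results}
        if err := conn.Do(radix.Cmd(&mn, "EXEC")); err != nil {
            return err
        }
        committed = !mn.Nil
        return nil
    }))
    return committed, err
}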

Define ratelimit config

host: api.binance.com
sustained request limit: 1200
sustained request time period: 60 seconds
burst request limit: 20
burst request time period: 1 second

Note: it's unlikely that these are the real rate limits; we will probably need a way to describe actual vs. published ratelimits.
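A sketch of the config struct matching the NewRateLimitConfig call shown in the first issue; whether the time periods end up as seconds or time.Duration is still open:

// RateLimitConfig describes the published limits for a single host.
type RateLimitConfig struct {
    Host                  string // e.g. "api.binance.com"
    SustainedRequestLimit int    // e.g. 1200 requests
    SustainedTimePeriod   int64  // length of the sustained window in seconds, e.g. 60
    BurstRequestLimit     int    // e.g. 20 requests
    BurstTimePeriod       int64  // length of the burst window in seconds, e.g. 1
}

// NewRateLimitConfig is a convenience constructor for the struct above.
func NewRateLimitConfig(host string, sustainedLimit int, sustainedPeriod int64, burstLimit int, burstPeriod int64) RateLimitConfig {
    return RateLimitConfig{host, sustainedLimit, sustainedPeriod, burstLimit, burstPeriod}
}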

Add functions which allow for fine tuning actual limits

We currently support a configuration for sustained and burst limits, but these are as advertised. We want to collect information on what the real limits are (when is 429 hit, when is 419 hit), adjust accordingly, and save this data. The 429 and 419 codes need to be adjustable, as different APIs may give back different codes (401, 403, etc.) for bans.

Ideally, this should be an optional feature of the library. My first thought is to report the status code back with the completion of each request, since we are already keeping track of request counts. We don't want to force users onto a specific http library, so the actual requests should stay outside the scope of this package (except for example code and tests). See the sketch below.
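One possible shape for that hook; ReportResult and the configurable status codes are hypothetical, sketched only to show where the adjustment logic would live:

// ReportResult is a hypothetical hook the caller invokes after each request
// completes, passing the HTTP status code observed by their own http client.
func (l *RequestLimiter) ReportResult(statusCode int, requestWeight int) {
    switch statusCode {
    case 429: // rate limited; should be configurable per API
        // Record the counts at which the real limit was hit and tighten the
        // effective sustained/burst limits accordingly (persisted to redis).
    case 419, 401, 403: // ban-style codes; should also be configurable
        // Record a ban so callers can back off much more aggressively.
    default:
        // Success: nothing to adjust.
    }
}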

Have we hit ratelimit func

  • Have we hit the sustained limit?
    • If yes, how long should we wait?
  • Have we hit the burst limit?
    • If yes, how long should we wait?
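A sketch of that check, assuming the RequestsStatus and RateLimitConfig shapes sketched above with times in Unix seconds; the return convention (bool plus seconds to wait) mirrors the existing CanMakeRequest call:

// hasHitLimits reports whether the sustained or burst limit has been reached
// and, if so, how many seconds to wait before the relevant window resets.
func hasHitLimits(s RequestsStatus, c RateLimitConfig, now int64) (hit bool, wait int64) {
    // Sustained window: wait until the sustained time period has elapsed
    // since the first request of the window.
    if s.SustainedRequests+s.PendingRequests >= c.SustainedRequestLimit {
        return true, c.SustainedTimePeriod - (now - s.FirstSustainedRequest)
    }
    // Burst window: same idea over the (much shorter) burst period.
    if s.BurstRequests+s.PendingRequests >= c.BurstRequestLimit {
        return true, c.BurstTimePeriod - (now - s.FirstBurstRequest)
    }
    return false, 0
}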

Research key value stores

Look at embedded key value stores and their use cases for reading and writing data.

Come up with a list of 5 key value stores and the pros and cons of each for our use case.

Refactor to use a getHostKey() func

Right now, inside the tests and requestsstatus.go, you're calling key := "status:" + h.Host a lot. We should make this a func with the signature getHostKey(host string) string so that if the key format needs to change in the future, we won't have to update all those lines of code.
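The helper itself is tiny; a sketch:

// getHostKey returns the redis key under which a host's request status is stored.
func getHostKey(host string) string {
    return "status:" + host
}

Call sites then become key := getHostKey(h.Host).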
