
ratelimit's People

Contributors

abhinav, cncal, kriskowal, prashantv, rabbbit, storozhukbm, theperiscope


ratelimit's Issues

change limit after start

Hi,
How can I change the limit after the limiter is already running?
For example, I want to start with a limit of 100 and, after some work, change it to 500.
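
The library itself doesn't expose a way to change the rate on an existing limiter, so one option is to handle it in caller code. Below is a minimal sketch, assuming the public ratelimit.New/Take API and Go 1.19+ atomics; the wrapper type and its SetRate method are made up for illustration, not part of the package:

package main

import (
	"sync/atomic"

	"go.uber.org/ratelimit"
)

// swappableLimiter holds the current limiter behind an atomic pointer so the
// rate can be replaced while other goroutines keep calling Take.
type swappableLimiter struct {
	limiter atomic.Pointer[ratelimit.Limiter]
}

func newSwappableLimiter(ratePerSecond int) *swappableLimiter {
	s := &swappableLimiter{}
	s.SetRate(ratePerSecond)
	return s
}

// SetRate installs a fresh limiter; in-flight Take calls finish against the
// old one, new calls see the new rate.
func (s *swappableLimiter) SetRate(ratePerSecond int) {
	rl := ratelimit.New(ratePerSecond)
	s.limiter.Store(&rl)
}

func (s *swappableLimiter) Take() {
	(*s.limiter.Load()).Take()
}

With this, you could start with newSwappableLimiter(100) and later call SetRate(500); note that any accumulated pacing state is reset when the limiter is swapped.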

Unexpected results on CI with higher loop iterations

Hi, I added a modified version of the example tests to my project to see what results we get on our GitLab runners, and they are not what I expected once the loop runs extra iterations.

Results:

=== RUN   TestRateLimit_Example_default
    ratelimit_test.go:20: 1 0s
    ratelimit_test.go:20: 2 0s
    ratelimit_test.go:20: 3 0s
    ratelimit_test.go:20: 4 4ms
    ratelimit_test.go:20: 5 10ms
    ratelimit_test.go:20: 6 10ms
    ratelimit_test.go:20: 7 10ms
    ratelimit_test.go:20: 8 10ms
    ratelimit_test.go:20: 9 10ms
    ratelimit_test.go:20: 10 10ms
    ratelimit_test.go:20: 11 10ms
    ratelimit_test.go:20: 12 10ms
    ratelimit_test.go:20: 13 10ms
    ratelimit_test.go:20: 14 10ms
    ratelimit_test.go:20: 15 10ms
    ratelimit_test.go:20: 16 10ms
    ratelimit_test.go:20: 17 10ms
    ratelimit_test.go:20: 18 10ms
    ratelimit_test.go:20: 19 10ms
    ratelimit_test.go:20: 20 10ms
    ratelimit_test.go:20: 21 10ms
    ratelimit_test.go:20: 22 10ms
    ratelimit_test.go:20: 23 10ms
    ratelimit_test.go:20: 24 10ms
    ratelimit_test.go:20: 25 10ms
    ratelimit_test.go:20: 26 10ms
    ratelimit_test.go:20: 27 10ms
    ratelimit_test.go:20: 28 10ms
    ratelimit_test.go:20: 29 10ms
    ratelimit_test.go:20: 30 10ms
    ratelimit_test.go:20: 31 10ms
    ratelimit_test.go:20: 32 10ms
    ratelimit_test.go:20: 33 58ms
    ratelimit_test.go:20: 34 0s
    ratelimit_test.go:20: 35 0s
    ratelimit_test.go:20: 36 0s
    ratelimit_test.go:20: 37 0s
    ratelimit_test.go:20: 38 2ms
    ratelimit_test.go:20: 39 10ms
    ratelimit_test.go:20: 40 10ms
    ratelimit_test.go:20: 41 10ms
    ratelimit_test.go:20: 42 68ms
    ratelimit_test.go:20: 43 0s
    ratelimit_test.go:20: 44 0s
    ratelimit_test.go:20: 45 0s
    ratelimit_test.go:20: 46 0s
    ratelimit_test.go:20: 47 0s
    ratelimit_test.go:20: 48 2ms
    ratelimit_test.go:20: 49 10ms
--- FAIL: TestRateLimit_Example_default (0.50s)

and

=== RUN   TestRateLimit_Example_withoutSlack
    ratelimit_test.go:46: 1 10ms
    ratelimit_test.go:46: 2 10ms
    ratelimit_test.go:46: 3 10ms
    ratelimit_test.go:46: 4 55.767826ms
    ratelimit_test.go:46: 5 10ms
    ratelimit_test.go:46: 6 10ms
    ratelimit_test.go:46: 7 10ms
    ratelimit_test.go:46: 8 70.044275ms
    ratelimit_test.go:46: 9 10ms
    ratelimit_test.go:46: 10 10ms
    ratelimit_test.go:46: 11 10ms
    ratelimit_test.go:46: 12 10ms
    ratelimit_test.go:46: 13 10ms
    ratelimit_test.go:46: 14 10ms
    ratelimit_test.go:46: 15 10ms
    ratelimit_test.go:46: 16 10ms
    ratelimit_test.go:46: 17 10ms
    ratelimit_test.go:46: 18 10ms
    ratelimit_test.go:46: 19 10ms
    ratelimit_test.go:46: 20 10ms
    ratelimit_test.go:46: 21 10ms
    ratelimit_test.go:46: 22 10ms
    ratelimit_test.go:46: 23 10ms
    ratelimit_test.go:46: 24 10ms
    ratelimit_test.go:46: 25 10ms
    ratelimit_test.go:46: 26 29.903942ms
    ratelimit_test.go:46: 27 10ms
    ratelimit_test.go:46: 28 10ms
    ratelimit_test.go:46: 29 10ms
    ratelimit_test.go:46: 30 10ms
    ratelimit_test.go:46: 31 60.066856ms
    ratelimit_test.go:46: 32 10ms
    ratelimit_test.go:46: 33 10ms
    ratelimit_test.go:46: 34 10ms
    ratelimit_test.go:46: 35 10ms
    ratelimit_test.go:46: 36 59.972692ms
    ratelimit_test.go:46: 37 10ms
    ratelimit_test.go:46: 38 10ms
    ratelimit_test.go:46: 39 10ms
    ratelimit_test.go:46: 40 70.053405ms
    ratelimit_test.go:46: 41 10ms
    ratelimit_test.go:46: 42 10ms
    ratelimit_test.go:46: 43 10ms
    ratelimit_test.go:46: 44 69.93269ms
    ratelimit_test.go:46: 45 10ms
    ratelimit_test.go:46: 46 10ms
    ratelimit_test.go:46: 47 10ms
    ratelimit_test.go:46: 48 70.018764ms
    ratelimit_test.go:46: 49 10ms
--- FAIL: TestRateLimit_Example_withoutSlack (0.90s)

The modified tests are very plain; essentially I just replaced the print statement with t.Log and made the loop run more iterations.

func TestRateLimit_Example_default(t *testing.T) {
	rl := ratelimit.New(100) // per second, some slack.

	rl.Take()                         // Initialize.
	time.Sleep(time.Millisecond * 45) // Let some time pass.

	prev := time.Now()
	for i := 0; i < 50; i++ {
		now := rl.Take()
		if i > 0 {
			t.Log(i, now.Sub(prev).Round(time.Millisecond*2))
		}
		prev = now
	}

	t.Fail() // just to see the output

	// Output:
	// 1 0s
	// 2 0s
	// 3 0s
	// 4 4ms
	// 5 10ms
	// 6 10ms
	// 7 10ms
	// 8 10ms
	// 9 10ms
}

and

func TestRateLimit_Example_withoutSlack(t *testing.T) {
	rl := ratelimit.New(100, ratelimit.WithoutSlack) // per second, no slack.

	prev := time.Now()
	for i := 0; i < 50; i++ {
		now := rl.Take()
		if i > 0 {
			t.Log(i, now.Sub(prev))
		}
		prev = now
	}

	t.Fail() // just to see the output

	// Output:
	// 1 10ms
	// 2 10ms
	// 3 10ms
	// 4 10ms
	// 5 10ms
}

The project is based on Go 1.21 and is built with

ENV CGO_ENABLED=0 \
  GOOS=linux \
  GOARCH=amd64
RUN --mount=target=. \
  --mount=type=cache,target=/root/.cache/go-build \
  go build -a -tags netgo -ldflags "-w -s" -o "/bin/" ./cmd/...

Can you support a non-blocking Take that doesn't sleep?

Could you support a non-blocking variant of Take that, instead of sleeping, returns the wait interval, so that the service can decide how to react when it hits the rate limit, for example by quickly returning a 504? If needed, I can contribute a PR. Looking forward to your reply, thanks!
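
go.uber.org/ratelimit doesn't offer this today; as a rough sketch of the requested semantics, golang.org/x/time/rate can approximate it, since its Reserve method reports the required wait without sleeping (the tryTake helper below is made up for illustration):

package main

import (
	"time"

	"golang.org/x/time/rate"
)

// tryTake reports whether a request may proceed immediately; if not, it
// returns how long the caller would have to wait, without ever sleeping.
// The caller can then decide to fail fast (e.g. respond 504) instead of blocking.
func tryTake(l *rate.Limiter) (ok bool, wait time.Duration) {
	res := l.Reserve()
	if d := res.Delay(); d > 0 {
		res.Cancel() // hand the reserved slot back; we are not going to wait
		return false, d
	}
	return true, 0
}

For example, with l := rate.NewLimiter(rate.Limit(100), 1), a handler could call tryTake(l) and return 504 whenever ok is false.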

Support for increasing/decreasing limits on the fly

This would allow implementing some nice things!

An example:
Implement adaptive rate limiting from outside the library, enabling algorithms like:

  1. start with a small rate limit
  2. periodically increase it by a small amount
  3. on noticing errors, drop the rate limit by X%
  4. potentially change the value of X (increase or decrease it so the algorithm converges better to the optimal rate limit)
  5. loop back to 2

A suggested implementation is to keep a count of how many times Take was called in the given window and, when the limit is increased or decreased, recalculate the sleep time. If the window is already exhausted (e.g. 10 per second is decreased to 5 per second, but in this second we already made 8 requests), then the next Take waits for the whole window to finish.

From the current implementation I see there is no time-window management, just a basic calculation of how much time we need to sleep on average between calls. Supporting this would probably require adding that, which is something to consider for complexity.
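
For concreteness, here is a minimal sketch of the additive-increase/multiplicative-decrease loop described above, implemented outside the library by rebuilding the limiter with the new rate; the field names and numbers are illustrative only:

package main

import "go.uber.org/ratelimit"

// adaptiveLimiter adjusts its target rate over time: grow slowly while
// healthy, back off sharply when errors are observed.
type adaptiveLimiter struct {
	rate     int     // current requests per second
	minRate  int     // never drop below this
	step     int     // additive increase per adjustment period
	dropFrac float64 // multiplicative decrease ("X") on errors, e.g. 0.3
	limiter  ratelimit.Limiter
}

// adjust is meant to be called periodically (steps 2-5 of the list above).
// Note: not goroutine-safe; guard with a mutex or an atomic swap in real use.
func (a *adaptiveLimiter) adjust(sawErrors bool) {
	if sawErrors {
		a.rate = int(float64(a.rate) * (1 - a.dropFrac))
	} else {
		a.rate += a.step
	}
	if a.rate < a.minRate {
		a.rate = a.minRate
	}
	// With the current API the only way to apply a new rate is to build a
	// fresh limiter, which also discards any accumulated slack.
	a.limiter = ratelimit.New(a.rate)
}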

Evaluate atomic and mutex based Take implementations.

#15 brought in a new implementation of Take to avoid starving out the older mutex implementation (a major performance win). Before we cut a new release, let's dive a little deeper on the new implementation's behaviors and aim for a single implementation.

can you release a version?

Hello, community maintainers:
There hasn't been a release for a long time. Could you release a new version?

[Question] Why sleepFor should >= maxSlack in limiter_atomic.go

Hi maintainers.

I am new to Go and would like to ask a question about the following:
In the Take() method in limiter_atomic.go, the sleepFor field represents how long we should sleep:

newState.sleepFor += t.perRequest - now.Sub(oldState.last)

So, for example, if we set ratelimit.New(1) and the last Take() happened 20 seconds ago, newState.sleepFor is calculated as roughly -19 seconds.

This field is later used in t.clock.Sleep(newState.sleepFor); other than that it seems unused (not sure if I am correct). So why must sleepFor be >= maxSlack? Isn't Sleep(-19) the same as Sleep(-10)?

Thank you for your reply in advance :)
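
For anyone else puzzling over this, a simplified model of the accumulation and clamp, assuming perRequest = 1s (rate 1/s) and maxSlack = 10s; the names mirror the question above, not the library's exact code:

package main

import "time"

// takeModel mimics one Take: accumulate sleep debt/credit, clamp the credit,
// and sleep only when the debt is positive.
func takeModel(sleepFor *time.Duration, perRequest, maxSlack, sinceLast time.Duration) {
	*sleepFor += perRequest - sinceLast // 20s idle => sleepFor ≈ -19s
	if *sleepFor < -maxSlack {
		*sleepFor = -maxSlack // floor the accumulated credit at maxSlack
	}
	if *sleepFor > 0 {
		time.Sleep(*sleepFor)
		*sleepFor = 0
	}
	// A negative sleepFor is carried over and offsets the cost of subsequent
	// Takes. Sleep(-19s) and Sleep(-10s) behave identically (no sleep), but a
	// carried-over credit of -19s would let more future Takes pass without
	// sleeping than -10s does, so the clamp is what bounds the burst.
}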

Why does the example_test fail when I run it locally?

Can you give me some advice?
When I run the test, it fails: the intervals are not the expected 10ms.

My Environment:
GOVERSION=go1.16
GOHOSTARCH=amd64
GOHOSTOS=windows
GO111MODULE=on
GOARCH=amd64

Library version:
go.uber.org/ratelimit v0.2.0

Test run:
$ go test example_test.go
--- FAIL: Example (0.10s)
got:
1 10ms
2 10ms
3 15.903ms
4 4.097ms
5 11.3687ms
6 8.6313ms
7 10ms
8 12.3425ms
9 7.6575ms
want:
1 10ms
2 10ms
3 10ms
4 10ms
5 10ms
6 10ms
7 10ms
8 10ms
9 10ms
FAIL
FAIL command-line-arguments 0.150s
FAIL

When one of my operations has not called atomicInt64Limiter.Take() for a long time, the rate limit may stop working when it is called again

Here is my test function:

package ratelimit

import (
	"fmt"
	"testing"
	"time"
)

func TestName(t *testing.T) {
	rl := New(1)

	prev := time.Now()

	var sizes = 0
	for i := 0; i < 100000; i++ {
		sizes++
		if sizes >= 1000 {
			now := rl.Take()
			fmt.Println(sizes, i, now.Sub(prev))
			prev = now
			if 4000 < i && i < 5000 {
				time.Sleep(30 * time.Second)
			}
			sizes = 0
		}
	}
}
The result I expected:

1000 999 2µs
1000 1999 1s
1000 2999 1s
1000 3999 1s
1000 4999 1s
1000 5999 30.00147s
1000 6999 1s
1000 7999 1s
1000 8999 1s
1000 9999 1s
1000 10999 1s
1000 11999 1s
1000 12999 1s

The result I actually got:

1000 999 3µs
1000 1999 1s
1000 2999 1s
1000 3999 1s
1000 4999 1s
1000 5999 30.001429s
1000 6999 326µs
1000 7999 34µs
1000 8999 7µs
1000 9999 6µs
1000 10999 6µs
1000 11999 6µs
1000 12999 6µs
1000 13999 6µs
1000 14999 6µs
1000 15999 6µs
1000 16999 6µs
1000 17999 6µs
1000 18999 6µs

I got the desired result after modifying the code here:

pkg : ratelimit
fileName : limiter_atomic_int64.go
funcName : func (t *atomicInt64Limiter) Take() time.Time
line : 76

change
newTimeOfNextPermissionIssue = now - int64(t.maxSlack)
to
newTimeOfNextPermissionIssue = now

Adding support for an `Allow` method

This package initially seemed ideal for my use case: it supports refilling at a certain rate (so limits such as 10 per 2 minutes are possible). However, only exposing Take means it can't be used in a context where you want to return a 429 HTTP response, since Take would block the response, which is not the behaviour I'm looking for.

I have seen it mentioned on other issues that no further features are planned, but I wanted to ask: if I made a pull request adding an Allow method that simply returns whether a call to Take would block, would a merge be considered?
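
As a point of reference for the desired behaviour, here is a sketch of the 429 flow using golang.org/x/time/rate, which already provides a non-blocking Allow; the middleware and the specific limit are made up for the example:

package main

import (
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// Roughly "10 per 2 minutes": one token every 12s, with a burst of 10.
var limiter = rate.NewLimiter(rate.Every(2*time.Minute/10), 10)

// rateLimited rejects requests instead of blocking them when the limit is hit.
func rateLimited(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests) // 429
			return
		}
		next.ServeHTTP(w, r)
	})
}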

Behavior of take in goroutine

It seems that Take() sometimes does not limit properly when it is called inside a goroutine.
Is this expected behavior?
I have confirmed that calling Take() before starting the goroutine avoids the problem.

package main

import (
	"fmt"
	"time"

	"go.uber.org/ratelimit"
	"golang.org/x/sync/errgroup"
)

func main() {
	rl := ratelimit.New(10)
	eg := errgroup.Group{}
	for {
		eg.Go(func() error {
			rl.Take()
			time.Sleep(time.Second)
			fmt.Println(time.Now().Format("15:04:05.00"))
			return nil
		})
	}
}

// The output shows that the rate limit is not working.
// 16:34:37.16
// 16:34:37.16
// 16:34:37.54
// 16:34:37.54
// 16:34:37.54
// 16:34:37.54
// 16:34:37.54
// 16:34:37.55
// 16:34:37.65
// 16:34:37.75
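
For comparison, this is (as I read it) the variant the reporter says does work, with Take() moved out of the goroutine so the spawn loop itself is paced; same caveats as the snippet above:

package main

import (
	"fmt"
	"time"

	"go.uber.org/ratelimit"
	"golang.org/x/sync/errgroup"
)

func main() {
	rl := ratelimit.New(10)
	eg := errgroup.Group{}
	for {
		rl.Take() // block here, before the goroutine is started
		eg.Go(func() error {
			time.Sleep(time.Second)
			fmt.Println(time.Now().Format("15:04:05.00"))
			return nil
		})
	}
}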

This is rate shaping, not limiting

A rate limiter drops requests arriving at a rate exceeding a limit.

A rate shaper slows down processing rate to avoid exceeding the desired maximum rate. As it doesn't drop requests, there is a risk of queue overflow.

The optimal choice between the two depends on the type of application and the desired kind of protection.

EDIT: A combination of the two would probably be better.

Revisit mock clock naps

Rather than nap in arbitrary real time, the mock clock should exhaust all goroutines executed in the context of the test until they are blocked on time.

Supply clock interface for testing?

I have some code using the rate limiter that I'd like to test. However, the core functionality of my code is really time-based; for example, I'd like to verify that messages were sent at 5x their original rate. Doing this with the current rate limiter would be tricky because I'd be relying on time.Now() and time.Sleep(), which causes two problems:

  1. Makes tests run longer than they need to
  2. Introduces a race condition

Is it possible for the ratelimiter to take in a Clock interface that could make testing like this easier? For example:

type Clock interface {
	Now() time.Time
	Sleep(duration time.Duration)
}

type TestClock struct {
	t time.Time
}

func (c *TestClock) Now() time.Time {
	return c.t
}

func (c *TestClock) Sleep(duration time.Duration) {
	c.t = c.t.Add(duration)
}

type RealClock struct {}

func (c *RealClock) Now() time.Time {
	return time.Now()
}

func (c *RealClock) Sleep(duration time.Duration) {
	time.Sleep(duration)
}

This would allow me to do something like:

clock := TestClock{t: time.Now()}
limiter := ratelimit.New(messagesPerSecond, &clock)
output := myCodeUnderTest(limiter)
// verify the output

I see something like this in the ratelimit_test: https://github.com/uber-go/ratelimit/blob/main/ratelimit_test.go

However, I don't think that this method is public.

Should some check be added in the New function?

When I use ticker := ratelimit.New(0), the code throws an exception (panics). But when I use a negative value it "works", which confused me: with ticker := ratelimit.New(-1), each request waits 10s. Should some validation be added when creating a new limiter?
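
Until such a check exists in the library (if ever), a small guard in caller code avoids the surprise; a sketch, with a made-up helper name:

package main

import (
	"fmt"

	"go.uber.org/ratelimit"
)

// newLimiter rejects non-positive rates up front instead of letting the
// zero/negative cases misbehave.
func newLimiter(ratePerSecond int) (ratelimit.Limiter, error) {
	if ratePerSecond <= 0 {
		return nil, fmt.Errorf("ratelimit: rate must be positive, got %d", ratePerSecond)
	}
	return ratelimit.New(ratePerSecond), nil
}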

The "slack" option has no effect when using the newAtomicInt64Based rate limiter

Using the following test code:

func TestSlack2(t *testing.T) {
	rl := New(1, WithSlack(2)) // per second, some slack.
	rl.Take()                  // Initialize.
	time.Sleep(time.Second * 4)
	for i := 0; i < 100; i++ {
		rl.Take()
		fmt.Printf("%d\n", time.Now().Unix())
	}
}

I got this output:
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
1702546953
....

Parity with yab ratelimit

I was trawling through the yab ratelimiting code, and noticed that the interface exposed there is different (and IMO nicer) than the interface in this package.

Is this package intended to be kept updated? If so, would it be possible to update the implementation to bring it up to parity? I'd be more than happy to contribute a PR if it would help. It would be great to be able to use the rate limiting logic independently of yab.

Feature Request: Allow updating the limit

Goal: To use this library with an adaptive rate limiter algorithm like Vegas or AIMD

Problem: Right now the library exposes no way to update the rates dynamically.

There are two ways to solve this:

  1. Create a new rate limiter every time I want to update limits
  2. Dynamically update the rate limit on the object

I would like feedback on which is the preferred approach, or whether there is another way.

X-Rate-Limit

X-Rate-Limit-Scope: 192.168.2.70
X-Rate-Limit-Action: search
X-Rate-Limit-Window: minute
X-Rate-Limit-Limit: 30
X-Rate-Limit-Remaining: 24
X-Rate-Limit-Reset: 2020-05-18T20:19:00.000Z
X-Rate-Limit-Reset-After: 30

atomicInt64Limiter WithoutSlack doesn't block

First off thanks for the library :)

Saw that a new rate limiter was introduced that benchmarked a lot better and pulled it down to try it out.

Noticed that when running WithoutSlack, it just allows everything through instead of waiting because all subsequent Take() calls fall into the case now-timeOfNextPermissionIssue > int64(t.maxSlack)

Easiest way to repro is using your example_test.go:

  • Using rl := ratelimit.New(100) (slack=10):
go test -run Example -count=1
=== RUN   Example
--- PASS: Example (0.09s)
PASS
ok      command-line-arguments  0.207s
  • Using rl := ratelimit.New(100, ratelimit.WithoutSlack)
go test -run Example -count=1   
--- FAIL: Example (0.01s)
got:
1 10ms
2 775µs
3 3µs
4 2µs
5 10µs
6 2µs
7 2µs
8 2µs
9 2µs
want:
1 10ms
2 10ms
3 10ms
4 10ms
5 10ms
6 10ms
7 10ms
8 10ms
9 10ms
FAIL
FAIL    command-line-arguments  0.126s
FAIL
  • Using 0.2.0 rl := newAtomicBased(100, WithoutSlack)
go test -run Example -count=1
PASS
ok      go.uber.org/ratelimit   0.323s

I am not 100% sure why the other unit tests with mocked clocks pass, but your example test and my application tests fail consistently with this new limiter. This is on darwin, if that helps.

Take always returns without blocking if it enters this case branch

https://github.com/uber-go/ratelimit/blob/main/limiter_atomic_int64.go#L72

		case t.maxSlack > 0 && now-timeOfNextPermissionIssue > int64(t.maxSlack):
			// a lot of nanoseconds passed since the last Take call
			// we will limit max accumulated time to maxSlack
			newTimeOfNextPermissionIssue = now - int64(t.maxSlack)

You will find that this case almost always evaluates to true.

The code below is a test to reproduce it:

func TestLimiter(t *testing.T) {
	limiter := ratelimit.New(1, ratelimit.Per(time.Second), ratelimit.WithSlack(1))

	for i := 0; i < 25; i++ {
		if i == 1 {
			time.Sleep(2 * time.Second)
		}
		limiter.Take()

		fmt.Println(time.Now().Unix(), i) // burst

	}
}

ratelimit is v0.3.0
