ratelimit's People

Contributors

akondapuram, arkodg, benpope, chashikajw, chuckcrawford, danielhochman, debbyku, dependabot[bot], devincd, dio, dweitzman, dzy176, freedomljc, guilhem, jespersoderlund, junr03, lmajercak-wish, m-rcl, mattklein123, mmorel-35, pchelolo, petedmarsh, peterl328, renuka-fernando, stevesloka, sunjaybhatia, vsabella, walbertus, ysawa0, zakhenry

ratelimit's Issues

New tagged release

Now that we have Docker images being published publicly, are there any plans to release a new tagged version so it's possible to pin against a specific version?

At the moment I can only see the 'master' image and would like to specify a particular release number.

Thanks
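
For illustration, once tagged images exist, pinning would just mean referencing a fixed tag in the compose file rather than master. The envoyproxy/ratelimit:v1.4.0 image name below is borrowed from another issue on this page and is only an example, not a statement about which tags are published:

# Sketch: pin the container to a specific release tag instead of master.
services:
  ratelimit:
    image: envoyproxy/ratelimit:v1.4.0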

Docker build fails due to missing vendor folder

When trying to build the image, the Dockerfile expects a vendor folder to exist. I can see this is in the .gitignore file, but I am not sure what it should contain. Is this folder necessary?

Ratelimiting on multiple time units?

We have a use case where we would like to allow a higher number of requests per second and a lower number of requests per minute, so that a user can have request "bursts" but cannot sustain them.

For a concrete example, here is a sample config:

domain: mongo_cps
descriptors:
  - key: database
    rate_limit:
      unit: second
      requests_per_unit: 10

  - key: database
    rate_limit:
      unit: minute
      requests_per_unit: 100

However, it appears the configuration fails to load because the time unit is not part of the composite key. That is, we see an error message like

error loading new configuration from runtime: my-config.yaml: duplicate descriptor composite key 'mongo_cps.database'

Does anyone have workarounds for this or would it be reasonable to include the time unit as part of the composite key if others have similar use cases? Thanks!

Benefits of using 2 Redis instances?

Is there any benefit to using a separate Redis for the PER SECOND limit, such as a performance improvement? Or does it have high usage and is therefore separated out?
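
For context, here is a minimal sketch of what running a dedicated per-second Redis looks like, assuming the REDIS_PERSECOND* environment variables from the project's settings (the variable names are an assumption on my part, not quoted from this issue):

# Sketch: point per-second limits at a dedicated Redis instance.
# The REDIS_PERSECOND* names are assumed from the project settings.
environment:
  - REDIS_SOCKET_TYPE=tcp
  - REDIS_URL=redis:6379
  - REDIS_PERSECOND=true
  - REDIS_PERSECOND_SOCKET_TYPE=tcp
  - REDIS_PERSECOND_URL=redis-persecond:6379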

Running with HTTPS

Does anyone have any examples of or advice on running the rate limiting service with SSL?

unknown domain XXXXXXX

I get a problem with Envoy 1.11.
Below is the ratelimit server debug info; ratelimit does not work. Help, thanks!!!!

time="2019-07-30T17:04:53+08:00" level=debug msg="unknown domain 'node-nginx.default.svc.cluster.local'"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting cache lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="returning normal response"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting get limit lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="unknown domain 'node-nginx.default.svc.cluster.local'"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting cache lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="returning normal response"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting get limit lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="unknown domain 'node-nginx.default.svc.cluster.local'"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting cache lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="returning normal response"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting get limit lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="unknown domain 'node-nginx.default.svc.cluster.local'"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting cache lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="returning normal response"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting get limit lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="unknown domain 'node-nginx.default.svc.cluster.local'"
time="2019-07-30T17:04:53+08:00" level=debug msg="starting cache lookup"
time="2019-07-30T17:04:53+08:00" level=debug msg="returning normal response"

my envoy.yaml

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9214 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: node-nginx.default.svc.cluster.local
                  rate_limits:
                  - stage: 0
                    actions:
                    - {generic_key: {"descriptor_value": "slowpath"}}
          http_filters:
          - name: envoy.rate_limit
            config:
              domain: node-nginx.default.svc.cluster.local
              stage: 0
              rate_limit_service:
                grpc_service:
                  envoy_grpc:
                    cluster_name: rate_limit_cluster
                  timeout: 0.25s
          - name: envoy.router
  clusters:
  - name: node-nginx.default.svc.cluster.local
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    lb_policy: round_robin
    load_assignment:
      cluster_name: node-nginx.default.svc.cluster.local
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 80
  - name: rate_limit_cluster
    type: strict_dns
    connect_timeout: 0.25s
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: 10.0.0.253
        port_value: 8081

config.yaml

domain: "node-nginx.default.svc.cluster.local"
descriptors:
  - key: generic_key
    value: slowpath
    rate_limit:
      unit: second
      requests_per_unit: 1

Public docker image

Is there a public docker image that we can use? I checked dockerhub.com and there are many images there built by 3rd parties, but I could not find one from Lyft.

Running in AWS ECS

Our team is planning on deploying the rate limiting service in our AWS infrastructure and I was wondering if anyone here had any advice. Because this service relies on http2 and gRPC, it doesn't look like any of AWS' load balancers are ideal for it, so we've decided to try out AWS' ECS service discovery feature. Is this how Lyft has deployed the rate limiting service? Or does anyone else have any experience with deploying this service on AWS?

Unable to start docker image.

I cloned the repo, ran make bootstrap and make compile, and then built the docker image using the Dockerfile in the repo. However, when I attempt to start the image, either with docker-compose up or docker run ratelimiter, I get the following error:

standard_init_linux.go:190: exec user process caused "no such file or directory"

I am on Fedora 26 if that is helpful information.

ratelimit as a systemd service; where are the logs?

Hi,

given a systemd service file:

envoy-ratelimit.service:
[Unit]
Description=envoy-ratelimit
Requires=network-online.target
After=network-online.target
After=syslog.target

[Service]
Type=simple
User=root
EnvironmentFile=/etc/sysconfig/envoy-ratelimit
ExecStart=/usr/local/sbin/ratelimit $OPTIONS
ExecStop=/bin/kill -s TERM $MAINPID
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=envoy-ratelimit

[Install]
WantedBy=multi-user.target

and its arguments

/etc/sysconfig/envoy-ratelimit:
OPTIONS=""
RUNTIME_SUBDIRECTORY=ratelimit
LOG_LEVEL=info
PORT=8082
GRPC_PORT=8081
REDIS_SOCKET_TYPE=tcp
REDIS_URL=127.0.0.1:6389
USE_STATSD=false

when I run the ratelimit service via a command I can see the logs:

# RUNTIME_SUBDIRECTORY=ratelimit PORT=8082 REDIS_SOCKET_TYPE=tcp REDIS_URL=172.31.141.233:6389 LOG_LEVEL=WARN USE_STATSD=false /usr/local/sbin/ratelimit 
WARN[0000] statsd is not in use                         
WARN[0000] connecting to redis on tcp 172.31.141.233:6389 with pool size 10 
WARN[0000] Listening for HTTP on ':8082'                
WARN[0000] Listening for debug on ':6070'               
WARN[0000] Listening for gRPC on ':8081'   

When I run the above as a systemd service, can somebody advise where the logs appear?

# systemctl status envoy-ratelimit.service 
● envoy-ratelimit.service - envoy-ratelimit
   Loaded: loaded (/etc/systemd/system/envoy-ratelimit.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-10-28 22:37:04 CET; 5s ago
  Process: 22110 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 22161 (ratelimit)
   CGroup: /system.slice/envoy-ratelimit.service
           └─22161 /usr/local/sbin/ratelimit
[root@nl-ams02c-ispweb01 system]# journalctl -f -u envoy-ratelimit
-- Logs begin at Sat 2019-03-16 05:00:02 CET. --

^C

/var/log/messages is also empty of logging from envoy-ratelimit

How to use this in envoy?

This is a beginner's question, but I can't find any documentation about this. My configuration so far is like this:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  host_rewrite: www.ggalihpp.ga
                  cluster: service_google
          http_filters:
          - name: envoy.router
  clusters:
  - name: service_google
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    # Comment out the following line to test on v6 networks
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts:
      - socket_address:
          address: minio.ggalihpp.ga
          port_value: 443
    tls_context: { sni: www.minio.ggalihpp.ga }

rate_limit_service:
    grpc_service:
        envoy_grpc:
            cluster_name: service_google
        timeout: 1s

How do I add an RPS limit to the rate_limit_service?

again, sorry for the beginner question
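
For what it's worth, here is a rough sketch of the missing pieces, modeled on the envoy.yaml shown in the "unknown domain" issue above. The domain and descriptor names are placeholders, and this is not a verified working config:

# Sketch: attach a rate limit action to the route and enable the rate limit filter.
route_config:
  virtual_hosts:
  - name: local_service
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route:
        cluster: service_google
        rate_limits:
        - stage: 0
          actions:
          - {generic_key: {descriptor_value: "slowpath"}}
http_filters:
- name: envoy.rate_limit
  config:
    domain: my_domain        # placeholder domain name
    stage: 0
- name: envoy.router

# In the ratelimit service's own config file (separate from envoy.yaml);
# requests_per_unit sets the allowed RPS.
domain: my_domain
descriptors:
  - key: generic_key
    value: slowpath
    rate_limit:
      unit: second
      requests_per_unit: 10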

Is it still used in Production?

The doc says you have been using this in prod for the last 2 years; is that still the case? I'm curious because the commit history seems very light.

integration tests incompatible with go 1.8

Reported by @amogh-plivo

go 1.8 doesn't allow importing a program: golang/go@0f06d0a

$ make tests
mkdir -p /home/dhochman/go/src/github.com/lyft/ratelimit/bin
cd /home/dhochman/go/src/github.com/lyft/ratelimit/src/service_cmd && go build -o ratelimit ./ && mv ./ratelimit /home/dhochman/go/src/github.com/lyft/ratelimit/bin
cd /home/dhochman/go/src/github.com/lyft/ratelimit/src/client_cmd && go build -o ratelimit_client ./ && mv ./ratelimit_client /home/dhochman/go/src/github.com/lyft/ratelimit/bin
cd /home/dhochman/go/src/github.com/lyft/ratelimit/src/config_check_cmd && go build -o ratelimit_config_check ./ && mv ./ratelimit_config_check /home/dhochman/go/src/github.com/lyft/ratelimit/bin
go test ./src/... ./proto/... ./test/... -tags=integration
# github.com/lyft/ratelimit/test/integration
test/integration/integration_test.go:13:2: import "github.com/lyft/ratelimit/src/service_cmd" is a program, not an importable package
FAIL	github.com/lyft/ratelimit/test/integration [setup failed]

I think the fix is to move Run() to another package and have service_cmd be a simple wrapper.
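
A minimal sketch of what that wrapper might look like, assuming a runner package like the one referenced in the stack trace further down this page (the package path and an exported Run entry point are assumptions, not the actual change):

// Sketch only: service_cmd/main.go as a thin wrapper, so tests can import the
// runnable logic from a normal package instead of importing a main program.
// The runner import path and Run() function are assumed for illustration.
package main

import "github.com/lyft/ratelimit/src/service_cmd/runner"

func main() {
	runner.Run()
}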

Redis usage

Hello,

I monitored Redis while ratelimit was in service and I noticed only increments and expiration sets. There are no read requests towards the DB by the application, so I'm wondering what the purpose of deploying a Redis with ratelimit is. Also, I observed random expiration times (many times quite long, e.g. 240 seconds or more). Why is that? What's the purpose of leaving keys in the DB for some random time?

Thank you,
Apostolos

Making a runtime config change did not work

Hello,

This is my docker compose:

    image: envoyproxy/ratelimit:v1.4.0
    container_name: ratelimit
    command: /usr/local/bin/ratelimit
    ports:
      - 8050:8080
      - 8051:8081
      - 6070:6070
    volumes:
      - binary:/usr/local/bin/
      - ./examples:/data
    environment:
      - USE_STATSD=false
      - LOG_LEVEL=debug
      - REDIS_SOCKET_TYPE=tcp
      - REDIS_URL=redis:6379
      - RUNTIME_ROOT=/data
      - RUNTIME_SUBDIRECTORY=/
      - RUNTIME_IGNOREDOTFILES=true
    depends_on:
      - redis

When I make changes in /data/config/config.yaml, the event is not caught.

However, if I do something in root /, I can see logs like :

ratelimit         | time="2020-04-10T03:47:58Z" level=debug msg="Got event \"//config.yaml\": REMOVE"
ratelimit         | time="2020-04-10T03:47:58Z" level=debug msg="Got event \"//config.yaml\": CREATE"
ratelimit         | time="2020-04-10T03:47:58Z" level=debug msg="Got event \"//config.yaml\": WRITE"
ratelimit         | time="2020-04-10T03:47:58Z" level=debug msg="Got event \"//config.yaml\": WRITE"

When I set RUNTIME_ROOT=/data/sub/sub1, the event is caught if I make changes in /data/sub.

Not sure if it's a bug or I missed something here.

Thanks in advance!

Cannot start service ratelimit

After running:
docker-compose down && docker-compose up --build -d

Getting below error:

Creating ratelimit_ratelimit_1       ... error

ERROR: for ratelimit_ratelimit_1  Cannot start service ratelimit: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/usr/local/bin/ratelimit\": stat /usr/local/bin/ratelimit: no such file or directory": unknown

ERROR: for ratelimit  Cannot start service ratelimit: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/usr/local/bin/ratelimit\": stat /usr/local/bin/ratelimit: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.

Getting a few backend 5xx for the ratelimit service at low RPM

Hi,
I am running 2 pods of the rate limit service. When I hit them with 6k requests per minute, I see between 2 and 40 backend 5xx responses. If I increase the number of requests from 6k to 12k, I see no backend 5xx.
Since I am running 2 pods, that means one pod at 3k RPM produces backend 5xx while at 6k it does not. The Envoy timeout is 20ms.

Is there a client or server timeout or keepalive problem?

Enabling StatsD by setting the flag USE_STATSD=true

I currently have my dd-agent running on localhost:8125 accepting UDP traffic. However, I think that after setting the flag to true, the program tries to connect through TCP, which causes a connection refused error.
I looked up the flag in the code base and did not find where it is being used. Can someone shed some light on how to solve this?
Besides, I am also trying to see the stats through gostats. However, I am not familiar with it. Should I write another program to accept the data, or should I modify the current code to be able to see the stats?
Thanks ahead for any info provided.

rate limit service does not really block my connections

Hi, I installed the ratelimit service via docker-compose and created my own configuration file. The service runs up and all works fine, but when I run a benchmark against the service sitting behind Envoy and the ratelimit filter, the result is not as expected. Could you help take a look?

envoy config

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          use_remote_address: true
          server_name: ront-proxy
          stat_prefix: ingress_http
          codec_type: auto
          access_log: # configure logging
            name: envoy.file_access_log
            config:
              path: /dev/stdout
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              rate_limits:
                - stage: 0
                  actions:
                    - remote_address: {}
              routes:
                - match:
                    prefix: "/v1/generic/"
                  route:
                    cluster: vault
                    rate_limits:
                      - stage: 0
                        actions:
                          - generic_key: {descriptor_value: "default"}
                - match:
                    prefix: "/v1"
                  route:
                    cluster: vault
          http_filters:
          - name: envoy.rate_limit
            config:
              stage: 0
              domain: "ratelimiter"
          - name: envoy.router
            config: {}
  clusters:
  - name: vault
    connect_timeout: 30s
    type: static
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 8200
  - name: rate_limit_service
    connect_timeout: 0.25s
    type: static
    lb_policy: round_robin
    http2_protocol_options: {} 
    hosts:
    - socket_address:
        address:  127.0.0.1
        port_value: 8081
        
rate_limit_service: 
  grpc_service:
    envoy_grpc:
      cluster_name: rate_limit_service
    timeout: 0.25s
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

ratelimit service config

$ cat examples/ratelimit/config/ratelimiter.yaml
---
domain: ratelimiter
descriptors:
  - key: remote_address
    rate_limit:
      unit: minute
      requests_per_unit: 1

  - key: generic_key
    value: default
    rate_limit:
      unit: second
      requests_per_unit: 500

Envoy runs on my localhost, and I made a change to the compose file to expose the 8081 gRPC port.

From my understanding, API requests hitting /v1/generic/ would be limited to 500/s and /v1 to 1/min, but from testing I could get nearly 100% of requests through with vegeta.

$ echo "GET http://127.0.0.1:18200/v1/sys/health" | vegeta attack -header "X-Vault-Token: $(cat ~/.vault-token)" -rate=1000 -duration=0 | tee results.bin | vegeta report
^CRequests      [total, rate]            7398, 991.69
Duration      [total, attack, wait]    7.489244538s, 7.460022s, 29.222538ms
Latencies     [mean, 50, 95, 99, max]  51.959881ms, 36.24044ms, 140.004647ms, 213.315832ms, 300.517854ms
Bytes In      [total, mean]            1946430, 263.10
Bytes Out     [total, mean]            0, 0.00
Success       [ratio]                  98.54%
Status Codes  [code:count]             200:7290  429:108
Error Set:
429 Too Many Requests

and ratelimit service logs

ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="found rate limit: remote_address"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="starting cache lookup"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="starting cache lookup"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="looking up key: remote_address"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="found rate limit: remote_address"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="found rate limit: remote_address"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="looking up cache key: ratelimiter_remote_address_172.17.0.1_1542787020"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="looking up cache key: ratelimiter_remote_address_172.17.0.1_1542787020"
ratelimit_1_25f2c6251453 | time="2018-11-21T07:57:28Z" level=debug msg="cache key: ratelimiter_remote_address_172.17.0.1_1542787020 current: 1419"

Redis cluster mode support

Hey,

I'd like to add support for Redis running in cluster mode.

The existing Redis client, radix.v2, has a cluster package which supports this, so this change would involve pulling in that package, updating the driver impl, and potentially exposing a new config value to allow the user to specify whether or not they're running Redis as a cluster.

Happy to open a PR for this if it's something you would like added.

Resource Usage in Production

What kind of resource (CPU + RAM) limits do you recommend when running this in your production environment? Any guidelines available publicly?

Unable to update the port in the test TestBasicConfigLegacy

We found a connection issue when changing the port from 8083 to another port in the test TestBasicConfigLegacy (#117). It seems to only happen in CI; I cannot repro it in my local env.

The error:

Error:		Received unexpected error "rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:8093: connect: connection refused\""

IMO, the reason the original port 8083 works is that port 8083 has already been opened in other tests (https://github.com/lyft/ratelimit/blob/c4f75a8e3f2671c81764d55d5e0b08b92e4a11df/test/integration/integration_test.go#L45).

Deploying config as Kubernetes config map not working

I'm able to build and create a docker image (with a config embedded) and run on kubernetes.

I'm trying to extract the same config to be used as a config map

kubectl create configmap my-config --from-file=mt-ratelimit/config/config.yaml

I mounted the config map as a volume

Environment:
LOG_LEVEL: INFO
REDIS_SOCKET_TYPE: tcp
REDIS_URL: ratelimit-store:6379
RUNTIME_ROOT: /data
RUNTIME_SUBDIRECTORY: ratelimit2
USE_STATSD: false
Mounts:
/data/ratelimit2/config from config-volume (rw)

Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: ratelimit-config
Optional: false

but logs are showing error loading:

time="2019-03-18T23:28:41Z" level=warning msg="runtime: error reading /data/ratelimit2/config/..data: read /data/ratelimit2/config/..data: is a directory"
time="2019-03-18T23:28:41Z" level=warning msg="connecting to redis on tcp ratelimit-store:6379 with pool size 10"
time="2019-03-18T23:28:41Z" level=error msg="error loading new configuration from runtime: config...3983_18_03_23_27_49.611881932.config.yaml: error loading config file: yaml: mapping values are not allowed in this context"

When I sh into the pod, I see this:
/data/ratelimit2/config # ls -al
total 12
drwxrwxrwx 3 root root 4096 Mar 18 23:27 .
drwxr-xr-x 3 root root 4096 Mar 18 23:28 ..
drwxr-xr-x 2 root root 4096 Mar 18 23:27 ..3983_18_03_23_27_49.611881932
lrwxrwxrwx 1 root root 31 Mar 18 23:27 ..data -> ..3983_18_03_23_27_49.611881932
lrwxrwxrwx 1 root root 18 Mar 18 23:27 config.yaml -> ..data/config.yaml

I also tried creating the ConfigMap using
kubectl create configmap my-config --from-file=ratelimitconfig.yaml=config.yaml

but got the same result.
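
One observation: the ..data and ..3983_... entries the loader is reading are the dot-prefixed symlinks Kubernetes creates for ConfigMap volumes. Another issue on this page sets RUNTIME_IGNOREDOTFILES=true; assuming that setting skips dot-prefixed entries (an assumption, not a verified fix for this report), the environment could look like:

# Sketch: skip the ConfigMap's dot-prefixed bookkeeping entries during runtime loading.
# RUNTIME_IGNOREDOTFILES is borrowed from another issue on this page; its effect here is assumed.
LOG_LEVEL: INFO
REDIS_SOCKET_TYPE: tcp
REDIS_URL: ratelimit-store:6379
RUNTIME_ROOT: /data
RUNTIME_SUBDIRECTORY: ratelimit2
RUNTIME_IGNOREDOTFILES: "true"
USE_STATSD: "false"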

proposal: radix.v2 to radix.v3

Description:

As radix.v2 is deprecated for a time, upgrading to radix.v3 give benefits on performance and maintenance.

The biggest change will be the Pool interface. Pool in radix.v3 implements implicit pipelining, so there is no need to explicitly Get and Put connections from the pool.

Changes:

  • interfaces: RateLimitCache, Pool, ...
  • server initialization
  • unit and integration tests, mocks

I'm glad to write a POC PR if we think it's worth doing.

script/install-protoc rough edges

A couple of issues I ran into with script/install-protoc, probably easily fixable:

  1. If you already have /usr/local/protoc, it fails to install the correct version with mv: cannot move 'protoc' to '/usr/local/protoc': Directory not empty. Possible solution is to remove that path or move protoc to a relative ./bin/ path that won't mess with system installs.

  2. protoc-gen-go install uses go get, which pulls the latest version, not the version in glide.lock. This leads to make compile failing. Possible solution: build using the version pulled down by glide via go build ./vendor/github.com/golang/protobuf/protoc-gen-go.

make bootstrap failing for 'github.com/Sirupsen/logrus'

When I tried to do a 'make bootstrap', I got this error.
[ERROR] Update failed for github.com/Sirupsen/logrus: The Remote does not match the VCS endpoint
[ERROR] Failed to install: The Remote does not match the VCS endpoint
make: *** [bootstrap] Error 1

The possible cause is that the logrus package has been renamed from 'github.com/Sirupsen/logrus' to 'github.com/sirupsen/logrus'. This should be fixed in the glide files.

Not meshing with ambassador

I have managed to get past my previous difficulties with getting the actual app running, but now that I have, I can't seem to figure out how to get it to mesh with ambassador. Despite both lyft and datawire repos/docs repeatedly mentioning this as a drop-in solution for rate-limiting in ambassador, I can't actually find any example projects that combine these two technologies.

kubectl logs ratelimiter-0 -c ratelimit
time="2018-09-14T17:10:15Z" level=warning msg="statsd is not in use"
time="2018-09-14T17:10:15Z" level=warning msg="connecting to redis on tcp 127.0.0.1:6379 with pool size10"
time="2018-09-14T17:10:15Z" level=warning msg="Listening for HTTP on ':8080'"
time="2018-09-14T17:10:15Z" level=warning msg="Listening for debug on ':6070'"
time="2018-09-14T17:10:15Z" level=warning msg="Listening for gRPC on ':8081'"

This is what I see when I first start the service. My first question is, what is the difference between the 8080 and the 8081 endpoint? The 8081 endpoint is not mentioned in any of the documentation, and the docker-compose reference does not expose it.

However when I attempt to curl 8080 on the ratelimit pod, I get a 404 not found error. When I curl 8081, I get curl: (56 Recv failure: Connection reset by peer). Directly curling the endpoints also does not appear in any form in the console output for the ratelimit app (the log level is set to debug).

My ambassador instance can see the mapping on the kubernetes service I have defined for ratelimit, and shows "tcp://ratelimiter:8080" on the list of RateLimitService.

So I'm not sure where to go from here. Is the service supposed to 404 on 8080? The debug endpoint looks ok, and hitting 0:6070/rlconfig accurately prints out my configs. I see no error messages either.

Unable to compile

I cloned the repo and ran make bootstrap without errors or warnings, but make compile gives these errors:

script/generate_proto
libprotoc 3.5.1
mkdir -p /root/projects/src/src/github.com/lyft/ratelimit/bin
cd /root/projects/src/src/github.com/lyft/ratelimit/src/service_cmd && go build -o ratelimit ./ && mv ./ratelimit /root/projects/src/src/github.com/lyft/ratelimit/bin
# github.com/lyft/ratelimit/proto/envoy/api/v2/ratelimit
../../proto/envoy/api/v2/ratelimit/ratelimit.pb.go:22:11: undefined: "github.com/lyft/ratelimit/vendor/github.com/golang/protobuf/proto".ProtoPackageIsVersion3
# github.com/lyft/ratelimit/proto/ratelimit
../../proto/ratelimit/ratelimit.pb.go:23:11: undefined: proto.ProtoPackageIsVersion3
make: *** [compile] Error 2

I saw that in the two ratelimit.pb.go files, on those lines, the string is ProtoPackageIsVersion2 before the compile, but after attempting the compile the string becomes ProtoPackageIsVersion3. Is the problem related to some version mismatch?

Fresh installation of CentOS 7 with Go 1.11.4

Proposal: more permissive shouldRateLimit-to-config matching

We're looking into using envoyproxy/ratelimit but have some concerns that configuring limits and checking limits is more intertwined than we'd want.

Suppose you want to have a few different rate limits that might share common fields, like one on (account, endpoint) and one on (account, ip)

As currently implemented, a person calling into the rate limiter API would need to make a request that's very aware of exactly which rate limits exist and apply:

rate_limit_request [
  descriptor(account=123, endpoint=foo_api_method),
  descriptor(account=123, ip=1.2.3.4)
]

And then if you want to make a new rate limit on another dimension like (account_type=free, endpoint_type=write) you would need to go back and update the rate_limit_request yet again to include it in exactly that order:

rate_limit_request [
  descriptor(account=123, endpoint=foo_api_method),
  descriptor(account=123, ip=1.2.3.4),
  descriptor(account_type=free, endpoint_type=write)
]

The proposal is for more permissive rate limit matching where instead of three different descriptors as above you could just have one:

rate_limit_request [
  descriptor(account=123, endpoint=foo_api_method, ip=1.2.3.4, account_type=free, endpoint_type=write),
]

A limit configured for (account, endpoint) would match this descriptor, or (account, ip), or (account_type, endpoint_type), or any subset of keys from the provided descriptor.

To do the matching, instead of iterating through descriptors to find matching limits it would likely involve looking through configured limits to see which ones have all the necessary keys present in a descriptor, restricted to a given domain.

The behavior change could be opt-in to avoid impacting existing users, either with a request flag or a server configuration option.

The benefit going forward would be a separation of concerns between how rate limiting is configured and how limits are queried. A shouldRateLimit() call would essentially say, "here are all the fields that might be useful for rate limiting," and if there's an active incident where it becomes useful to introduce a new rate limit based on IP address, that would be a standalone configuration change to the rate limits without any need for a corresponding change to the service that's being limited.

Thoughts? Feelings? Concerns? Hopes? Dreams? Fears?

Why do you apply a different (random) expiration time whenever the key is accessed?

Hello,

Whenever ratelimit accesses a specific key to increment it, it also re-applies a different (random) expiration time on the key. So 2 actions are performed against the DB instead of the one that is really needed (the increment).

So, I was wondering: why not apply the key expiration just once (at the time the key is created) rather than during every single increment? The CPU demands seem to be pretty high for ratelimit, so I'm wondering if there are any plans for some optimization.

Thank you,
Apostolos

Migrate from glide to dep

Hi Lyft,

Thank you for the awesome rate limiter service!

Now that the Go community has settled on dep, should the dependency management be switched from glide to dep? The glide repo is recommending people to migrate over:

The Go community now has the dep project to manage dependencies. Please consider trying to migrate from Glide to dep. If there is an issue preventing you from migrating please file an issue with dep so the problem can be corrected. Glide will continue to be supported for some time but is considered to be in a state of support rather than active feature development.

I don't mind taking this on if it's something you would like to do.

Git release any time soon?

Hey all,
the last Git release is from Oct 9, 2018. Are we going to have a new one any time soon? I see the repo is active.

Lower-case import path for github.com/Sirupsen/logrus

When adding ratelimit as a dependency using dep it fails with the error:

Solving failure: No versions of github.com/lyft/ratelimit met constraints:
master: Could not introduce github.com/lyft/ratelimit@master due to a case-only variation: it depends on "github.com/Sirupsen/logrus", but "github.com/sirupsen/logrus" was already established as the case variant for that project root by the following other dependers:

The logrus README states:

Everything using logrus will need to use the lower-case: github.com/sirupsen/logrus. Any package that isn't, should be changed.

https://github.com/sirupsen/logrus/blob/master/README.md

As requested in #23, please can the import paths be updated to use the lower-case form.

Using Redis in socket mode no longer works

I had a demo of ratelimit running in a Kubernetes cluster at the end of 2019. For performance reasons I had ratelimit connect to a Redis instance over a socket shared on a ConfigMap volume. Having the ratelimit -> Redis connection not use a network at all is great for performance.

Turns out 3ec0f5f#diff-ab160e3a4ff5cb5fd488f666c5266fdd disabled the default Unix socket usage by hardcoding the use of TCP.

It is not possible to launch ratelimit with REDIS_SOCKET_TYPE=unix even though it is the 'default' value. See an example below:

env USE_STATSD=false LOG_LEVEL=debug REDIS_URL=/tmp/redis.sock REDIS_SOCKET_TYPE=unix RUNTIME_ROOT=examples/ratelimit/config/ RUNTIME_SUBDIRECTORY=. bin/ratelimit
WARN[0000] statsd is not in use
DEBU[0000] runtime changed. loading new snapshot at examples/ratelimit/config
DEBU[0000] runtime: processing examples/ratelimit/config
DEBU[0000] runtime: processing examples/ratelimit/config/config.yaml
DEBU[0000] runtime: adding key=config.yaml value=---
domain: mongo_cps
descriptors:
  - key: database
    value: users
    rate_limit:
      unit: second
      requests_per_unit: 500

  - key: database
    value: default
    rate_limit:
      unit: second
      requests_per_unit: 500
 uint=false
WARN[0000] connecting to redis on /tmp/redis.sock with pool size 10
panic: dial tcp: address /tmp/redis.sock: missing port in address

goroutine 1 [running]:
github.com/envoyproxy/ratelimit/src/redis.checkError(...)
        /home/moderation/Library/envoyproxy/ratelimit/src/redis/driver_impl.go:44
github.com/envoyproxy/ratelimit/src/redis.NewPoolImpl(0xca07e0, 0xc00010eb40, 0x0, 0x0, 0x0, 0xc00003a2ea, 0xf, 0xa, 0xc00011e080, 0xc000134000)
        /home/moderation/Library/envoyproxy/ratelimit/src/redis/driver_impl.go:98 +0x344
github.com/envoyproxy/ratelimit/src/service_cmd/runner.(*Runner).Run(0xc000213f68)
        /home/moderation/Library/envoyproxy/ratelimit/src/service_cmd/runner/runner.go:57 +0x317
main.main()
        /home/moderation/Library/envoyproxy/ratelimit/src/service_cmd/main.go:7 +0x43

The socket type setting at https://github.com/envoyproxy/ratelimit/blob/master/src/settings/settings.go#L22 is no longer used in the code base.

no compilation is possible, no build, cannot use ratelimit service together with envoy

Hi,

Unfortunately, I cannot make any use of the ratelimit service together with envoy because of installation issues.

glide is not maintained anymore, ratelimit cannot be installed, and there is no container image available for it either.

Can you please update the deployment guidelines?

In #49 there was no action taken.

Here is the output when trying to make bootstrap

# git clone https://github.com/lyft/ratelimit.git
Cloning into 'ratelimit'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 566 (delta 0), reused 0 (delta 0), pack-reused 563
Receiving objects: 100% (566/566), 172.90 KiB | 0 bytes/s, done.
Resolving deltas: 100% (244/244), done.
# cd ratelimit/
# export GOPATH=${PWD}
# make bootstrap
script/install-glide
glide install
[WARN]	The name listed in the config file (github.com/lyft/ratelimit) does not match the current location (.)
[INFO]	Loading mirrors from mirrors.yaml file
[INFO]	Downloading dependencies. Please wait...
[INFO]	--> Found desired version locally github.com/envoyproxy/go-control-plane 0ad6fa1cf0b9b6ca8f3617a7188a568e81f40b87!
[INFO]	--> Found desired version locally github.com/envoyproxy/protoc-gen-validate ff6f7a9bc2e5fe006509b9f8c7594c41a953d50f!
[INFO]	--> Found desired version locally github.com/fsnotify/fsnotify 629574ca2a5df945712d3079857300b5e4da0236!
[INFO]	--> Found desired version locally github.com/gogo/protobuf ba06b47c162d49f2af050fb4c75bcbc86a159d5c!
[INFO]	--> Found desired version locally github.com/golang/mock 8a44ef6e8be577e050008c7886f24fc705d709fb!
[INFO]	--> Found desired version locally github.com/golang/protobuf b5d812f8a3706043e23a9cd5babf2e5423744d30!
[INFO]	--> Found desired version locally github.com/google/protobuf 6973c3a5041636c1d8dc5f7f6c8c1f3c15bc63d6!
[INFO]	--> Found desired version locally github.com/gorilla/mux 9e1f5955c0d22b55d9e20d6faa28589f83b2faca!
[INFO]	--> Found desired version locally github.com/kavu/go_reuseport 3d6c1e425f717ee59152524e73b904b67705eeb8!
[INFO]	--> Found desired version locally github.com/kelseyhightower/envconfig ac12b1f15efba734211a556d8b125110dc538016!
[INFO]	--> Found desired version locally github.com/lyft/goruntime a0d6acf20fcfd48f53e623ed62b87ffb7fe17038!
[INFO]	--> Found desired version locally github.com/lyft/gostats 943f43ede7b2dbf1d7162587689cb484d49ecd15!
[INFO]	--> Found desired version locally github.com/lyft/protoc-gen-validate f9d2b11e44149635b23a002693b76512b01ae515!
[INFO]	--> Found desired version locally github.com/mediocregopher/radix.v2 94360be262532d465b7e4760c7a67195d3319a87!
[INFO]	--> Found desired version locally github.com/sirupsen/logrus d682213848ed68c0a260ca37d6dd5ace8423f5ba!
[INFO]	--> Found desired version locally github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18!
[INFO]	--> Found desired version locally golang.org/x/crypto 81e90905daefcd6fd217b62423c0908922eadb30!
[INFO]	--> Found desired version locally golang.org/x/net d0887baf81f4598189d4e12a37c6da86f0bba4d0!
[INFO]	--> Found desired version locally golang.org/x/sys acbc56fc7007d2a01796d5bde54f39e3b3e95945!
[INFO]	--> Found desired version locally golang.org/x/text b19bf474d317b857955b12035d2c5acb57ce8b01!
[INFO]	--> Fetching gopkg.in/yaml.v2
[INFO]	--> Fetching google.golang.org/grpc
[INFO]	--> Fetching google.golang.org/genproto
[WARN]	Unable to checkout google.golang.org/grpc
[ERROR]	Update failed for google.golang.org/grpc: Cannot detect VCS
[WARN]	Unable to checkout gopkg.in/yaml.v2
[ERROR]	Update failed for gopkg.in/yaml.v2: Cannot detect VCS
[WARN]	Unable to checkout google.golang.org/genproto
[ERROR]	Update failed for google.golang.org/genproto: Cannot detect VCS
[ERROR]	Failed to install: Cannot detect VCS
Cannot detect VCS
Cannot detect VCS
make: *** [bootstrap] Error 1
$ go version
go version go1.11.5 linux/amd64
$ glide --version
glide version v0.13.3

Can you please give pragmatic advice on how to proceed further with this?

Making binary files for linux_x64 available under Releases on GitHub would be a great step forward.

Is there any way to check a limit without incrementing it?

I would like to check a limit without incrementing it, which would be the logical equivalent of hits_addend = 0. As proto3 doesn't distinguish between zero and null, this is however not possible (handled in https://github.com/lyft/ratelimit/blob/master/src/redis/cache_impl.go#137).

Is there any way to model this in the current setup?

The use case I want to solve is an auth server, where I want to rate limit based on invalid credentials such that all requests return 429 after e.g. 10 invalid requests per minute.

Rate Limit Service: Resiliency Question

Kind of a meta question 🤓 possibly for the folks at Lyft. I'm curious how people make sure the rate limit service itself is resilient to abuse / high traffic.

  • How do you rate limit the rate limit service?
  • What about the fallback strategy? Defaulting to not rate limited potentially affects the rest of the system, and defaulting to rate limited essentially stops all users from interacting with the system.

Rolling window limits?

Hi!
Does the library support rolling window limits?
For example, if my limit is 60 per hour, there are 2 ways to go about it:

  1. If the user hits the limit, she has to wait for 1 hour before the limit resets. Making even one call during that one-hour period will extend the limit by another hour. For example: if I made 60 calls in 1 hour and then make one call at 1:59, I will be rate limited and will have to wait until 2:59 for the limit to reset.
  2. The limit is evaluated over a sliding window, always counting only the events in the last hour. For example: if I made 60 calls in 1 hour (5 of which were in the first minute), then at 1:01 I am allowed to make 5 more calls.

Based on reading the code, I think only option 1 is supported, but I just want to make sure.

ratelimit client does not correctly handle multiple descriptors

ratelimit_client -domain d -descriptors k1=v1 -descriptors k2=v2

is incorrectly handled as though it was

ratelimit_client -domain d -descriptors k1=v1, k2=v2

Two descriptors aren't the same thing as one descriptor with two entries in it; these two requests have different semantics, so the client has a bug.

Cannot start service ratelimit

I modified the examples/ratelimit/config/config.yaml file, then executed the sudo docker-compose up command.

[jasontom@python ratelimit]$ sudo docker-compose up
Creating network "ratelimit_ratelimit-network" with the default driver
Creating network "ratelimit_default" with the default driver
Creating ratelimit_redis_1           ... done
Creating ratelimit_ratelimit-build_1 ... done
Creating ratelimit_ratelimit_1       ... error

ERROR: for ratelimit_ratelimit_1  Cannot start service ratelimit: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/usr/local/bin/ratelimit\": stat /usr/local/bin/ratelimit: no such file or directory": unknown

ERROR: for ratelimit  Cannot start service ratelimit: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/usr/local/bin/ratelimit\": stat /usr/local/bin/ratelimit: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.

Whitelisting IP addresses

Hi,
We are trying to set up ratelimit for our services and can't seem to overcome one issue: we can't whitelist our IP addresses in the config file.

---
domain: webback

descriptors:
  - key: generic_key
    value: registration_service_stage0
    descriptors:
      - key: remote_address
        rate_limit:
          unit: hour
          requests_per_unit: 16
  - key: generic_key
    value: registration_service_stage1
    descriptors:
      - key: remote_address
        rate_limit:
          unit: minute
          requests_per_unit: 4
  - key: generic_key
    value: registration_service_stage2
    descriptors:
      - key: remote_address
        rate_limit:
          unit: second
          requests_per_unit: 2
  - key: generic_key
    value: forgot_password_stage_0
    descriptors:
      - key: remote_address
        rate_limit:
          unit: hour
          requests_per_unit: 12
  - key: generic_key
    value: forgot_password_stage_1
    descriptors:
      - key: remote_address
        rate_limit:
          unit: minute
          requests_per_unit: 1
Could you please give us an idea of how we can get around our problem?
Thanks.
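
One pattern that fits the config format shown above is to add a more specific remote_address entry for each address you want to exempt and give it an effectively unreachable limit, so the general limit never applies to it. This is only a sketch; the IP below is a placeholder and I have not verified it against this exact setup:

  - key: generic_key
    value: registration_service_stage1
    descriptors:
      # More specific entry for a single whitelisted address (placeholder IP);
      # an exact value match is preferred over the generic remote_address entry,
      # and the very high cap effectively disables the limit for that address.
      - key: remote_address
        value: 203.0.113.10
        rate_limit:
          unit: minute
          requests_per_unit: 1000000
      - key: remote_address
        rate_limit:
          unit: minute
          requests_per_unit: 4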
