
actix-ratelimit's Introduction


actix-ratelimit

Rate limiting middleware framework for actix-web

This crate provides an asynchronous and concurrent rate limiting middleware based on the actor model, which can be wrapped around an Actix application. The middleware contains a store which is used to identify each client request.

Check out the documentation here.

Comments, suggestions and critiques are welcome!

Usage

Add this to your Cargo.toml:

[dependencies]
actix-ratelimit = "0.3.1"

Version 0.3.* supports actix-web v3. If you're using actix-web v2, consider using version 0.2.*.

Minimal example:

use actix_web::{web, App, HttpRequest, HttpServer, Responder};
use actix_ratelimit::{RateLimiter, MemoryStore, MemoryStoreActor};
use std::time::Duration;

async fn greet(req: HttpRequest) -> impl Responder{
    let name = req.match_info().get("name").unwrap_or("World!");
    format!("Hello {}!", &name)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Initialize store
    let store = MemoryStore::new();
    HttpServer::new(move ||{
        App::new()
            // Register the middleware
            // which allows for a maximum of
            // 100 requests per minute per client
            // based on IP address
            .wrap(
                RateLimiter::new(
                MemoryStoreActor::from(store.clone()).start())
                    .with_interval(Duration::from_secs(60))
                    .with_max_requests(100)
            )
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}

Sending a request returns a response with the rate limit headers:

$ curl -i "http://localhost:8000/"

HTTP/1.1 200 OK
content-length: 13
content-type: text/plain; charset=utf-8
x-ratelimit-remaining: 99
x-ratelimit-reset: 52
x-ratelimit-limit: 100
date: Tue, 04 Feb 2020 21:53:27 GMT

Hello World!

Exceeding the limit returns an HTTP 429 (Too Many Requests) response.

Stores

A store is a data structure, database connection, or anything else which can be used to store rate limit data associated with a client. A store actor, which acts on this store, is responsible for performing all sorts of operations (SET, GET, DEL, etc.). It is important to note that there can be multiple store actors acting on a single store.

List of features

  • memory (in-memory store based on concurrent hashmap)
  • redis-store (based on redis-rs)
  • memcached (based on r2d2-memcache, see note to developers below)

Implementing your own store

To implement your own store, you have to implement an Actor which can handle ActorMessage type and return ActorResponse type. Check the module level documentation for more details and a basic example.

Note to developers

  • By default, all features are enabled. To use a particular feature, for instance redis, put this in your Cargo.toml:
[dependencies]
actix-ratelimit = {version = "0.3.1", default-features = false, features = ["redis-store"]}
  • By default, the client's IP address is used as the identifier; this can be customized using the ServiceRequest instance. For example, using an API key header to identify the client:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Initialize store
    let store = MemoryStore::new();
    HttpServer::new(move ||{
        App::new()
            .wrap(
                RateLimiter::new(
                MemoryStoreActor::from(store.clone()).start())
                    .with_interval(Duration::from_secs(60))
                    .with_max_requests(100)
                    .with_identifier(|req| {
                        // Note: these unwraps will panic if the header is
                        // missing or not valid UTF-8; map to an error instead
                        // in real code.
                        let key = req.headers().get("x-api-key").unwrap();
                        let key = key.to_str().unwrap();
                        Ok(key.to_string())
                    })
            )
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
  • The memcache store uses a separate key to keep track of expiry, since there's no native way to get the TTL of keys in memcache yet. This means the memcache store will use double the number of keys compared to the redis store. If there's a better way to do this, please consider opening an issue!

  • It is important to initialize the store before creating the HttpServer instance, or else a store will be created for each web worker. This may lead to instability and inconsistency! For example, initializing your app in the following manner would create more than one store:

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(move ||{
        App::new()
            .wrap(
                RateLimiter::new(
                MemoryStoreActor::from(MemoryStore::new()).start())
                    .with_interval(Duration::from_secs(60))
                    .with_max_requests(100)
            )
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
  • To enable rate limiting across multiple instances of your web application (multiple HTTP servers behind a load balancer), consider using session stickiness, a feature supported by popular cloud providers such as AWS, Azure, etc.
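
An alternative (not spelled out in this README) is to share one external store between all instances via the redis-store feature, so every instance counts against the same limits. A minimal sketch, assuming RedisStore::connect takes a Redis URL (as shown in the issues below) and that the redis types are re-exported at the crate root like the memory ones:

use actix_web::{web, App, HttpServer, Responder};
// Assumed re-export paths; adjust if the redis store lives elsewhere.
use actix_ratelimit::{RateLimiter, RedisStore, RedisStoreActor};
use std::time::Duration;

async fn greet() -> impl Responder {
    "Hello World!"
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Every application instance points at the same Redis server,
    // so the rate limit counters are shared between them.
    let store = RedisStore::connect("redis://127.0.0.1");
    HttpServer::new(move || {
        App::new()
            .wrap(
                RateLimiter::new(RedisStoreActor::from(store.clone()).start())
                    .with_interval(Duration::from_secs(60))
                    .with_max_requests(100),
            )
            .route("/", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}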

Status

This project has not reached v1.0, so some instability and breaking changes are to be expected until then.

You can use the issue tracker in case you encounter any problems.

LICENSE

This project is licensed under the MIT license.

actix-ratelimit's People

Contributors

detailyang, leonardolang, tglman


actix-ratelimit's Issues

No ratelimit is forced (no HTTP 429 Error code)

I use JMeter to issue a lot of local threaded requests, and the rate limiter only ever counts a single request, although there are 100 requests in a second.
I've also used a loop to keep sending requests for a few more seconds. No change.
I've checked this with Wireshark and there really are 100 new TCP connections.

Every response looks the same:

HTTP/1.1 200 OK
content-length: 244
connection: close
x-ratelimit-remaining: 9
content-type: application/json
x-ratelimit-reset: 60
x-ratelimit-limit: 10
date: Sun, 31 Jan 2021 02:17:50 GMT

My code looks like this

pub async fn main_server(bind: &'static str) -> std::io::Result<()> {
    println!("Starting server {}...", bind);

    env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();

    let store = MemoryStore::new();

    HttpServer::new(move || {
        App::new()
            .wrap(Logger::default())
            .wrap(
                RateLimiter::new(MemoryStoreActor::from(store.clone()).start())
                    .with_interval(Duration::from_secs(60))
                    .with_max_requests(10),
            )
            .wrap(Compress::default())
            .service(
                web::scope("/api/v1")
                    .guard(guard::All(guard::Get()).and(guard::Header(
                        header::CONTENT_TYPE.as_str(),
                        mime::APPLICATION_JSON.essence_str(),
                    )))
                    .service(srv_query),
            )
    })
    .bind(bind)?
    .run()
    .await
}

It looks like the else branch here is always executed: https://docs.rs/actix-ratelimit/0.3.1/src/actix_ratelimit/middleware.rs.html#220

Exit gracefully if unable to connect to Redis

Currently if you provide an IP address which cannot be connected to

RedisStoreActor::from(RedisStore::connect("redis://1.1.1.1")).start()

the process still continues, and requests just time out without a response.
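
A possible workaround until this is handled in the crate (illustrative only) is to probe the Redis server up front with redis-rs, which the redis-store feature is based on, and exit with an error if it is unreachable:

use std::process::exit;

/// Probe the Redis server before constructing the rate limiter store,
/// so an unreachable server fails fast instead of every request timing out.
fn check_redis(url: &str) {
    let result = redis::Client::open(url)
        .and_then(|client| client.get_connection())
        .and_then(|mut conn| redis::cmd("PING").query::<String>(&mut conn));
    if let Err(e) = result {
        eprintln!("cannot reach Redis at {}: {}", url, e);
        exit(1);
    }
}

fn main() {
    check_redis("redis://127.0.0.1");
    // ...then construct RedisStore/RedisStoreActor and start the HttpServer as usual.
}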

Wrap doesn't work within scope

Hey @TerminalWitchcraft,

my use-case currently is that I only want to apply a rate limiter to my /auth path. In issue #10 you wrote the following:

Hey, yes you can using scopes. Each scope can register independent middlewares and it's also a nice utility to group related resources/endpoints! More information here: https://docs.rs/actix-web/3.3.2/actix_web/struct.Scope.html

I haven't personally tried it, but it should work if I understand the documentation correctly. Let me know if you run into any trouble using this.

Scopes would actually work perfectly for me. I just never worked with them before.

Well, I tried them and my code looks like this:

let cors = Cors::default();
let store = MemoryStore::new();

App::new()
    .wrap(Compress::default())
    .service(
      web::scope("/auth")
        .wrap(cors)    // This .wrap actually works with actix-cors
        .wrap(
          RateLimiter::new(
              MemoryStoreActor::from(store.clone()).start()
          )
          .with_interval(Duration::from_secs(60 * 10))
          .with_max_requests(20)
        )
        .service(web::resource("/").to(auth::get_auth))
    )
    /* Some more routes down here */
)

As it turns out, actix-ratelimit doesn't work properly as a scoped middleware; the Rust compiler complains:

error[E0277]: the trait bound `RateLimiter<MemoryStoreActor>: Transform<actix_web::scope::ScopeService, ServiceRequest>` is not satisfied
   --> src/main.rs:69:25
    |
68  |                       .wrap(
    |                        ---- required by a bound introduced by this call
69  | /                         RateLimiter::new(
70  | |                             MemoryStoreActor::from(store.clone()).start()
71  | |                         )
72  | |                         .with_interval(Duration::from_secs(60 * 10))
73  | |                         .with_max_requests(20)
    | |______________________________________________^ the trait `Transform<actix_web::scope::ScopeService, ServiceRequest>` is not implemented for `RateLimiter<MemoryStoreActor>`
    |

The problem may lie in this implementation of RateLimiter<T>: https://github.com/TerminalWitchcraft/actix-ratelimit/blob/master/src/middleware.rs#L100

Their implementation (from actix-cors) looks like this:

impl<S, B> Transform<S, ServiceRequest> for Cors
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
    S::Future: 'static,

    B: MessageBody + 'static,
{
    type Response = ServiceResponse<EitherBody<B>>;
    type Error = Error;
    type InitError = ();
    type Transform = CorsMiddleware<S>;
    type Future = Ready<Result<Self::Transform, Self::InitError>>;

    fn new_transform(&self, service: S) -> Self::Future {
        // ...
    }
}

Source: https://github.com/actix/actix-extras/blob/master/actix-cors/src/builder.rs#L486

I'm not a Rust expert (I've only been using it seriously for ~3 months), but I'd bet that the S generic is incorrectly defined. Do you know how to fix this?

Thanks in advance,
Nicolas

New release?

Now that actix-web v3 is available, could I request a new release of actix-ratelimit please? I see that PR #6 was merged, but that didn't include a version bump for actix-ratelimit, and there hasn't been anything new on crates.io.

If there's anything I can do to help with this, I'm very happy to help out.

Finally, when I request something from an open-source project, I think it's only fair to let them know they are appreciated, so thanks a lot for this project, I certainly would have struggled to implement this myself at this stage in my rust experience!

Rate Limit /64 subnets when remote address is IPv6

In https://docs.rs/actix-ratelimit/latest/src/actix_ratelimit/middleware.rs.html#62-76, the default identifier for a client is its IP address.
But IPv6 clients usually get at least a /64 assigned to them, so a single machine could easily exhaust the rate-limit store's memory and/or avoid being rate-limited by rotating through IP addresses within its /64.
See also: https://adam-p.ca/blog/2022/02/ipv6-rate-limiting/

I suggest extracting the /64 prefix in the default identifier and thus rate-limiting the whole subnet.
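
For illustration (not part of the crate), the sketch below maps an address to a rate limit key, masking IPv6 addresses to their /64 prefix and leaving IPv4 addresses as-is; the resulting string could then be returned from a with_identifier closure (see the README example above), taking the client address from the ServiceRequest:

use std::net::{IpAddr, Ipv6Addr};

/// Map a client IP address to a rate limit key: the full address for IPv4,
/// the /64 prefix for IPv6, so a client cannot dodge the limit by rotating
/// through addresses within its assigned /64.
fn rate_limit_key(ip: IpAddr) -> String {
    match ip {
        IpAddr::V4(v4) => v4.to_string(),
        IpAddr::V6(v6) => {
            let seg = v6.segments();
            // Keep the upper 64 bits, zero out the interface identifier.
            let prefix = Ipv6Addr::new(seg[0], seg[1], seg[2], seg[3], 0, 0, 0, 0);
            format!("{}/64", prefix)
        }
    }
}

fn main() {
    let ip: IpAddr = "2001:db8:1234:5678:abcd:ef01:2345:6789".parse().unwrap();
    assert_eq!(rate_limit_key(ip), "2001:db8:1234:5678::/64");
    println!("{}", rate_limit_key(ip));
}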

Variable Rate Limiters?

Is it possible to have multiple rate limiters that get triggered depending on different factors? For example, I'd like a much higher rate limit for users who provide a valid authorization token, but fall back to the normal rate limit for unauthorized requests.
I tried adding multiple RateLimiter wraps, but it seems that only the first one ever does anything.

Using the master branch with the redis feature on actix-web 3

Error Actix 3

Hi, I keep getting this error:

error[E0271]: type mismatch resolving `<RateLimiter<MemoryStoreActor> as Transform<<impl ServiceFactory as ServiceFactory>::Service>>::Request == ServiceRequest`
   --> src/main.rs:164:27
    |
164 |     App::new().wrap(cors).wrap(ratelimiter)
    |                           ^^^^ expected struct `actix_web::service::ServiceRequest`, found struct `ServiceRequest`
    |
    = note: perhaps two different versions of crate `actix_web` are being used?
  let ratelimiter = RateLimiter::new(
      MemoryStoreActor::from(store.clone()).start())
          .with_interval(Duration::from_secs(3))
          .with_max_requests(10);

actix-ratelimit = "0.3.1"
actix-web = { version = "3.0.0", features=["openssl"] }

This happened while trying to update to actix 3; I'm not sure what I'm doing wrong.

Thanks a lot

Support Actix `4.0.0`

This library does not support versions of actix-web newer than the latest version 3 release (3.3.3). It would be awesome if actix-web version 4 support were added!

Error when trying to use just the memory feature

Reproduce: Use the following in Cargo.toml: actix-ratelimit = { version = "0.3.0", default-features = false, features = ["memory"] }

cargo check output:

error[E0432]: unresolved import `stores::memory`
   --> /Users/jasonpkovalski/.cargo/registry/src/github.com-1ecc6299db9ec823/actix-ratelimit-0.3.0/src/lib.rs:194:17
    |
194 | pub use stores::memory::{MemoryStore, MemoryStoreActor};
    |                 ^^^^^^ could not find `memory` in `stores`

RateLimit per API level?

Is it possible to control the rate limit at a per-API level?

For example:
GET /foo 100 max requests per minute
GET /bar 200 max requests per minute

Rate limiting sensible to race conditions

Hi there! We've identified a potential race condition when counting down rate limits. Let me explain:

List of actions

There are two actions that are atomically executed for each request:

  • GET key: returns the current remaining count as a usize
  • DECRBY key 1: decrement the counter by 1

Say there's only 1 request left before the rate limit kicks in. If two (or more) requests are issued concurrently, the following can happen:

  • req A: Get key -> returns 1
  • req B: Get key -> returns 1
  • req A: DECRBY key 1 -> key = 0
  • req B: DECRBY key 1 -> key = -1

What now?

The current code only deals with usize counters, so the behavior depends on the implementation of the stores, but overall I think the code is too optimistic for this scenario.

One solution would be to use an atomically safe Get + Decr, using locks for instance. But I think the cost is too high for the normal case.

I would rather document the possibility of this scenario, and I would recommend 2 changes to the current implementation:

  • use i32 instead of usize to express the counter to acknowledge this scenario
  • make sure the code checks that remaining <= 0 instead of remaining == 0

If you agree, I can contribute to make these changes. I wanted to make sure we agree and we didn't forget anything before moving on.
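
For what it's worth, here is a tiny self-contained sketch of the second suggestion (not the crate's actual code): with a signed counter, a <= 0 check still rejects requests after concurrent decrements have pushed the value below zero, while an == 0 check would let them through:

/// Decide whether a request should be rejected given the current counter value,
/// using a signed type so the counter is allowed to go below zero.
fn should_reject(remaining: i32) -> bool {
    remaining <= 0
}

fn main() {
    // Two concurrent requests both read 1 and both decrement: the counter
    // ends up at -1. A `== 0` check would miss it; `<= 0` still rejects.
    assert!(!should_reject(1));
    assert!(should_reject(0));
    assert!(should_reject(-1));
    println!("ok");
}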
