
slowapi's Introduction

SlowApi

A rate limiting library for Starlette and FastAPI adapted from flask-limiter.

This package is used in various production setups, handling millions of requests per month, and seems to behave as expected. There might be some API changes as the code is made fully async, but we will signal them through appropriate semver version changes.

The documentation is on Read the Docs.

Quick start

Installation

slowapi is available on PyPI, so you can install it as usual:

$ pip install slowapi
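
A minimal wiring example for FastAPI, adapted from the snippets further down this page (the route path and limit string are just placeholders):

from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Note: the route decorator must be above the limit decorator, not below it,
# and the endpoint must accept a `request: Request` argument.
@app.get("/home")
@limiter.limit("5/minute")
async def homepage(request: Request):
    return {"key": "value"}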

Features

Most features come from flask-limiter and the underlying limits library.

Supported now:

  • Single and multiple limit decorators on endpoint functions to apply limits
  • redis, memcached and memory backends to track your limits (memory as a fallback)
  • Support for sync and async HTTP endpoints
  • Support for shared limits across a set of routes

Limitations and known issues

  • The request argument must be explicitly passed to your endpoint, or slowapi won't be able to hook into it. In other words, write:
    @limiter.limit("5/minute")
    async def myendpoint(request: Request):
        pass

and not:

    @limiter.limit("5/minute")
    async def myendpoint():
        pass
  • websocket endpoints are not supported yet.

Developing and contributing

PRs are more than welcome! Please include tests for your changes :)

The package uses poetry to manage dependencies. To setup your dev env:

$ poetry install

To run the tests:

$ pytest

Credits

Credits go to flask-limiter of which SlowApi is a (still partial) adaptation to Starlette and FastAPI. It's also important to mention that the actual rate limiting work is done by limits, slowapi is just a wrapper around it.

slowapi's People

Contributors

aberlioz, alpkeskin, andreagrandi, brumar, daniellok-db, dependabot[bot], dustyposa, fabaff, glinmac, karlnewell, laurents, maratsarbasov, me-on1, nootr, pookiebuns, renovate[bot], rested, sanders41, thentgesmindee, thomasleveil, twcurrie, xuxueyang-dx


slowapi's Issues

Change check for request based on type instead of name

It would be nice if the parameter were checked based on its type (starlette.requests.Request) instead of its name,
which would allow something like this:

@router.get("/test")
@limiter.limit("2/minute")
async def test(
        _: Request
):
    return "hi"

I found where the check is, but I couldn't figure out how to get the type back from the parameter object:

            for idx, parameter in enumerate(sig.parameters.values()):
                if parameter.name == "request" or parameter.name == "websocket":
                    connection_type = parameter.name
                    break
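
A rough sketch of what a type-based check could look like (this is not the current slowapi implementation; it assumes `sig` is the endpoint's inspect.Signature, as in the snippet above, and that the parameter is annotated with the Starlette class itself):

from starlette.requests import Request
from starlette.websockets import WebSocket

for idx, parameter in enumerate(sig.parameters.values()):
    # Key off the annotation instead of the parameter name.
    if parameter.annotation is Request:
        connection_type = "request"
        break
    if parameter.annotation is WebSocket:
        connection_type = "websocket"
        break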

Support for aioredis

Since FastAPI and Starlette are ASGI and are built on asyncio, it would be nice for slowapi to support aioredis as a storage backend. I imagine that it might require some playing around with the limits library, but would be a cool feature.

Add to docs: the limiter decorator must be below the route decorator in order to work

This works

@router.get("/test")
@limiter.limit("2/minute")
async def test(
        request: Request
):
    return "hi"

but this doesn't:

@limiter.limit("2/minute")
@router.get("/test")
async def test(
        request: Request
):
    return "hi"

That would be useful for beginners to know (if they haven't used any rate limiter module before).

Limit Http Headers

I'm setting a rate limit for a route as below

from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
limiter = Limiter(key_func=get_remote_address, headers_enabled=True)

@router.post("/send-verification-otp", response_model=schemas.Msg)
@limiter.limit("1/1 minute")
def send_verification_otp(

The error below is raised:

AttributeError: 'dict' object has no attribute 'headers'
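
For reference, with headers_enabled=True slowapi needs an actual response object to attach the X-RateLimit headers to, so returning a bare dict can trigger this error. A possible workaround, sketched under that assumption (the body and response_model are placeholders from the snippet above):

from fastapi import Request, Response

@router.post("/send-verification-otp", response_model=schemas.Msg)
@limiter.limit("1/1 minute")
def send_verification_otp(request: Request, response: Response):
    # Declaring a `response: Response` parameter (or returning a Response subclass)
    # gives slowapi a real response object to set the rate limit headers on.
    return {"msg": "OTP sent"}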

No need of Request in endpoint

In relation to the example in issue #25, rate limiting works without using Request in the endpoint. I have not used Request in any endpoint of my application, but rate limiting still works, even though the documentation strongly suggests including Request in the endpoint.
Example:

@router.post('/show',
             tags=["show"],
             name="show:showToAll")
async def show_to_all(background_tasks: BackgroundTasks, user_id = Form(...) ,db: AsyncIOMotorClient = Depends(get_database), Authorize: AuthJWT = Depends(), language: Optional[str] = "en"):
    ****code*****

This works with the example from issue #25.

About redis

I have not specified storage_uri in the limiter separately.

class LimiterClass:
    def __init__(self):
        self.limiter = Limiter(key_func=get_remote_address, strategy="moving-window", default_limits=["3/5second"])

I have created an account on Redis Labs. If I need to set storage_uri, how do I set a Redis URI with host, port and password? In issue #2, no password is mentioned.
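
For reference, Redis URIs can carry the password before the host, so a sketch with made-up credentials for a hosted Redis instance could look like this (host, port, password and db index are placeholders):

from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(
    key_func=get_remote_address,
    strategy="moving-window",
    default_limits=["3/5second"],
    # redis://:<password>@<host>:<port>/<db> -- placeholder values only
    storage_uri="redis://:mypassword@redis-12345.example.redislabs.com:12345/0",
)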

Convert the code to async

As both starlette and FastAPI are fully async, there is no reason to keep this extension running synchronously. Going async should help save a few ms while fetching data from Redis or memcached for instance.

TypeError: __init__() got multiple values for argument 'key_func'

The code:
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded
from threading import Thread
from lyrics_extractor import SongLyrics
from helpers.meme import get_meme

app = FastAPI(title="General API V2", description = "An API for everything!")
limiter = Limiter(app, key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

The error:
TypeError: __init__() got multiple values for argument 'key_func'

This seems incredibly weird and looks like a problem with the package itself rather than my code, probably with the Limiter class, but I could be wrong. I'm using slowapi 0.1.5; any help is appreciated, thanks!
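
For context, this error typically comes from passing the app as the first positional argument (as flask-limiter does); slowapi's Limiter takes key_func first, so a likely fix is to drop app from the constructor:

# Don't pass `app` to Limiter(); attach the limiter to app.state instead.
limiter = Limiter(key_func=get_remote_address)

app = FastAPI(title="General API V2", description="An API for everything!")
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)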

Global default limit in FastAPI

I'm setting up the default global limit for all routes using this:

    # global rate limiter
    limiter = Limiter(key_func=default_identifier,
        default_limits=["1/minute"],
        headers_enabled=True,
        storage_uri="redis://XXXXXX.home",
        in_memory_fallback_enabled=True)
    app.state.limiter = limiter
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

But the routes are not getting rate limited. If I add the limiter decorator to a route and test, it works as expected. What am I missing here?

I am adding routes from separate modules and attaching them to the app like so:
app.include_router(data.router)
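
One likely missing piece, assuming the default limits should apply without decorating every route, is registering slowapi's middleware (the same SlowAPIMiddleware used in a later issue on this page):

from slowapi.middleware import SlowAPIMiddleware

# With the middleware installed, routes without an explicit @limiter.limit
# decorator fall back to the default_limits configured on the Limiter.
app.add_middleware(SlowAPIMiddleware)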

Pass request object to limiter callbacks

The request object cannot be accessed easily in some of the limiter callbacks (limit_value, exempt_when). Support for this would be really useful. This already seems to work with key_func which gets passed the request if the callback declares a request parameter.

This would also remove the need for the workaround from #13.

FastAPI 'RateLimitExceeded' is not defined

I am trying the FastAPI example and I get:

  File "./basic.py", line 13, in <module>
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
NameError: name 'RateLimitExceeded' is not defined

Python 3.9
fastapi 0.63.0
slowapi 0.1.3
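
The name is simply not imported; the examples elsewhere on this page pull it from slowapi.errors:

from slowapi.errors import RateLimitExceeded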

How to support an async key_func?

I have two questions.

  1. My key_func will be async, but awaiting it is not supported.
  2. How can the global limiter and individual limiters not affect each other on the same router?

Ratelimit doesn't work after token expires

I am rate limiting using two approaches:
i) based on IP address (for endpoints without an access token): this works fine
ii) based on user id (obtained from the JWT token)
I have used https://pypi.org/project/fastapi-jwt-auth/ for JWT auth. On the basis of #25, in limiter.py:

from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded
from app.utility.config import DEFAULT_RATE_LIMIT
from starlette.requests import Request
from starlette.responses import JSONResponse, Response
from fastapi_jwt_auth import AuthJWT
from app.core.security.security_utils import decrypt_data

class LimiterClass:
    def __init__(self):
      
        #redis used locally 
        self.limiter = Limiter(key_func=get_user_id_or_ip, strategy="moving-window", default_limits=[DEFAULT_RATE_LIMIT])

"""
Method : Get user_id for JWT access token , IP address for not having token
@Param : 
    1. request : type-> Request
Return: User id or IP address 
"""
def get_user_id_or_ip(request : Request):
    authorize = AuthJWT(request)  # initial instance fastapi-jwt-auth
    authorize.jwt_optional()  # for validation jwt token
    return decrypt_data(authorize.get_jwt_subject()) or get_remote_address(request)

This works fine with an unexpired JWT access token and rate limits the user. But when the JWT access token expires, it throws an error:

ERROR:uvicorn.error:Exception in ASGI application
Traceback (most recent call last):
  File "C:\****\anaconda3\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 394, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "C:\****\anaconda3\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "C:\****\anaconda3\lib\site-packages\fastapi\applications.py", line 199, in __call__
    await super().__call__(scope, receive, send)
  File "C:\****\anaconda3\lib\site-packages\starlette\applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\****\anaconda3\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__
    raise exc from None
  File "C:\****\anaconda3\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "C:\****\anaconda3\lib\site-packages\starlette\middleware\base.py", line 25, in __call__
    response = await self.dispatch_func(request, self.call_next)
  File "C:\****\anaconda3\lib\site-packages\slowapi\middleware.py", line 51, in dispatch
    return exception_handler(request, e)
  File "C:\****l\anaconda3\lib\site-packages\slowapi\extension.py", line 88, in _rate_limit_exceeded_handler
    {"error": f"Rate limit exceeded: {exc.detail}"}, status_code=429
AttributeError: 'JWTDecodeError' object has no attribute 'detail'

When the access token expires, I need to return an HTTP 403 Forbidden message.
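
A possible workaround, as a sketch only: catch the JWT error inside the key_func so it never propagates into slowapi's exception handler, and fall back to the client IP. The AuthJWTException import path is an assumption about fastapi-jwt-auth's public API, and returning the 403 for expired tokens is probably better handled in the endpoint or a dependency rather than in key_func:

from fastapi_jwt_auth.exceptions import AuthJWTException  # assumed import path

def get_user_id_or_ip(request: Request):
    authorize = AuthJWT(request)
    try:
        authorize.jwt_optional()
        subject = authorize.get_jwt_subject()
    except AuthJWTException:
        # Expired or invalid token: never let the decode error escape the key_func,
        # just fall back to rate limiting by IP address.
        return get_remote_address(request)
    return decrypt_data(subject) or get_remote_address(request)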

Error in the documentation

In the example on the documentation page, a request to any of the endpoints adds 2 to the counter; this behavior can be corrected by giving the decorated functions different names.
Extended version of the code from the documentation, for reproduction:

from fastapi import FastAPI
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded
from fastapi import Request, Response
from fastapi.responses import PlainTextResponse

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)


# Note: the route decorator must be above the limit decorator, not below it
@app.get("/home")
@limiter.limit("5/minute")
async def homepage(request: Request):
    return PlainTextResponse("test")


@app.get("/mars")
@limiter.limit("5/minute")
async def homepage(request: Request, response: Response):
    return {"key": "value"}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run("main:app", host="127.0.0.1", reload=False, port=int("8000"))

Documentation?

Hi, I was looking for a way to rate limit my production FastAPI backend and I stumbled upon this repo, which seems to meet my needs, but lacks documentation. I was able to grasp some stuff from the examples and by reading the source code, but is there any chance to find proper documentation for this project?

Best Regards, and thanks for making this!

Global limit?

Is it possible to define a "global" limit (still per-user) to apply to all handlers and then have more restrictive limits on a per-handler basis? If so, how?
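
A sketch of one way this could be wired, based on the default_limits and SlowAPIMiddleware usage shown in other issues on this page (the limit values and route are placeholders):

from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.middleware import SlowAPIMiddleware
from slowapi.util import get_remote_address

# Global per-user default, applied by the middleware to every route...
limiter = Limiter(key_func=get_remote_address, default_limits=["100/minute"])
app = FastAPI()
app.state.limiter = limiter
app.add_middleware(SlowAPIMiddleware)
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# ...and a stricter explicit limit for a specific handler.
@app.get("/expensive")
@limiter.limit("5/minute")
async def expensive(request: Request):
    return {"ok": True}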

Redis backend configuration

By reading the source code I figured out that it's possible to set up a Redis server to be used as temporary storage for rate limiting data, but I couldn't figure out how to actually use it with slowapi. Is there anything I need to configure or import?

Update fastapi snippet so it can work from copy paste

Hello,
This project is awesome, thanks for all the work! Could the docs for FastAPI be updated to this:

from fastapi import FastAPI, Request, Response
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Note: the route decorator must be above the limit decorator, not below it
@app.get("/home")
@limiter.limit("5/minute")
async def homepage(request: Request):
    return "test"

@app.get("/mars")
@limiter.limit("5/minute")
async def homepage(request: Request, response: Response):
    return {"key": "value"}

The current snippet is missing the Request and Response imports, and I can't find a plain text response in FastAPI.

The edited snippet above will work when copy-pasted!

rate limited for route

In flask-limiter we can use limiter.exempt(...) for a blueprint or a blueprint route. Can I do anything like that at the moment?

Is it possible to conditionally rate limit depending on HTTP header information?

Thanks for this project and a great name!

An example of conditional rate limiting: rate limit a REST endpoint to 1 call per minute per user, where the user is identified by a token in the HTTP header. I appreciate that adding predicates other than time can make this messy fast! At the same time, I'm sure this is an extremely common problem. I guess people resort to solving this in their apps because the problem is so domain specific.
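
Since key_func receives the request, one hedged approach is to build the rate limit key from a header token and fall back to the client IP (the header name here is purely illustrative):

from slowapi import Limiter
from slowapi.util import get_remote_address
from starlette.requests import Request

def key_by_user_token(request: Request) -> str:
    # Hypothetical header; each distinct token gets its own per-minute bucket.
    token = request.headers.get("X-User-Token")
    return token or get_remote_address(request)

limiter = Limiter(key_func=key_by_user_token)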

Add support for python3.9

There is a problem in CI when trying to install dependencies for python3.9. See the logs here for details.
I'm not even sure why lxml is in the dependency list...

Opening this repo | Adding contributors

I would like to run an experiment with this repo.
I am clearly not available enough to maintain this package at the pace I would like to, but it does seem to have gathered some amount of interest.
I would hate to see it become a problem for its users, and even more so for its contributors.

So I would like to do the following, in the hope that it helps slowapi be more useful, and improve at a faster pace.

I will add anyone who has contributed a (valid) PR as a contributor to the repo. This means that instead of a single person being able to review code changes, there should be a handful. There are no obligations on your end, other than not misusing your access.
I'll change settings to require at least one approval for PRs, but it can come from anyone.
Hopefully with this, the code can keep getting better for everyone.

Tagging the current contributors just for context:
@brumar
@fabaff
@glinmac
@Rested
@thearchitector
@thomasleveil
@twcurrie
@xuxueyang-dx

rate limit issue

Hi, thank you for making this great tools.
I have an issue when I set a rate limit of "1/second". After starting the service and calling the API, I always get a "too many requests" error within the first second.

Here is the limiter setup:

limiter = Limiter(key_func=get_remote_address)

I've used this decorator:

@limiter.limit("1/second")

And the app setup:

app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

Any help? Thanks.

Global limit and routes setting a limit cause double response headers

A default limit is set as:

limiter = Limiter(key_func=default_identifier,
                  default_limits="10/minute",
                  headers_enabled=True,
                  storage_uri=settings.RateLimit.redis_uri,
                  in_memory_fallback_enabled=True,
                  swallow_errors=True)
app.state.limiter = limiter
app.add_middleware(SlowAPIMiddleware)
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

Routes that use the global limit (i.e. do not use the @limiter.limit decorator) respond with headers as expected:

 x-ratelimit-limit: 10 
 x-ratelimit-remaining: 9 
 x-ratelimit-reset: 1610412027 

But routes that use the @limiter.limit decorator now get doubled headers, like so:

 x-ratelimit-limit: 1000,1000 
 x-ratelimit-remaining: 950,950 
 x-ratelimit-reset: 1612495738,1612495738 

and an error is thrown:

2021-01-12 14:09:37,215 [17636] ERROR: Failed to update rate limit headers. Swallowing error
Traceback (most recent call last):
  File "D:\programming\financefeast_api\venv\lib\site-packages\slowapi\extension.py", line 381, in _inject_headers
    retry_after = parsedate_to_datetime(existing_retry_after_header)
  File "C:\Python38\lib\email\utils.py", line 198, in parsedate_to_datetime
    *dtuple, tz = _parsedate_tz(data)
TypeError: cannot unpack non-iterable NoneType object

If the parameter 'swallow_errors' is set to False, SlowAPI throws a stack trace with:

2021-01-12 13:44:28,823 [16576] ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "D:\programming\financefeast_api\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 394, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\uvicorn\middleware\debug.py", line 81, in __call__
    raise exc from None
  File "D:\programming\financefeast_api\venv\lib\site-packages\uvicorn\middleware\debug.py", line 78, in __call__
    await self.app(scope, receive, inner_send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\fastapi\applications.py", line 199, in __call__
    await super().__call__(scope, receive, send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__
    raise exc from None
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\middleware\base.py", line 25, in __call__
    response = await self.dispatch_func(request, self.call_next)
  File ".\app\middleware\request_context.py", line 42, in dispatch
    raise e
  File ".\app\middleware\request_context.py", line 38, in dispatch
    response = await call_next(request)
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\middleware\base.py", line 45, in call_next
    task.result()
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\middleware\base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "D:\programming\financefeast_api\venv\lib\site-packages\starlette\middleware\base.py", line 25, in __call__
    response = await self.dispatch_func(request, self.call_next)
  File "D:\programming\financefeast_api\venv\lib\site-packages\slowapi\middleware.py", line 54, in dispatch
    response = limiter._inject_headers(response, request.state.view_rate_limit)
  File "D:\programming\financefeast_api\venv\lib\site-packages\slowapi\extension.py", line 381, in _inject_headers
    retry_after = parsedate_to_datetime(existing_retry_after_header)
  File "C:\Python38\lib\email\utils.py", line 198, in parsedate_to_datetime
    *dtuple, tz = _parsedate_tz(data)
TypeError: cannot unpack non-iterable NoneType object

It seems that since there is an existing header on the response, a string-to-datetime conversion is attempted on the "Retry-After" response header, but because it is less than 5 characters, parsedate_tz returns None and we get the error above.

Question: why do routes that define a limit return double response headers?

Dynamic rate limit based on user type

I need a way to dynamically set the rate limit based on the user type.

For example, I want to limit users without an access token and allow unlimited access to users with one.

What I am currently using:


limiter = Limiter(key_func=identify_my_user_func)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

def identify_my_user_func(request: Request):
    if 'access_token' not in request.query_params:
        return request.client.host
    return "REGISTERED_USER"

@limiter.limit("2/minute")
def some_request(request: Request):
     return data

I am trying to find a way to conditionally limit 2/minute. Basically I want to increase the limit based on the user type.

$.ajax and $.getJSON request not limited

Hi everyone, I'm having a problem. When I make too many requests from the browser or Postman, the API correctly blocks me as configured, but if I make a request via AJAX or jQuery's $.getJSON, I'm not limited. How can I solve this?

Return a custom JSON-like object when rate limit is hit

I know it is possible to specify the error message in the limit decorator, but I'm building a very complex API and all of my responses need to be consistent and follow a specific schema that I represented using a pydantic.BaseModel subclass (basically a dataclass on steroids). Is it possible to return such a custom object instead of the default starlette.responses.JSONResponse?

My response looks like this (once converted to a dictionary)

{"request_id" : "foobarbaz", "status_code": 416, "response_data": None, "error_message" : "Unsupported media type"}

Limit concurrency to 1

In my use case I want to limit concurrency to 1 for some endpoints to avoid race conditions.

I am looking for a limiter that will reliably prevent two simultaneous requests to the same endpoint.

I wonder if this library will suit my needs?

Question about get_ipaddr function

This is how get_ipaddr currently works:

if "X_FORWARDED_FOR" in request.headers:
    r = request.headers["X_FORWARDED_FOR"]
    return r
else:
    return request.client.host or "127.0.0.1"

But the X-Forwarded-For header can contain more than one IP. So shouldn't it be:

if "X_FORWARDED_FOR" in request.headers:
    r = request.headers["X_FORWARDED_FOR"].split(",")[0]
    return r
else:
    return request.client.host or "127.0.0.1"

Limit rate issue

I've tested limit rate locally and it works fine.

After I deployed the application on AWS, rate limiting didn't work at all until I set Redis as the storage.

But even with Redis, the rate limit seems to be broken.
The limit is exceeded after about the 10th attempt, even though I've set the limit to 5.

I've checked the Redis value for the key inserted by the limiter, and I think it did not count every attempt.

I'm using FastAPI 0.45.0 and slowapi 0.1.1

Access Limiter within route files

I would like to access my limiter object, which is in my app.py file, from one of my route files in my routes directory. Can anyone help me do this?

Current Code:

from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

limiter = Limiter(key_func = get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
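
A common pattern, sketched here under assumed file names, is to create the limiter in its own module and import the same object wherever it is needed:

# extensions.py (hypothetical shared module)
from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)

# app.py
from fastapi import FastAPI
from slowapi import _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from extensions import limiter

app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# routes/items.py (hypothetical route module)
from fastapi import APIRouter, Request
from extensions import limiter

router = APIRouter()

@router.get("/items")
@limiter.limit("5/minute")
async def list_items(request: Request):
    return []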

Feature request: Return remaining API calls

What an amazing, easy to use tool this is!
Is it possible to return the number of remaining API calls (per decorator) to the user of the API?
This way the user of the API can better understand how his/her interaction with an app results in API calls and how many are left.
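
The X-RateLimit response headers shown in other issues on this page already carry this information; a sketch of enabling them:

limiter = Limiter(key_func=get_remote_address, headers_enabled=True)

# Each limited response then includes headers such as:
#   x-ratelimit-limit: 10
#   x-ratelimit-remaining: 9
#   x-ratelimit-reset: 1610412027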

Code coverage checks

It'd be good to set up some automated code coverage checks to validate that submitted PRs don't reduce the existing code coverage. Ideally, there'd be a zero-cost option for a public open source project.

limits more than specified

For some reason, I get half the rate specified,
e.g. 2/minute -> 1, 10/minute -> 5.

This is my setup:

from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)


@app.get("/")
@limiter.limit("5/minute")
async def root(request: Request):
    return {"message": "Hello World"}

Is there anything obvious that I might be missing here?

Cache backend documentation

I cannot find documentation on how to set up the different types of backends (Redis, Memcached) anywhere in the docs.

Could you please add it to the documentation or point me in the right direction?
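
For reference, the backend is selected through the Limiter's storage_uri; a hedged sketch of the common schemes (hosts and ports are placeholders, and the exact URI grammar comes from the underlying limits library):

from slowapi import Limiter
from slowapi.util import get_remote_address

# Pick one storage_uri depending on the backend:
limiter = Limiter(
    key_func=get_remote_address,
    storage_uri="redis://localhost:6379",        # Redis
    # storage_uri="memcached://localhost:11211", # Memcached
    # omit storage_uri entirely to use the in-memory backend
)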

Using slowapi in bigger application

In test.py, the app is run:

import uvicorn

from app.main import app

if __name__ == "__main__":
    uvicorn.run("test:app", host="0.0.0.0", port=8000, reload=True)

In main.py

from app.api.api_v1.api import router as api_router
from fastapi import FastAPI
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded
def get_application() -> FastAPI:
    application = FastAPI(title=PROJECT_NAME, debug=DEBUG, version=VERSION)
    application.add_event_handler(
        "startup", create_start_app_handler(application))
    application.add_event_handler(
        "shutdown", create_stop_app_handler(application))
    application.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
    application.include_router(api_router, prefix=API_PREFIX)

    return application

app = get_application()

In the endpoint file user.py, how do I use @limiter.limit("5/minute")?

from fastapi import APIRouter
from starlette.requests import Request
from starlette.responses import PlainTextResponse

router = APIRouter()

@router.post('/show',
             tags=["show"],
             name="show:show")
async def homepage(request: Request):
    return PlainTextResponse("test")
***code****

In this use case, how do I use slowapi in the endpoint? I need to limit each user to 5 requests per minute and then block that user for a minute (sleep and retry after a minute). A sketch is included after the api.py snippet below.

In api.py

from app.api.api_v1.endpoints.show import router as ShowRouter

router = APIRouter()



router.include_router(ShowRouter, tags=["show"], prefix="/show")
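
A sketch of how this could be wired, assuming the limiter is created in main.py (it is not in the snippet above) and imported into the endpoint module; the import path app.main is taken from the files shown here:

# main.py (additions, sketched): create the limiter and attach it to the app
from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
# inside get_application(): application.state.limiter = limiter

# user.py: import the same limiter and decorate below the route decorator
from fastapi import APIRouter
from starlette.requests import Request
from starlette.responses import PlainTextResponse

from app.main import limiter  # assumed import path

router = APIRouter()

@router.post('/show', tags=["show"], name="show:show")
@limiter.limit("5/minute")
async def homepage(request: Request):
    return PlainTextResponse("test")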

Support for default limit

There should be a way to declare a default set of limits that apply to all endpoints without explicitly marking each endpoint with the decorator.

Getting error in calling ratelimited route if set RATELIMIT_ENABLED=False in config

We were trying to disable the rate limit check for all routes that are decorated with limiter.limit(). For that, we set RATELIMIT_ENABLED=False in the .env file (or set Limiter(enabled=False)); then, when calling any route decorated with limiter.limit(), we get the following exception:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/.venv/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 390, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/.venv/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/.venv/lib/python3.7/site-packages/fastapi/applications.py", line 199, in __call__
    await super().__call__(scope, receive, send)
  File "/.venv/lib/python3.7/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/.venv/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/.venv/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/.venv/lib/python3.7/site-packages/starlette/middleware/cors.py", line 78, in __call__
    await self.app(scope, receive, send)
  File "/.venv/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/.venv/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/.venv/lib/python3.7/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/.venv/lib/python3.7/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/.venv/lib/python3.7/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/.venv/lib/python3.7/site-packages/fastapi/routing.py", line 202, in app
    dependant=dependant, values=values, is_coroutine=is_coroutine
  File "/.venv/lib/python3.7/site-packages/fastapi/routing.py", line 148, in run_endpoint_function
    return await dependant.call(**values)
  File "/.venv/lib/python3.7/site-packages/slowapi/extension.py", line 636, in async_wrapper
    self._inject_headers(response, request.state.view_rate_limit)
  File "/.venv/lib/python3.7/site-packages/starlette/datastructures.py", line 672, in __getattr__
    raise AttributeError(message.format(self.__class__.__name__, key))
AttributeError: 'State' object has no attribute 'view_rate_limit'
