Comments (15)
Oh OK, thanks for the insight @laurentS. So, in cases where multiple container instances of my service (internally using FastAPI) are spawned, these limits based on client_id will depend on which container the requests reach?
Exactly. If you want to share those limits across instances, you have to use some kind of persistent storage, such as Redis.
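To make the idea concrete, here is a toy sketch in plain Python (hypothetical, not slowapi internals): two limiter instances, standing in for two containers, enforce a combined limit only when they count hits in the same store, which is the role Redis plays.

```python
from collections import Counter

class ToyLimiter:
    """Toy fixed-count limiter; `store` plays the role of Redis.
    Counts never expire here, for brevity."""
    def __init__(self, store, limit):
        self.store = store
        self.limit = limit

    def hit(self, key):
        self.store[key] += 1
        return self.store[key] <= self.limit  # True = request allowed

shared = Counter()  # one store that both instances talk to
container1 = ToyLimiter(shared, limit=3)
container2 = ToyLimiter(shared, limit=3)

allowed = sum(container1.hit("client-1") for _ in range(3))
allowed += sum(container2.hit("client-1") for _ in range(3))
print(allowed)  # 3 — the limit holds across both "containers"
```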
Also, just curious: how do I write a limiter using SlowAPI which imposes a blanket limit of 50 requests per minute on an endpoint like "/query"? I'm unclear on what the `key_func` should return, as I'm not imposing limits based on a key such as the remote address or a client id.

```python
global_limiter = Limiter(key_func=lambda ?????)
```
Returning any string from `key_func` would work, as long as `key_func` returns the exact same string for each call.
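As a plain-Python illustration of that point (a toy counter, not slowapi's storage): the string returned by `key_func` just names the bucket that requests are counted in, so a constant string puts every caller into one shared bucket.

```python
from collections import Counter

buckets = Counter()

def allow(key, limit):
    """Toy check: count hits per key; a real limiter also expires counts per window."""
    buckets[key] += 1
    return buckets[key] <= limit

# key_func=lambda: "everyone" -> every request maps to the same bucket,
# giving a blanket limit shared by all callers.
allowed = sum(allow("everyone", 50) for _ in range(60))
print(allowed)  # 50 of the 60 requests are allowed
```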
You can absolutely use memory storage, but you need to be aware of the limitations:
- if you have multiple fastapi processes (often the case in production), then each one will use a separate store, which could lead to the problem you're seeing.
- the data stored will only live as long as the memory storage process, so when you restart your server, limits will be reset, which could be a problem depending on your use case.

If the above are not acceptable for your needs, you probably want to switch to something like redis instead.
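Both caveats can be seen with a toy in-memory counter (a hypothetical sketch, not slowapi's actual storage backend):

```python
from collections import Counter

def hit(store, key, limit=3):
    store[key] += 1
    return store[key] <= limit  # toy fixed-count check, no window expiry

# Caveat 1: each worker process gets its own store, so the effective limit multiplies.
worker1, worker2 = Counter(), Counter()
total = sum(hit(worker1, "c") for _ in range(3)) + sum(hit(worker2, "c") for _ in range(3))
print(total)  # 6 requests allowed, twice the intended limit of 3

# Caveat 2: a restart recreates the store, so counts reset.
store = Counter()
for _ in range(3):
    hit(store, "c")
print(hit(store, "c"))  # False: limit reached
store = Counter()       # simulate a process restart
print(hit(store, "c"))  # True: the limit starts over
```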
@RohitPShetty I suspect your code has a small bug which we missed:

```python
client_id_limiter = Limiter(
    key_func=lambda request: request.headers.get("client-uuid"),
)
global_limiter = Limiter(
    key_func=lambda request: request.url.path,
)

# global_limiter is never attached to the app...
app.state.limiter = client_id_limiter
# ...instead, use a single `Limiter`
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@router.post("/query")
@client_id_limiter.limit("3/minute")
@global_limiter.limit("5/minute")
async def user_query(request: Request):
    ...
```
The tests actually have an example of exactly what you want to do: slowapi/tests/test_fastapi_extension.py, line 72 in 90ec3ab:

```python
limiter = Limiter()

@router.post("/query")
@limiter.limit("3/minute", key_func=lambda request: request.headers.get("client-uuid"))
@limiter.limit("5/minute", key_func=lambda: "shared_by_everyone")
async def user_query(request: Request):
    ...
```
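For intuition, the stacked limits can be modelled with a toy checker in plain Python (a hypothetical sketch of the semantics, not slowapi's implementation; in particular, this toy does not consume quota on rejected requests, which real backends may handle differently):

```python
from collections import Counter

hits = Counter()

def allow(client, per_client=3, global_limit=5):
    """A request must pass BOTH the per-client check and the shared check."""
    c = hits[("client", client)] + 1
    g = hits["global"] + 1
    if c > per_client or g > global_limit:
        return False  # rejected; quota not consumed in this toy
    hits[("client", client)] = c
    hits["global"] = g
    return True

# 5 requests each from two different clients:
a = sum(allow("client-A") for _ in range(5))
b = sum(allow("client-B") for _ in range(5))
print(a, b)  # 3 2 — A hits its per-client cap, then B hits the shared cap
```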
Hi @RohitPShetty, I don't have time to dig into this right now, but have you tried looking into the storage backend to see what keys are in there? It might give you some hints as to what is going wrong. You could also log what goes into `hit` here: line 505 in 90ec3ab.
What storage backend do you use?
Hi @laurentS, I currently do not use any explicit storage backend. I read that SlowAPI uses in-memory storage by default. Is that true, or do I need to explicitly configure a storage backend?
@thentgesMindee @laurentS Thank you for the explanation.

Going back to my original question: how can we achieve something like this even though we are using in-memory storage? Adding multiple limiters on a single endpoint "/query":
1. The first limiter uses a client id extracted from the request and imposes a restriction of 3 requests per minute per client on the "/query" endpoint.
2. The second limiter is a global limiter which imposes a restriction of 5 requests per minute on the "/query" endpoint, irrespective of the client.

The ideal behaviour: when I hit "/query" with 5 requests each from 2 different clients, 3 requests from one client and 2 requests from the other should be accepted. In reality, however, 3 requests from each client are accepted, which means the global limit is not working.
from slowapi.
> The ideal behaviour should be that when I try hitting the endpoint "/query" with 5 requests each from 2 different clients, then 3 requests from one client and 2 requests from another client should be accepted. However, in reality 3 requests from each client are accepted. Which means that the global limit is not working.
It should work already. You should try again with a single worker of your app running, since you're using in-memory storage.
Yes, I have verified that I am running a single worker of my app, but the global limit still doesn't work alongside the client_id limit.
Can you paste the code you use for testing?
Using a simple curl command with a for loop across 2 terminals, with a slight difference in client-uuid to simulate 2 clients:

```shell
for i in {1..7}; do
  curl -X POST \
    -H "Authorization: Bearer your_access_token" \
    -H "client-uuid: 3fa85f64-5717-4562-b3fc-2c963f66afa6" \
    -H "Content-Type: application/json" \
    -d '{"query": "Valid user query"}' \
    http://localhost:8000/user-query &
done
```
Good catch @laurentS!
Another option, if you really need two limiters defined differently (for example, if you want different storage configurations), could be to use the global_limiter without any decorator, relying on its default limit.
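The idea of a limiter-wide default limit can be sketched in plain Python (a toy model; slowapi supports configuring default limits on the `Limiter` itself — check its documentation for the exact parameter):

```python
from collections import Counter

class ToyLimiter:
    """Toy limiter with a default limit: routes without an explicit
    limit fall back to the limiter-wide default (hypothetical sketch)."""
    def __init__(self, default_limit):
        self.default_limit = default_limit
        self.counts = Counter()

    def check(self, key, limit=None):
        effective = limit if limit is not None else self.default_limit
        self.counts[key] += 1
        return self.counts[key] <= effective

limiter = ToyLimiter(default_limit=5)

# No explicit limit for this route, so the default cap of 5 applies.
allowed = sum(limiter.check("route:/query") for _ in range(7))
print(allowed)  # 5
```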
@laurentS Thanks for looking into this. I had also tried attaching the global_limiter to the app, but had not posted it in the code here. Even that hadn't worked.

Currently trying out the single-limiter approach you mentioned. I have a doubt about the creation of the limiter in the code you posted:

```python
limiter = Limiter()
```

I keep getting a "key_func missing in init" error for this, as below:

```
TypeError: Limiter.__init__() missing 1 required positional argument: 'key_func'
```

These are my imports:

```python
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
```

@laurentS @thentgesMindee Could you pls help me with this bit?
@laurentS @thentgesMindee Any pointers on tackling the above error while creating the limiter?
@RohitPShetty this is a Python error, not something specific to slowapi. Python is telling you that an argument (`key_func`) is missing in your call to `Limiter()`. You can just add one here, which will act as the default value for `key_func`. If you keep your code as we discussed above, specifying `key_func` for each `.limit()`, then the default value does not really matter; you can pass anything.
You could use the default value for your shared "global limiter" and do something like:

```python
limiter = Limiter(key_func=lambda: "shared_by_everyone")

@router.post("/query")
@limiter.limit("3/minute", key_func=lambda request: request.headers.get("client-uuid"))
@limiter.limit("5/minute")  # omit key_func here and the default will be used
async def user_query(request: Request):
    ...
```
Closing this issue as resolved now.