
ratelimit's Introduction

Upstash Rate Limit


[!NOTE] This project is in the GA stage. It is fully covered by Upstash Professional Support, receives regular updates and bug fixes, and the Upstash team is committed to maintaining and improving its functionality.

It is the only connectionless (HTTP-based) rate limiting library, designed for:

  • Serverless functions (AWS Lambda, Vercel, ...)
  • Cloudflare Workers & Pages
  • Vercel Edge
  • Fastly Compute@Edge
  • Next.js, Jamstack ...
  • Client side web/mobile applications
  • WebAssembly
  • and other environments where HTTP is preferred over TCP.

Quick Start

Install

npm

npm install @upstash/ratelimit

Deno

import { Ratelimit } from "https://cdn.skypack.dev/@upstash/ratelimit@latest";

Create database

Create a new Redis database on Upstash. See here for documentation on how to create a Redis instance.

Basic Usage

import { Ratelimit } from "@upstash/ratelimit"; // for deno: see above
import { Redis } from "@upstash/redis"; // see below for cloudflare and fastly adapters

// Create a new ratelimiter, that allows 10 requests per 10 seconds
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  analytics: true,
  /**
   * Optional prefix for the keys used in redis. This is useful if you want to share a redis
   * instance with other applications and want to avoid key collisions. The default prefix is
   * "@upstash/ratelimit"
   */
  prefix: "@upstash/ratelimit",
});

// Use a constant string to limit all requests with a single ratelimit
// Or use a userID, apiKey or ip address for individual limits.
const identifier = "api";
const { success } = await ratelimit.limit(identifier);

if (!success) {
  return "Unable to process at this time";
}
doExpensiveCalculation();
return "Here you go!";

For more information on getting started, you can refer to our documentation.

Here's a complete Next.js example.

Documentation

See the documentation for more details about this package.

Contributing

Database

Create a new Redis database on Upstash and copy the URL and token.

Running tests

To run the tests, you will need to set some environment variables. Here is a list of variables to set:

  • UPSTASH_REDIS_REST_URL
  • UPSTASH_REDIS_REST_TOKEN
  • US1_UPSTASH_REDIS_REST_URL
  • US1_UPSTASH_REDIS_REST_TOKEN
  • APN_UPSTASH_REDIS_REST_URL
  • APN_UPSTASH_REDIS_REST_TOKEN
  • EU2_UPSTASH_REDIS_REST_URL
  • EU2_UPSTASH_REDIS_REST_TOKEN

You can create a single Upstash Redis database and use its URL and token for all four pairs above.

Once you set the environment variables, simply run:

pnpm test
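
A small hypothetical helper (not part of this repo) that fails fast when any of the variables above are missing:

// check-env.ts — hypothetical helper, not part of this repo.
// Fails fast if any of the test environment variables are unset.
const required = [
  "UPSTASH_REDIS_REST_URL",
  "UPSTASH_REDIS_REST_TOKEN",
  "US1_UPSTASH_REDIS_REST_URL",
  "US1_UPSTASH_REDIS_REST_TOKEN",
  "APN_UPSTASH_REDIS_REST_URL",
  "APN_UPSTASH_REDIS_REST_TOKEN",
  "EU2_UPSTASH_REDIS_REST_URL",
  "EU2_UPSTASH_REDIS_REST_TOKEN",
];

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(", ")}`);
}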

ratelimit's People

Contributors

aykutkardas, ayushsehrawat, brunowego, buggyhunter, cahidarda, chris13524, chronark, dev-ups, enesakar, fahreddinozcan, gnadii, mdumandag, ogzhanolguncu, omfj, quatton, sourabpramanik, thecmdrunner, tudorzgimbau, unstubbable, xanderbarkhatov


ratelimit's Issues

Unable to view analytics on rate limit dashboard

const rateLimit = new Ratelimit({
  redis,
  prefix: "@upstash/ratelimit",
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  analytics: true,
});

const isAuthed = root.middleware(async ({ ctx, next }) => {
  if (!ctx.auth.userId) {
    throw new TRPCError({ code: "UNAUTHORIZED" });
  }
  const { success } = await rateLimit.limit(ctx.auth.userId);
  if (!success) {
    throw new TRPCError({ code: "TOO_MANY_REQUESTS" });
  }
  return next({
    ctx: {
      // infers the session as non-nullable
      auth: ctx.auth,
    },
  });
});

export const protectedProcedure = root.procedure.use(isAuthed);

I am using protectedProcedure in my router's queries as well as mutations in tRPC.

I can see other logs and the usage dashboard, but when I navigate to the rate limit dashboard it is all empty, even though I can trigger the rate limit by sending multiple requests and the server does throw 429 (Too Many Requests).

Does it not support tRPC?

What am I doing wrong?

Analytics don't work when using prefix

This is our configuration:

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.cachedFixedWindow(10, '60 s'),
  prefix: 'ratelimit:inviteCode',
  ephemeralCache: new Map(),
  analytics: true,
})

No analytics are getting reported:

[screenshot]

If I comment out the prefix it works:

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.cachedFixedWindow(10, '60 s'),
  //prefix: 'ratelimit:inviteCode',
  ephemeralCache: new Map(),
  analytics: true,
})

[screenshot]

Question about rate limit

Hey, sorry if this is a dumb question. I'm wondering: does this keep the Redis storage low by deleting the IPs that haven't sent a request in the last window?

Ratelimit usage with ioredis

Is it possible to use an ioredis connection with the ratelimit library?

import Redis from 'ioredis'

const ratelimit = new Ratelimit({
  redis: new Redis(...),
  limiter: Ratelimit.slidingWindow(aIRateLimitPerDay, '1 d'),
  analytics: env.NODE_ENV === 'development',
})

slidingWindow doesn't work with tokens: 1

Using the following limiter doesn't work; it always returns success: false:

Ratelimit.slidingWindow(1, "10 s"),

We discovered it while switching from our previous rate-limiting solution, where we have an API endpoint that we want to limit to 1 request every N seconds. We could use Ratelimit.fixedWindow(1, "10 s") for it, but our code is factored to use the same algorithm for all methods.
The workaround we use is to increase the limit from 1 to 2.
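
For reference, the workaround described above, as a sketch:

// Workaround sketch: raise the token count from 1 to 2, as described above.
const limiter = Ratelimit.slidingWindow(2, "10 s");

// The alternative mentioned above, if mixing algorithms were acceptable:
const fixedLimiter = Ratelimit.fixedWindow(1, "10 s");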

Ratelimit doesn't seem to return proper reset time

I'm trying to get an accurate TTL for a rate limit I have set up using Upstash.

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

My rate limit is set up like this:

const ratelimit = new Ratelimit({
  redis: redis,
  limiter: Ratelimit.fixedWindow(1, "4h"),
});

I then have the following headers being set:

    const identifier = `receive:${ip}`;
    const result = await ratelimit.limit(identifier);
    res.setHeader("X-RateLimit-Limit", result.limit);
    res.setHeader("X-RateLimit-Remaining", result.remaining);
    res.setHeader("X-RateLimit-Reset", result.reset);

When I make the request and get back result.reset, it doesn't seem to match what I'd expect. Am I looking at the wrong field to get the key's TTL?

In this case, I have my Data Browser fully cleared beforehand and the cache flushed. When I make the request and go into the Upstash viewer, I can find the key, and the expiration it has is correct (a single request, 4-hour limit, 14400 s).

[screenshot]

However, the result.reset value I'm getting back from this same await ratelimit.limit(identifier) call doesn't match up with the Upstash TTL:

[screenshot]

It never seems to give me back anything usable to identify when the limit expires, sometimes seemingly giving me the same UNIX timestamp even after clearing the DB. Is there something I'm missing about my implementation?
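
For context, result.reset is documented as a Unix timestamp in milliseconds for when the window resets, not a TTL. A minimal sketch deriving a TTL-style value from it, assuming that semantics and reusing ratelimit, identifier, and res from the snippets above:

// Sketch: derive seconds-until-reset from result.reset, assuming it
// is a Unix timestamp in milliseconds rather than a TTL.
const result = await ratelimit.limit(identifier);
const secondsUntilReset = Math.max(0, Math.ceil((result.reset - Date.now()) / 1000));
res.setHeader("X-RateLimit-Reset", secondsUntilReset);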

How to set one hour limit for sliding window?

Hey, I've been trying to set up my rate limiter like this:

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(2, '3600 s'),
});

But it doesn't seem to work. Is there a special way to reference an hour, or is there a time limit?
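
A sketch of the same limiter using the hour unit directly, assuming the duration string accepts "h" as well as "s", and the same imports as above:

// Sketch: two requests per hour, expressed with the hour unit.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(2, '1 h'),
});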

cf workers: TypeError: Cannot read properties of undefined (reading 'eval')

I am trying to use the Durable Object state as the store, on a CF worker:

import { Ratelimit } from '@upstash/ratelimit'
import { error } from 'itty-router'

import { getClient } from './graphql/index.js'

export default class StoreDurableObject {
  constructor (state, env) {
    this.state = state
    state.cache ||= new Map()
    this.ratelimit = new Ratelimit({
      limiter: Ratelimit.slidingWindow(10, '10 s'),
      ephemeralCache: state.cache
    })
  }

  async fetch (req, env) {
    const { success } = await this.ratelimit.blockUntilReady(req.headers.get('CF-Connecting-IP'), 10_000)
    if (!success) {
      return error(429, 'Rate-limit exceeded.')
    }

    const graphql = await getClient({ req, env, state: this.state })
    const { query, variables = {} } = await req.json()
    return new Response(JSON.stringify(await graphql(query, variables)))
  }
}

And I get this:

✘ [ERROR] Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'eval')

      at null.<anonymous>
  (file:///Users/konsumer/Desktop/store-ideas/node_modules/@upstash/ratelimit/src/single.ts:258:42)
      at limit
  (file:///Users/konsumer/Desktop/store-ideas/node_modules/@upstash/ratelimit/src/ratelimit.ts:134:55)
      at blockUntilReady
  (file:///Users/konsumer/Desktop/store-ideas/node_modules/@upstash/ratelimit/src/ratelimit.ts:221:24)
      at fetch (file:///Users/konsumer/Desktop/store-ideas/src/StoreDurableObject.js:17:46)

Is there a better way to do this?
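
One thing that stands out: the Ratelimit config above passes no redis client, and the stack trace points at an eval call on an undefined client. A minimal sketch that adds one, assuming Upstash Redis REST bindings on the worker env:

import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis/cloudflare'

export default class StoreDurableObject {
  constructor (state, env) {
    this.state = state
    state.cache ||= new Map()
    // Sketch: the limiter needs a Redis backend; ephemeralCache alone
    // is only an in-memory optimization, not a store.
    this.ratelimit = new Ratelimit({
      redis: Redis.fromEnv(env), // assumes UPSTASH_REDIS_REST_URL/TOKEN bindings
      limiter: Ratelimit.slidingWindow(10, '10 s'),
      ephemeralCache: state.cache
    })
  }
}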

Self-hosted redis?

Hello, I was wondering if I could use a self-hosted Redis instance?
Also, does Upstash's default pay-as-you-go pricing also apply to rate limiting, i.e. $0.2 per 100K commands?
For example, using the slidingWindow algorithm, would it use 4 commands per limit check? I'm afraid the costs would skyrocket in that case. Am I missing something?

Update Next.js Middleware example

Hey guys! Figured I'd throw an issue in here to comment on the new middleware changes from: https://nextjs.org/docs/messages/middleware-upgrade-guide#no-response-body

The following patterns will no longer work (used in examples/nextjs/pages/api/_middleware.ts):

new Response('a text value')
new Response(streamOrBuffer)
new Response(JSON.stringify(obj), { headers: 'application/json' })
NextResponse.json()

Is the below an appropriate replacement given the new middleware? The docs say:

To produce a response from Middleware, you should rewrite to a route (Page or Edge API Route) that produces a response.

import type { NextRequest, NextFetchEvent } from "next/server"
import { NextResponse } from "next/server"
import { Ratelimit } from "@upstash/ratelimit"
import { Redis } from "@upstash/redis"

const url = process.env.UPSTASH_REDIS_REST_URL!
const token = process.env.UPSTASH_REDIS_REST_TOKEN!

export const middleware = async (req: NextRequest, event: NextFetchEvent) => {
  const ip = req.ip ?? "127.0.0.1"
  const redis = new Redis({
    url,
    token
  })
  // Create a new ratelimiter, that allows 10 requests per 10 seconds
  const ratelimit = new Ratelimit({
    redis: redis,
    limiter: Ratelimit.slidingWindow(10, "10 s")
  })
  const { success, pending } = await ratelimit.limit(ip)

  event.waitUntil(pending)

  if (!success) {
    return NextResponse.rewrite(new URL("/rate-limit", req.url))
  }
}

Unable to check remaining uses without triggering an increment

For longer-term rate limits, it's useful to show the end user a count of how many uses they have left within a rate limit window. Currently, there doesn't appear to be any way to read the key without also triggering an increment of the usage count. The key appears to be suffixed with a variable numeric value that gets in the way of doing a standard Redis GET query.

Is there some way to consistently do a GET call on the key without implementing further custom logic? If not, this would be nice to have.

Limiting Concurrency

Thanks for this library! Would it be hard to add limiting concurrency? I would love to use this library with Upstash Redis to limit API calls, but a few APIs impose concurrency/parallelism limitations. Currently using bottleneck.

Upstash Ratelimit Error in GH Action

Hi, while running my GH Action I got this error from Upstash:

Failed to record analytics TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:117***:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async HttpClient.request (file:///home/runner/work/Weather-App/Weather-App/node_modules/.pnpm/@[email protected]/node_modules/@upstash/redis/esm/pkg/http.js:90:23)
    at async HIncrByCommand.exec (file:///home/runner/work/Weather-App/Weather-App/node_modules/.pnpm/@[email protected]/node_modules/@upstash/redis/esm/pkg/commands/command.js:55:35)
    at async /home/runner/work/Weather-App/Weather-App/node_modules/.pnpm/@[email protected]/node_modules/@upstash/core-analytics/dist/index.js:125:9
    at async Promise.all (index 0)
    at async Analytics.ingest (/home/runner/work/Weather-App/Weather-App/node_modules/.pnpm/@[email protected]/node_modules/@upstash/core-analytics/dist/index.js:120:5)
    at async Analytics.record (/home/runner/work/Weather-App/Weather-App/node_modules/.pnpm/@[email protected]/node_modules/@upstash/ratelimit/dist/index.js:60:5)
    at async Promise.all (index 1) {
  cause: Error: getaddrinfo ENOTFOUND feasible-urchin-36877.upstash.io,
      at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
    errno: -***08,
    code: 'ENOTFOUND',
    syscall: 'getaddrinfo',
    hostname: 'feasible-urchin-36877.upstash.io,'
  }
}

With this Ratelimit code:

const ratelimit = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(
    parseInt(env.UPSTASH_RATELIMITER_TOKENS_PER_TIME),
    UPSTASH_RATELIMITER_TIME_INTERVAL,
  ),
  analytics: true,
  prefix: "@upstash/ratelimit",
});

const rateLimitMiddleware = t.middleware(async ({ ctx, path, next }) => {
  const identifier = `${ctx.ip}:${path}`;
  // log.debug("identifier", { identifier });
  const { success, remaining } = await ratelimit.limit(identifier);
  // log.debug("remaining", { remaining });
  ctx.res.setHeader(
    "X-RateLimit-Limit",
    env.UPSTASH_RATELIMITER_TOKENS_PER_TIME,
  );
  ctx.res.setHeader("X-RateLimit-Remaining", remaining);
  if (!success) {
    log.warn("Rate limit exceeded", { ip: identifier });
    throw new TRPCError({
      code: "INTERNAL_SERVER_ERROR",
      message: "Rate limit exceeded",
    });
  }
  return next();
});

Deno release broken?

Trying to use it as per the docs:

import { Ratelimit } from "https://deno.land/x/upstash_ratelimit/mod.ts";

I'm getting

Warning Implicitly using latest version (v0.3.7) for https://deno.land/x/upstash_ratelimit/mod.ts
Download https://deno.land/x/[email protected]/mod.ts
worker thread panicked Import 'https://deno.land/x/[email protected]/mod.ts' failed, not found.

When using esm.sh

import { Ratelimit } from "https://esm.sh/@upstash/[email protected]";

I get

serving function upstash-redis-ratelimiter
worker thread panicked Uncaught SyntaxError: The requested module 'https://esm.sh/@upstash/[email protected]' does not provide an export named 'Ratelimit'
    at file:///home/deno/functions/upstash-redis-ratelimiter/index.ts:3:10

Any ideas?

Self-hosted Redis use?

Can this library be used with a self-hosted Redis instance? Or is it only meant to integrate with the Upstash Redis service?

Awaiting `pending` promise is required in Lambda

I'm not sure whether this is a gap in the documentation or actually a bug: in my single-region setup with AWS Lambda (with response streaming), I need to await pending to prevent the Lambda functions from running into task timeouts.

I tried everything else (different limiters, setting a timeout, declaring in module scope or request scope, using an ephemeralCache or not), but nothing helped; only awaiting the pending promise worked.
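
For reference, a minimal sketch of the pattern described above, assuming a ratelimit instance and identifier as in the earlier snippets:

// Sketch: await the pending promise so background work (e.g. analytics
// or multi-region sync) finishes before the Lambda is frozen.
const { success, pending } = await ratelimit.limit(identifier);
await pending;
if (!success) {
  // return a 429 response here
}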

Provided Next.js middleware example does not seem to actually work with Next.js

Copying the example provided here into our project, I get the following error:

Error: Module not found: Can't resolve 'stream'

Import trace for requested module:
./node_modules/isomorphic-fetch/fetch-npm-node.js
./node_modules/@upstash/redis/esm/platforms/nodejs.js
./pages/api/_middleware.ts


https://nextjs.org/docs/messages/module-not-found

You're using a Node.js module (stream) which is not supported in the Edge Runtime.
Learn more: https://nextjs.org/docs/api-reference/edge-runtime

It seems that @upstash/redis depends on stream, but it's not possible to use the stream module in a Next.js Edge function. A separate Vercel middleware example seems to use a slimmed-down version of a Redis client that doesn't depend on stream, but alas I'm not sure how that would integrate with the rate limiting module.

Is there a way to use @upstash/ratelimit with Next.js Edge functions out of the box?

Rate Limiter takes 5s to complete API call

Hi, I'm trying to set up the Upstash rate limiter on a GET route in my own Next.js API, following Web Dev Cody's video: https://www.youtube.com/watch?v=9xqlkAPnoTY (which is basically the demo code in this repo).

This is my code without the limiter; it takes around 50 ms to complete:

import Albums from "@models/albums";
import { connectToDB } from "@utils/database";

import { NextResponse } from 'next/server'

export const GET = async () => {
    try {
        await connectToDB()

        const albums = await Albums.find({})

        return NextResponse.json(
            albums,
            { status: 200 }
        )
    } catch (error) {
        console.log(error)
        return NextResponse.json(
            "Failed to fetch all ablums",
            { status: 500 }
        )
    }
} 

And here is the code with the Upstash rate limiter added, which takes around 5 s to complete:

import Albums from "@models/albums";
import { connectToDB } from "@utils/database";

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

import { NextResponse } from 'next/server'

const ratelimit = new Ratelimit({
    redis: Redis.fromEnv(),
    limiter: Ratelimit.slidingWindow(1, "10 s")
})

export const GET = async (request) => {
    try {
        const ip = request.headers.get("x-forwarded-for") ?? "";
        const { success, reset } = await ratelimit.limit(ip);
        if (!success) {
            const now = Date.now();
            const retryAfter = Math.floor((reset - now) / 1000);
            return new NextResponse("Too Many Requests", {
                status: 429,
                headers: {
                    ["retry-after"]: `${retryAfter}`,
                },
            });
        }

        await connectToDB()

        const albums = await Albums.find({})

        return NextResponse.json(
            albums,
            { status: 200 }
        )
    } catch (error) {
        console.log(error)
        return NextResponse.json(
            "Failed to fetch all ablums",
            { status: 500 }
        )
    }
} 

Does anybody have an idea why it suddenly takes so long to complete API calls with the rate limiter set up?

Next Middleware new signature

I'm getting this error:

error - node_modules/next/dist/server/web/spec-extension/response.js (18:0) @ handleMiddlewareField
error - The middleware "/src/middleware" accepts an async API directly with the form:

export function middleware(request, event) {
    return NextResponse.redirect('/new-location')
}

Read more: https://nextjs.org/docs/messages/middleware-new-signature

But changing NextResponse.next(request) to NextResponse.next(), and NextResponse.rewrite(new URL("/api/blocked", request.url), request) to NextResponse.rewrite(new URL("/api/blocked", request.url)), fixed the error.
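
A sketch of the adjusted middleware with those two changes applied (the ratelimit instance is assumed to be configured as elsewhere on this page):

import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export async function middleware(request: NextRequest) {
  const { success } = await ratelimit.limit(request.ip ?? "127.0.0.1");
  if (!success) {
    // rewrite without passing the request as a second argument
    return NextResponse.rewrite(new URL("/api/blocked", request.url));
  }
  // next() without arguments, per the new signature
  return NextResponse.next();
}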

No TTL set for slidingWindow rate limit keys on latest version

Hello,

I noticed that all of the keys stored in Redis are kept forever since I upgraded to the latest version of @upstash/ratelimit (1.0.3, to be specific). I use the slidingWindow method for rate limiting, and after reverting to version 1.0.1 everything works fine and a TTL is set for every key created by Ratelimit. I don't use analytics, if this matters (it's set to false).

Hopefully you can take a look and fix it; for now I must remain on the older version.

custom rates support

In the ratelimit SDK, is there a way to consume at different rates? E.g. with the OpenAI API, I will allow 100 words per hour, so I need to count the words in the prompt and consume that many tokens. Something like:

ratelimit.limit(identifier, wordCount)
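
Newer releases of the SDK document a rate option on limit() that consumes more than one token per call; a minimal sketch, assuming that API is available in your version (promptText is a placeholder):

// Sketch: consume `wordCount` tokens in a single call, assuming the
// `rate` option documented in newer @upstash/ratelimit releases.
const wordCount = promptText.trim().split(/\s+/).length;
const { success } = await ratelimit.limit(identifier, { rate: wordCount });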

How to use multiple ratelimiters?

What is the best practice here?

  • Creating two Ratelimit instances with two different prefixes? (see the sketch below)
  • Using one Ratelimit instance with a prefixed identifier? Downside: only one Ratelimit strategy / set of options is possible for the limiter.
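
A minimal sketch of the first option, two instances sharing one Redis database but isolated by distinct prefixes (the strategies and names here are illustrative):

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Sketch: one instance (and strategy) per use case, isolated in the
// same Redis database via distinct key prefixes.
const redis = Redis.fromEnv();

const loginLimiter = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(5, "1 m"),
  prefix: "ratelimit:login",
});

const apiLimiter = new Ratelimit({
  redis,
  limiter: Ratelimit.fixedWindow(100, "1 h"),
  prefix: "ratelimit:api",
});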

Support getting and resetting limit of a specific identifier

I have a feature request.

Right now the limit function can modify the limit, but we lack a getRemaining function to see the current value of the limit without any modification. It can be useful for displaying the remaining amount to users.

Another useful function is resetting the limit. In practice, this might mean deleting the entry in Redis. It could be useful when we are trying to reset the limit on every billing cycle for paid members. For example, I might allow 1000 AI image generations monthly, and every time the user pays the bill, I can reset the limit.

This is how I currently implement a getRemaining() function (it's only for fixed window). I did it by writing a wrapper around the Ratelimit object:

import { Ratelimit } from '@upstash/ratelimit'
import { kv } from '@vercel/kv' // assumed: the author mentions Vercel KV below
import ms from 'ms' // assumed: used below to parse window strings like '1m'

// Assumed shape of the config used below (not shown in the original post):
interface RateLimiterConfig {
  redis: typeof kv
  timeout: number
  analytics: boolean
  prefix: string
  tokens: number
  window: string
}

/**
 * Get redis key for fixed window rate limiter
 * @param prefix e.g. 'ratelimit:free'
 * @param identifier e.g. user ID or email
 * @param window time e.g. '1m'
 * @returns a key in the form of "prefix:identifier:timestamp"
 */
function getFixedWindowKey(prefix: string, identifier: string, window: string) {
  const intervalDuration = ms(window)
  const key = [prefix, identifier, Math.floor(Date.now() / intervalDuration)].join(':')
  return key
}

/**
 * Get the remaining limit for a fixed window rate limiter without actually consuming the tokens.
 * @param key redis key in the form of "prefix:identifier:timestamp"
 * @param tokens max number of tokens
 * @returns remaining number of tokens in the window, or 0 if the limit is reached
 */
async function getFixedWindowLimitRemaining(key: string, tokens: number) {
  const used = await kv.get<number>(key)
  let remaining = tokens - (used ?? 0)
  if (remaining < 0) remaining = 0
  return remaining
}

/**
 * Create a fixed window rate limiter with Upstash Redis as the backend.
 * This returns a rate limiter object which extends the original ratelimit object by adding
 * a `getRedisKey` function and a `getRemaining` function.
 */
export function createFixedWindowRatelimit(config: RateLimiterConfig) {
  const { redis, timeout, analytics, prefix, tokens, window } = config

  const ratelimit = new Ratelimit({
    redis,
    timeout,
    analytics,
    prefix,
    limiter: Ratelimit.fixedWindow(tokens, window),
  })

  const getRedisKey = (identifier: string) => {
    return getFixedWindowKey(prefix, identifier, window)
  }

  const getRemaining = async (identifier: string) => {
    const key = getRedisKey(identifier)
    return await getFixedWindowLimitRemaining(key, tokens)
  }

  return {
    limit: ratelimit.limit.bind(ratelimit),
    blockUntilReady: ratelimit.blockUntilReady.bind(ratelimit),
    getRedisKey,
    getRemaining,
  }
}

/**
 * Rate limiter for free tier, should be used to limit image generations based on userID
 */
const ratelimitFree = createFixedWindowRatelimit({
  redis: kv,
  timeout: 1000,
  analytics: true,
  prefix: 'ratelimit:free',
  tokens: 30,
  window: '1h',
})
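
Usage of the wrapper above might then look like this (userId is a placeholder):

// Sketch: read the remaining budget without consuming a token,
// then consume one only when actually doing the work.
const remaining = await ratelimitFree.getRemaining(userId)
if (remaining > 0) {
  const { success } = await ratelimitFree.limit(userId)
  if (success) {
    // generate the image ...
  }
}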

I think API users can implement the get and reset functions themselves, but it's better if users don't have to understand Redis or Upstash at all. I currently use Vercel KV and I want to stay oblivious to how it works; I don't want to learn Redis just to do rate limiting.
Maybe what we need is something like the getRemaining function above, plus another function like reset inside the Ratelimit object.

Thanks!

Crypto is not properly imported

When I try to use the MultiRegionRatelimit rate limiter, I get the following error:

ReferenceError: crypto is not defined
    at MultiRegionRatelimit.limiter (/Users/jason/WORK/llegamus/node_modules/@upstash/ratelimit/script/multi.js:94:31)
    at MultiRegionRatelimit.value (/Users/jason/WORK/llegamus/node_modules/@upstash/ratelimit/script/ratelimit.js:65:35)

This is because the crypto package is not imported before it is used by the multi-region rate limiter. I think this is likely because Deno automatically provides the module globally, whereas other Node runtimes do not.

I tried manually patching the package locally to import crypto and everything worked fine.

My specific setup uses the Remix dev server, hosting on Vercel's serverless functions. I don't think Vercel's Edge Functions would have this issue, as it seems they support the Web Crypto APIs.
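
As a stopgap in application code, one could expose Node's Web Crypto implementation globally before constructing the limiter. A minimal sketch, assuming Node 15+ where node:crypto ships webcrypto:

// Sketch: make `crypto` available globally before the multi-region
// limiter runs (assumes Node 15+; newer Node versions already have it).
import { webcrypto } from "node:crypto";

if (typeof globalThis.crypto === "undefined") {
  // @ts-expect-error assigning the Web Crypto polyfill onto globalThis
  globalThis.crypto = webcrypto;
}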

QStash integration

It would be cool if I could associate rate limits to endpoints in the qstash UI and not have to change my code

Version 1.1.2 throws errors

After updating to 1.1.2 I get:

Failed to record analytics [UpstashError: WRONGTYPE Operation against a key holding the wrong kind of value, command was: ["zincrby","mw:events:1713535200000",1,"{"identifier":"127.0.0.1","success":true}"]] {
name: 'UpstashError'
}

Downgrading makes the error go away.

I run analytics inside my middleware:

const ratelimit = new Ratelimit({
  redis: redis,
  analytics: true,
  limiter: Ratelimit.slidingWindow(10, '10 s'),
  prefix: 'mw'
});

if (request.method === 'POST') {
  const ip = request.ip ?? '127.0.0.1';
  const { success, limit, reset, remaining } = await ratelimit.limit(ip);

  if (!success) {
    // Construct a response for blocked requests
    const blockedResponse = new Response('Too Many Requests', {
      status: 429, // HTTP status code for Too Many Requests
      headers: {
        'X-RateLimit-Limit': limit.toString(),
        'X-RateLimit-Remaining': remaining.toString(),
        'X-RateLimit-Reset': reset.toString(),
        'Content-Type': 'text/plain'
      }
    });

    return blockedResponse;
  }
}

middleware.ts env - using UPSTASH_REDIS_REST_URL with "https://" blocks entry, using without blocks analytics

I am trying to set up the middleware.ts example for my Next.js application to allow for easy rate limiting and DDoS protection.

When I use the following for my environment variables:

UPSTASH_REDIS_REST_URL="https://example.upstash.io"

I get the following error:

error - node_modules\@upstash\redis\esm\pkg\http.js (101:0) @ HttpClient.request
error - Unauthorized
null

When I take the "https://" out:

UPSTASH_REDIS_REST_URL="example.upstash.io"

The first error goes away, and my app works as expected. But I get the following error:

event - compiled successfully in 32 ms (154 modules)
Failed to record analytics [TypeError: Failed to parse URL from example.upstash.io]

UpstashError: NOPERM this user has no permissions to run the 'eval' command or its subcommand

I'm trying to rate-limit a simple Netlify serverless function. The function manages to return a response but still throws an error.

Here's the code for the function:

// /netlify/handler.ts
import * as dotenv from 'dotenv';
import { Handler, HandlerEvent, HandlerContext } from "@netlify/functions";
import { Configuration, OpenAIApi } from "openai";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

dotenv.config();

const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY
});
const openai = new OpenAIApi(configuration);

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.fixedWindow(10, "20 s"),
});

const handler: Handler = async (event: HandlerEvent, _context: HandlerContext) => {

  const question = JSON.parse(event.body).body;

  // check if the selected text is useless.
  if (question === undefined || question.length < 7) {
    return {
      statusCode: 205,
      body: JSON.stringify({ message: "no text" }),
    }
  }

  // Upstash rate-limiter
  const identifier = "api";
  const success = ratelimit.limit(identifier)
    .then(e => { return e})
    .catch(e => { console.log('Error in upstash fetch:', e) }) // <---- error comes from here

  if (!success) {
    return {
      statusCode: 429,
      body: JSON.stringify({ message: "Woah! Slow down." }),
    }
  }

  try {
    const res = await openai.createCompletion({
      model: "text-curie-001",
      prompt: `:Say this is a test: """${question}"""`,
      max_tokens: 100,
      temperature: 0,
    });

    return {
      statusCode: 200,
      body: JSON.stringify({ message: res.data.choices[0].text }),
    }
  } catch (e) {
    console.log("ERROR has occured", e.response);
    return {
      statusCode: 500,
      body: JSON.stringify({ message: "Error" }),
    };
  }
};

export { handler };

Here's the stack trace:

Request from ::1: POST /.netlify/functions/gpt-summarize
(node:87001) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Error in upstash fetch: UpstashError: NOPERM this user has no permissions to run the 'eval' command or its subcommand
    at HttpClient.request (/Users/Yash/Code/summarize/node_modules/@upstash/redis/esm/pkg/http.js:93:19)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at EvalCommand.exec (/Users/Yash/Code/summarize/node_modules/@upstash/redis/esm/pkg/commands/command.js:46:35)
    at RegionRatelimit.limiter (/Users/Yash/Code/summarize/node_modules/@upstash/ratelimit/esm/single.js:88:44)
    at RegionRatelimit.value (/Users/Yash/Code/summarize/node_modules/@upstash/ratelimit/esm/ratelimit.js:62:24)
Response with status 200 in 1763 ms.

After I catch the error, it manages to give me a response though.

Let me know if this is a ratelimiter error or if I should report it at upstash-redis.

Cheers!
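
Separate from the NOPERM error: in the handler above, success is assigned the promise itself (it is never awaited), so if (!success) can never fire. A minimal sketch of the awaited form:

// Sketch: await the limit call so `success` is the boolean result,
// not a pending Promise (which is always truthy).
const { success } = await ratelimit.limit(identifier);
if (!success) {
  return {
    statusCode: 429,
    body: JSON.stringify({ message: "Woah! Slow down." }),
  };
}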

Enabling analytics leads to timeouts in response-streaming AWS Lambda

Follow-up from #89. It turns out the analytics are the cause of the task timeouts, not the pending promise.

Additional info: the affected Lambda functions use response streaming. I haven't tested classic Lambdas with analytics yet. (Update: only response-streaming Lambdas are affected.) The response stream is ended properly, but the task does not complete and runs into a timeout.

A related PR is #91. It interfered with my observations because I wasn't even aware that I was switching from enabled to disabled analytics in one of my changes.
