openai-assistants-quickstart's Introduction

OpenAI Assistants API Quickstart

A quick-start template using the OpenAI Assistants API with Next.js.

Quickstart Setup

1. Clone repo

git clone https://github.com/openai/openai-assistants-quickstart.git
cd openai-assistants-quickstart

2. Set your OpenAI API key

export OPENAI_API_KEY="sk_..."

(or set it in .env.example and rename that file to .env).

3. Install dependencies

npm install

4. Run

npm run dev

5. Navigate to http://localhost:3000.

Deployment

You can deploy this project to Vercel or any other platform that supports Next.js.

Deploy with Vercel

Overview

This project is intended to serve as a template for using the Assistants API in Next.js with streaming, tool use (code interpreter and file search), and function calling. While there are multiple pages to demonstrate each of these capabilities, they all use the same underlying assistant with all capabilities enabled.

The main chat logic lives in the Chat component in app/components/chat.tsx and in the route handlers under api/assistants/threads/.... Feel free to start your own project and copy some of this logic in! The Chat component itself can be copied and used directly, provided you also copy the styling from app/components/chat.module.css.
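
For instance, a new page that embeds the chat UI could look roughly like this (a minimal sketch; the page path is hypothetical, and it assumes the component works with no props, as on the basic chat example page):

"use client";

// app/examples/my-chat/page.tsx (hypothetical path)
import React from "react";
import Chat from "../../components/chat";

// Renders the template's Chat component on its own page; the component manages
// thread creation, streaming, and message state internally.
export default function MyChatPage() {
  return (
    <main>
      <Chat />
    </main>
  );
}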

Pages

Main Components

  • app/components/chat.tsx - handles chat rendering, streaming, and function call forwarding
  • app/components/file-viewer.tsx - handles uploading, fetching, and deleting files for file search

Endpoints

  • api/assistants - POST: create assistant (only used at startup)
  • api/assistants/threads - POST: create new thread
  • api/assistants/threads/[threadId]/messages - POST: send message to assistant
  • api/assistants/threads/[threadId]/actions - POST: inform assistant of the result of a function it decided to call
  • api/assistants/files - GET/POST/DELETE: fetch, upload, and delete assistant files for file search
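
As a rough illustration of how a client might call these routes (a sketch only; the { threadId } and { content } shapes are assumptions to verify against the handlers under api/assistants/threads/...):

// Create a thread, then send the user's message to it.
async function startConversation(text: string) {
  // POST api/assistants/threads -> create a new thread
  const threadRes = await fetch("/api/assistants/threads", { method: "POST" });
  const { threadId } = await threadRes.json();

  // POST api/assistants/threads/[threadId]/messages -> assistant reply (streamed)
  const messageRes = await fetch(`/api/assistants/threads/${threadId}/messages`, {
    method: "POST",
    body: JSON.stringify({ content: text }),
  });
  return messageRes.body; // ReadableStream of the streamed assistant response
}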

Feedback

Let us know if you have any thoughts, questions, or feedback in this form!

openai-assistants-quickstart's People

Contributors

akkadaska, aleksa-codes, defimatt, eltociear, ibigio, jaredpalmer, jferrettiboke, kotaikehara, romainhuet


openai-assistants-quickstart's Issues

What is the point of this interval?

Is this interval the intended way to keep the file list up to date after uploading/deleting, or am I missing something?
Why not just call fetchFiles after uploading and deleting?

// Polls the file list once a second for as long as the component is mounted
useEffect(() => {
  const interval = setInterval(() => {
    fetchFiles();
  }, 1000);

  return () => clearInterval(interval);
}, []);

// Fetches the current assistant files from the file-search endpoint
const fetchFiles = async () => {
  const resp = await fetch("/api/assistants/files", {
    method: "GET",
  });
  const data = await resp.json();
  setFiles(data);
};
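
What I had in mind instead is something like this (a rough sketch; handleFileUpload and handleFileDelete are hypothetical names, not the component's actual handlers):

// Sketch of the alternative: refresh once after each mutation instead of polling.
const handleFileUpload = async (file: File) => {
  const data = new FormData();
  data.append("file", file);
  await fetch("/api/assistants/files", { method: "POST", body: data });
  await fetchFiles(); // refresh the list right after the upload completes
};

const handleFileDelete = async (fileId: string) => {
  await fetch("/api/assistants/files", {
    method: "DELETE",
    body: JSON.stringify({ fileId }),
  });
  await fetchFiles(); // refresh the list right after the deletion completes
};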

[I found no better way of contacting you:] the webpage on 'start using chatgpt instantly' provides no way of starting to use chatgpt

This report is misfiled. I found no better way of contacting you except for various 'socials' that I do not use.

This webpage is about how one can start to use chatgpt 'instantly' (its term). It seems to me though that the page provides no such way. That seems fairly grotesque, if I may say so. Moreover, whilst I found this other webpage, that page did not allow instant access either. (Is that what the former page meant by saying that the facility was being rolled out gradually?)

PS: The first of the two webpages that I mentioned contains at its end a link that reads: 'View all Product'. That is gibberish.

Use multiple assistant IDs?

This is more of a question than an issue.

Would it be possible to configure multiple assistant IDs that could then be used by different chat components?

The use case we are looking at would replace the current landing page (with its different examples) with chats backed by different assistants.

We were thinking we could change the current assistant-config code to hold an array of IDs, which we would then import into the chat components.

For example, the assistant config array would contain (marketing, sales, support); the Marketing chat component would import the array and set the current assistantId to the marketing ID, and so on.
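
A rough sketch of what we mean (names are illustrative only, not the template's current exports):

// app/assistant-config.ts (sketch): replace the single assistantId export
// with a map of IDs, one per use case.
export const assistantIds = {
  marketing: "asst_marketing_id",
  sales: "asst_sales_id",
  support: "asst_support_id",
} as const;

// A page such as app/examples/marketing/page.tsx could then render
//   <Chat assistantId={assistantIds.marketing} />
// which would also mean passing assistantId through the Chat component and on
// to the api/assistants/threads handlers.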

Just wanted to check that this could work?

Thanks!

How to install dependencies?

I want to ask a very stupid question...

In your "Quickstart Setup", the step 4 says "npm install", where were you running it? I run on my git bash, and it returns "npm command not found".

Thanks

Clicking create assistant BadRequestError

Clicking create assistant gives BadRequestError: 400 The requested model 'gpt-4-turbo-preview' does not exist.

Amending api/assistants/route.ts to use gpt-4-turbo fixes it.
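
The change is roughly this (a sketch of the relevant call in api/assistants/route.ts; instructions and tool list abridged):

// api/assistants/route.ts (sketch): change the model passed to assistants.create
const assistant = await openai.beta.assistants.create({
  instructions: "You are a helpful assistant.", // placeholder instructions
  model: "gpt-4-turbo", // was "gpt-4-turbo-preview"
  tools: [{ type: "code_interpreter" }, { type: "file_search" }],
});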

Stuck and Task timeout with longer responses

I deployed it on Netlify with my assistant set up, but I noticed that when the response is a bit longer it gets stuck: the response is truncated and the send button stays disabled, so I need to refresh the page.

Jun 8, 12:12:43 AM: 00634a06 ERROR Task timed out after 10.03 seconds
Jun 8, 12:12:43 AM: 00634a06 Duration: 10031.32 ms Memory Usage: 85 MB

Also, the sources in the responses are not formatted correctly: 【7:3†source】【7:10†source

Feedback form not public

Feedback
Let us know if you have any thoughts, questions, or feedback in this form!

Visiting form link gives:

You need permission
This form can only be viewed by users in the owner's organisation.

Try contacting the owner of the form if you think that this is a mistake.

Uncaught (in promise) Error: Final run has not been received

Getting this error when the assistant returns longer responses (~250 tokens): "Uncaught (in promise) Error: Final run has not been received"

The response is truncated at this point.

I deployed the app to Vercel. I do not encounter this when running the code in my local dev environment.

I was thinking a possible workaround would be to set max_tokens, but it seems that is not available (yet?) with the Assistants API.

Pagination and Rate Limit Issues in File Viewer When Handling Larger VectorStores

Issue Overview

I am experiencing an issue where only a subset (20 out of 300) of the files in my vectorstore is visible in the UI of the file viewer component. Despite attempts to resolve this by fetching additional data and handling pagination, I keep encountering rate limit errors that hinder further data fetching.

Steps to Reproduce

  1. Populate the vectorstore with more than 20 files.
  2. Access the file viewer UI which fetches and displays the files.
  3. Notice that only the first 20 files are displayed, and subsequent fetch attempts either result in repeated data (same 20 files) or hit rate limits.

Expected Behavior

The file viewer should correctly paginate through all files in the vectorstore, displaying all (n) files without hitting rate limits prematurely.

Actual Behavior

Only 20 files are displayed repeatedly, and attempts to fetch more files frequently hit the rate limit, even with extremely conservative fetch intervals (e.g., every 30 seconds).

Possible Solutions or Suggestions

  • Pagination Handling: It seems that the API might be missing proper pagination handling to load subsequent files beyond the initial batch.
  • Rate Limit Management: There might be an issue with how rate limits are managed, or possibly the limits are too stringent for practical use in this scenario. Adjusting the rate limit policy or providing guidelines on managing fetches could be beneficial.

Additional Context

Here's the logic I've tried implementing to handle fetching, with adjustments for rate limits:

const fetchFiles = async (retryDelay = 1000) => {
  try {
    const resp = await fetch("/api/assistants/files", { method: "GET" });
    if (resp.status === 429) {  
      setTimeout(() => fetchFiles(retryDelay * 2), retryDelay);  // Exponential backoff
      return;
    }
    const data = await resp.json();
    setFiles(data);
  } catch (error) {
    console.error('Failed to fetch files:', error);
  }
};
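
On the server side, I imagine the handler needs to follow the pagination cursor rather than return only the first batch, roughly like this (a sketch assuming the OpenAI Node SDK's auto-paginating list iterators; I haven't verified the exact namespace for vector store files):

import OpenAI from "openai";

const openai = new OpenAI();

// Collect every file in the vector store by letting the SDK follow the
// `after` cursor across pages via for-await iteration.
async function listAllVectorStoreFiles(vectorStoreId: string) {
  const files = [];
  for await (const file of openai.beta.vectorStores.files.list(vectorStoreId, {
    limit: 100,
  })) {
    files.push(file);
  }
  return files;
}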

Any insights or suggestions on how to properly paginate and handle rate limits in this scenario would be greatly appreciated!
