
ai-chatbot's Introduction

Next.js 14 and App Router-ready AI chatbot.

An open-source AI chatbot app template built with Next.js, the Vercel AI SDK, OpenAI, and Vercel KV.

Features · Model Providers · Deploy Your Own · Running locally · Authors


Features

  • Next.js App Router
  • React Server Components (RSCs), Suspense, and Server Actions
  • Vercel AI SDK for streaming chat UI
  • Support for OpenAI (default), Anthropic, Cohere, Hugging Face, or custom AI chat models and/or LangChain
  • shadcn/ui
  • Chat history, rate limiting, and session storage with Vercel KV
  • NextAuth.js for authentication

Model Providers

This template ships with OpenAI gpt-3.5-turbo as the default. However, thanks to the Vercel AI SDK, you can switch LLM providers to Anthropic, Cohere, or Hugging Face, or use LangChain, with just a few lines of code.
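As a minimal sketch of such a swap (assuming Anthropic as the target, the AnthropicStream helper from the ai package, and Anthropic's completions API shape at the time of writing; check the SDK docs before relying on the details), app/api/chat/route.ts becomes roughly:

import { AnthropicStream, StreamingTextResponse } from 'ai'

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // Anthropic's completion API takes a single prompt string, so fold the
  // chat messages into the Human/Assistant transcript format it expects
  const prompt =
    messages
      .map((m: { role: string; content: string }) =>
        m.role === 'user' ? `\n\nHuman: ${m.content}` : `\n\nAssistant: ${m.content}`
      )
      .join('') + '\n\nAssistant:'

  const response = await fetch('https://api.anthropic.com/v1/complete', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.ANTHROPIC_API_KEY ?? ''
    },
    body: JSON.stringify({
      model: 'claude-2',
      prompt,
      max_tokens_to_sample: 300,
      stream: true
    })
  })

  // AnthropicStream adapts the streaming response to the same shape the
  // chat UI already consumes, so no frontend changes are needed
  return new StreamingTextResponse(AnthropicStream(response))
}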

Deploy Your Own

You can deploy your own version of the Next.js AI Chatbot to Vercel with one click:

Deploy with Vercel

Creating a KV Database Instance

Follow the steps outlined in the quick start guide provided by Vercel. This guide will assist you in creating and configuring your KV database instance on Vercel, enabling your application to interact with it.

Remember to update your environment variables (KV_URL, KV_REST_API_URL, KV_REST_API_TOKEN, KV_REST_API_READ_ONLY_TOKEN) in the .env file with the appropriate credentials provided during the KV database setup.
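For reference, the finished entries look roughly like this (every value below is a placeholder; use the credentials from your own KV dashboard):

KV_URL="redis://default:<token>@<your-db>.kv.vercel-storage.com:<port>"
KV_REST_API_URL="https://<your-db>.kv.vercel-storage.com"
KV_REST_API_TOKEN="<token>"
KV_REST_API_READ_ONLY_TOKEN="<read-only-token>"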

Running locally

You will need to use the environment variables defined in .env.example to run Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a .env file is all that is necessary.

Note: You should not commit your .env file or it will expose secrets that will allow others to control access to your various OpenAI and authentication provider accounts.

  1. Install Vercel CLI: npm i -g vercel
  2. Link local instance with Vercel and GitHub accounts (creates .vercel directory): vercel link
  3. Download your environment variables: vercel env pull

Then install dependencies and start the development server:

pnpm install
pnpm dev

Your app template should now be running on localhost:3000.

Authors

This library was created by Vercel and Next.js team members, with contributions from:

ai-chatbot's People

Contributors

admineral, balazsorban44, batuhan, bots, carlosziegler, giannis2two, izeye, jaredpalmer, jeremyphilemon, joulev, jphyqr, justjavac, leerob, malewis5, markeljan, maxleiter, nicoalbanese, peek-a-booo, royal-lobster, rudrodip, seahorse-byte, shadcn, shuding, steven-tey, swyxio, themataleao, tsui66, urnas, zackrw, zhxnlai


ai-chatbot's Issues

Error: chat?.title.slice is not a function

Issue Description

Starting a chat conversation with a single digit as the opening message triggers the error below.

Error Message:

Uncaught TypeError: chat?.title.slice is not a function

Issue Summary:

When executing chat?.title.slice(0, 50), an uncaught TypeError is thrown. The error occurs because chat?.title is not a string, so slice cannot be called on it.
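A hypothetical guard (not the template's actual fix) is to coerce the title to a string before slicing:

// a chat titled by a numeric first message arrives as a number, so
// normalize it to a string before calling slice
const title = String(chat?.title ?? '').slice(0, 50)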

Steps to Reproduce:

  1. Initiate a chat conversation using a single digit as the starting point; for example, begin the conversation with 1.
  2. Access the "chat history" and locate the chat labeled as 1, then click on it.

Environment:

  • Operating System: macOS
  • Browser: Chrome

How to implement a payment method?

Hi guys, I am in the process of learning Next.js. In some projects I have been able to integrate Stripe as a payment method, but I don't know how to do it when I have a third-party API like OpenAI. Do you have any example of a token limit or message limit?

Deploy failed: ERR_PNPM_LOCKFILE_CONFIG_MISMATCH

Some recent commits seem to cause a new error that makes the deployment crash. Here's the log:

[23:51:04.260] Running build in Cleveland, USA (East) – cle1
[23:51:04.320] Cloning github.com/xxx/xxx (Branch: main, Commit: 5b52780)
[23:51:04.559] Previous build cache not available
[23:51:04.910] Cloning completed: 589.325ms
[23:51:05.120] Running "vercel build"
[23:51:05.641] Vercel CLI 30.2.1
[23:51:05.962] Detected `pnpm-lock.yaml` version 6.1 generated by pnpm 8...
[23:51:05.976] Installing dependencies...
[23:51:06.576]  ERR_PNPM_LOCKFILE_CONFIG_MISMATCH  Cannot proceed with the frozen installation. The current "settings.autoInstallPeers" configuration doesn't match the value found in the lockfile
[23:51:06.576] 
[23:51:06.577] Update your lockfile using "pnpm install --no-frozen-lockfile"
[23:51:06.601] Error: Command "pnpm install" exited with 1
[23:51:07.039] BUILD_UTILS_SPAWN_1: Command "pnpm install" exited with 1

getting @resvg/resvg-wasm error in build on deployment

After using the template, the initial deploy fails with the following message. This happens locally too.
 WARN  GET https://registry.npmjs.org/@resvg/resvg-wasm/-/resvg-wasm-2.4.1.tgz error (ERR_PNPM_FETCH_404). Will retry in 10 seconds. 2 retries left.

Syntax highlighting not working

I noticed a strange issue with code blocks not always using syntax highlighting. I'm not sure whether this is an issue with the Markdown parser or whether the OpenAI response doesn't always include the language in the markdown code block.
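For context, react-markdown setups usually derive the highlighter language from the fence info string, which would explain why a fence without a language falls back to plain text. A sketch of the typical wiring (an assumption, not necessarily this template's exact code):

import ReactMarkdown from 'react-markdown'
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'

export function ChatMarkdown({ content }: { content: string }) {
  return (
    <ReactMarkdown
      components={{
        code({ className, children }) {
          // ```ts produces className="language-ts"; a bare ``` produces none
          const match = /language-(\w+)/.exec(className || '')
          return match ? (
            <SyntaxHighlighter language={match[1]}>
              {String(children).replace(/\n$/, '')}
            </SyntaxHighlighter>
          ) : (
            // no language in the fence, so no highlighting is applied
            <code className={className}>{children}</code>
          )
        }
      }}
    >
      {content}
    </ReactMarkdown>
  )
}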


Change API to external endpoint ?

Hi,

I am very new to Next.js, so I hope I can get some help.

I have an API endpoint at localhost:3000/api/chat.

The ai-chatbot Next.js app is running at :3001/chat.

In src/components/chat.tsx, I assigned the api as follows:

const { messages, append, reload, stop, isLoading, input, setInput } =
    useChat({
      api: 'http://localhost:3000/api/chat', // added this line
      initialMessages,
      id,
      body: {
        id,
        previewToken
      }
    })

On the backend API, I tried to retrieve the body as follows:

export const postChat = async (req: Request, res: Response) => {
  if (req && req.body) {
    const { messages, id } = req.body;
    console.log(req.body);
//...
}

However, the body is returning {}.

I have tried to investigate further, but without any luck.
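One avenue worth checking (an assumption, since the backend framework is only implied by the Request/Response types): Express leaves req.body empty unless a JSON body parser is mounted. A minimal sketch:

import express from 'express'
import { postChat } from './postChat' // hypothetical module holding the handler above

const app = express()

// without this middleware, req.body stays undefined/empty for
// application/json requests, matching the {} seen above
app.use(express.json())

app.post('/api/chat', postChat)
app.listen(3000)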

I hope I can get some help from the community.

Best regards,

Vercel Build Error - PNPM - Resolved

I tried with the stock deploy button, but it didn't work.

Running pnpm install --no-frozen-lockfile fixed the deploy:

ERR_PNPM_LOCKFILE_CONFIG_MISMATCH  Cannot proceed with the frozen installation. The current "settings.autoInstallPeers" configuration doesn't match the value found in the lockfile
--
15:02:10.314 |  
15:02:10.315 | Update your lockfile using "pnpm install --no-frozen-lockfile"
15:02:10.334 | Error: Command "pnpm install" exited with 1
15:02:10.797 | BUILD_UTILS_SPAWN_1: Command "pnpm install" exited with 1

Next-auth version?

Hey, it looks like next-auth was pinned to a commit that maybe doesn't exist anymore? Moving to a mainline version of next-auth seems to break the app.

Version is

"next-auth": "0.0.0-manual.4cd21ea5",

which no longer resolves when installing dependencies.

Unhandled Runtime Error When Unauthorized

I cloned and ran this repo for the first time. The chat request is not authorized, but the site window also shows a runtime error:


I then set the onError option on the useChat hook, but it still happens.

Error: The Edge Function "share/[id]/opengraph-image" size is 1.08 MB and your plan size limit is 1 MB.

I'm new to Vercel and opened an account to try deploying this. During the build step I get the error in the title. Here are the logs:

[21:14:12.113] Running build in Cleveland, USA (East) – cle1
[21:14:12.162] Cloning github.com/mileszim/nextjs-chat (Branch: main, Commit: c6b6e39)
[21:14:12.407] Previous build cache not available
[21:14:12.770] Cloning completed: 607.715ms
[21:14:12.917] Running "vercel build"
[21:14:13.385] Vercel CLI 30.2.1
[21:14:13.656] Detected `pnpm-lock.yaml` version 6.1 generated by pnpm 8...
[21:14:13.667] Installing dependencies...
[21:14:14.159] Lockfile is up to date, resolution step is skipped
[21:14:14.193] Progress: resolved 1, reused 0, downloaded 0, added 0
[21:14:14.286] Packages: +637
[21:14:14.287] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[21:14:14.549] Packages are hard linked from the content-addressable store to the virtual store.
[21:14:14.549]   Content-addressable store is at: /vercel/.local/share/pnpm/store/v3
[21:14:14.549]   Virtual store is at:             node_modules/.pnpm
[21:14:15.194] Progress: resolved 637, reused 0, downloaded 58, added 58
[21:14:16.197] Progress: resolved 637, reused 0, downloaded 135, added 130
[21:14:17.201] Progress: resolved 637, reused 0, downloaded 247, added 245
[21:14:18.203] Progress: resolved 637, reused 0, downloaded 332, added 331
[21:14:19.209] Progress: resolved 637, reused 0, downloaded 446, added 441
[21:14:20.212] Progress: resolved 637, reused 0, downloaded 567, added 563
[21:14:21.212] Progress: resolved 637, reused 0, downloaded 633, added 633
[21:14:22.211] Progress: resolved 637, reused 0, downloaded 635, added 635
[21:14:23.306] Progress: resolved 637, reused 0, downloaded 636, added 635
[21:14:24.236] .../[email protected]/node_modules/es5-ext postinstall$  node -e "try{require('./_postinstall')}catch(e){}" || exit 0
[21:14:24.297] .../[email protected]/node_modules/es5-ext postinstall: Done
[21:14:24.311] Progress: resolved 637, reused 0, downloaded 637, added 637, done
[21:14:24.360] .../[email protected]/node_modules/esbuild postinstall$ node install.js
[21:14:24.423] .../[email protected]/node_modules/esbuild postinstall: Done
[21:14:24.578] 
[21:14:24.578] dependencies:
[21:14:24.579] + @radix-ui/react-alert-dialog 1.0.4
[21:14:24.579] + @radix-ui/react-dialog 1.0.4
[21:14:24.579] + @radix-ui/react-dropdown-menu 2.0.5
[21:14:24.579] + @radix-ui/react-label 2.0.2
[21:14:24.579] + @radix-ui/react-select 1.2.2
[21:14:24.579] + @radix-ui/react-separator 1.0.3
[21:14:24.579] + @radix-ui/react-slot 1.0.2
[21:14:24.579] + @radix-ui/react-switch 1.0.3
[21:14:24.579] + @radix-ui/react-tooltip 1.0.6
[21:14:24.579] + @vercel/analytics 1.0.1
[21:14:24.580] + @vercel/kv 0.2.1
[21:14:24.580] + @vercel/og 0.5.7
[21:14:24.580] + ai 2.1.2
[21:14:24.580] + body-scroll-lock 4.0.0-beta.0
[21:14:24.580] + class-variance-authority 0.4.0
[21:14:24.580] + clsx 1.2.1
[21:14:24.580] + focus-trap-react 10.1.1
[21:14:24.581] + nanoid 4.0.2
[21:14:24.581] + next 13.4.7-canary.1
[21:14:24.581] + next-auth 0.0.0-manual.4cd21ea5
[21:14:24.581] + next-themes 0.2.1
[21:14:24.581] + openai-edge 0.5.1
[21:14:24.581] + react 18.2.0
[21:14:24.581] + react-dom 18.2.0
[21:14:24.581] + react-hot-toast 2.4.1
[21:14:24.581] + react-intersection-observer 9.4.4
[21:14:24.581] + react-markdown 8.0.7
[21:14:24.581] + react-syntax-highlighter 15.5.0
[21:14:24.581] + react-textarea-autosize 8.4.1
[21:14:24.581] + remark-gfm 3.0.1
[21:14:24.582] + remark-math 5.1.1
[21:14:24.582] 
[21:14:24.582] devDependencies:
[21:14:24.583] + @tailwindcss/typography 0.5.9
[21:14:24.583] + @types/node 17.0.45
[21:14:24.583] + @types/react 18.2.6
[21:14:24.583] + @types/react-dom 18.2.4
[21:14:24.583] + @types/react-syntax-highlighter 15.5.6
[21:14:24.583] + @typescript-eslint/parser 5.59.7
[21:14:24.583] + autoprefixer 10.4.14
[21:14:24.584] + dotenv 16.0.3
[21:14:24.584] + drizzle-kit 0.18.0
[21:14:24.584] + eslint 8.40.0
[21:14:24.584] + eslint-config-next 13.4.7-canary.1
[21:14:24.584] + eslint-config-prettier 8.8.0
[21:14:24.584] + eslint-plugin-tailwindcss 3.12.0
[21:14:24.584] + postcss 8.4.23
[21:14:24.584] + prettier 2.8.8
[21:14:24.584] + tailwind-merge 1.12.0
[21:14:24.584] + tailwindcss 3.3.2
[21:14:24.584] + tailwindcss-animate 1.0.5
[21:14:24.584] + typescript 5.1.3
[21:14:24.584] 
[21:14:24.584] Done in 10.8s
[21:14:24.602] Detected Next.js version: 13.4.7-canary.1
[21:14:24.611] Running "pnpm run build"
[21:14:25.065] 
[21:14:25.065] > [email protected] build /vercel/path0
[21:14:25.065] > next build
[21:14:25.065] 
[21:14:25.462] - warn You have enabled experimental feature (serverActions) in next.config.js.
[21:14:25.463] - warn Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
[21:14:25.463] 
[21:14:25.478] Attention: Next.js now collects completely anonymous telemetry regarding usage.
[21:14:25.478] This information is used to shape Next.js' roadmap and prioritize features.
[21:14:25.478] You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
[21:14:25.478] https://nextjs.org/telemetry
[21:14:25.478] 
[21:14:25.564] - info Creating an optimized production build...
[21:14:59.597] - info Compiled successfully
[21:14:59.603] - info Linting and checking validity of types...
[21:15:05.015] 
[21:15:05.015] ./app/layout.tsx
[21:15:05.016] 46:16  Warning: Invalid Tailwind CSS classnames order  tailwindcss/classnames-order
[21:15:05.016] 49:19  Warning: Invalid Tailwind CSS classnames order  tailwindcss/classnames-order
[21:15:05.016] 
[21:15:05.016] ./components/user-menu.tsx
[21:15:05.016] 34:17  Warning: Invalid Tailwind CSS classnames order  tailwindcss/classnames-order
[21:15:05.016] 39:20  Warning: Invalid Tailwind CSS classnames order  tailwindcss/classnames-order
[21:15:05.016] 57:15  Warning: Invalid Tailwind CSS classnames order  tailwindcss/classnames-order
[21:15:05.016] 60:33  Warning: Invalid Tailwind CSS classnames order  tailwindcss/classnames-order
[21:15:05.016] 
[21:15:05.016] info  - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
[21:15:05.017] - info Collecting page data...
[21:15:15.374] - info Generating static pages (0/4)
[21:15:15.429] - info Generating static pages (1/4)
[21:15:15.451] - info Generating static pages (2/4)
[21:15:15.467] - info Generating static pages (3/4)
[21:15:15.627] - info Generating static pages (4/4)
[21:15:15.645] - info Finalizing page optimization...
[21:15:15.647] 
[21:15:15.702] Route (app)                                Size     First Load JS
[21:15:15.702] ┌ ℇ /                                      0 B                0 B
[21:15:15.702] ├ ℇ /api/auth/[...nextauth]                0 B                0 B
[21:15:15.702] ├ ℇ /api/chat                              0 B                0 B
[21:15:15.702] ├ ℇ /chat/[id]                             222 B           418 kB
[21:15:15.702] ├ ○ /opengraph-image.png                   0 B                0 B
[21:15:15.702] ├ ℇ /share/[id]                            1.84 kB         336 kB
[21:15:15.702] ├ ℇ /share/[id]/opengraph-image            0 B                0 B
[21:15:15.702] ├ λ /sign-in                               1.29 kB        96.7 kB
[21:15:15.702] └ ○ /twitter-image.png                     0 B                0 B
[21:15:15.702] + First Load JS shared by all              94 kB
[21:15:15.702]   ├ chunks/193-4f09ad3eb66da1bc.js         5.51 kB
[21:15:15.702]   ├ chunks/212-321422422795b97c.js         8.07 kB
[21:15:15.702]   ├ chunks/699-3e03d43b46cb22bb.js         25.7 kB
[21:15:15.703]   ├ chunks/80f368f5-d40eed256e19ee1e.js    52.7 kB
[21:15:15.703]   ├ chunks/main-app-23cee0dcd6edd8f6.js    239 B
[21:15:15.703]   └ chunks/webpack-92e18ba65220e2f7.js     1.74 kB
[21:15:15.703] 
[21:15:15.703] Route (pages)                              Size     First Load JS
[21:15:15.703] ─ ○ /404                                   183 B            75 kB
[21:15:15.703] + First Load JS shared by all              74.8 kB
[21:15:15.703]   ├ chunks/framework-f780fd9bae3b8c58.js   45.1 kB
[21:15:15.703]   ├ chunks/main-91eace788c200f55.js        27.9 kB
[21:15:15.703]   ├ chunks/pages/_app-06aedd91999f3c8c.js  197 B
[21:15:15.703]   └ chunks/webpack-92e18ba65220e2f7.js     1.74 kB
[21:15:15.703] 
[21:15:15.703] ƒ Middleware                               105 kB
[21:15:15.703] 
[21:15:15.703] ℇ  (Streaming)  server-side renders with streaming (uses React 18 SSR streaming or Server Components)
[21:15:15.703] λ  (Server)     server-side renders at runtime (uses getInitialProps or getServerSideProps)
[21:15:15.703] ○  (Static)     automatically rendered as static HTML (uses no initial props)
[21:15:15.703] 
[21:15:18.966] Traced Next.js server files in: 2.556s
[21:15:25.754] Created all serverless functions in: 6.786s
[21:15:26.954] Collected static files (public/, static/, .next/static): 4.851ms
[21:15:28.176] Build Completed in /vercel/output [1m]
[21:15:28.584] Deploying outputs...
[21:15:40.595] Error: The Edge Function "share/[id]/opengraph-image" size is 1.08 MB and your plan size limit is 1 MB. Learn More: https://vercel.link/edge-function-size
[21:15:42.646] NOW_SANDBOX_WORKER_MAX_MIDDLEWARE_SIZE: The Edge Function "share/[id]/opengraph-image" size is 1.08 MB and your plan size limit is 1 MB.

While using LangChain Stream, not able to determine when Stream is completed

Hey everyone. I am attempting to use this template and replace the direct OpenAI calls with LangChain.

I updated the code in /app/api/chat/route.ts. I used the LangChain example here as a reference: https://github.com/vercel-labs/ai/tree/main/examples/next-langchain

Notice in the screenshot below that once the stream has completed, the "Stop generating" button is still showing. The reason it's showing is that
isLoading in chat-panel.tsx is still set to true.


I'm trying to understand why, with the OpenAI example, isLoading is flipped to false after streaming is complete.
I analyzed the OpenAI API call in /app/api/chat/route.ts to see if it responds with something to signal that streaming is over, but I don't see anything there. I see that the "Stop generating" button calls the stop method from useChat:

{isLoading ? (
        <Button
          variant="outline"
          onClick={() => stop()}
          className="bg-background"
        >
          <IconStop className="mr-2" />
          Stop generating
        </Button>
      ) : (

And the hook wiring in chat.tsx:

import { useChat, type Message } from 'ai/react'

const { messages, append, reload, stop, isLoading, input, setInput } =
  useChat({
    initialMessages,
    id,
    body: {
      id,
      previewToken
    }
  })

I think I am on the right track but wonder if there is something missing in the docs about this or if anyone has an example that would be helpful in understanding this.
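For reference, isLoading flips back to false when the response stream closes. OpenAIStream closes the stream itself when the completion finishes; with LangChain, the closing is done by the handlers returned from LangChainStream (in handleLLMEnd), so they must be attached to the model call. A minimal sketch along the lines of the next-langchain example linked above (older langchain versions name the message classes HumanChatMessage/AIChatMessage):

import { StreamingTextResponse, LangChainStream, Message } from 'ai'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { AIMessage, HumanMessage } from 'langchain/schema'

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // handlers close the stream when the model finishes, which is what lets
  // useChat flip isLoading back to false on the client
  const { stream, handlers } = LangChainStream()

  const llm = new ChatOpenAI({ streaming: true })

  llm
    .call(
      (messages as Message[]).map(m =>
        m.role === 'user' ? new HumanMessage(m.content) : new AIMessage(m.content)
      ),
      {},
      [handlers]
    )
    .catch(console.error) // intentionally not awaited; the stream is returned below

  return new StreamingTextResponse(stream)
}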

How are Metadata Images being set?

Hi -

I've tried to replicate how you are rendering Twitter Card images but have been failing miserably. I see you have an opengraph-image.png and a twitter-image.png file at the root app level. This is not working for me. You are also generating this tag:

<meta name="twitter:card" content="summary_large_image"/>

The docs say the meta tag generated is summary and not summary_large_image. How are you doing this? Is there an extra step you have to do?

Also, unrelated, but the favicon.ico file is supposed to be inside the app folder according to the docs, yet it seems you made it work by having it in public?

Thanks!

Trying to implement Prisma in this repo and always getting the error "unable to be run in the browser"

I'm trying to implement Prisma in this repo and always get the error "unable to be run in the browser".

This happens even when importing Prisma Client in app/layout or other server-side components.

- error Error: PrismaClient is unable to be run in the browser.
In case this error is unexpected for you, please report it in https://github.com/prisma/prisma/issues
    at eval (./lib/prisma.ts:10:52)
    at (sc_server)/./lib/prisma.ts (/Users/username/Documents/react-projects/Next-PJ/try-chatbot/.next/server/app/page.js:3270:1)
    at __webpack_require__ (/Users/username/Documents/react-projects/Next-PJ/try-chatbot/.next/server/edge-runtime-webpack.js:37:33)
    at fn (/Users/username/Documents/react-projects/Next-PJ/try-chatbot/.next/server/edge-runtime-webpack.js:310:21)
    at eval (./components/header.tsx:18:70)
    at (sc_server)/./components/header.tsx (/Users/username/Documents/react-projects/Next-PJ/try-chatbot/.next/server/app/page.js:2921:1)
    at __webpack_require__ (/Users/username/Documents/react-projects/Next-PJ/try-chatbot/.next/server/edge-runtime-webpack.js:37:33)
    at fn (/Users/username/Documents/react-projects/Next-PJ/try-chatbot/.next/server/edge-runtime-webpack.js:310:21)
    at eval (./app/layout.tsx:13:76)

prisma.ts looks like this:

import { PrismaClient } from '@prisma/client'

declare global {
  var prisma: PrismaClient | undefined
}

// Check if we are on the server
if (typeof window === 'undefined') {
  const prisma = global.prisma || new PrismaClient()

  if (process.env.NODE_ENV === 'development') global.prisma = prisma
}

export default global.prisma

I'm new to Next.js and have no idea how to fix it. Please help!
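One observation, offered as an assumption from the stack trace: the paths mention edge-runtime-webpack.js, and Prisma's default client cannot run on the Edge runtime that parts of this template opt into (export const runtime = 'edge'). Beyond that, a common Node-only singleton sketch that also avoids exporting a possibly undefined client:

// lib/prisma.ts: standard singleton pattern; import it only from server
// code that runs on the Node.js runtime, not the Edge runtime
import { PrismaClient } from '@prisma/client'

const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient }

export const prisma = globalForPrisma.prisma ?? new PrismaClient()

// cache the client across hot reloads in development
if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma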

Server Error while trying to login

After configuring the necessary environment variables OPENAI_API_KEY, AUTH_GITHUB_ID, AUTH_GITHUB_SECRET, and the KV store, I attempted to log in on the deployed website. However, it returned a server error, and the logs indicated that I was missing a configuration for a secret variable. Even after adding the variable and redeploying, I still encountered an error. Could you please advise me on how to make the login function work?

Error Log:

// Function: /middleware
[auth][error][Cs]: Please define a `secret`. Read more at https://errors.authjs.dev#cs

Edge Function bigger than expected on deploy

I tried starting a new app with the deploy button, but it's failing because of how big the edge function ends up after the build. Is it supposed to work this way, and should I upgrade my Vercel account, or am I doing something wrong?

Issues with chat list updating and multiple chat creation

Newly initiated chats do not appear in the chat list until the page is refreshed, though the new chat is still created.

A second new chat can't be created without a page refresh following the creation of the first one.


Help: how to convert LangChain streamedResponse to StreamingTextResponse (Vercel AI SDK)

Hi,

I have a backend API with the code for /api/chat as follows:

// note: `configuration`, `streamedResponse`, `userContent`, `chat_history`,
// `baseDirectoryPath`, and the prompt templates are defined elsewhere
const nonStreamingModel = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
}, configuration)

const streamingModel = new ChatOpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token) {
        streamedResponse += token;
      },
    },
  ],
}, configuration)

const embedding = new OpenAIEmbeddings({}, configuration)

try {
  const vectorStore = await HNSWLib.load(`${baseDirectoryPath}/docs/index/data/`, embedding);

  const chain = ConversationalRetrievalQAChain.fromLLM(
    streamingModel,
    vectorStore.asRetriever(), {
      memory: new BufferMemory({
        memoryKey: "chat_history",
        returnMessages: true
      }),
      questionGeneratorChainOptions: {
        llm: nonStreamingModel,
        template: await CONDENSE_PROMPT.format({ question: userContent, chat_history }),
      },
      qaChainOptions: {
        type: "stuff",
        prompt: QA_PROMPT
      }
    }
  );

  await chain.call({ question: userContent });
  return streamedResponse

The front-end I am using is vercel/ai-chatbot. The chat page retrieves a StreamingTextResponse and binds it to the UI.

I am really unsure about how to convert the LangChain stream text for the front-end.

Thank you for your help!
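A sketch of one way to adapt this, assuming the LangChainStream helper from the ai package and the same chain as above: attach the stream handlers instead of hand-accumulating tokens, and return the stream immediately rather than the finished string:

import { StreamingTextResponse, LangChainStream } from 'ai'

const { stream, handlers } = LangChainStream()

// the handlers write each token into the stream and close it when the
// model finishes
const streamingModel = new ChatOpenAI(
  { streaming: true, callbacks: [handlers] },
  configuration
)

// build the chain exactly as above, then start it without awaiting so
// tokens flow to the client as they arrive
chain.call({ question: userContent }).catch(console.error)

return new StreamingTextResponse(stream)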

Getting a 429 response when using the default setup with OpenAI

After deploying ai-chatbot and signing into the app, the prompting doesn't work.

The Vercel platform logs show:

Error: Failed to convert the response to stream. Received status code: 429.
    at (node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/ai/dist/index.mjs:112:10)
    at (node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/ai/dist/index.mjs:140:9)
    at (app/api/chat/route.ts:38:17)
    at (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/esm/server/future/route-modules/app-route/module.js:210:36)

This makes me think there's an issue interacting with OpenAI that causes a rate-limit error response?

What is the purpose of previewToken being passed into the POST request's body?

In app/api/chat.ts, the previewToken is being sent in the request's body, and is then being used here:

  if (previewToken) {
    configuration.apiKey = previewToken;
  }

I modified the code to store the previewToken in the browser's local storage and send it in the request, assuming that the OpenAIApi config would be updated with the new key in the POST request, but that doesn't seem to be happening. The OpenAIApi object is created outside the POST handler, so I wonder what purpose the above code serves. Would we have to create a new OpenAIApi object inside the POST handler every time we wanted to use a user-defined API key that isn't stored in a DB?
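One sketch that sidesteps the question entirely (an assumption about intent, not the template's documented design): construct the client inside the handler, so a user-supplied key always takes effect and cannot leak between concurrent requests through a mutated module-level Configuration:

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages, previewToken } = await req.json()

  // build the client per request so the per-user key is actually used
  const openai = new OpenAIApi(
    new Configuration({
      apiKey: previewToken ?? process.env.OPENAI_API_KEY
    })
  )

  const res = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages,
    stream: true
  })

  return new StreamingTextResponse(OpenAIStream(res))
}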

Metadata warning in development mode

I got this warning when running in development mode (localhost): warn metadata.metadataBase is not set for resolving social open graph or twitter images, fallbacks to "http://localhost:3000". See https://nextjs.org/docs/app/api-reference/functions/generate-metadata#metadatabase
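The usual fix (the URL below is a placeholder) is to set metadataBase in the metadata export of app/layout.tsx:

import type { Metadata } from 'next'

export const metadata: Metadata = {
  // hypothetical deployment URL; Next.js resolves relative open graph and
  // twitter image paths against it
  metadataBase: new URL('https://my-chatbot.vercel.app'),
  title: 'Next.js AI Chatbot'
}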

Unhandled Runtime Error - Unauthorized

Your documentation tells us how to configure it, but we don't have a URL for the GitHub OAuth callback.

Somehow it seems to work unauthorized when deployed, but login doesn't work in either the local or the deployed version.

Unhandled Runtime Error
Error: Unauthorized

Call Stack
eval
node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/ai/react/dist/index.mjs (120:0)
Generator.next

fulfilled
node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/ai/react/dist/index.mjs (22:0)

And the same error appears on prod.

Unhandled Runtime Error Error: Unauthorized

Hello everyone,

I'm reaching out for assistance regarding an issue I'm facing while following the documentation. I have diligently followed all the instructions and properly configured the .env and .env.local files as specified. However, despite these efforts, I'm still encountering an error.

I would greatly appreciate any guidance or support you can provide to help me resolve this matter. If there are any additional steps or troubleshooting suggestions that I may have missed, please let me know.

Thank you for your attention and assistance in this matter.

The error I am facing (screenshot)

My .env configuration file (screenshot)

My .env.local configuration file (screenshot)

Method expects to have requestAsyncStorage, none available

Hello, I had to update to Next.js 13.4.8 to increase the serverActionsBodySizeLimit. After updating I am getting the following error:

Error: Invariant: Method expects to have requestAsyncStorage, none available

components\header.tsx (18:28) @ auth

  16 | 
  17 | export async function Header() {
> 18 | const session: any =  auth()
     |                          ^

Any ideas how I can solve this while staying on 13.4.8 (because that's when serverActionsBodySizeLimit was added)? Thanks!

Unhandled Runtime Error

I have made zero changes to the sample app. It runs, but when you type anything you get this:


Operating System:
  Platform: darwin
  Arch: arm64
  Version: Darwin Kernel Version 23.0.0: Mon May 22 22:52:05 PDT 2023; root:xnu-10002.0.40.505.5~4/RELEASE_ARM64_T6000
Binaries:
  Node: 18.16.0
  npm: 9.5.1
  Yarn: 1.22.19
  pnpm: N/A
Relevant packages:
  next: 13.4.7-canary.1
  eslint-config-next: 13.4.7-canary.1
  react: 18.2.0
  react-dom: 18.2.0
  typescript: 5.1.3

Setting up GitHub login

How do I set up the GitHub login? I followed the instructions, but when I click Login and authorize my GitHub account, the chat UI just refreshes and still shows the Login option. When I click Login again, it just refreshes the page.

It's tough to type in Japanese!!

Watch the video. With Japanese input, when you convert from Hiragana to Kanji, the Enter press that confirms the conversion also sends the text. I would like this to be corrected.
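A common fix (a sketch, not the template's current code) is to ignore Enter while an IME composition is in progress, which the browser exposes as isComposing on the native event:

// e.g. in the prompt textarea's onKeyDown handler; onSubmit is hypothetical
function handleKeyDown(
  e: React.KeyboardEvent<HTMLTextAreaElement>,
  onSubmit: () => void
) {
  // during Hiragana-to-Kanji conversion, Enter confirms the IME candidate
  // and isComposing is true, so the form must not be submitted
  if (e.key === 'Enter' && !e.shiftKey && !e.nativeEvent.isComposing) {
    e.preventDefault()
    onSubmit()
  }
}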


Agent streaming

We have an example of streaming from an LLM, but does anyone have a working example of using a LangChain agent with memory and streaming? The following does not work:

import { StreamingTextResponse, LangChainStream, Message } from 'ai'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { initializeAgentExecutorWithOptions } from 'langchain/agents'
import { Calculator } from 'langchain/tools/calculator'

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages } = await req.json()

  const { stream, handlers } = LangChainStream()

  const model = new ChatOpenAI({
    streaming: true,
    callbacks: [handlers]
  })

  const tools = [new Calculator()]

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: 'chat-zero-shot-react-description',
    returnIntermediateSteps: true
  })

  for (const message of messages as Message[]) {
    const input = message.content
    await executor.call({ input }).catch(console.error)
  }

  return new StreamingTextResponse(stream)
}

Error 405 when hitting api/chat endpoint

I'm consistently getting a 405 error when trying to send a request to the api/chat endpoint for a chatCompletion streaming response. I'm not sure what's going on. The route returns 404 when trying to run it locally without deploying.

Here is the code for the route:

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

export const runtime = "edge";

const configuration = new Configuration({
  apiKey: "API"
})

const openai = new OpenAIApi(configuration)

export async function POST(req: Request) {
  const json = await req.json()
  const { messages } = json

  const res = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages,
    stream: true
  })

  const stream = OpenAIStream(res)

  return new StreamingTextResponse(stream)
}

I'm using the useChat() hook to send the request.

OAuth Provider returned an error. Read more at https://errors.authjs.dev

The default deployment as of today works fine locally, but fails on Vercel with this very opaque error (screenshot omitted).

I also encountered a second error (screenshot omitted).

I think the problem arises because the .env file is undocumented and underspecified. Your production deployment clearly has a different list of .env variables than what you put in the example.

Can't sign in with GitHub (deployed on Vercel)

I just deployed this on Vercel and I can't sign in with GitHub. I already redeployed and recloned.

Here are all the changes I did.

  1. Add the required keys (OPENAI_API_KEY, AUTH_GITHUB_SECRET, AUTH_GITHUB_ID)
  2. Rename opengraph-image.tsx and remove the export const runtime = 'edge' line
  3. Redeploy
  4. Fix the https://errors.authjs.dev#errors_missingsecret error by adding a NEXTAUTH_SECRET environment variable. Redeploy.
  5. Add https://nextjs-chat-DOMAIN.vercel.app/ as my Homepage URL and https://nextjs-chat-domain.vercel.app/api/auth/callback/github as Authorization callback URL for the Github oauth app.

This is what it's leaving me with (screenshot omitted).

The logs look normal (this is my first time deploying on Vercel; I don't know how to expand them).

I also tried to run it locally and changed the Authorization callback URL, but the result is the same.

Feature request: implement chatRequestOptions in API route

Is it possible to receive the chatRequestOptions from the useChat 'append' function in the /api/chat/route.ts file?

For context, I'm trying to set a different system message to createChatCompletion based on the example message choice in empty-screen.tsx.

I read the definition of the 'append' type, and it is supposed to support options to pass to the API call:

 /**
     * Append a user message to the chat list. This triggers the API call to fetch
     * the assistant's response.
     * @param message The message to append
     * @param options Additional options to pass to the API call
     */
    append: (message: Message | CreateMessage, chatRequestOptions?: ChatRequestOptions) => Promise<string | null | undefined>;

However, if I send something like the following in empty-screen.tsx:

...
export interface EmptyScreenProps {
  append: (
    message: {
      id: string
      content: string
      role: 'user' | 'system' | 'assistant'
    },
    chatRequestOptions?: ChatRequestOptions | undefined
  ) => void
}

export function EmptyScreen({ append }: EmptyScreenProps) {
  const handleExampleClick = (message: string, choice: number) => {
    append(
      {
        id: nanoid(),
        content: message,
        role: 'user'
      },
      { conversationChoice: choice }
    )
  }
...

I do not see it in route.ts when logging the request or the body.

Is there a way to send data through the append function, as explained above? Thank you!
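One shape that may work, offered as an assumption about this version of the ai package rather than a documented answer: ChatRequestOptions carries an options field with headers and body, and extra body fields passed there are merged into the POST payload. A sketch of the empty-screen.tsx side:

append(
  {
    id: nanoid(),
    content: message,
    role: 'user'
  },
  // hypothetical: request options whose body is merged into the POST body
  { options: { body: { conversationChoice: choice } } }
)

The route would then read the field from the parsed JSON:

const { messages, conversationChoice } = await req.json()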

Clicking on the app logo will open a new tab

If I click the app logo on the header, it will open the same page in a new tab (due to target="_blank").

The only use case I could think of is if the user wants to open multiple windows to compare multiple models.
Is this the expected behavior? If so, please close this issue.

Streaming langchain openai functions with non-string output

I have a simple working example where I use langchain and stream the response by calling a chain (as opposed to an LLM). The route.ts file contains this:

import { StreamingTextResponse, LangChainStream } from 'ai'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { LLMChain } from "langchain/chains";

export const runtime = 'edge'

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);

export async function POST(req: Request) {

  const { messages } = await req.json()
  const { stream, handlers } = LangChainStream()
  const llm = new ChatOpenAI({
    streaming: true,
  })

  const chain = new LLMChain({
    prompt: chatPrompt,
    llm
  });

  const input = messages[messages.length - 1].content;

  chain.call(
    {
      text: input,
      input_language: "English",
      output_language: "Spanish",
    }, [handlers]
  )
  .then(result => {
    console.log("Output:", result);
  })
  .catch(console.error);

  return new StreamingTextResponse(stream)
}

This works correctly, meaning the frontend streams the response. When I console.log(result) in chain.call, I see Output: { text: 'Hola' }. This will probably be relevant to the actual issue below.

Langchain recently released an update that allows for creating chains that use openai functions and return structured output. A minimal example that I've implemented is this:

import { StreamingTextResponse, LangChainStream } from 'ai'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { z } from "zod";

import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { createStructuredOutputChainFromZod } from "langchain/chains/openai_functions";

const zodSchema = z.object({
  foods: z
    .array(
      z.object({
        name: z.string().describe("The name of the food item"),
      })
    )
    .describe("An array of food items mentioned in the text"),
});



export const runtime = 'edge'

export async function POST(req: Request) {

  const { messages } = await req.json()
  const { stream, handlers } = LangChainStream()


  const prompt = new ChatPromptTemplate({
    promptMessages: [
      SystemMessagePromptTemplate.fromTemplate(
        "List all food items mentioned in the following text."
      ),
      HumanMessagePromptTemplate.fromTemplate("{inputText}"),
    ],
    inputVariables: ["inputText"],
  });

  const llm = new ChatOpenAI({
    streaming: true,
  })

  const chain = createStructuredOutputChainFromZod(zodSchema, {
    prompt,
    llm,
  });

  const input = messages[messages.length - 1].content;

  chain.call(
      {inputText: input}, [handlers]
  ).then(
    result => {
      console.log("Output:", result);
    }
  )
  .catch(console.error);

  return new StreamingTextResponse(stream)
}

The problem here is that nothing is returned/streamed to the frontend. When I console.log(result) in chain.call, however, I see Output: { output: { foods: [ { name: 'steak' } ] } }. So I suppose the problem lies in the fact that this output is not being (correctly) written into the stream because it's JSON as opposed to a string. Is there any way I can fix this?
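Since a structured-output chain resolves to an object rather than emitting text tokens, one fallback (a sketch, assuming token streaming can be given up for this route) is to await the chain and return the JSON directly:

export async function POST(req: Request) {
  const { messages } = await req.json()
  const input = messages[messages.length - 1].content

  // build `prompt`, `llm`, and `chain` exactly as in the snippet above,
  // minus the streaming/handlers wiring; the structured-output chain
  // produces its result all at once, so there are no string tokens for
  // LangChainStream to forward
  const result = await chain.call({ inputText: input })

  return new Response(JSON.stringify(result.output), {
    headers: { 'Content-Type': 'application/json' }
  })
}

The frontend would then read the response as JSON instead of binding it through useChat's text stream.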

Vercel Build Error

I did the one-click deploy without modifying any code, but when it went to make a build:

Build Failed
The Edge Function "share/[id]" size is 1.32 MB and your plan size limit is 1 MB.

I'm on the hobby plan. Is there a way to decrease the size of this edge function so that I'm able to use it on the hobby plan?
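For what it's worth, the workaround reported in another issue above (removing the Edge runtime from the offending route so it builds as a regular serverless function, which is not subject to the 1 MB Edge Function limit) may apply here too. A sketch, assuming the route opts into the Edge runtime:

// in the route file that produces the oversized function,
// remove or comment out this line:
// export const runtime = 'edge'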

Thanks! This looks super cool for building off of. Can't wait to dig into it.
