mckaywrigley / chatbot-ui
AI chat for every model.
Home Page: https://chatbotui.com
License: MIT License
Alpaca.cpp (https://github.com/antimatter15/alpaca.cpp/) runs pretty decently on local machines and M1 Macs. I am wondering if it would be possible to add an Alpaca model to the dropdown? It would be very awesome to have a ChatGPT clone running completely locally.
Great app! Here are a few possible improvements...
Streaming API response -- currently you have to wait for the API completion to finish. I think the stream: true API parameter does this.
Stop API generation -- if the response is streaming, a button so you can cut the bot off mid-generation, like with the original.
Flash of pure white screen on page load and reload -- quite painful in dark mode.
I followed the instructions to clone and run locally, and I get an error each time I ask the UI a question.
I set up my .env.local properly (I think).
[Error: OpenAI API returned an error]
It would be nice to know what the error is!
Add a CI step to build and publish the docker image for this repo to the GitHub package registry.
Use GitHub Actions to define a build/deploy step that runs on each commit to the main branch. The CI step should build the Docker image and publish it to the GitHub package registry for this repo under both a tag with the short commit hash and the latest tag.
This would allow users to pull an already-built docker image from the registry and enable easier use and integration of new changes in hosted docker environments.
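A minimal workflow sketch along these lines (file path, action versions, and the Dockerfile location at the repo root are all assumptions, not taken from the repo):

```yaml
# Sketch of .github/workflows/docker.yml. The image name follows the
# ghcr.io/<owner>/<repo> convention; action versions and the Dockerfile
# location (repo root) are assumptions.
name: Publish Docker image

on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Log in to the GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          # github.sha is the full hash; producing the short form would need
          # an extra step (e.g. a shell step exporting ${GITHUB_SHA::7}).
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}
```

The built-in `GITHUB_TOKEN` is enough for pushing to this repo's own package registry, so no extra secret should be required.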
My API key gives me access to the gpt-4 model, and un-commenting
Line 4 in c40f755
Line 10 in c40f755
enables it. Now, I guess you kept it commented in order to avoid being flooded with issues from people without gpt-4 access saying that GPT-4 is producing errors.
I wrote the following API route that provides a list of the supported available models given an API key, but I'm not familiar with Next.js, and I see issues with spamming OpenAI again and again just to check the available models. Caching should be used. Anyway, here's the endpoint code in case part of it can be useful (maybe with getServerSideProps?)
// api/models.ts
import { OpenAIModel } from '@/types';

export const config = {
  runtime: 'edge',
};

type Model = {
  id: string;
};

type ModelsData = {
  data: Model[];
};

const handler = async (req: Request): Promise<Response> => {
  try {
    // Prefer a key sent in the request body, fall back to the server's env key.
    let key: string | null = null;
    try {
      key = (await req.json())?.key;
    } catch (e) {
      // No JSON body; fall through to the env key.
    }

    const res = await fetch('https://api.openai.com/v1/models', {
      headers: {
        Authorization: `Bearer ${key ? key : process.env.OPENAI_API_KEY}`,
      },
    });

    if (res.status !== 200) {
      throw new Error(`OpenAI API returned an error ${res.status}: ${await res.text()}`);
    }

    const { data } = (await res.json()) as ModelsData;

    // Keep only the model ids this app knows how to drive.
    const availableModelIds = data.map((model: Model) => model.id);
    const supportedModels = availableModelIds.filter((x) =>
      (Object.values(OpenAIModel) as string[]).includes(x),
    );

    return new Response(JSON.stringify(supportedModels), {
      status: 200,
      headers: {
        'content-type': 'application/json',
      },
    });
  } catch (error) {
    console.error(error);
    return new Response('Error', { status: 500 });
  }
};

export default handler;
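To address the "spamming OpenAI" concern mentioned above, one option is a small in-memory TTL cache around the fetch. A minimal sketch (the key scheme, TTL, and function names are illustrative, not from the repo):

```typescript
// Minimal TTL cache sketch for the /api/models route above.
// Assumption: caching the model list per API key is acceptable,
// and a 10-minute TTL is fresh enough.
type CacheEntry<T> = { value: T; expires: number };

const cache = new Map<string, CacheEntry<string[]>>();
const TTL_MS = 10 * 60 * 1000;

export function getCached(key: string): string[] | undefined {
  const entry = cache.get(key);
  if (entry && entry.expires > Date.now()) return entry.value;
  cache.delete(key); // drop expired entries lazily
  return undefined;
}

export function setCached(key: string, value: string[]): void {
  cache.set(key, { value, expires: Date.now() + TTL_MS });
}
```

The handler would call `getCached` before hitting OpenAI and `setCached` after a successful response. Note that on an edge runtime each isolate gets its own Map, so this only reduces, not eliminates, upstream calls.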
Currently, when GPT writes a reply that exceeds the bottom of the page, your interface automatically scrolls down to keep the streaming text in view. However, it would be more user-friendly if this auto-scroll were disabled whenever the user manually scrolls, similar to how the official ChatGPT works.
Sometimes we want to start reading the top of the reply before it finishes streaming.
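One common way to implement this is to only auto-scroll while the user is already near the bottom of the scroll container. A framework-free sketch of the decision logic (the function name and the 30px threshold are illustrative):

```typescript
// Sketch: auto-scroll only while the user is already near the bottom.
// Feed it the container's scrollTop, clientHeight, and scrollHeight
// from the onScroll handler; if it returns false, skip scrollIntoView.
export function shouldAutoScroll(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  threshold = 30,
): boolean {
  // Distance between the viewport's bottom edge and the content's end.
  return scrollHeight - (scrollTop + clientHeight) <= threshold;
}
```

When the user scrolls back down to the bottom, the check starts passing again and streaming follows the text once more, which matches the official ChatGPT behavior described above.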
When using the light theme, the font color of the question input box is white, which makes it difficult to see what has been typed.
I'm trying to get things running locally. I've cloned the repo, installed dependencies, and provided my API key, but I keep receiving the message: Sorry, there was an error.
I've tried creating a new API key and it doesn't fix the issue, although I can see on OpenAI's site that my API key's "last used" updates to today, so it does hit it.
I've also searched for an error log file but couldn't find one. Any help would be appreciated, thanks! (Great work, by the way!)
Currently, you have a default system prompt that resets with every new chat.
Having the ability to store and reuse our own customized system prompts within the interface would be an extremely valuable addition. This would let users reuse prompts based on the specific behaviour they want from GPT in a given thread, instead of re-typing a new system prompt each time.
I believe the biggest strength of the API is being able to edit the whole conversation, similar to the OpenAI Playground. It would be great to have edit/delete features.
Hi, is the UI intended to replicate the original ChatGPT? Because right now it isn't the same, and some things hurt my eyes; there are also some layout issues. If you intend to replicate it 100%, I am happy to contribute on that end, fixing the mismatches and layout issues.
Most of the time we don't just want a free-form chat conversation, but a professional one with system role message(s) (a so-called prompt prefix), where we can control how much history context (from 0 to any number of messages) should be sent back with the next message.
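The request-building part of this can be sketched as a pure function: prepend the system prompt, then append only the last N history messages. The types and function name below are illustrative, not from the repo:

```typescript
// Sketch: assemble the messages array sent to the chat completions API
// from a system prompt plus a capped slice of recent history.
type Msg = { role: "system" | "user" | "assistant"; content: string };

export function buildRequestMessages(
  systemPrompt: string,
  history: Msg[],
  historyLimit: number,
): Msg[] {
  // historyLimit of 0 means "send no prior context at all".
  const recent = historyLimit > 0 ? history.slice(-historyLimit) : [];
  return [{ role: "system", content: systemPrompt }, ...recent];
}
```

A fuller version would cap by token count rather than message count, since the models enforce a context-length limit in tokens.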
Like ChatGPT, it would be nice to have a way to stop the model from generating, e.g. if you want to update the prompt.
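Since the browser `fetch` API accepts an abort signal, a Stop button usually boils down to an `AbortController`. A minimal sketch (the helper name is illustrative):

```typescript
// Sketch: wire a Stop button to an AbortController so an in-flight
// streaming fetch can be cancelled mid-generation.
export function createStoppableRequest() {
  const controller = new AbortController();
  return {
    signal: controller.signal,      // pass as { signal } to fetch()
    stop: () => controller.abort(), // call from the Stop button's onClick
  };
}
```

When `stop()` is called, the pending `fetch` rejects with an `AbortError`, which the chat handler can catch to finalize the partial message instead of treating it as a failure.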
It'd be cool to try out the multimodal capabilities of GPT-4. Thoughts?
I have a networking problem and it seems that I have to set a proxy. Message:
[TypeError: fetch failed] {
cause: [ConnectTimeoutError: Connect Timeout Error] {
name: 'ConnectTimeoutError',
code: 'UND_ERR_CONNECT_TIMEOUT',
message: 'Connect Timeout Error'
  }
}
After setting up env file, running "npm run dev" gives:
error - Class extends value undefined is not a constructor or null
This might be caused by a React Class Component being rendered in a Server Component, React Class Components only works in Client Components. Read more: https://nextjs.org/docs/messages/class-component-in-server-component
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/file.js (evalmachine.:5724:19)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/formdata.js (evalmachine.:5881:49)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/body.js (evalmachine.:6094:35)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/response.js (evalmachine.:6510:49)
at __require (evalmachine.:14:50)
at (evalmachine.:11635:30)
at requireFn (file://D:\software\projects\html\chatbot-ui\node_modules\next\dist\compiled\edge-runtime\index.js:1:7079) {
name: 'TypeError'
}
deploy local
While chatting with the bot, the user could add a URL for an article or for code on GitHub; the chatbot could detect that there is a URL, scrape it, extract the contents, and use them as context for the chat.
Just like the official ChatGPT website does.
Thank you for your work on this wonderful project.
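The first step of the feature above, detecting URLs in a message, can be sketched with a simple pattern match (the regex below is a rough illustration, not an exhaustive URL matcher):

```typescript
// Sketch: pull URLs out of a chat message so they can later be fetched
// and their contents injected as context.
export function extractUrls(message: string): string[] {
  // Matches http(s) runs up to whitespace or a closing paren; deliberately simple.
  return message.match(/https?:\/\/[^\s)]+/g) ?? [];
}
```

The harder parts are server-side: fetching the page, stripping boilerplate to plain text, and truncating it to fit the model's context window.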
Support searching within a conversation (along with the name of the conversation)
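A simple version of this search is a case-insensitive match over both the conversation name and its message contents. A sketch (the `Conversation` shape is illustrative, not the repo's actual type):

```typescript
// Sketch: does a conversation match a search query, by name or by any
// message body? Case-insensitive substring match for simplicity.
type Conversation = { name: string; messages: { content: string }[] };

export function matchesQuery(c: Conversation, query: string): boolean {
  const q = query.toLowerCase();
  return (
    c.name.toLowerCase().includes(q) ||
    c.messages.some((m) => m.content.toLowerCase().includes(q))
  );
}
```

The sidebar filter would then keep only conversations for which this returns true.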
After chatting back and forth a few times, the dialogue context seems to freeze and the responses to a new query stay the same.
Seems there is an issue with the dialogue state?
(GPT-3.5 default.)
How can I make the markdown code/math render like it does in the real ChatGPT? Thanks.
Instead of using the OpenAI API, for example, a user could use the Stanford Alpaca model training weights, if they were to obtain them.
error - Class extends value undefined is not a constructor or null
This might be caused by a React Class Component being rendered in a Server Component, React Class Components only works in Client Components. Read more: https://nextjs.org/docs/messages/class-component-in-server-component
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/file.js (evalmachine.:5724:19)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/formdata.js (evalmachine.:5881:49)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/body.js (evalmachine.:6094:35)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/response.js (evalmachine.:6510:49)
at __require (evalmachine.:14:50)
at (evalmachine.:11635:30)
at requireFn (file:///Users/bobjames/code/chatgpt/chatbot-ui/node_modules/next/dist/compiled/edge-runtime/index.js:1:7079) {
name: 'TypeError'
}
When chatting in a long thread, it can happen that we need some clarification or have a question about old messages in the same thread; currently we must copy the whole content, go down, ask the chat again, and paste the copied text.
With a reply feature, we could simply click "reply" and ask another question down in the chat, with the replied text automatically added to the context internally.
I've put my API key in correctly, but I only have access to 3.5 in the dropdown?
I noticed a small sidebar settings animation after clicking on the clear conversations button.
It's a styling issue that can be easily fixed.
Hello, I recently built a tree-like message history feature (link) and I would like to develop it further to make it more similar to a real ChatGPT conversation. However, implementing this feature would require refactoring the current structure of the conversation. I'm wondering if this is worth doing?
An error message should be displayed, and provide a retry button when an error occurs.
Use the OpenAI Whisper API to transcribe user voice to input into the chat field (OS-native transcription is inaccurate). There could be a passphrase (like "over out" or whatever) that functions as the [Enter] command and sends the query.
I could try to develop this feature if it's deemed valuable.
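The passphrase part of this idea can be sketched independently of Whisper: check whether the transcript ends with the passphrase, and if so, strip it and flag the message for sending. Names, normalization, and the default passphrase below are illustrative:

```typescript
// Sketch: treat a spoken passphrase at the end of a transcript as "send".
// Real input would be the text returned by the Whisper transcription call.
export function splitOnPassphrase(
  transcript: string,
  passphrase = "over out",
): { text: string; send: boolean } {
  const t = transcript.trim();
  if (t.toLowerCase().endsWith(passphrase)) {
    return {
      text: t.slice(0, t.length - passphrase.length).trim(),
      send: true,
    };
  }
  return { text: t, send: false };
}
```

When `send` is false, the transcript would just be appended to the input field and the user keeps dictating.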
We should refine the logic for creating new sessions to align it more closely with the ChatGPT website behavior. Specifically, when users click on "New Chat," they will be directed to an empty chat page, and the conversation will only be created in the sidebar after they send their first message.
Is it possible to replace the API request framework with https://github.com/transitive-bullshit/chatgpt-api, which has already wrapped the handling of exceeding the request limit?
I'm getting this:
$ npm run dev
> [email protected] dev
> next dev
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /Users/jay/Dropbox/github/chatbot-ui/.env.local
[Browserslist] Could not parse /Users/jay/Dropbox/github/package.json. Ignoring it.
[Browserslist] Could not parse /Users/jay/Dropbox/github/package.json. Ignoring it.
event - compiled client and server successfully in 1762 ms (173 modules)
wait - compiling...
event - compiled successfully in 178 ms (139 modules)
Add a "Continue" button at the bottom of a generated response that may have been cut off due to length.
This allows users to easily ask the chat for more text.
First of all, I'd like to thank you for creating and maintaining this wonderful project. I have been using it and find it very helpful.
However, I noticed that the chat input box loses focus after sending a message. This requires users to click on the input box again with the mouse before they can type another message, which can be a bit inconvenient.
I would like to request a feature that keeps the focus on the chat input box after sending a message. This would allow users to continue typing and sending messages more efficiently, without having to manually click on the input box each time.
Best regards