cogentapps / chat-with-gpt
An open-source ChatGPT app with a voice
Home Page: https://www.chatwithgpt.ai
License: MIT License
Hi, thanks for your great project. If I want to host this project on my own server, what should I do? Is there any documentation I can refer to?
Delete a single chat
Please add the new GPT-4 models; I need the 32K-token version.
add an .env file to add the APIs
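A minimal sketch of what such an .env file could look like (the variable names are illustrative assumptions, not the project's actual configuration):

```
OPENAI_API_KEY=sk-...      # assumed name for the OpenAI key
ELEVENLABS_API_KEY=...     # assumed name for the ElevenLabs key
```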
Interested in adding support for gpt4?
Hi, please add a button to stop generating text.
Hey!
I was super excited to see that Whisper is now integrated into this remarkable app. But unfortunately the website doesn't work on Firefox Desktop (macOS) anymore. It just gets stuck at the three pulsating blue dots indicating a loading process. The same problem occurs when using Firefox for Android.
When using a Chromium-based browser, the website loads fine.
Also, I cannot get the speech recognition to work :( Neither on Chrome for Desktop (macOS) nor on Chrome for Android. When clicking the microphone button, the website asks for access to my microphone, which I grant it. Then the microphone turns red, indicating that it is recording, but nothing happens.
No text is being written into the text field, so I thought maybe it's just not visible, but just pressing send after speaking also doesn't work.
I lose my API keys every time I close my tab (self-destructing cookies). I'd like to set up some preconfigured accounts for family. Would it be possible to move API keys into backend storage rather than cookies?
Hi!
Wanted to test this out on my small home server but I couldn't get it up and running.
I get the error:
Attaching to chatwithgpt-cogentapps-1
chatwithgpt-cogentapps-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
chatwithgpt-cogentapps-1 exited with code 1
I've seen this before when the image wasn't ARM compatible.
Just FYI.
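If the published image is amd64-only, one common workaround (assuming Docker's QEMU emulation is available on the host) is to pin the platform in docker-compose; the service name below is illustrative:

```yaml
services:
  app:
    # Run the amd64 image under emulation on ARM hosts.
    platform: linux/amd64
```

Building the image locally on the ARM machine instead of pulling it is the other usual fix.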
Hello,
When attempting to copy code after it is presented (clicking the copy option), nothing is copied to the clipboard.
Tested on different machines (Windows/macOS) and in Safari, Firefox, Chrome, and Edge.
Hi! UI looks great. Was trying to do docker-compose up and got this error when attempting to run the container. Can you please advise? Thanks!
System: mac m1 pro; OS: Ventura 13.1
Full error message:
Attaching to chat-with-gpt-app-1
chat-with-gpt-app-1 |
chat-with-gpt-app-1 | > Chat with GPT [email protected] start
chat-with-gpt-app-1 | > npx ts-node src/index.ts
chat-with-gpt-app-1 |
chat-with-gpt-app-1 | /app/node_modules/ts-node/src/index.ts:859
chat-with-gpt-app-1 | return new TSError(diagnosticText, diagnosticCodes, diagnostics);
chat-with-gpt-app-1 | ^
chat-with-gpt-app-1 | TSError: ⨯ Unable to compile TypeScript:
chat-with-gpt-app-1 | src/object-store/sqlite.ts(29,34): error TS2339: Property 'value' does not exist on type '{}'.
chat-with-gpt-app-1 |
chat-with-gpt-app-1 | at createTSError (/app/node_modules/ts-node/src/index.ts:859:12)
chat-with-gpt-app-1 | at reportTSError (/app/node_modules/ts-node/src/index.ts:863:19)
chat-with-gpt-app-1 | at getOutput (/app/node_modules/ts-node/src/index.ts:1077:36)
chat-with-gpt-app-1 | at Object.compile (/app/node_modules/ts-node/src/index.ts:1433:41)
chat-with-gpt-app-1 | at Module.m._compile (/app/node_modules/ts-node/src/index.ts:1617:30)
chat-with-gpt-app-1 | at Module._extensions..js (node:internal/modules/cjs/loader:1245:10)
chat-with-gpt-app-1 | at Object.require.extensions.<computed> [as .ts] (/app/node_modules/ts-node/src/index.ts:1621:12)
chat-with-gpt-app-1 | at Module.load (node:internal/modules/cjs/loader:1069:32)
chat-with-gpt-app-1 | at Function.Module._load (node:internal/modules/cjs/loader:904:12)
chat-with-gpt-app-1 | at Module.require (node:internal/modules/cjs/loader:1093:19) {
chat-with-gpt-app-1 | diagnosticCodes: [ 2339 ]
chat-with-gpt-app-1 | }
chat-with-gpt-app-1 exited with code 1
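For context, a sketch of why this class of error appears and how it is typically fixed: the sqlite3 callback passes rows with no useful static type, so strict TypeScript rejects `row.value` with TS2339. Asserting the expected row shape satisfies the compiler; the interface below is an assumption, not the project's actual schema.

```typescript
// Assumed shape of an object-store row (illustrative).
interface KVRow {
    value: string;
}

// Narrow the untyped row before accessing properties; under strict
// settings, accessing .value on {} or unknown raises TS2339.
function readValue(row: unknown): string {
    return (row as KVRow).value;
}
```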
Great project. I was thinking of building something like this myself but just don't have the time. In my original plan, the system prompt would be a session-level config, so the user can switch between different types of sessions. There should also be a prompt store; in each session, the user could use different prompt templates. Just sharing some ideas.
chat-with-gpt/app/src/openai.ts
Line 90 in 1998a66
I think it should be 4096 for gpt-3.5-turbo and 8192 for gpt-4?
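The suggestion above can be sketched as a per-model lookup instead of a hard-coded constant (the values reflect OpenAI's published context windows at the time; the function name is illustrative):

```typescript
// Context-window sizes, in tokens, per model.
const CONTEXT_WINDOW: Record<string, number> = {
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
};

// Fall back to the smallest known limit for unrecognized models.
function maxTokensFor(model: string): number {
    return CONTEXT_WINDOW[model] ?? 4096;
}
```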
Would be cool if it could support https://github.com/ggerganov/llama.cpp
Is it on your roadmap?
The example uses auth0 for login. Can there be an example in the readme regarding
Some folks have access to the new gpt-4 version of chat and it would be great to configure the app to allow us to use it
Can we do Discord/Slack style conversation threads?
I have investigated some Python libraries like langchain and gpt_index; they could create something like the new Bing, or even more powerful functionality. Perhaps the server part should be implemented in Python and use libraries like langchain to extend ChatGPT's abilities. I have found an interesting repo: https://github.com/circlestarzero/EX-chatGPT/blob/main/README.en.md
Hi, I don't see an option to delete a chat. Please add it.
Thanks for your repo first! 👍
Here is my idea:
Imagine we have a list of system prompts; clicking each item switches the chat dialog to a different, specific chat app. Doesn't that sound interesting?
Btw, adding a speech-to-text feature, like Azure's, could make it feel more like a real chat 😄
pls option ^^
Why not make the conversation continue automatically? Currently you need to manually click to play the voice. I think I am going to modify the code myself.
Please, move copy button for code blocks to the right side of the block, just like ChatGPT window
Running 'docker-compose up' yields the below:
⠿ Container chat-with-gpt-app-1 Created 0.0s
Attaching to chat-with-gpt-app-1
chat-with-gpt-app-1 |
chat-with-gpt-app-1 | > Chat with GPT [email protected] start
chat-with-gpt-app-1 | > npx ts-node src/index.ts
chat-with-gpt-app-1 |
chat-with-gpt-app-1 | /app/node_modules/ts-node/src/index.ts:859
chat-with-gpt-app-1 | return new TSError(diagnosticText, diagnosticCodes, diagnostics);
chat-with-gpt-app-1 | ^
chat-with-gpt-app-1 | TSError: ⨯ Unable to compile TypeScript:
chat-with-gpt-app-1 | src/database/sqlite.ts(86,25): error TS2698: Spread types may only be created from object types.
chat-with-gpt-app-1 | src/database/sqlite.ts(87,51): error TS18046: 'row' is of type 'unknown'.
chat-with-gpt-app-1 | src/database/sqlite.ts(88,43): error TS18046: 'row' is of type 'unknown'.
chat-with-gpt-app-1 | src/database/sqlite.ts(118,25): error TS18046: 'row' is of type 'unknown'.
chat-with-gpt-app-1 | src/database/sqlite.ts(118,47): error TS18046: 'row' is of type 'unknown'.
chat-with-gpt-app-1 |
chat-with-gpt-app-1 | at createTSError (/app/node_modules/ts-node/src/index.ts:859:12)
chat-with-gpt-app-1 | at reportTSError (/app/node_modules/ts-node/src/index.ts:863:19)
chat-with-gpt-app-1 | at getOutput (/app/node_modules/ts-node/src/index.ts:1077:36)
chat-with-gpt-app-1 | at Object.compile (/app/node_modules/ts-node/src/index.ts:1433:41)
chat-with-gpt-app-1 | at Module.m._compile (/app/node_modules/ts-node/src/index.ts:1617:30)
chat-with-gpt-app-1 | at Module._extensions..js (node:internal/modules/cjs/loader:1245:10)
chat-with-gpt-app-1 | at Object.require.extensions.<computed> [as .ts] (/app/node_modules/ts-node/src/index.ts:1621:12)
chat-with-gpt-app-1 | at Module.load (node:internal/modules/cjs/loader:1069:32)
chat-with-gpt-app-1 | at Function.Module._load (node:internal/modules/cjs/loader:904:12)
chat-with-gpt-app-1 | at Module.require (node:internal/modules/cjs/loader:1093:19) {
chat-with-gpt-app-1 | diagnosticCodes: [ 2698, 18046, 18046, 18046, 18046 ]
chat-with-gpt-app-1 | }
chat-with-gpt-app-1 exited with code 1
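For context, both diagnostics come from sqlite3 rows being typed as `unknown`: TS2698 forbids spreading an unknown value, and TS18046 forbids property access on it. A sketch of the usual fix, asserting an assumed row shape (illustrative, not the project's actual schema):

```typescript
// Assumed shape of a chat row (illustrative).
interface ChatRow {
    id: string;
    title: string | null;
}

function toChat(row: unknown) {
    // Once the row is narrowed to an object type, spreading (TS2698)
    // and property access (TS18046) both type-check.
    const r = row as ChatRow;
    return { ...r, title: r.title ?? "Untitled" };
}
```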
In long conversations, the chat suddenly stops working. There is no response or error message, and the spinner keeps showing in the message field and in the ChatGPT reply.
The problem always occurs when the same conversation is used for a long time with a large number of requests, or when too many characters are sent in one request.
Expected behavior: the conversation continues and requests are shortened correctly; an error is shown instead of sending a request with too many characters.
The conversation cannot be continued and can no longer be used. When the "refresh" button is clicked, an error message saying "Too many requests, please try again later" may appear. It seems the conversation is not being trimmed properly, and the request may have fallen into a loop. There also appears to be no limit on the number of characters that can be sent in one request (such as asking for a summary of an overly long text), and the code that estimates how many tokens the conversation requires from the character count of the request does not seem to work correctly.
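The trimming the reporter describes can be sketched as follows: a rough client-side heuristic (~4 characters per token) that keeps only the most recent messages fitting the budget. The names and the heuristic are assumptions, not the app's actual code.

```typescript
interface Message {
    role: string;
    content: string;
}

// Rough token estimate: roughly 4 characters per token for English text.
function estimateTokens(text: string): number {
    return Math.ceil(text.length / 4);
}

// Walk backwards from the newest message, keeping messages until the
// token budget is exhausted, so the request never overflows the
// model's context window.
function trimConversation(messages: Message[], maxTokens: number): Message[] {
    const kept: Message[] = [];
    let used = 0;
    for (let i = messages.length - 1; i >= 0; i--) {
        const cost = estimateTokens(messages[i].content);
        if (used + cost > maxTokens) break;
        kept.unshift(messages[i]);
        used += cost;
    }
    return kept;
}
```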
Running Docker based on the latest Dockerfile.
Failed to load resource: the server responded with a status of 429 ()
api.openai.com/v1/chat/completions:1 Failed to load resource: the server responded with a status of 500 ()
Unexpected Application Error!
Loading chunk 688 failed. (error: https://anonymizedHostname/static/js/688.0767ad51.chunk.js)
ChunkLoadError: Loading chunk 688 failed.
(error: https://anonymizedHostname/static/js/688.0767ad51.chunk.js)
at n.f.j (https://anonymizedHostname/static/js/main.c391d620.js:2:624098)
at https://anonymizedHostname/static/js/main.c391d620.js:2:621888
at Array.reduce (<anonymous>)
at n.e (https://anonymizedHostname/static/js/main.c391d620.js:2:621853)
at https://anonymizedHostname/static/js/main.c391d620.js:2:899651
at j (https://anonymizedHostname/static/js/main.c391d620.js:2:565799)
at ku (https://anonymizedHostname/static/js/main.c391d620.js:2:532565)
at wl (https://anonymizedHostname/static/js/main.c391d620.js:2:521197)
at gl (https://anonymizedHostname/static/js/main.c391d620.js:2:521125)
at yl (https://anonymizedHostname/static/js/main.c391d620.js:2:520988)
GET https://anonymizedHostname/static/js/message.35fa559f.chunk.js net::ERR_ABORTED 429
React Router caught the following error during render ChunkLoadError: Loading chunk 688 failed.
(error: https://anonymizedHostname/static/js/688.0767ad51.chunk.js)
ChunkLoadError: Loading chunk 688 failed.
(error: https://anonymizedHostname/static/js/688.0767ad51.chunk.js)
error loading chat Error: MiniSearch: cannot remove document with ID 850a18e7-6806-4c20-81ea-32eb514028f8: it is not in the index
Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'role')
When the answer and its audio are ready, how can I configure it to play the speech automatically?
Hello, thank you for creating this wonderful tool!
As the title suggests, could you consider adding an option to customize the prompt for each new conversation? Really appreciated!
Dear cogentapps,
I couldn't find a different way to contact you, so I have to do it like this, unfortunately. First: Thank you so much for this app and for making it open source! I've been looking for a ChatGPT integration that uses natural sounding TTS since like forever.
Do you by any chance plan to implement OpenAI's Whisper for voice recognition and enable automatic playback of ChatGPT's answer? I'm dreaming about having a continuous hands-free conversation with ChatGPT.
Thank you :)
First off, wow, impressive. I was about to build this myself to get around GPT-4 limits. And I was going to give it Eleven labs voice. You already did it. Which is awesome. Great stuff here. I’ll be using this a lot.
I know this is open source and I will dig around. But I want to hook this chat UI that you’ve built up to lang chain, so I can retrieve data from external sources and also issue natural language commands through Zapier.
Can you point me in the right direction to which files I need to look at in order to override the actual API requests so I can run them through custom functions that attach Lang Chain tools to the model?
Cheers.
Any plans to update to use the GPT-4 API as an option? Or the other engines?
I would like to propose an enhancement. Specifically, I suggest that we allow users to rename previous chats and delete them (#9)
https://www.reddit.com/r/ChatGPT/comments/11hgtrk/chatgpt_can_generate_simple_images_via_svg/
Often ChatGPT can produce the SVG code for simple images. We can render them in the UI. I've also noticed that ChatGPT can sometimes provide Imgur links, but they rarely work.
Hello, the container starts on Windows 11, but localhost:3000 does not work; there is an error inside the container:
2023-03-20 09:54:17 Configuring Passport.
2023-03-20 09:54:18 Listening on port 3000.
2023-03-20 09:54:13 npm ERR! path /app
2023-03-20 09:54:13 npm ERR! command failed
2023-03-20 09:54:13 npm ERR! signal SIGTERM
2023-03-20 09:54:13 npm ERR! command sh -c -- npx ts-node --logError src/index.ts
2023-03-20 09:54:13
2023-03-20 09:54:13 npm ERR! A complete log of this run can be found in:
2023-03-20 09:54:13 npm ERR! /root/.npm/_logs/2023-03-20T06_49_44_043Z-debug-0.log
How can I fix it?
Hi!
I read your Reddit thread and the documentation. Hoping you could also share how to implement the authentication method.
Thank you.
Hello,
Thanks for the project.
When creating an account, and sometimes when logging in, the server crashes and I can see the following on the console:
2023-03-17 12:30:50 /app/src/database/sqlite.ts:92
2023-03-17 12:30:50 passwordHash: Buffer.from(row.password_hash),
2023-03-17 12:30:50 ^
2023-03-17 12:30:50 TypeError: Cannot read properties of undefined (reading 'password_hash')
2023-03-17 12:30:50 at Statement. (/app/src/database/sqlite.ts:92:55)
2023-03-17 12:30:50 at Statement.replacement (/app/node_modules/sqlite3/lib/trace.js:25:27)
2023-03-17 12:30:50 at Statement.replacement (/app/node_modules/sqlite3/lib/trace.js:25:27)
On the browser it just shows "empty response".
It is hosted using docker and this has been tested on two completely different machines (hosted on two different machines and different networks).
Also tested on Edge & Chrome & Firefox.
If you need any other details please let me know.
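For context, a sketch of the likely failure mode and its guard: sqlite3's `db.get()` invokes its callback with `row === undefined` when no record matches, so dereferencing `row.password_hash` throws exactly this TypeError. Checking the row first turns the crash into a clean "not found" result (the names below are illustrative, not the project's actual code).

```typescript
// Assumed shape of a user row (illustrative).
interface UserRow {
    id: string;
    password_hash: Buffer;
}

// Return null instead of crashing when no matching user exists;
// db.get() passes undefined for an empty result set.
function parseUser(row: UserRow | undefined) {
    if (!row) {
        return null;
    }
    return { id: row.id, passwordHash: Buffer.from(row.password_hash) };
}
```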
For privacy reasons, I wanted to self-host this app via Docker.
Everything works fine, but there is no microphone button to start voice recognition.
I already updated to the newest version. Is the Docker version behind the one hosted on https://chatwithgpt.netlify.app/ ?
Why do users need to manually trigger the voice response generated with the ElevenLabs speech API? The whole point of having it is to make the chat experience more organic, so if a user has activated the API in their settings, shouldn't voice responses be played automatically? Or is this due to some technical limitation?
Currently the favicon shows the React logo. It'd be nice if the project had its own logo.
Increase background contrast between YOU and ChatGPT text blocks, for better visualization. Thanks!
I think it would be really cool if OpenAI's whisper could be used as an input method for text if possible.
Currently using the tool for case interviews and think it would be really helpful if I could evaluate my spoken responses. Could be a cool way to interact with the GPT API!
[never mind]
I created a user, and it registered fine.
I can log in from different browsers/machines, but it is not remembering my API key or my prompt.
I think it should definitely remember the API key. I can see it stores the information in cookies, but not with the created user.
Also would be nice to be able to delete chats.
Great work so far, love it.
Which version of the GPT API is used for this? Can it support GPT-4 later?
Can you provide a tutorial on how to deploy on Netlify?
Hello, impressive project! Your project is getting a lot of traction and many folks would be interested in self-hosting it. You already have the Dockerfile ready, would you mind publishing a public Docker image which people can just pull? I think a GitHub action like this would suffice.
Thanks!
When the answer is longer than the chat window, the remaining text goes below the visible area. How can the page be auto-scrolled instead, to keep the full response in view?
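One common approach, sketched under the assumption that the messages live in a scrollable container: scroll to the bottom after each update, but only when the user was already near the bottom, so scrolling up to reread is not hijacked. The check below is pure logic; the commented DOM usage (including the element id) is illustrative.

```typescript
// True when the viewport bottom is within `threshold` px of the
// content bottom, i.e. the user has not scrolled up to read.
function shouldAutoScroll(
    scrollTop: number,
    clientHeight: number,
    scrollHeight: number,
    threshold = 64,
): boolean {
    return scrollHeight - (scrollTop + clientHeight) <= threshold;
}

// Browser usage (assumed element id), after appending streamed text:
// const el = document.getElementById("messages")!;
// if (shouldAutoScroll(el.scrollTop, el.clientHeight, el.scrollHeight)) {
//     el.scrollTop = el.scrollHeight;
// }
```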
npm run build
> [email protected] build
> craco build
craco: *** Cannot find ESLint loader (eslint-loader). ***
Creating an optimized production build...
(node:183216) [DEP_WEBPACK_COMPILATION_NORMAL_MODULE_LOADER_HOOK] DeprecationWarning: Compilation.hooks.normalModuleLoader was moved to NormalModule.getCompilationHooks(compilation).loader
(Use `node --trace-deprecation ...` to show where the warning was created)
Failed to compile.
Module parse failed: Internal failure: parseVec could not cast the value
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
Error: Internal failure: parseVec could not cast the value
The temperature slider responds very slowly after the user moves it.
Just a suggestion: implement export of the whole conversation to PDF or Markdown.
That would be great! 😀
I cannot find the ElevenLabs API key. It says the xi-api-key is in the 'Profile' tab on https://beta.elevenlabs.io, but the 'Profile' tab doesn't exist.