
ChatALL's Introduction

Chat with ALL AI Bots Concurrently, Discover the Best

Deutsch | English | Español | Français | Italian | 日本語 | 한국어 | Русский | Tiếng Việt | 简体中文

Open in GitHub Codespaces

Screenshots

Screenshot

Features

AI bots based on large language models (LLMs) are amazing. However, their behavior can be random, and different bots excel at different tasks. If you want the best experience, don't try them one by one. ChatALL (Chinese name: 齐叨) sends your prompt to several AI bots concurrently and helps you discover the best results. All you need to do is download, install and ask.

Is this you?

Typical users of ChatALL are:

  • 🤠Gurus of LLMs, who want to find the best answers or creations from LLMs.
  • 🤓Researchers of LLMs, who want to intuitively compare the strengths and weaknesses of various LLMs in different fields.
  • 😎Developers of LLM applications, who want to quickly debug prompts and find the best-performing foundation models.

Supported bots

| AI Bots | Web Access | API | Notes |
| --- | --- | --- | --- |
| 360 AI Brain | Yes | No API | |
| Baidu ERNIE | No | Yes | |
| Character.AI | Yes | No API | |
| ChatGLM2 6B & 130B | Yes | No API | No login required |
| ChatGPT | Yes | Yes | Web Browsing, Azure OpenAI service included |
| Claude | Yes | Yes | |
| Code Llama | Yes | No API | |
| Cohere Command R Models | No | Yes | |
| Copilot | Yes | No API | |
| Dedao Learning Assistant | Coming soon | No API | |
| Falcon 180B | Yes | No API | |
| Gemini | Yes | Yes | |
| Gemma 2B & 7B | Yes | No API | |
| Gradio | Yes | No API | For Hugging Face space/self-deployed models |
| Groq Cloud | No | Yes | |
| HuggingChat | Yes | No API | |
| iFLYTEK SPARK | Yes | Coming soon | |
| Kimi | Yes | No API | |
| Llama 2 13B & 70B | Yes | No API | |
| MOSS | Yes | No API | |
| Perplexity | Yes | No API | |
| Phind | Yes | No API | |
| Pi | Yes | No API | |
| Poe | Yes | Coming soon | |
| SkyWork | Yes | Coming soon | |
| Tongyi Qianwen | Yes | Coming soon | |
| Vicuna 13B & 33B | Yes | No API | No login required |
| WizardLM 70B | Yes | No API | |
| YouChat | Yes | No API | |
| You | Yes | No API | |
| Zephyr | Yes | No API | |

More are coming. Upvote your favorite bots in these issues.

Other features

  • Quick-prompt mode: send the next prompt without waiting for the previous request to complete
  • Save chat history locally to protect your privacy
  • Highlight the responses you like, delete the bad ones
  • Enable/disable any bot at any time
  • Switch between one-, two-, and three-column views
  • Auto-update to the latest version
  • Dark mode (contributed by @tanchekwei)
  • Shortcut keys. Press Ctrl + / to see all of them (contributed by @tanchekwei)
  • Multiple chats (contributed by @tanchekwei)
  • Proxy settings (contributed by @msaong)
  • Prompt management (contributed by @tanchekwei)
  • Multi-language support (Chinese, English, German, French, Russian, Vietnamese, Korean, Japanese, Spanish, Italian)
  • Runs on Windows, macOS and Linux

Planned features:

You are welcome to contribute to these features.

  • Deploy front-end to GitHub Pages

Privacy

All chat history, settings and login data are saved locally on your computer.

ChatALL collects anonymous usage data to help us improve the product, including:

  • Which AI bots are prompted and how long the prompt is. The prompt content itself is not collected.
  • How long the response is, and which responses are deleted or highlighted. The response content itself is not collected.

Prerequisites

ChatALL is a client, not a proxy. Therefore, you must:

  1. Have working accounts and/or API tokens for the bots.
  2. Have reliable network connections to the bots.

Download / Install

Download from https://github.com/sunner/ChatALL/releases

On Windows

Just download the *-win.exe file and proceed with the setup.

On macOS

For Apple Silicon Mac (M1, M2 CPU), download the *-mac-arm64.dmg file.

For other Macs, download the *-mac-x64.dmg file.

If you are using Homebrew, you can also install it with:

brew install --cask chatall

On Linux

Debian-based distributions: download the .deb file, double-click it, and install the software.

Arch-based distributions: ChatALL is available in the AUR; you can install it manually or with an AUR helper such as yay or paru.

Other distributions: download the .AppImage file, make it executable, and enjoy the click-to-run experience. You can also use AppImageLauncher.
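For the AppImage route, a minimal sketch of the "make it executable" step (the filename here is hypothetical; substitute the asset you actually downloaded from the releases page):

```shell
# Hypothetical filename; substitute the actual release asset you downloaded.
APPIMAGE="ChatALL.AppImage"
if [ -f "$APPIMAGE" ]; then
  chmod +x "$APPIMAGE"    # make the AppImage executable
  "./$APPIMAGE" &         # it now runs with a click or from the terminal
else
  echo "File not found: $APPIMAGE. Download the .AppImage release first."
fi
```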

Troubleshooting

If you encounter any problems while using ChatALL, you can try the following methods to resolve them:

  1. Refresh - press Ctrl + R (or Cmd + R on macOS).
  2. Restart - exit ChatALL and run it again.
  3. Re-login - click the settings button in the upper right corner, then click the corresponding login/logout link to log in to the website again.
  4. Create a new chat - click the New Chat button and send the prompt again.

If none of the above methods work, you can try resetting ChatALL. Note that this will delete all your settings and message history.

You can reset ChatALL by deleting the following directories:

  • Windows: C:\Users\<user>\AppData\Roaming\chatall\
  • Linux: /home/<user>/.config/chatall/
  • macOS: /Users/<user>/Library/Application Support/chatall/
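As a sketch, the per-OS path can be resolved and removed from a shell. The case arms simply encode the directories listed above; double-check the path before deleting anything, since this wipes all settings and message history:

```shell
# Resolve ChatALL's data directory for the current OS (paths from the list above).
case "$(uname -s)" in
  Darwin) DIR="$HOME/Library/Application Support/chatall" ;;
  Linux)  DIR="$HOME/.config/chatall" ;;
  *)      DIR="$APPDATA/chatall" ;;   # Windows, e.g. under Git Bash
esac
echo "ChatALL data directory: $DIR"
# rm -rf "$DIR"   # uncomment only after verifying the path
```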

If the problem persists, please submit an issue.

For developers

Contribute a Bot

The guide may help you.

Run

npm install
npm run electron:serve

Build

Build for your current platform:

npm run electron:build

Build for all platforms:

npm run electron:build -- -wml --x64 --arm64

Credits

Contributors

Others

  • GPT-4 contributed much of the code
  • ChatGPT, Copilot and Google provided many solutions (ranked in order)
  • Inspired by ChatHub. Respect!

Sponsor

If you like this project, please consider:

ko-fi


ChatALL's Issues

[FEAT] Baidu ERNIE Bot only supports login; there is no API...

Bard Error

Describe the bug
Bard Error

TypeError: Cannot read properties of null (reading 'atValue')

input - claude vs vicuna vs alpaca vs chatglm

image

http://localhost:8080 is blank.

Describe the bug
http://localhost:8080 is blank.

Expected behavior
Use the app in the browser.

Desktop (please complete the following information):

  • OS and version: Windows Server 2022 Datacenter Evaluation 21H2

"LMSYS (claude-v1)" - it has too low a writing limit is there a way to extend the "Tokens"/ responses lengthen?

I am testing the program and it is truly an exciting project, congratulations to all the developers. Please help me clarify this step, I was checking the source code to understand the project and see if I can try to contribute in the future (even though for me it is a new language), I understood the various AIs how they communicate but I do not have a clear understanding of where and how the language "LMSYS (claude-v1)" communicates I tried to find the source code but I can't find the reference, can you kindly point me where to look? Then, I noticed that the "LMSYS (claude-v1)" language seems to be an excellent AI, but it has too low a writing limit, is there a way to extend the "Tokens" and lengthen the responses so that it does not interrupt too soon?

Migrate to Typescript

What do you think about it? It greatly improves the dev experience and the maintainability of the project

After clicking "clear chat history", the input box can no longer accept input.

Describe the bug
ChatALL runs normally, but regardless of its state, once the "clear chat history" button is clicked, the input box can no longer accept any input.

To Reproduce
See the attached screen recording. No other applications were running except CMD (starting the model), Google Chrome (the Gradio web page), the screen recorder and ChatALL.

20230515_ChatALL_error.mp4

Expected behavior
I cannot yet figure out the cause; perhaps it conflicts with some system process, or with the Sogou input method?

Desktop (please complete the following information):
Windows 11 x64, 32 GB RAM. Python 3.10.7.

[Feature Request] Search for specific text in the chat history by pressing `Ctrl/Cmd + F`

Thanks for creating this app! 🏅

One thing I missed was being able to press Ctrl/Cmd + F to look for a specific text in the chat history.
This is especially useful when you have asked a batch of questions and want to get back to a specific question/response.

As a workaround, I've been using the text search function from the Elements tab in the Developer Tools.

Dark mode

Currently only a light mode is offered, a dark mode would also be nice

Help compiling into an exe

I am a Delphi programmer never worked with JS
I have visual studio 19 and code runner.
I tried to run main.js without any success a lot of errors
Could you plz tell me how to compile to exe/Or run JS.
I do something wrong as I do not know what I am doing

Would it be possible to add a setting to disable auto-scroll to the bottom?

It is quite annoying when one wants to read already-output content from one bot while another is still outputting, since the view keeps scrolling to the bottom erratically. Right now, one basically has to wait until all bots have responded before starting to read.

I would like it best if this were configurable, and if the page scrolled to the bottom once the message is sent, so the bot outputs are in view.

Error when calling local ChatGLM-6B through Gradio.

The command-line window returns the following:
Traceback (most recent call last):
File "C:\Python\Python311\Lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1191, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1027, in preprocess_data
self.validate_inputs(fn_index, inputs)
File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1014, in validate_inputs
raise ValueError(
ValueError: An event handler (predict) didn't receive enough input values (needed: 6, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[textbox, chatbot, slider, slider, slider, state]
Received inputs:
["广州这个城市怎么样?"]
Traceback (most recent call last):
File "C:\Python\Python311\Lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1191, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1027, in preprocess_data
self.validate_inputs(fn_index, inputs)
File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1014, in validate_inputs
raise ValueError(
ValueError: An event handler (predict) didn't receive enough input values (needed: 6, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[textbox, chatbot, slider, slider, slider, state]
Received inputs:
["你好!"]
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Python\Python311\Lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 254, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\fastapi\applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\middleware\cors.py", line 76, in __call__
await self.app(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Python\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Python\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\Python\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\routing.py", line 341, in handle
await self.app(scope, receive, send)
File "C:\Python\Python311\Lib\site-packages\starlette\routing.py", line 82, in app
await func(session)
File "C:\Python\Python311\Lib\site-packages\fastapi\routing.py", line 289, in app
await dependant.call(**values)
File "C:\Python\Python311\Lib\site-packages\gradio\routes.py", line 518, in join_queue
if blocks.dependencies[event.fn_index].get("every", 0):
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
TypeError: list indices must be integers or slices, not str
ERROR: closing handshake failed
Traceback (most recent call last):
File "C:\Python\Python311\Lib\site-packages\websockets\legacy\server.py", line 248, in handler
await self.close()
File "C:\Python\Python311\Lib\site-packages\websockets\legacy\protocol.py", line 766, in close
await self.write_close_frame(Close(code, reason))
File "C:\Python\Python311\Lib\site-packages\websockets\legacy\protocol.py", line 1232, in write_close_frame
await self.write_frame(True, OP_CLOSE, data, _state=State.CLOSING)
File "C:\Python\Python311\Lib\site-packages\websockets\legacy\protocol.py", line 1205, in write_frame
await self.drain()
File "C:\Python\Python311\Lib\site-packages\websockets\legacy\protocol.py", line 1194, in drain
await self.ensure_open()
File "C:\Python\Python311\Lib\site-packages\websockets\legacy\protocol.py", line 935, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received

The ChatALL window first reported a returned error; later the server seemed to refuse the connection, and then it reported that the connection was closed by the server.

The settings are shown below:
image

The parameters should be fine.

The error reported in the ChatALL UI is shown below:
image

In the end, the connection was refused by the server...

Machine environment:
Lenovo P52 laptop, Windows 11 x64, fully updated.
32 GB RAM, NVIDIA P3200 GPU with 6 GB VRAM.
Python 3.11.3, Torch+CUDA 2.0+cu117, original ChatGLM-6B model, quantized and loaded via .half().quantize(4).cuda().

Chatting on the Gradio web page always works fine, see below:
image

I suspect ChatALL's way of calling Gradio is incomplete somewhere; please take a look.

Also, doesn't ChatGLM support API calls? Why not use the more stable API instead?

One more issue: after clicking the broom icon in ChatALL to clear the chat history, the input box can no longer accept input.

[Feature Request] Multiple chats

Is your feature request related to a problem? Please describe.
I can only have one chat at a time.

Describe the solution you'd like
Like ChatGPT, have multiple chat sessions where I can discuss different things.

Not disabling Electron's sandbox

Why is the --no-sandbox flag being used? ChatALL works fine with the sandbox.

image

That's the desktop file inside the AppImage.

DeprecationWarning: Invalid 'main' field in '/workspace/ChatALL/dist_electron/package.json' of 'background.js'. Please either fix that or report it to the module author

ChatALL (main) $ npm run electron:serve

[email protected] electron:serve
vue-cli-service electron:serve

INFO Starting development server...

DONE Compiled successfully in 5385ms 6:02:11 PM

App running at:

Note that the development build is not optimized.
To create a production build, run npm run build.

⠧ Bundling main process...

DONE Compiled successfully in 602ms 6:02:11 PM

File Size Gzipped

dist_electron/index.js 825.06 KiB 182.23 KiB

Images and other types of assets omitted.
Build at: 2023-05-16T18:02:12.009Z - Hash: 929d4d11ab29d3db836f - Time: 602ms

INFO Launching Electron...
update-electron-app config looks good; aborting updates since app is in development mode
(node:4323) [DEP0128] DeprecationWarning: Invalid 'main' field in '/workspace/ChatALL/dist_electron/package.json' of 'background.js'. Please either fix that or report it to the module author
(Use electron --trace-deprecation ... to show where the warning was created)

Bard cannot be used after logging in

As the title says:
Bard shows as logged in within the app,
but when trying to select Bard it still shows as not logged in.
I have confirmed the proxy connection works; other AIs such as ChatGPT and New Bing all work fine.

Can ChatGPT-4's newly released plugin system be supported?

ChatGPT Plus has opened up a plugin system that provides web access and other plugin capabilities. When ChatALL calls ChatGPT-4, is there any way to make use of this plugin capability?
image

How to read the chat history

Where is the chat history file saved? What kind of database is it? Can it be read directly with Python code?

[BUG] Copying from LMSYS (claude-v1) includes paragraph tags: <p>copied content</p>

LMSYS (chatglm-6b) source

Love the repo!

Is LMSYS (chatglm-6b) an API? Where do the results come from? Would love to use the model more.

Can it be packaged to run in Docker?

Phytium FT2000 with Kylin OS. Installing the arm64 build fails, apparently because the npm version is too low, and this OS does not allow installing the required dependency packages.
Could you provide a container to run ChatALL, or a web service I can deploy on my own server? Thanks!

Add a "continue" button to ChatALL so each bot's conversation can be continued individually.

[BUG] When asking questions in Japanese, GPT-4 translates the question into English instead of answering it

As the title suggests, when I ask a question in Japanese, GPT-4 does not answer it but instead translates the Japanese text of the question into English.

I am using the Mac version of the ChatALL app, and the "Language" setting in the settings screen is set to "Automatic." The macOS language setting is Japanese. There is no "Japanese" option in the "Language" choices; could this be the cause of the issue?

Screenshots

SS 2023-05-17 12 03 52

Desktop (please complete the following information):

  • OS and version: macOS Big Sur 11.7.6
  • App Ver: 1.17.8

Can LLMs that require neither a VPN nor an account application be supported? For ordinary users, accounts are difficult to obtain.
