n3d1117 / chatgpt-telegram-bot
🤖 A Telegram bot that integrates with OpenAI's official ChatGPT APIs to provide answers, written in Python
License: GNU General Public License v2.0
Upon redeployment the bot shouldn't lose track of its context.
I've yet to look into how the browser UI handles multiple sessions, but I assume that there is some sort of unique ID you can set. If that is the case, a potential solution would be to utilize the chat_id
as a unique session ID.
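The chat_id idea above could be sketched roughly like this. This is a minimal sketch, not the project's actual code: Conversation and its history field are hypothetical stand-ins for whatever session object the ChatGPT wrapper really exposes, and the dict would need to be serialized somewhere persistent to survive redeployment.

```python
# Keep one conversation per Telegram chat, keyed by chat_id.
conversations = {}

class Conversation:
    """Hypothetical per-chat session holding the message history."""
    def __init__(self):
        self.history = []

def get_conversation(chat_id: int) -> Conversation:
    # Reuse the existing session for this chat, or start a fresh one.
    if chat_id not in conversations:
        conversations[chat_id] = Conversation()
    return conversations[chat_id]
```

Persisting `conversations` to disk (e.g. JSON keyed by chat_id) on shutdown would then let the bot pick up the same context after a restart.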
Greetings,
The usability would improve if you added menu buttons for Telegram.
Thanks!
I am trying to set up the project and enter my own parameters into the .env file which I've created. Should I use the .env.py extension instead? If I get an answer, I can add this information to the README so that any newbie would be able to understand the configuration requirements.
Occasionally the bot seems to die with the error below and stops processing messages. Restarting the Docker container fixes the problem (it might be related to using non-English prompts, but I haven't narrowed it down). I wonder if anyone else has run into this.
2023-03-19 18:02:54,155 - telegram.ext._updater - ERROR - Error while getting Updates: httpx.LocalProtocolError: Invalid input ConnectionInputs.SEND_HEADERS in state ConnectionState.CLOSED
I suggest supporting multiple bots and allowing different role identities to be set, so that a single deployment can serve multiple bots whose functions need to change frequently.
Hi!
After I updated this function, there were some problems: sometimes when ChatGPT typed a long paragraph, loading would fail halfway through a sentence (this cannot be fixed by telling ChatGPT to "continue"; in this situation ChatGPT genuinely has not finished).
This feature also makes the "typing" indicator in the Telegram dialog appear for a while and then disappear, and I don't know what causes that. Without this function, the Telegram bot displays "typing" until the message is sent. This function may still need some fixes, or there should be an option letting users choose whether to enable the "live" mode.
ALLOWED_TELEGRAM_CHAT_IDS="<CHAT_ID_1>,<CHAT_ID_2>,..."
Can everyone use it without specifying an ID? xD
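For reference, an allow-list check could look roughly like this. This is a sketch, not the project's actual code: the function name is_allowed and the treatment of an unset variable as "*" (everyone allowed) are assumptions.

```python
import os

def is_allowed(user_id: int) -> bool:
    # "*" (or an unset variable, treated here as "*") disables the
    # allow-list so every user may talk to the bot; otherwise only
    # the comma-separated IDs pass.
    allowed = os.environ.get("ALLOWED_TELEGRAM_CHAT_IDS", "*")
    if allowed.strip() == "*":
        return True
    ids = {int(x) for x in allowed.split(",") if x.strip()}
    return user_id in ids
```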
:~/chatgpt-telegram-bot# python main.py
Debugger enabled on OpenAIAuth
Logging in...
Debugger enabled on OpenAIAuth
Beginning auth process
Beginning part two
Beginning part three
Beginning part four
Beginning part five
Beginning part six
Beginning part seven
Request went through
Response code is 302
New state found
Beginning part eight
Beginning part nine
SUCCESS
Part eight called
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 130, in _call_sslobject_method
result = func(*args)
File "/usr/lib/python3.10/ssl.py", line 975, in do_handshake
self._sslobj.do_handshake()
ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 67, in start_tls
ssl_stream = await anyio.streams.tls.TLSStream.wrap(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 122, in wrap
await wrapper._call_sslobject_method(ssl_object.do_handshake)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 137, in _call_sslobject_method
data = await self.transport_stream.receive()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1265, in receive
await self._protocol.read_event.wait()
File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 76, in start_tls
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 66, in start_tls
with anyio.fail_after(timeout):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/_core/_tasks.py", line 118, in __exit__
raise TimeoutError
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 253, in handle_async_request
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 237, in handle_async_request
response = await connection.handle_async_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 86, in handle_async_request
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 63, in handle_async_request
stream = await self._connect(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 150, in _connect
stream = await stream.start_tls(**kwargs)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 64, in start_tls
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc)
httpcore.ConnectTimeout
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_httpxrequest.py", line 183, in do_request
res = await self._client.request(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1533, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1620, in send
response = await self._send_handling_auth(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1648, in _send_handling_auth
response = await self._send_handling_redirects(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1685, in _send_handling_redirects
response = await self._send_single_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1722, in _send_single_request
response = await transport.handle_async_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
with map_httpcore_exceptions():
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectTimeout
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/chatgpt-telegram-bot/main.py", line 32, in <module>
main()
File "/root/chatgpt-telegram-bot/main.py", line 28, in main
telegram_bot.run()
File "/root/chatgpt-telegram-bot/telegram_bot.py", line 125, in run
application.run_polling()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 670, in run_polling
return self.__run(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 858, in __run
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 847, in __run
loop.run_until_complete(self.initialize())
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 357, in initialize
await self.bot.initialize()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 252, in initialize
await super().initialize()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 499, in initialize
await self.get_me()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 1639, in get_me
return await super().get_me(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 313, in decorator
result = await func(*args, **kwargs) # skipcq: PYL-E1102
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 644, in get_me
result = await self._post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 395, in _post
return await self._do_post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 306, in _do_post
return await super()._do_post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 426, in _do_post
return await request.post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 167, in post
result = await self._request_wrapper(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 290, in _request_wrapper
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 276, in _request_wrapper
code, payload = await self.do_request(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_httpxrequest.py", line 200, in do_request
raise TimedOut from err
telegram.error.TimedOut: Timed out
This project looks amazing! Could you publish it to Docker Hub so it’ll be easy to deploy on serverless platforms? Thanks! 😊
{
"detail": {
"message": "Your authentication token has expired. Please try signing in again.",
"type": "invalid_request_error",
"param": null,
"code": "token_expired"
}
}
Maybe we could log in again every 12 hours?
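The 12-hour idea could be sketched as a background task. This is a sketch under assumptions: refresh_periodically is a hypothetical helper, and refresh_session stands in for whatever re-authentication call the ChatGPT wrapper exposes.

```python
import asyncio

async def refresh_periodically(refresh_session, hours: float = 12):
    # Re-authenticate on a timer so the access token never reaches its
    # expiry ("token_expired") while the bot is running.
    while True:
        await asyncio.sleep(hours * 3600)
        refresh_session()
```

The task would be started alongside the polling loop, e.g. with `asyncio.create_task(refresh_periodically(bot.refresh_session))`.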
Yesterday I successfully ran this project, but today I found I had lost the connection (in my country the internet is not that good :<).
So I rebooted my process, and I found this login failure:
"Auth0 did not issue an access token"
File "C:\Users\ting\anaconda3\envs\py310\lib\site-packages\OpenAIAuth\OpenAIAuth.py", line 309, in part_seven
self.part_eight(old_state=state, new_state=new_state)
File "C:\Users\ting\anaconda3\envs\py310\lib\site-packages\OpenAIAuth\OpenAIAuth.py", line 363, in part_eight
raise Exception("Auth0 did not issue an access token")
Exception: Auth0 did not issue an access token
But I have checked my account and password; they are correct, and I can successfully log in using the Chrome browser.
After retrying main.py two times, I found this:
"Exception: You have been rate limited."
Beginning auth process
Beginning part two
Beginning part three
You have been rate limited
Login failed
Traceback (most recent call last):
File "/app/main.py", line 46, in <module>
main()
File "/app/main.py", line 40, in main
gpt3_bot = ChatGPT3Bot(config=chatgpt_config, debug=debug)
File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 59, in __init__
self.refresh_session()
File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 309, in refresh_session
raise exc
File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 306, in refresh_session
self.login(self.config["email"], self.config["password"])
"I am receiving an error that I have exceeded the maximum number of tokens. In this example, I have used 152 tokens for the prompt and 4000 for the completion. Why is the completion token usage so high?"
2023-03-18 11:25:18,531 - root - INFO - New message received from user @AlyxAbyss
2023-03-18 11:25:18,909 - openai - INFO - error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
2023-03-18 11:25:18,910 - root - ERROR - This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
Traceback (most recent call last):
File "/home/vicky/VickyAI/chatgpt-telegram-bot/openai_helper.py", line 49, in get_chat_response
response = openai.ChatCompletion.create(
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
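The completion token usage is high because a fixed max_tokens (here 4000) is requested regardless of how long the prompt is. One way to avoid the error is to cap the requested completion so that prompt plus completion stays within the context window; a minimal sketch (completion_budget is a hypothetical helper; the 4097 limit matches the error message above):

```python
# gpt-3.5-turbo's context window at the time of this error.
MODEL_CONTEXT_LIMIT = 4097

def completion_budget(prompt_tokens: int, requested_max: int) -> int:
    # Leave room for the prompt; never ask for more completion tokens
    # than what remains in the context window.
    remaining = MODEL_CONTEXT_LIMIT - prompt_tokens
    return max(1, min(requested_max, remaining))
```

For the case in the log, `completion_budget(152, 4000)` yields 3945, which keeps the request inside the 4097-token limit.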
Hi,
Are there any plans to support GPT-4?
I have valid email and password in my .env, my VPS IP is from Germany, my OpenAI account was registered in Germany too.
Beginning part three
Beginning part four
Beginning part five
Error in part five
Captcha detected
Login failed
Traceback (most recent call last):
File "/home/ubuntu/chatgpt-telegram-bot/main.py", line 45, in <module>
main()
File "/home/ubuntu/chatgpt-telegram-bot/main.py", line 39, in main
gpt3_bot = ChatGPT3Bot(config=chatgpt_config, debug=debug)
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 55, in __init__
self.refresh_session()
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 305, in refresh_session
raise exc
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 302, in refresh_session
self.login(self.config["email"], self.config["password"])
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 340, in login
raise exc
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 333, in login
auth.begin()
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 83, in begin
self.part_two()
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 112, in part_two
self.part_three(token=csrf_token)
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 147, in part_three
self.part_four(url=url)
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 181, in part_four
self.part_five(state=state)
File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 219, in part_five
raise ValueError("Captcha detected")
ValueError: Captcha detected
When I set VOICE_REPLY_WITH_TRANSCRIPT_ONLY to false, can the bot transcribe my voice message and output the transcript first, then reply to it? That would be more user-friendly.
Originally posted by @deanxizian in #38 (comment)
Currently, the bot only supports using inline mode in group chats.
I would like to request a new feature that would allow inline mode to be used in private conversations as well.
This would be useful for users who prefer to interact with the bot one-on-one instead of in a group chat.
Hey.
thanks for your awesome work.
The bot works, but it's extremely slow.
How can I increase its speed?
The server has enough resources.
Hey, I encountered the Telegram error Message_too_long when transcribing a 6-minute audio file.
Any chance to split responses longer than the limit (I think 4096 characters) into multiple messages?
This might also apply to ChatGPT responses; I have not managed to get such a long response yet, but theoretically it could be possible (4096 tokens can exceed 4096 characters).
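A splitter for this could look roughly like the following sketch (split_message is a hypothetical helper; the 4096-character limit is Telegram's documented message length cap):

```python
TELEGRAM_LIMIT = 4096  # Telegram's maximum message length in characters

def split_message(text: str, limit: int = TELEGRAM_LIMIT):
    # Break over-long text into chunks that each fit in one message,
    # preferring to cut at the last newline before the limit so chunks
    # end at natural boundaries instead of mid-sentence.
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

The handler would then send each chunk as its own message instead of one oversized reply.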
Thank you and the other contributors for all your hard work!
Take a look at this one to add message stream support: father-bot/chatgpt_telegram_bot@1ba09de
The "typing" indicator disappears too quickly, which makes people wait longer with no visible activity.
It would be best to call the send_chat_action method in a new thread, once every 5 seconds.
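An asyncio variant of that idea (rather than a raw thread, since the bot is already async) could look like this sketch; keep_typing is a hypothetical helper and send_chat_action stands in for the bot's Telegram call:

```python
import asyncio

async def keep_typing(send_chat_action, stop: asyncio.Event):
    # Telegram's "typing" status fades after about 5 seconds, so re-send
    # the chat action on a timer until the reply is ready and stop is set.
    while not stop.is_set():
        await send_chat_action("typing")
        try:
            await asyncio.wait_for(stop.wait(), timeout=5)
        except asyncio.TimeoutError:
            pass  # timer elapsed; loop and send the action again
```

The message handler would set the event once the response has been sent, ending the loop.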
I would like to request a new feature that would allow users to configure the default initial prompt, and possibly even create custom prompts (chat modes), ideally moved out to the .env file.
I am getting this error when trying docker-compose on Docker Desktop (Win10)
From docker logs:
#0 9.421 [pipenv.exceptions.InstallError]: ERROR: Could not find a version that satisfies the requirement openaiauth==0.0.6 (from versions: none)
#0 9.421 [pipenv.exceptions.InstallError]: ERROR: No matching distribution found for openaiauth==0.0.6
When I try to install using the pip command, I get the same error:
> pip install openaiauth
ERROR: Could not find a version that satisfies the requirement openaiauth (from versions: none)
ERROR: No matching distribution found for openaiauth
What am I missing?
Similar to the browser UI, it would be cool if we would be able to save/retrieve sessions in addition to /reset
I'm thinking of:
/save for saving a specific session, potentially with a name
/sessions for listing all sessions
/session session_id for opening a specific session
/reset for resetting or deleting a session
Hello, I deployed v0.1.3 with Docker Compose, but the audio message transcription feature is not working. The bot always returns "Failed to transcribe text".
(FFmpeg is installed).
How can I enable debug logging for the revChatGPT package?
Please, someone help.
After using it for a while, the bot generates the following message even though I input a very short sentence:
This model's maximum context length is 4096 tokens. However, you requested 4368 tokens (3168 in the messages, 1200 in the completion). Please reduce the length of the messages or completion.
After restarting the service, it returns to normal.
Is it possible to add a parameter stream=true to achieve streamed output, like a typing effect?
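With the legacy openai-python (pre-1.0) client, passing stream=True to ChatCompletion.create yields chunks whose choices[0].delta carries incremental content. Folding those deltas into the growing message (which the handler would then push to Telegram via edit_message_text every few chunks) could be sketched as follows; accumulate_stream is a hypothetical helper operating on that chunk shape:

```python
def accumulate_stream(chunks):
    # Fold streamed deltas (as returned with stream=True) into the
    # growing message text, yielding each partial value so the caller
    # can edit the sent Telegram message as the reply streams in.
    text = ""
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        text += delta.get("content", "")
        yield text
```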
Something like /refresh to refresh the tokens via the .refresh_session() function, or something like that.
It would be a good idea to allow the bot to be used by group members.
However, there are also some limitations to this approach.
I did everything you said, but I don't get anything back.
Running on Ubuntu 22.04.
Here are the logs:
(chatgpt-telegram-bot) root@linux:~/chatgpt-telegram-bot# python main.py
Logging in...
Error logging in (Probably wrong credentials)
Error refreshing session:
Error logging in
2022-12-07 14:36:54,351 - telegram.ext._application - INFO - Application started
2022-12-07 14:36:54,632 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:36:54,852 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
2022-12-07 14:37:35,805 - root - INFO - User @Archnet is not allowed to start the bot
2022-12-07 14:37:43,588 - root - INFO - Bot started
2022-12-07 14:37:47,411 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:37:47,516 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
2022-12-07 14:37:54,966 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:37:55,317 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
In some application environments, I can only reach Telegram and OpenAI through a proxy. This means I need to enable a global proxy, which affects other running programs. I tried adding os.environ["http_proxy"] = "http://127.0.0.1:1231", but it doesn't seem to work.
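One likely cause: the proxy variables must be in place before the HTTP clients are constructed, and HTTPS endpoints (both api.telegram.org and api.openai.com) are routed via https_proxy rather than http_proxy. A sketch of setting all the common spellings early in main.py (the address is the hypothetical local proxy from the report above):

```python
import os

# Set proxy variables BEFORE creating any openai / python-telegram-bot
# clients; httpx reads the environment at client construction time.
# Both lowercase and uppercase spellings are honoured, and HTTPS
# endpoints go through https_proxy, not http_proxy.
PROXY = "http://127.0.0.1:1231"  # hypothetical local proxy
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ[var] = PROXY
```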
This model's maximum context length is 4096 tokens.
Is there a way to bypass this?
Hi.
I downloaded a new version of the bot to the server (I tried the last commit from master and release 0.1.8) and nothing but the help command works.
Moreover, if I roll back to the previous one, it works fine.
It does not work either in a group chat or in a personal message.
OPENAI_API_KEY="my_openai_token"
TELEGRAM_BOT_TOKEN="my_bot_token"
# ALLOWED_TELEGRAM_USER_IDS="USER_ID_1,USER_ID_2" ###Yes I tried: ALLOWED_TELEGRAM_USER_IDS="*"
# MONTHLY_USER_BUDGETS="100.0,100.0"
# MONTHLY_GUEST_BUDGET="20.0"
# PROXY="http://localhost:8080"
OPENAI_MODEL="gpt-3.5-turbo"
ASSISTANT_PROMPT="You are a helpful assistant."
SHOW_USAGE=false
MAX_TOKENS=1200
MAX_HISTORY_SIZE=15
MAX_CONVERSATION_AGE_MINUTES=30
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=false
N_CHOICES=1
TEMPERATURE=0.5
PRESENCE_PENALTY=0
FREQUENCY_PENALTY=0
IMAGE_SIZE="256x256"
GROUP_TRIGGER_KEYWORD="help"
IGNORE_GROUP_TRANSCRIPTIONS=true
TOKEN_PRICE=0.002
IMAGE_PRICES="0.016,0.018,0.02"
TRANSCRIPTION_PRICE=0.006
OPENAI_API_KEY="my_openai_token"
TELEGRAM_BOT_TOKEN="my_bot_token"
ALLOWED_TELEGRAM_USER_IDS="*"
ASSISTANT_PROMPT="You are a helpful assistant."
SHOW_USAGE=false
MAX_TOKENS=1200
MAX_HISTORY_SIZE=10
MAX_CONVERSATION_AGE_MINUTES=30
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=true
What I was doing:
cd chatgpt_bot_v2/ #old
git rev-parse HEAD #34b79a5e2eadfc6b237882cb08a5d11085098dc9 (the sixth commit after 0.1.5)
docker compose up -d #(the first time I started it with the "--build" key)
###Bot works fine###
docker logs chatgpt_bot_v2-chatgpt-telegram-bot-1 #(2023-03-18 14:00:31,381 - telegram.ext._application - INFO - Application started)
docker compose down
cd ../chatgpt_bot_0.1.8/ #latest commit
git rev-parse HEAD #73d200c64e95e481e2986caeaad70d6b339fb1d9
docker compose up -d
docker logs chatgpt_bot_018-chatgpt-telegram-bot-1 #(2023-03-18 14:02:08,078 - telegram.ext._application - INFO - Application started)
###Bot doesn't work###
If you need any additional data, I will try to provide it.
Thanks
Is it possible to add a function that allows users to modify the server's ASSISTANT_PROMPT parameter through the bot? Sometimes it needs to play a different role.
Hi, when I try to install I get an error:
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | File "/usr/local/lib/python3.9/site-packages/telegram/request/_httpxrequest.py", line 223, in do_request
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | raise NetworkError(f"httpx.{err.__class__.__name__}: {err}") from err
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | telegram.error.NetworkError: httpx.ConnectError: All connection attempts failed
chatgpt-telegram-bot-chatgpt-telegram-bot-1 exited with code 1
I tried installing the previous version and everything worked! Hoping for help :)
Could you set it up so that if the total token count exceeds the maximum, half of the chat history is deleted? This way you prevent excessive token use while still retaining some chat history for better context.
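The drop-half idea could be sketched like this. This is a minimal sketch under assumptions: trim_history is a hypothetical helper, count_tokens stands in for the project's real token counter, and the message at index 0 is assumed to be the system prompt worth preserving.

```python
def trim_history(history, count_tokens, max_tokens):
    # Repeatedly drop the older half of the exchange, always keeping the
    # system prompt at index 0, until the running total fits the budget.
    while len(history) > 1 and count_tokens(history) > max_tokens:
        keep = history[:1] + history[1 + (len(history) - 1) // 2:]
        if len(keep) == len(history):
            break  # nothing left to drop
        history = keep
    return history
```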
Some Linux distributions have not yet included Python 3.10 in their official repositories. It would be beneficial to enable the bot to run on Python 3.9 as well.
Upon reviewing the codebase, it appears that the only Python 3.9 incompatible syntax is the usage of the new union operator (|) in the get_chat_response function of openai_helper.py.
Is there anything else that I might be overlooking? If not, we could consider adding legacy support for Python 3.9 by using the Union type from the typing module in place of the new union operator, for greater compatibility.
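For illustration, the substitution looks like this. The signature below is hypothetical, just mirroring the shape of a function like get_chat_response, not the actual one in openai_helper.py:

```python
from typing import Union

# Python >= 3.10 only:
#   def get_chat_response(query: str, session: str | None = None) -> dict | str: ...
# Python 3.9-compatible spelling of the same annotation:
def get_chat_response(query: str, session: Union[str, None] = None) -> Union[dict, str]:
    # Hypothetical body; the real implementation calls the OpenAI API.
    return {"query": query, "session": session}
```

`typing.Optional[str]` is equivalent shorthand for `Union[str, None]` and works on 3.9 as well.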
revChatGPT has a V2; please update, the old one is no longer usable.
Hi.
Thank you for the updates and support of the turbo model!
I noticed that each subsequent query within a conversation uses more prompt tokens; this continues until I reset the session. Does this sound like correct behavior?
Here is an example of the same prompt, with each iteration increasing token usage:
Since I can't log in to OpenAI natively, I authenticate using my Google account, so I can't really use your login method here.
Hello! When sending an audio recording, the following error occurs:
Failed to transcribe text: [Errno 13] Permission denied: 'AgADISsAAnNHiEg'
What permission is missing? Please tell me.
After sending the /reset command, the following errors still occur. How can I clear the history? Thanks!
This model's maximum context length is 4096 tokens. However, you requested 4728 tokens (3528 in the messages, 1200 in the completion). Please reduce the length of the messages or completion.