
poe-openai-proxy

A wrapper that lets you use the reverse-engineered Python library poe-api as if it were the OpenAI API for ChatGPT. You can connect your favorite OpenAI-API-based apps to this proxy and enjoy the ChatGPT API for free!

Poe.com from Quora is a free web app that lets you chat with GPT models. poe-api is a Python library that reverse-engineers poe.com so you can call Poe from Python. This project wraps poe-api in an HTTP API that mimics the official OpenAI API for ChatGPT, so it works with any program that uses the OpenAI API for its features.

简体中文 (Simplified Chinese)

Installation

  1. Clone this repository to your local machine:
git clone https://github.com/juzeon/poe-openai-proxy.git
cd poe-openai-proxy/
  2. Install dependencies from requirements.txt:
pip install -r external/requirements.txt
  3. Create the configuration file in the root folder of the project. Instructions are written in the comments:
cp config.example.toml config.toml
vim config.toml
  4. Start the Python backend for poe-api:
python external/api.py # Running on port 5100
  5. Build and start the Go backend:
go build
chmod +x poe-openai-proxy
./poe-openai-proxy

Docker support

If you would like to use Docker, just run docker-compose up -d after creating config.toml according to the instructions above.

Usage

See the OpenAI API documentation for more details on how to use the ChatGPT API.

Just replace https://api.openai.com in your code with http://localhost:3700 and you're good to go.

Supported routes:

  • /models
  • /chat/completions
  • /v1/models
  • /v1/chat/completions

Supported parameters:

Parameter | Note
model     | See the [bot] section of config.example.toml. Model names are mapped to bot nicknames.
messages  | Same as in the official API, except that name is not supported.
stream    | Same as in the official API.

Other parameters will be ignored.
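For example, a minimal request body for /v1/chat/completions can be built exactly as for the official API (a sketch using only the parameters listed above; anything else, such as temperature, is simply ignored by the proxy):

```python
import json

# Build a chat completion request for the proxy. The body is identical to an
# official OpenAI API request; only the base URL changes to the proxy's.
body = {
    "model": "gpt-3.5-turbo",  # mapped to a Poe bot nickname via [bot] in config.toml
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

# POST this JSON to http://localhost:3700/v1/chat/completions
print(json.dumps(body))
```

With the official openai Python library, setting openai.api_base = "http://localhost:3700" is all that is needed; the rest of your code stays unchanged.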

Credit

https://github.com/ading2210/poe-api

poe-openai-proxy's People

Contributors

classicoldsong, easychen, juzeon, lroccoon, onerain233, realnoob007


poe-openai-proxy's Issues

Can the cool-down settings be more flexible?

Currently there is a per-minute rate limit per token and a rule that the same token can only be used once every few seconds. Could all of these values be made configurable? For example, allowing a single token to be used at most 30 times in ten minutes.
The existing configurable parameters are:

Rate limit. Default to 10 api calls per token in 1 minute

rate-limit = 10

Cool down of seconds. One same token cannot be used more than once in n seconds

cool-down = 3

How do I call the API?

Hi, I have set up the environment successfully, but I don't know how to call this API from other apps. For example, in https://github.com/Yidadaa/ChatGPT-Next-Web, should I enter http://localhost:3700, http://localhost:3700/v1/models, or something else? I'm fairly new to this and would appreciate some guidance.

Q about rate limit per client

Hey!

What is the recommended rate limit per token per minute? Using 8 (lowered by 2 from the example), the token was banned after 10 minutes (requests were made 8 times/minute, without stopping). Any ideas?

Thanks

WebSocket issues with the new version

Code: updated to the latest version, with timeout=15 and api-timeout=12. 20 tokens are loaded at startup.

Problem: everything works right after the service starts. But once the Python backend prints a series of "ws close" warnings, every question from the frontend times out after 15 seconds with no response, and only restarting the Python backend fixes it.

Not sure whether this is related to the latest poe-api or the timeout settings. Older versions were slow but at least usable; now a restart is the only remedy.

P.S. The problem is intermittent; it later went away on its own. It may be an issue with my own configuration.

[Feature Request] Read tokens from HTTP request sent by clients as API key

Hi @juzeon,

I hope this message finds you well. I'm writing this issue to share a suggestion that could potentially enhance the performance and robustness of the Poe-OpenAI-Proxy project.

Currently, as I understand it, clients gain access to the API through tokens provided in the server configuration. This can limit the project's resiliency and robustness, especially in scenarios where tokens need to be switched dynamically to support self-recovery.

To address this, I propose changing the server-side code to read tokens directly from the API key sent in the client's HTTP request. This would more closely mimic the standard OpenAI API and also give users more flexibility.

The tokens could take the form of an array – ["token1", "token2", ...] – passed in the Authorization header. The server could then retrieve the latest tokens directly from this array without any manual server-side configuration by the user.
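A sketch of how the server side might parse such a header (hypothetical helper and format; the repo does not currently implement this):

```python
import json

def tokens_from_auth_header(header_value: str) -> list[str]:
    """Hypothetical: extract Poe tokens from an Authorization header carrying
    either a single token or a JSON array like ["token1", "token2"]."""
    value = header_value.removeprefix("Bearer ").strip()
    if value.startswith("["):
        return json.loads(value)  # JSON array of tokens
    return [value]  # plain single token

print(tokens_from_auth_header('Bearer ["token1", "token2"]'))  # ['token1', 'token2']
```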

By incorporating this feature, the project could benefit from improved resilience, robustness, and provide an enhanced user experience for those who use this project.

Please consider this proposal, and let me know your thoughts on this matter. I believe this change will bring significant additions to the project and the users thereof.

Thank you for your time and consideration.

Cannot send requests

(screenshot)

The token is correct, but why can't I send a request to Poe?
It returns error 500 - internal server error.

clear context?

Is it possible to clear the context on Poe by pressing "new chat" in tools like typingmind.com?
I got this working with GPT-4 & Claude+ but can't clear the context when using TypingMind.

Issue with third step

vim config.toml doesn't seem to be a valid command in PowerShell, Command Prompt, or Python. I tried installing Vim but couldn't figure out how to use it. How do I create this config.toml file?

Failed to register token to Poe

Everything worked yesterday, but it stopped working today.
Of course, I have manually checked the token to verify that it is real and usable.

Here is the output from poe-openai-proxy:

INFO: registering client: failed: Invalid token or no bots are available.

Can this project run properly? I set it up but don't get valid responses

I'm using the chatgpt-web project with the following configuration (the OPENAI_API_KEY is made up):

# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=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

# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response
OPENAI_ACCESS_TOKEN=

# OpenAI API Base URL - https://api.openai.com
OPENAI_API_BASE_URL=http://192.168.1.130:3700

# OpenAI API Model - https://platform.openai.com/docs/models
OPENAI_API_MODEL=

# set `true` to disable OpenAI API debug log
OPENAI_API_DISABLE_DEBUG=

# Reverse Proxy - Available on accessToken
# Default: https://bypass.churchless.tech/api/conversation
# More: https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy
API_REVERSE_PROXY=

# timeout
TIMEOUT_MS=100000

# Rate Limit
MAX_REQUEST_PER_HOUR=

# Secret key
AUTH_SECRET_KEY=

# Socks Proxy Host
SOCKS_PROXY_HOST=192.168.1.130

# Socks Proxy Port
SOCKS_PROXY_PORT=7788

# Socks Proxy Username
SOCKS_PROXY_USERNAME=

# Socks Proxy Password
SOCKS_PROXY_PASSWORD=

# HTTPS PROXY
HTTPS_PROXY=

The Python log is as follows:
(screenshot)

The timeout values are too small

The Go side uses 60 and the Python side defaults to 20; both are far too small. For essay-writing tasks they are nowhere near enough, and even 200 may not suffice. You should measure how long an answer with the longest possible token count takes and leave some headroom for network delay.
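For reference, the two values in question could be raised in config.toml like this (key names as used in the issues here; check config.example.toml for the exact spelling in your version):

```toml
# Go proxy: seconds to wait for a complete answer (default mentioned above: 60)
timeout = 200
# Python backend: seconds per poe-api call (default mentioned above: 20)
api-timeout = 180
```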

openai.error.APIError: Invalid response object from API: '{}' (HTTP response code was 500)

import openai
openai.api_key = "123"
openai.api_base = "http://localhost:3700"
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

Error:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/openai/api_requestor.py", line 403, in handle_error_response
error_data = resp["error"]
~~~~^^^^^^^^^
KeyError: 'error'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.11/dist-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.11/dist-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/openai/api_requestor.py", line 405, in handle_error_response
raise error.APIError(
openai.error.APIError: Invalid response object from API: '{}' (HTTP response code was 500)

After setup, how do I use it?

I'm a total beginner; with GPT's help I stumbled through the setup, but now that it's done I don't know how to use it, which is frustrating. Could someone kindly point me in the right direction? Thanks!

[Feature Request] Add failure detection to improve stability

Currently token availability is only checked once, when tokens are added, but problems can also occur frequently at call time. If a request that returns "ERROR: websocket: RSV1 set, FIN not set on control" or "Daily usage limit: Beaver ....." automatically removed the offending token and retried once, stability would improve by at least 80%. The main reason the experience is poor right now is that a message sometimes takes four or five attempts to succeed, mostly because of those two errors.

There seems to be no conversation context

When chatting with a bot under a Poe account on poe.com, context is preserved (handled internally by Poe), but the way this repo makes requests does not support that. Could a context-aware endpoint be provided? For example, use a token and model as a conversation id: if the user asks without a conversation id, return a new one; if the next question carries that cid, use the same token to query Poe.
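The proposed scheme could be sketched like this (hypothetical names; nothing like this exists in the repo yet):

```python
import uuid

# Hypothetical registry: conversation id -> (token, bot nickname)
conversations: dict[str, tuple[str, str]] = {}

def resolve_conversation(cid, token, bot):
    """Return (cid, token, bot): reuse the binding for a known cid,
    otherwise create a new conversation bound to the given token/bot."""
    if cid in conversations:
        token, bot = conversations[cid]
    else:
        cid = str(uuid.uuid4())
        conversations[cid] = (token, bot)
    return cid, token, bot

cid, *_ = resolve_conversation(None, "p-b-token", "capybara")
_, token, bot = resolve_conversation(cid, None, None)
assert (token, bot) == ("p-b-token", "capybara")  # follow-up reuses the same token
```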

Problems using the API with LangChain

LangChain supports calling the official OpenAI API, but not this project's API: tasks that use tools fail, presumably because of some message parameter. If you have time, please take a look.

Poe account banned

(screenshot)
It worked fine a few days ago, then suddenly stopped working; when I logged in I found the account had been banned.
I registered a new account and used it with this project; it was banned again without a single successful request.

Possible causes I can guess:
1. A problem with my own IP
2. The browser User-Agent being targeted:

"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 "
                  "Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,"
              "application/signed-exchange;v=b3;q=0.7",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "zh-CN,zh-TW;q=0.9,zh;q=0.8,en-US;q=0.7,en;q=0.6",
    "Cache-Control": "no-cache",
    "Pragma": "no-cache",
    "Sec-Ch-Ua": "\"Chromium\";v=\"112\", \"Google Chrome\";v=\"112\", \"Not:A-Brand\";v=\"99\"",
    "Sec-Ch-Ua-Mobile": "?0",
    "Sec-Ch-Ua-Platform": "\"Windows\"",
    "Upgrade-Insecure-Requests": "1"
}

stream=true returns improper response format

When the stream=true parameter is used, the response structure differs from stream=false, causing some apps to throw errors.

Steps to reproduce

  1. Send a request with stream=false:
--data-raw '{"messages":[{"role":"user","content":"sayyes"}],"model":"gpt-3.5-turbo","temperature":1,"presence_penalty":0,"top_p":1,"frequency_penalty":0,"stream":false}'    

Response:

{"id":"chatcmpl-CuQKKLmyuGrdzaiqxQefJCyJetvrV","object":"chat.completion","created":1689148811,"choices":[{"index":0,"message":{"role":"assistant","content":"Yes! How may I assist you today?","name":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
  2. Send a request with stream=true:
--data-raw '{"messages":[{"role":"user","content":"sayyes"}],"model":"gpt-3.5-turbo","temperature":1,"presence_penalty":0,"top_p":1,"frequency_penalty":0,"stream":true}'

Response:

data: {"choices":[{"delta":{"role":"assistant"},"finish_reason":null,"index":0}],"created":1689149092,"id":"chatcmpl-nuNuoobJbxsRfrJgenDXhsIJJDptP","model":"gpt-3.5-turbo","object":"chat.completion.chunk"}  

data: {"choices":[{"delta":{"content":"Yes! "},"finish_reason":null,"index":0}],"created":1689149093,"id":"chatcmpl-nuNuoobJbxsRfrJgenDXhsIJJDptP","model":"gpt-3.5-turbo","object":"chat.completion.chunk"}

Expected behavior

The response format should be consistent whether stream is true or false.
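For reference, a client-side sketch of consuming chunks in the shape shown above (keys taken from the sample responses; abridged):

```python
import json

# Abridged sample chunks in the format shown above.
sse_lines = [
    'data: {"choices":[{"delta":{"role":"assistant"},"finish_reason":null,"index":0}]}',
    'data: {"choices":[{"delta":{"content":"Yes! "},"finish_reason":null,"index":0}]}',
    'data: [DONE]',
]

def collect_content(lines):
    """Concatenate the delta contents of a chat.completion.chunk stream."""
    text = ""
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        text += delta.get("content", "")
    return text

print(collect_content(sse_lines))  # prints "Yes! "
```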

got key error

I followed the config instructions for tokens ("A list of poe tokens. You can get them from the cookies on poe.com, they look like this: p-b=fdasac5a1dfa6%3D%3D"),
but I got a KeyError.
The key can be found on poe.com as shown below:
(screenshot)
I used a VPN; otherwise I can't log in to Poe.
Did I miss something?

pip error

Looking in indexes: https://pypi.org/simple/
ERROR: Could not find a version that satisfies the requirement poe-api (from versions: none)
ERROR: No matching distribution found for poe-api

Response encoding (stream: true)

Hi! I'm testing the stream: true parameter and sometimes characters come through garbled. What encoding is returned with streaming enabled, and what could be causing this error?

Screenshot 2023-05-21 at 8 00 45 PM

Can repeat requests be blocked until the bot finishes replying?

It seems that if the session is requested again or cleared before the bot finishes its reply, the account gets banned. My testing has already gotten two of my subscribed accounts banned, and there are reports of the same situation in the poe-api issues. Short of manual control, could the code itself block repeat requests until the bot's reply has finished?
(Screenshot borrowed from a poe-api issue)

KeyError: 'proxy'

python3 external/api.py
Traceback (most recent call last):
File "/root/poe-openai-proxy/external/api.py", line 31, in
proxy = config["proxy"]
KeyError: 'proxy'
What is the cause?

Docker deployment error

[poe-openai-proxy-external-1] container log:
Traceback (most recent call last):
File "/app/api.py", line 30, in
config = toml.load(config_path)
File "/usr/local/lib/python3.9/site-packages/toml/decoder.py", line 133, in load
with io.open(_getpath(f), encoding='utf-8') as ffile:
IsADirectoryError: [Errno 21] Is a directory: '/app/../config.toml'
[poe-openai-proxy-api-1] container log:
panic: read config.toml: is a directory

goroutine 1 [running]:
github.com/juzeon/poe-openai-proxy/conf.Setup()
/app/conf/conf.go:45 +0x5be
main.main()
/app/main.go:12 +0x1d
After creating the configuration file following your tutorial, I deployed with Docker and it reported success. The command output was:
✔ Network poe-openai-proxy_default Created 0.3s
✔ Container poe-openai-proxy-external-1 Started 2.7s
✔ Container poe-openai-proxy-api-1 Started 3.2s
But when I check the containers in the panel, they are stopped, the ports are not in use, and they fail to start; the logs are shown above. Please help me figure out what's wrong!

Docker deployment: requests return 500

Running under Docker gives an error:
ERROR: registering client error: Post "http://localhost:5100/add_token": dial tcp 127.0.0.1:5100: connect: connection refused
Then POST requests return 500:
[GIN] 2023/06/26 - 04:54:53 | 500 | 2.694727ms | xxxxxx | POST "/v1/chat/completions"
