llm-red-team / qwen-free-api

🚀 Reverse-engineered API of the Alibaba Tongyi Qianwen (Qwen) large model for free testing [specialty: all-rounder]. Supports high-speed streaming output, watermark-free AI image generation, long-document analysis, image analysis, and multi-turn conversation, with zero-configuration deployment, multi-token support, and automatic cleanup of conversation traces.

Home Page: https://udify.app/chat/qOXzVl5kkvhQXM8r

License: GNU General Public License v3.0

Dockerfile 0.48% TypeScript 99.25% HTML 0.27%
chat-api chatbot chatgpt-api llm qwen tongyi qwen-api

qwen-free-api's Introduction

Qwen AI Free Service

Supports high-speed streaming output, multi-turn conversation, watermark-free AI image generation, long-document analysis, and image analysis, with zero-configuration deployment, multi-token support, and automatic cleanup of conversation traces.

Fully compatible with the ChatGPT API.

The following six free-api projects are also worth checking out:

Moonshot AI (Kimi.ai) interface to API: kimi-free-api

StepFun (StepChat / 跃问) interface to API: step-free-api

Zhipu AI (智谱清言) interface to API: glm-free-api

Metaso AI (Metaso) interface to API: metaso-free-api

iFlytek Spark (讯飞星火) interface to API: spark-free-api

Emohaa AI (聆心智能 Emohaa) interface to API: emohaa-free-api

Table of Contents

Disclaimer

Reverse-engineered APIs are unstable. It is recommended to use the official paid Alibaba Cloud API at https://dashscope.console.aliyun.com/ instead, to avoid the risk of being banned.

This organization and its members do not accept any financial donations or transactions. This project is purely for research, exchange, and learning purposes!

For personal use only. Providing the service to others or using it commercially is prohibited, so as not to put pressure on the official service; otherwise, you bear the risk yourself!

Online Demo

This link is for temporary feature testing only. For long-term use, please deploy your own instance.

https://udify.app/chat/qOXzVl5kkvhQXM8r

Examples

Identity check demo

Multi-turn conversation demo

AI image generation demo

Long-document analysis demo

Image analysis demo

10-thread concurrency test

Prerequisites

Log in to Tongyi Qianwen.

Start any conversation in Tongyi Qianwen, then press F12 to open the developer tools and find the value of login_tongyi_ticket under Application > Cookies. This value is used as the Bearer Token of the Authorization header: Authorization: Bearer TOKEN

[screenshot: obtaining login_tongyi_ticket]

Multi-account support

You can provide the login_tongyi_ticket values of multiple accounts, joined with , (commas):

Authorization: Bearer TOKEN1,TOKEN2,TOKEN3

One of them is picked for each request.
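As a sketch of what a request with multiple tokens looks like (the address 127.0.0.1:8000 and the TOKEN values are placeholders for your own deployment address and your actual tickets):

curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Authorization: Bearer TOKEN1,TOKEN2,TOKEN3" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen", "messages": [{"role": "user", "content": "你好"}]}'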

Docker deployment

Prepare a server with a public IP address and open port 8000.

Pull the image and start the service

docker run -it -d --init --name qwen-free-api -p 8000:8000 -e TZ=Asia/Shanghai vinlic/qwen-free-api:latest

View real-time service logs

docker logs -f qwen-free-api

Restart the service

docker restart qwen-free-api

Stop the service

docker stop qwen-free-api

Docker-compose deployment

version: '3'

services:
  qwen-free-api:
    container_name: qwen-free-api
    image: vinlic/qwen-free-api:latest
    restart: always
    ports:
      - "8000:8000"
    environment:
      - TZ=Asia/Shanghai
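A minimal way to run this file, assuming it is saved as docker-compose.yml in the current directory (older Docker installations use the hyphenated docker-compose command instead of docker compose):

docker compose up -d
docker compose logs -f qwen-free-api
docker compose down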

Render deployment

Note: some deployment regions may not be able to reach qwen. If the container logs show request timeouts or connection failures, please switch to another deployment region!

Note: container instances on free accounts stop automatically after a period of inactivity, which causes a delay of 50 seconds or more on the next request. It is recommended to look into keeping the Render container alive.

  1. Fork this project to your GitHub account.

  2. Visit Render and log in with your GitHub account.

  3. Build your Web Service (New+ -> Build and deploy from a Git repository -> Connect your forked project -> choose a deployment region -> choose the Free instance type -> Create Web Service).

  4. After the build completes, copy the assigned domain, append the API path to the URL, and access the service.

Vercel deployment

Note: the request timeout for Vercel free accounts is 10 seconds, but this API usually takes longer to respond, so you may run into 504 timeout errors returned by Vercel!

Please make sure the Node.js environment is installed first.

npm i -g vercel --registry http://registry.npmmirror.com
vercel login
git clone https://github.com/LLM-Red-Team/qwen-free-api
cd qwen-free-api
vercel --prod

Native deployment

Prepare a server with a public IP address and open port 8000.

Install the Node.js environment first, configure the environment variables, and confirm that the node command is available.

Install dependencies

npm i

Install PM2 for process daemonization

npm i -g pm2

Build the project; the build is complete once the dist directory appears

npm run build

Start the service

pm2 start dist/index.js --name "qwen-free-api"

View real-time service logs

pm2 logs qwen-free-api

Restart the service

pm2 reload qwen-free-api

Stop the service

pm2 stop qwen-free-api

Recommended clients

Using the following forked clients to access the free-api series of projects is faster and simpler, and they support document/image uploads!

LobeChat forked by Clivia: https://github.com/Yanyutin753/lobe-chat

ChatGPT Web forked by 时光@: https://github.com/SuYxh/chatgpt-web-sea

API Endpoints

Currently the OpenAI-compatible /v1/chat/completions endpoint is supported. You can access it with OpenAI-compatible clients or other compatible clients, or connect it to online services such as dify.

Chat completion

Chat completion endpoint, compatible with OpenAI's chat-completions-api.

POST /v1/chat/completions

The Authorization header must be set:

Authorization: Bearer [login_tongyi_ticket]

Request body:

{
    // The model name can be anything
    "model": "qwen",
    "messages": [
        {
            "role": "user",
            "content": "你是谁?"
        }
    ],
    // Set to true to use SSE streaming; defaults to false
    "stream": false
}

Response body:

{
    "id": "4c4267e7919a41baad8199414ceb5cea",
    "model": "qwen",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "我是阿里云研发的超大规模语言模型,我叫通义千问。"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 1,
        "completion_tokens": 1,
        "total_tokens": 2
    },
    "created": 1710152062
}
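As a sketch, a non-streaming call to a locally deployed instance can be made with curl (the address 127.0.0.1:8000 is a placeholder for wherever you deployed the service, and [login_tongyi_ticket] should be replaced with your actual ticket):

curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer [login_tongyi_ticket]" \
  -d '{
    "model": "qwen",
    "messages": [{"role": "user", "content": "你是谁?"}],
    "stream": false
  }'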

AI image generation

Image generation endpoint, compatible with OpenAI's images-create-api.

POST /v1/images/generations

The Authorization header must be set:

Authorization: Bearer [login_tongyi_ticket]

Request body:

{
    // The model name can be anything
    "model": "wanxiang",
    "prompt": "一只可爱的猫"
}

Response body:

{
    "created": 1711507734,
    "data": [
        {
            "url": "https://wanx.alicdn.com/wanx/1111111111/text_to_image/7248e85cfda6491aae59c54e7e679b17_0.png"
        }
    ]
}
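A minimal curl sketch for this endpoint (again, the host is a placeholder for your own deployment):

curl http://127.0.0.1:8000/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer [login_tongyi_ticket]" \
  -d '{
    "model": "wanxiang",
    "prompt": "一只可爱的猫"
  }'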

Document analysis

Provide an accessible file URL or a BASE64_URL to be analyzed.

POST /v1/chat/completions

The Authorization header must be set:

Authorization: Bearer [login_tongyi_ticket]

Request body:

{
    "model": "qwen",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "file",
                    "file_url": {
                        "url": "https://mj101-1317487292.cos.ap-shanghai.myqcloud.com/ai/test.pdf"
                    }
                },
                {
                    "type": "text",
                    "text": "文档里说了什么?"
                }
            ]
        }
    ]
}

Response body:

{
    "id": "b56ea6c9e86140429fa2de6a6ec028ff",
    "model": "qwen",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "文档中包含了四个古代魔法仪式或咒语的描述,它们似乎旨在影响或控制一个特定女性的情感和行为,使其对施术者产生强烈的爱意。以下是每个仪式的具体内容:\n\n1. **仪式一**(PMG 4.1390 – 1495):\n   - 施术者需留下一些面包,将其掰成七小块。\n   - 前往一处英雄、角斗士或其他暴力死亡者丧生的地方。\n   - 对着面包碎片念诵咒语后丢弃,并从该地取一些受污染的泥土扔进目标女性的住所。\n   - 咒语内容包括向命运三女神(Moirai)、罗马版的命运女神(Fates)、自然力量(Daemons)、饥荒与嫉妒之神以及非正常死亡者献祭食物,并请求他们以痛苦折磨目标,使她在梦中惊醒,心生忧虑与恐惧,最终跟随施术者的步伐并顺从其意愿。此过程以赫卡忒(Hecate)女神为命令的源泉。\n\n2. **仪式二**(PMG 4.1342 – 57):\n   - 施术者召唤恶魔(Daemon),通过一系列神秘的神祇名号(如Erekisephthe Araracharara Ephthesikere)要求其将名为Tereous的女子(Apia所生)带至施术者Didymos(Taipiam所生)身边。\n   - 请求该女子在灵魂、心智及女性器官上遭受剧烈痛苦,直至她主动找寻Didymos并与之紧密相连(唇对唇、发对发、腹部对腹部)。整个过程要求立即执行。\n\n3. **仪式三**(PGM 4.1265 – 74):\n   - 揭示了阿佛洛狄忒(Aphrodite)鲜为人知的名字——NEPHERIĒRI[nfr-iry-t]。\n   - 如果想赢得一位美丽女子的芳心,施术者应保持三天纯净,献上乳香,并在心中默念该名字七次。\n   - 这样的做法需持续七天,据说这样便能成功吸引女子。\n\n4. **仪式四**(PGM 4.1496 – 1):\n   - 施术者在燃烧的煤炭上供奉没药(myrrh),同时念诵咒语。\n   - 咒语将没药称为“苦涩的调和者”、“热力的激发者”,并命令它前往指定的女子(及其母亲的名字)处,阻止她进行日常活动(如坐、饮、食、注视他人、亲吻他人),迫使她心中只有施术者,对其产生强烈的欲望与爱意。\n   - 咒语还指示没药直接穿透女子的灵魂,驻留在其心中,焚烧其内脏、胸部、肝脏、气息、骨骼、骨髓,直到她来到施术者身边。\n\n这些仪式反映了古代魔法实践中试图借助超自然力量操控他人情感与行为的企图,涉及对神灵、恶魔、神秘名字及特定物质(如面包、泥土、乳香、没药)的运用,通常伴随着严格的仪式规程和咒语念诵。此类行为在现代伦理和法律框架下被视为不恰当甚至违法,且缺乏科学依据。"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 1,
        "completion_tokens": 1,
        "total_tokens": 2
    },
    "created": 1712253736
}
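A BASE64_URL can be used in place of a plain URL. As a hedged sketch (assuming a standard data: URI is what is expected here, which this README does not spell out), a local PDF could be turned into such a value on Linux like this:

# GNU coreutils base64: -w 0 disables line wrapping so the output is a single line
base64 -w 0 test.pdf
# then use the result in the request body as, for example:
# "url": "data:application/pdf;base64,<output of the command above>"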

Image analysis

Provide an accessible image URL or a BASE64_URL to be analyzed.

This format is compatible with the gpt-4-vision-preview API format, and you can also use it to send documents for analysis.

POST /v1/chat/completions

The Authorization header must be set:

Authorization: Bearer [login_tongyi_ticket]

Request body:

{
    "model": "qwen",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "file",
                    "file_url": {
                        "url": "https://img.alicdn.com/imgextra/i1/O1CN01CC9kic1ig1r4sAY5d_!!6000000004441-2-tps-880-210.png"
                    }
                },
                {
                    "type": "text",
                    "text": "图像描述了什么?"
                }
            ]
        }
    ]
}

Response body:

{
    "id": "895fbe7fa22442d499ba67bb5213e842",
    "model": "qwen",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "图像展示了通义千问的标志,一个紫色的六边形和一个蓝色的三角形,以及“通义千问”四个白色的汉字。"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 1,
        "completion_tokens": 1,
        "total_tokens": 2
    },
    "created": 1712254066
}

login_tongyi_ticket liveness check

Checks whether a login_tongyi_ticket is still alive. If it is alive, live is true; otherwise it is false. Please do not call this endpoint frequently (more often than every 10 minutes).

POST /token/check

Request body:

{
    "token": "QIhaHrrXUaIrWMUmL..."
}

Response body:

{
    "live": true
}
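For example, a quick check from the command line (the host is a placeholder for your deployment, and the token value is the ticket you want to verify):

curl http://127.0.0.1:8000/token/check \
  -H "Content-Type: application/json" \
  -d '{"token": "QIhaHrrXUaIrWMUmL..."}'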

Notes

Nginx reverse proxy optimization

If you are using Nginx as a reverse proxy in front of qwen-free-api, add the following configuration to optimize the streaming output and improve the experience.

# Disable proxy buffering. When set to off, Nginx immediately forwards client requests to the backend server and immediately forwards responses received from the backend back to the client.
proxy_buffering off;
# Enable chunked transfer encoding, which allows the server to send dynamically generated content in chunks without knowing its size in advance.
chunked_transfer_encoding on;
# Enable TCP_NOPUSH, which tells Nginx to send as much data as possible before pushing packets to the client. This is usually used together with sendfile and can improve network efficiency.
tcp_nopush on;
# Enable TCP_NODELAY, which tells Nginx not to delay sending data and to send small packets immediately. In some cases this can reduce latency.
tcp_nodelay on;
# Set the keep-alive timeout, here 120 seconds. If there is no further communication between the client and the server within this period, the connection is closed.
keepalive_timeout 120;
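As a sketch of where these directives would typically live (the server_name and the upstream address 127.0.0.1:8000 are assumptions for a single-host deployment, not taken from this README), they belong in the server/location block that proxies to qwen-free-api:

server {
    listen 80;
    server_name example.com;                # placeholder domain
    location / {
        proxy_pass http://127.0.0.1:8000;   # assumed local qwen-free-api instance
        proxy_buffering off;
        chunked_transfer_encoding on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 120;
    }
}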

Token counting

Since inference does not happen inside qwen-free-api, token usage cannot be counted and fixed numbers are returned instead.

Star History

Star History Chart

qwen-free-api's People

Contributors

kpcofgs, vinlic, yanyutin753


qwen-free-api's Issues

Streaming output does not seem to work properly

It takes a long time before anything is output, then a large chunk arrives at once, then it stalls again, and after another long wait the next large chunk appears.
I installed it directly with the docker command, without any adjustments.
I also installed kimi-free and glm-free the same way; both of those stream normally, and only qwen-free streams abnormally, so it should not be something I did, since there is not much to configure anyway.

Problem with image generation

Connected through one-api and using NextChat.
User: "Generate an image of a puppy"
AI: Generating...
No image ever appears.
On the official site it also says it is generating; it is rather slow, but an image does eventually show up.

Why does my deployment always return ChatGPT answers?

The WebUI is NextChat; no matter what I ask, it always returns strange answers.
The kimi version has no such problem.

compose.yaml
version: "3.3"
services:
  qwen-free-api:
    stdin_open: true
    tty: true
    init: true
    container_name: qwen-free-api
    ports:
      - 8006:8000
    environment:
      - TZ=Asia/Shanghai
    image: vinlic/qwen-free-api:latest
networks: {}

LOG:

qwen-free-api | [2024-03-31 01:12:01.596][success][index<1224,22>] Stream has completed transfer 4ms
qwen-free-api | [2024-03-31 01:12:18.929][info][index<1029,26>] -> POST /v1/chat/completions
qwen-free-api | [2024-03-31 01:12:18.930][info][index<1196,20>] [
qwen-free-api | {
qwen-free-api | role: 'system',
qwen-free-api | content: '\n' +
qwen-free-api | 'You are ChatGPT, a large language model trained by OpenAI.\n' +
qwen-free-api | 'Knowledge cutoff: 2021-09\n' +
qwen-free-api | 'Current model: gpt-4\n' +
qwen-free-api | 'Current time: 2024/3/31 01:12:18\n' +
qwen-free-api | 'Latex inline: $x^2$ \n' +
qwen-free-api | 'Latex block: $$e=mc^2$$\n' +
qwen-free-api | '\n'
qwen-free-api | },
qwen-free-api | { role: 'user', content: '你是谁开发的' }
qwen-free-api | ]
qwen-free-api | [2024-03-31 01:12:20.291][info][index<1001,30>] <- POST /v1/chat/completions 1362ms
qwen-free-api | [2024-03-31 01:12:20.293][success][index<1224,22>] Stream has completed transfer 2ms
qwen-free-api |
qwen-free-api | > [email protected] start
qwen-free-api | > node dist/index.js
qwen-free-api |
qwen-free-api | [2024-04-01 13:05:13.941][success][index<980,20>] Server initialized
qwen-free-api | [2024-04-01 13:05:13.960][info][index<1688,18>] <<<< qwen free server >>>>
qwen-free-api | [2024-04-01 13:05:13.962][info][index<1689,18>] Version: 0.0.7
qwen-free-api | [2024-04-01 13:05:13.963][info][index<1690,18>] Process id: 19
qwen-free-api | [2024-04-01 13:05:13.965][info][index<1691,18>] Environment: dev
qwen-free-api | [2024-04-01 13:05:13.965][info][index<1692,18>] Service name: qwen-free-api
qwen-free-api | [2024-04-01 13:05:13.970][info][index<1005,22>] Route /v1/chat attached
qwen-free-api | [2024-04-01 13:05:13.971][info][index<1005,22>] Route /v1/images attached
qwen-free-api | [2024-04-01 13:05:13.973][info][index<1005,22>] Route /ping attached
qwen-free-api | [2024-04-01 13:05:13.989][success][index<1094,20>] Server listening on port 8000 (0.0.0.0)
qwen-free-api | [2024-04-01 13:05:13.990][success][index<1697,24>] Service startup completed (32ms)
qwen-free-api |
qwen-free-api | > [email protected] start
qwen-free-api | > node dist/index.js
qwen-free-api |
qwen-free-api | [2024-04-01 22:49:18.599][success][index<980,20>] Server initialized
qwen-free-api | [2024-04-01 22:49:18.617][info][index<1688,18>] <<<< qwen free server >>>>
qwen-free-api | [2024-04-01 22:49:18.618][info][index<1689,18>] Version: 0.0.7
qwen-free-api | [2024-04-01 22:49:18.619][info][index<1690,18>] Process id: 19
qwen-free-api | [2024-04-01 22:49:18.621][info][index<1691,18>] Environment: dev
qwen-free-api | [2024-04-01 22:49:18.622][info][index<1692,18>] Service name: qwen-free-api
qwen-free-api | [2024-04-01 22:49:18.626][info][index<1005,22>] Route /v1/chat attached
qwen-free-api | [2024-04-01 22:49:18.628][info][index<1005,22>] Route /v1/images attached
qwen-free-api | [2024-04-01 22:49:18.629][info][index<1005,22>] Route /ping attached
qwen-free-api | [2024-04-01 22:49:18.643][success][index<1094,20>] Server listening on port 8000 (0.0.0.0)
qwen-free-api | [2024-04-01 22:49:18.644][success][index<1697,24>] Service startup completed (30ms)
qwen-free-api | [2024-04-02 20:10:16.439][info][index<1029,26>] -> POST /v1/chat/completions?path=v1&path=chat&path=completions
qwen-free-api | [2024-04-02 20:10:16.451][info][index<1196,20>] [
qwen-free-api | {
qwen-free-api | role: 'system',
qwen-free-api | content: '\n' +
qwen-free-api | 'You are ChatGPT, a large language model trained by OpenAI.\n' +
qwen-free-api | 'Knowledge cutoff: 2021-09\n' +
qwen-free-api | 'Current model: gpt-3.5-turbo\n' +
qwen-free-api | 'Current time: 2024/4/2 20:10:15\n' +
qwen-free-api | 'Latex inline: $x^2$ \n' +
qwen-free-api | 'Latex block: $$e=mc^2$$\n' +
qwen-free-api | '\n'
qwen-free-api | },
qwen-free-api | { role: 'user', content: '你是谁' }
qwen-free-api | ]
qwen-free-api | [2024-04-02 20:10:24.536][info][index<1001,30>] <- POST /v1/chat/completions?path=v1&path=chat&path=completions 8095ms
qwen-free-api | [2024-04-02 20:10:24.594][success][index<1224,22>] Stream has completed transfer 63ms
qwen-free-api | [2024-04-02 20:10:24.740][info][index<1029,26>] -> POST /v1/chat/completions?path=v1&path=chat&path=completions
qwen-free-api | [2024-04-02 20:10:24.741][info][index<1149,20>] [
qwen-free-api | { role: 'user', content: '你是谁' },
qwen-free-api | {
qwen-free-api | role: 'assistant',
qwen-free-api | content: '我是ChatGPT,由OpenAI训练的大型语言模型。我的知识截止日期为2021年9月,当前模型版本为gpt-3.5-turbo。现在的时间是2024年4月2日20点10分15秒。我可以使用LaTeX格式呈现数学表达式,例如:行内样式为$x^2$,块状样式为$$e=mc^2$$。有什么我可以帮助您的吗?'
qwen-free-api | },
qwen-free-api | {
qwen-free-api | role: 'user',
qwen-free-api | content: '使用四到五个字直接返回这句话的简要主题,不要解释、不要标点、不要语气词、不要多余文本,不要加粗,如果没有主题,请直接返回“闲聊”'
qwen-free-api | }
qwen-free-api | ]
qwen-free-api | {
qwen-free-api | content: '闲',
qwen-free-api | contentType: 'text',
qwen-free-api | id: '7cb71506e34c4c17a70d93186d30e163_0',
qwen-free-api | role: 'assistant',
qwen-free-api | status: 'generating'
qwen-free-api | }
qwen-free-api | {
qwen-free-api | content: '闲聊',
qwen-free-api | contentType: 'text',
qwen-free-api | id: '7cb71506e34c4c17a70d93186d30e163_0',
qwen-free-api | role: 'assistant',
qwen-free-api | status: 'generating'
qwen-free-api | }
qwen-free-api | {
qwen-free-api | content: '闲聊',
qwen-free-api | contentType: 'text',
qwen-free-api | id: '7cb71506e34c4c17a70d93186d30e163_0',
qwen-free-api | role: 'assistant',
qwen-free-api | status: 'finished'
qwen-free-api | }
qwen-free-api | [2024-04-02 20:10:25.537][success][index<1177,20>] Stream has completed transfer 6ms
qwen-free-api | [2024-04-02 20:10:25.543][info][index<1001,30>] <- POST /v1/chat/completions?path=v1&path=chat&path=completions 802ms
qwen-free-api | [2024-04-02 20:14:07.943][info][index<1029,26>] -> POST /v1/chat/completions?path=v1&path=chat&path=completions
qwen-free-api | [2024-04-02 20:14:07.943][info][index<1196,20>] [
qwen-free-api | {
qwen-free-api | role: 'system',
qwen-free-api | content: '\n' +
qwen-free-api | 'You are ChatGPT, a large language model trained by OpenAI.\n' +
qwen-free-api | 'Knowledge cutoff: 2021-09\n' +
qwen-free-api | 'Current model: gpt-3.5-turbo\n' +
qwen-free-api | 'Current time: 2024/4/2 20:14:06\n' +
qwen-free-api | 'Latex inline: $x^2$ \n' +
qwen-free-api | 'Latex block: $$e=mc^2$$\n' +
qwen-free-api | '\n'
qwen-free-api | },
qwen-free-api | { role: 'user', content: '你是谁' },
qwen-free-api | {
qwen-free-api | role: 'assistant',
qwen-free-api | content: '我是ChatGPT,由OpenAI训练的大型语言模型。我的知识截止日期为2021年9月,当前模型版本为gpt-3.5-turbo。现在的时间是2024年4月2日20点10分15秒。我可以使用LaTeX格式呈现数学表达式,例如:行内样式为$x^2$,块状样式为$$e=mc^2$$。有什么我可以帮助您的吗?'
qwen-free-api | },
qwen-free-api | { role: 'user', content: '鲁迅是谁' }
qwen-free-api | ]
qwen-free-api | [2024-04-02 20:14:21.080][info][index<1001,30>] <- POST /v1/chat/completions?path=v1&path=chat&path=completions 13138ms
qwen-free-api | [2024-04-02 20:14:51.486][success][index<1224,22>] Stream has completed transfer 30407ms
qwen-free-api | [2024-04-02 20:14:51.614][info][index<1029,26>] -> POST /v1/chat/completions?path=v1&path=chat&path=completions
qwen-free-api | [2024-04-02 20:14:51.614][info][index<1196,20>] [
qwen-free-api | { role: 'system', content: '' },
qwen-free-api | '\n' +
qwen-free-api | '此外,鲁迅在翻译、美术理论引进、基础科学介绍、古籍校勘与研究等领域也有显著贡献。他翻译了许多外国文学作品,促进了东西方文化交流;关注并推动了新兴木刻运动,对**的现代美术发展起到了积极推动作用。\n' +
qwen-free-api | '\n' +
qwen-free-api | '***曾评价鲁迅为“**文化革命的主将”,认为“鲁迅的方向,就是中华民族新文化的方向。”鲁迅在国内外享有极高声誉,尤其是在东亚文化圈,如日本、韩国等地,其影响力尤为显著,被尊为“二十世纪东亚文化地图上占最大领土的作家”。\n' +
qwen-free-api | '\n' +
qwen-free-api | '综上所述,鲁迅是一位在**乃至世界文化史上占有重要地位的全能型文化巨匠,其文学创作、**成果和革命实践对**近现代文化的转型与发展产生了深远影响。'
qwen-free-api | },
qwen-free-api | {
qwen-free-api | role: 'system',
qwen-free-api | content: '简要总结一下对话内容,用作后续的上下文提示 prompt,控制在 200 字以内'
qwen-free-api | }
qwen-free-api | ]
qwen-free-api | [2024-04-02 20:15:04.429][info][index<1001,30>] <- POST /v1/chat/completions?path=v1&path=chat&path=completions 12815ms

Error after Vercel deployment

The link of the deployed service returns:
404: NOT_FOUND
Code: NOT_FOUND
ID: cle1::tc5cp-1713763543805-85c2ae99bdcc

[请求qwen失败]: STRING_IS_BLANK-sessionId不能为空

The project is very good, but this error sometimes occurs:
APIException [Error]: [请求qwen失败]: STRING_IS_BLANK-sessionId不能为空
at checkResult (file:///dist/index.js:1380:9)
at removeConversation (file:///dist/index.js:1154:3)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
errcode: -2001,
errmsg: '[请求qwen失败]: STRING_IS_BLANK-sessionId不能为空',
data: undefined,
httpStatusCode: undefined
}
