chenyme / chenyme-aavt
This is a fully automatic (audio) video translation project. It uses Whisper for speech recognition, a large language model to translate the subtitles, and finally merges the subtitles back into the video to produce a translated video.
License: MIT License
Currently only video file uploads are supported; it would be great to also support extracting subtitles directly from audio files.
The background console shows the following:
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
Installed according to install.bat:
pip install streamlit -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -U openai-whisper -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install openai -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install langchain -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install torch torchvision torchaudio -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install faster-whisper -i https://pypi.tuna.tsinghua.edu.cn/simple
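The OMP error above names an unsafe stopgap itself. As a sketch of that workaround (assuming the duplicate runtime comes from two wheels, e.g. torch plus another OpenMP-linked library), the variable can be set before those imports:

```python
import os

# Unsafe, unsupported workaround quoted in the OMP error message itself:
# allow duplicate OpenMP runtimes to coexist. Must be set before importing
# torch / faster-whisper, e.g. at the very top of the entry script.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```

The proper fix remains ensuring only a single OpenMP runtime is linked into the process, as the hint says.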
After startup, GPU acceleration fails to run, even though /workspace/venv/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_ops_infer.so.8 exists:
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Please make sure libcudnn_ops_infer.so.8 is in your library path!
root@ae950ec2447b:/workspace# find / -type f -name libcudnn_ops_infer.so.8
/opt/conda/lib/python3.10/site-packages/torch/lib/libcudnn_ops_infer.so.8
/opt/conda/pkgs/pytorch-2.1.2-py3.10_cuda11.8_cudnn8.7.0_0/lib/python3.10/site-packages/torch/lib/libcudnn_ops_infer.so.8
find: '/proc/17/task/17/net': Invalid argument
find: '/proc/17/net': Invalid argument
find: '/proc/18/task/18/net': Invalid argument
find: '/proc/18/net': Invalid argument
find: '/proc/19/task/19/net': Invalid argument
find: '/proc/19/net': Invalid argument
find: '/sys/kernel/slab': Input/output error
/workspace/venv/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_ops_infer.so.8
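One way to make the pip-installed cuDNN visible to faster-whisper is to preload the library before it initializes. This is a sketch assuming the `nvidia-cudnn` wheel layout shown in the `find` output above; the exact site-packages path may differ:

```python
import ctypes
import glob
import os
import site

def preload_cudnn_ops():
    """Search the nvidia-cudnn pip wheel inside site-packages and preload
    libcudnn_ops_infer so that later dlopen() calls can resolve it.
    Returns the loaded path, or None when no matching library is found."""
    candidates = site.getsitepackages() + [site.getusersitepackages()]
    for base in candidates:
        pattern = os.path.join(base, "nvidia", "cudnn", "lib",
                               "libcudnn_ops_infer.so.*")
        for lib in sorted(glob.glob(pattern)):
            ctypes.CDLL(lib)  # keep the library resident in the process
            return lib
    return None

lib_path = preload_cudnn_ops()
```

Alternatively, exporting `LD_LIBRARY_PATH` with that `nvidia/cudnn/lib` directory before launching the app achieves the same thing.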
I left the program running in the background for more than two hours while watching a movie. Then Edge killed the backgrounded tab; the command-line window was still running, but everything shown on the web page looked as if it had been reset, and the finished results could not be seen at all.
Deployed locally; is this perhaps a cross-origin problem? Has no one else hit this?
The open-source Whisper speech-to-text software I use can run conversion on AMD; does this software support AMD GPUs?
Suggest writing this into the README.
Which platform is Kimi from? What is the link?
After running install.bat I couldn't figure out how to use it. I want to uninstall it; how do I do that?
LocalEntryNotFoundError: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
As per the title.
First of all, thanks to the author for open-sourcing this project. Overall it is very usable and I like it a lot.
Below are a few bugs I ran into; I hope they help make the project better:
In project/video.py:

1. The value assigned to vad should be a boolean rather than a string, i.e. vad = True if VAD_on else False. In the current implementation, vad is enabled no matter what is chosen in the UI.
2. language2 will cause an undefined reference in the subsequent call to the local_translate function; also, language = ('中文', 'English', '日本語', '한국인', 'Italiano', 'Deutsch') could perhaps be assigned earlier (e.g. at line 93) so that it covers the different translation settings.
3. About the translation prompt: when testing with the fairly weak, locally deployed ChatGLM3-6B-int4, the current prompt translates poorly and the model outputs a lot of filler. I changed the prompt to the following, which produces translations without filler:
messages=[
{
"role": "user",
"content": f"请将下列括号内的文本翻译为{language2},只需直接回答翻译后的文本。\n[{text}]"}
])
Just my personal opinion, for reference only.
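The vad and language fixes described above could be sketched as follows (names follow the report; the exact surrounding context in project/video.py may differ):

```python
# Hypothetical UI value: True when the user ticked the VAD checkbox.
VAD_on = True

# Fix 1: store a real boolean, not the string "True"/"False" --
# a non-empty string is always truthy, so VAD was effectively always on.
vad = True if VAD_on else False   # or simply: vad = bool(VAD_on)

# Fix 2: assign the language tuple before any branch that reads language2,
# so local_translate never sees an undefined name.
language = ('中文', 'English', '日本語', '한국인', 'Italiano', 'Deutsch')
```
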
Debian is best
On Windows, running it according to the instructions in README.md has no effect at all??
Part of the functionality is implemented and it looks quite good.
The background-audio and volume-leveling problems probably need solving, otherwise words get dropped and hallucinations appear easily.
Gemini Pro currently offers a certain free quota, with functionality similar to GPT-4.
I hope support for the Gemini Pro API can be added; see its documentation for reference.
After opening the WebUI, Chrome's console reports errors in the background. The frontend still responds normally, but clicking "Run" errors out anyway:
Failed to load resource: net::ERR_NAME_NOT_RESOLVED
main.eccc579f.js:2
GET http://localhost:8501/%E9%9F%B3%E9%A2%91(Audio)/_stcore/health net::ERR_CONNECTION_REFUSED
(anonymous) @ main.eccc579f.js:2
xhr @ main.eccc579f.js:2
ke @ main.eccc579f.js:2
_request @ main.eccc579f.js:2
request @ main.eccc579f.js:2
P.forEach.Be. @ main.eccc579f.js:2
(anonymous) @ main.eccc579f.js:2
c @ main.eccc579f.js:2
(anonymous) @ main.eccc579f.js:2
pingServer @ main.eccc579f.js:2
setFsmState @ main.eccc579f.js:2
stepFsm @ main.eccc579f.js:2
websocket.onclose @ main.eccc579f.js:2
main.eccc579f.js:2
GET http://localhost:8501/%E9%9F%B3%E9%A2%91(Audio)/_stcore/host-config net::ERR_CONNECTION_REFUSED
Installing faster-whisper fails on Python 3.12; it works on 3.11.
RateLimitError: Error code: 429 - {'error': {'message': 'max request per minute reached: 3, please try again after 1 seconds', 'type': 'rate_limit_reached_error'}}
Traceback:
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 584, in _run_script
exec(code, module.__dict__)
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\pages\📽️视频(Video).py", line 130, in <module>
result = kimi_translate(st.session_state.kimi_key, translate_option, result, language1, language2, token_num)
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\utils\utils.py", line 190, in kimi_translate
completion = client.chat.completions.create(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\resources\chat\completions.py", line 667, in create
return self._post(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1233, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 922, in request
return self._request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 998, in _request
return self._retry_request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1046, in _retry_request
return self._request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 998, in _request
return self._retry_request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1046, in _retry_request
return self._request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1013, in _request
raise self._make_status_error_from_response(err.response) from None
The log is as above.
Maybe add a configuration option for the maximum requests per minute?
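Until such a setting exists, a client-side workaround is to wrap the chat call in exponential backoff when a 429 arrives. This is a sketch; the internals of kimi_translate are assumed, and with the real client you would catch openai.RateLimitError rather than inspect the message:

```python
import time

def call_with_backoff(make_request, max_retries=5, base_delay=1.0):
    """Retry make_request() with exponential backoff on rate-limit errors.
    Here any exception whose message mentions 429 is treated as a rate
    limit; with the real openai client, catch openai.RateLimitError."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception as exc:
            if "429" not in str(exc) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In kimi_translate this would wrap the `client.chat.completions.create(...)` call shown in the traceback.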
Task directory for this run: D:/idm/AAVT_0.8.4_small/project/cache/2024-07-25 03-16-03
2024-07-25 03:16:07.513 WARNING streamlit:
Warning: to view a Streamlit app on a browser, use Streamlit in a file and
run it with the following command:
streamlit run [FILE_NAME] [ARGUMENTS]
2024-07-25 03:16:07.522 WARNING streamlit.runtime.state.session_state_proxy: Session state does not function when running a script without streamlit run
2024-07-25 03:16:11.833 WARNING streamlit:
Warning: to view a Streamlit app on a browser, use Streamlit in a file and
run it with the following command:
streamlit run [FILE_NAME] [ARGUMENTS]
2024-07-25 03:16:11.843 WARNING streamlit.runtime.state.session_state_proxy: Session state does not function when running a script without streamlit run
*** Faster Whisper local model loading mode ***
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 129, in _main
return self._bootstrap(parent_sentinel)
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "D:\idm\AAVT_0.8.4_small\project\utils\utils2.py", line 143, in faster_whisper_result
segments, _ = model.transcribe(file_path,
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\transcribe.py", line 333, in transcribe
speech_chunks = get_speech_timestamps(audio, vad_parameters)
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 74, in get_speech_timestamps
model = get_vad_model()
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 229, in get_vad_model
return SileroVADModel(path)
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 235, in __init__
import onnxruntime
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\onnxruntime\__init__.py", line 23, in <module>
from onnxruntime.capi._pybind_state import ExecutionMode  # noqa: F401
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
from .onnxruntime_pybind11_state import *  # noqa
AttributeError: _ARRAY_API not found
Process Process-2:
Traceback (most recent call last):
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 235, in __init__
import onnxruntime
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\onnxruntime\__init__.py", line 57, in <module>
raise import_capi_exception
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\onnxruntime\__init__.py", line 23, in <module>
from onnxruntime.capi._pybind_state import ExecutionMode  # noqa: F401
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
from .onnxruntime_pybind11_state import *  # noqa
ImportError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "D:\idm\AAVT_0.8.4_small\project\utils\utils2.py", line 143, in faster_whisper_result
segments, _ = model.transcribe(file_path,
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\transcribe.py", line 333, in transcribe
speech_chunks = get_speech_timestamps(audio, vad_parameters)
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 74, in get_speech_timestamps
model = get_vad_model()
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 229, in get_vad_model
return SileroVADModel(path)
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\faster_whisper\vad.py", line 237, in __init__
raise RuntimeError(
RuntimeError: Applying the VAD filter requires the onnxruntime package
Generating the SRT subtitle file
2024-07-25 03:16:17.843 Uncaught app exception
Traceback (most recent call last):
File "D:\idm\AAVT_0.8.4_small\env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
exec(code, module.__dict__)
File "D:\idm\AAVT_0.8.4_small\Chenyme-AAVT.py", line 42, in <module>
media()
File "D:\idm\AAVT_0.8.4_small\project\media.py", line 457, in media
srt_content = generate_srt_from_result(result)
File "D:\idm\AAVT_0.8.4_small\project\utils\utils2.py", line 525, in generate_srt_from_result
segments = result['segments']
TypeError: 'NoneType' object is not subscriptable
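The final crash happens because the transcription step failed (see the onnxruntime errors above) and returned None, which generate_srt_from_result then subscripts. A defensive check would surface the real cause instead. This is a sketch; the real function in project/utils/utils2.py builds the SRT body differently:

```python
def generate_srt_from_result(result):
    """Build SRT text from a Whisper result dict; fail loudly when the
    upstream transcription step returned nothing."""
    if not result or "segments" not in result:
        raise ValueError(
            "Transcription produced no result - check the Whisper step's "
            "log (e.g. the onnxruntime / VAD error) before this point."
        )
    lines = []
    for i, seg in enumerate(result["segments"], start=1):
        lines.append(f"{i}\n{seg['start']} --> {seg['end']}\n{seg['text']}\n")
    return "\n".join(lines)
```
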
When running the install script, create a dedicated virtual environment and install the dependency packages inside it, instead of installing into the global Python environment.
APIStatusError: <title>413 Request Entity Too Large</title>
Voice the subtitles after translation finishes.
For example, turn an English video into a Chinese video.
When translating with Kimi, once the subtitles exceed a certain amount translation stops working. For example, once a video is longer than about 10 minutes, it can no longer be translated and the returned subtitles are still in English.
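One way past a per-request size limit like this is to translate the subtitle lines in batches and stitch the results back together. This is a sketch; the character budget and the way kimi_translate actually sends subtitles are assumptions:

```python
def chunk_subtitle_lines(lines, max_chars=3000):
    """Group subtitle lines into batches whose combined length stays
    under max_chars, so each translation request fits the API limit."""
    batches, current, size = [], [], 0
    for line in lines:
        if current and size + len(line) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        batches.append(current)
    return batches
```

Each batch would then be sent through the translator separately, and the translated batches concatenated in their original order.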
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.1.117:8501
Task directory for this run: C:/Users/G1763/Downloads/AAVT_0.8.4_full (1)/AAVT_0.8.4_full/project/cache/2024-07-05 11-52-57
2024-07-05 11:53:03.043 WARNING streamlit:
Warning: to view a Streamlit app on a browser, use Streamlit in a file and
run it with the following command:
streamlit run [FILE_NAME] [ARGUMENTS]
2024-07-05 11:53:03.053 WARNING streamlit.runtime.state.session_state_proxy: Session state does not function when running a script without streamlit run
2024-07-05 11:53:06.299 WARNING streamlit:
Warning: to view a Streamlit app on a browser, use Streamlit in a file and
run it with the following command:
streamlit run [FILE_NAME] [ARGUMENTS]
2024-07-05 11:53:06.308 WARNING streamlit.runtime.state.session_state_proxy: Session state does not function when running a script without streamlit run
*** Faster Whisper local model loading mode ***
2024-07-05 11:53:10.179 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\G1763\Downloads\AAVT_0.8.4_full (1)\AAVT_0.8.4_full\env\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
exec(code, module.__dict__)
File "C:\Users\G1763\Downloads\AAVT_0.8.4_full (1)\AAVT_0.8.4_full\Chenyme-AAVT.py", line 42, in <module>
media()
File "C:\Users\G1763\Downloads\AAVT_0.8.4_full (1)\AAVT_0.8.4_full\project\media.py", line 701, in media
srt_content = generate_srt_from_result(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\G1763\Downloads\AAVT_0.8.4_full (1)\AAVT_0.8.4_full\project\utils\utils2.py", line 525, in generate_srt_from_result
segments = result['segments']
~~~~~~^^^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
Strictly speaking, this issue is not a problem in this project; the root cause lies in the faster-whisper library. I am sharing my solution here in case anyone hits the same problem.
On Windows with an NVIDIA GPU, when the local faster-whisper is selected for the video-to-text task, a known issue in the faster-whisper library itself can make the faster_whisper_result function crash just as it is about to return its result. It shows up as the terminal printing something like the following:
- Whisper recognition output:
柔らかそうな感じなんですよ!
and then crashing immediately, with no error message produced at all. Debuggers such as pdb likewise report nothing; the program's faulting exit can only be seen in the Windows Event Viewer.
Since the problem is faster-whisper's own, following the temporary workaround from that library's issue tracker, multiprocessing can be used to shield the main process: even if the process running faster-whisper crashes, the project's main program survives.
For the concrete code change, see this commit. The core modification is adding a runWhisperSeperateProc function in project/utils/utils2.py, which calls the original faster_whisper_result function in a new process. The new process does not touch st.session_state, so in theory this should be safe.
(Not directly related to this issue: PS, the refactored UI is quite nice; in my own tests, among local LLMs aya:8B stood out in translation ability.)
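The workaround described above can be sketched roughly as follows (a sketch, not the commit's actual code; the demo below assumes a fork-capable platform, while on Windows the spawn start method is used and the worker must live at module level in an importable file):

```python
import multiprocessing as mp

def _worker(queue, file_path):
    # In the real project this would call faster_whisper_result(file_path);
    # a placeholder result stands in for it here.
    queue.put({"file": file_path, "segments": []})

def run_whisper_separate_proc(file_path, timeout=3600):
    """Run the transcription in a child process so that a native crash
    inside faster-whisper only kills the child, not the Streamlit app."""
    method = "fork" if "fork" in mp.get_all_start_methods() else "spawn"
    ctx = mp.get_context(method)
    queue = ctx.Queue()
    proc = ctx.Process(target=_worker, args=(queue, file_path))
    proc.start()
    try:
        # If the child crashes, nothing is ever put on the queue and
        # this raises queue.Empty after the timeout.
        result = queue.get(timeout=timeout)
    except Exception:
        result = None
    proc.join()
    return result
```
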
Alternatively, the already-generated subtitles could be swapped in before merging.
[Bug/Suggestion/Help] Brief description of the bug/suggestion/help in [module name]
Briefly describe the situation, including the name of the module in use and a short description of the error.
Any other information that may help understand and resolve the problem.
2024-03-12 07:30:23.183 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "C:\Chenyme_AAVT_0.6.1\pages\🎙️音频(Audio).py", line 46, in <module>
result = get_whisper_result(uploaded_file, cache_dir, device, w_model_option, w_version, vad)
TypeError: get_whisper_result() missing 3 required positional arguments: 'lang', 'beam_size', and 'min_vad'
When uploading a video: AxiosError: Request failed with status code 403
As per the title.
When translating, I hope custom prompts can be used, so the AI can be told the professional domain of the text to be translated, specific terminology, the target text's language style, and other translation requirements, to improve translation quality.
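A possible shape for this feature is a prompt builder that folds the user's requirements into the translation request. This is a hypothetical helper, not project code; all parameter names are made up:

```python
def build_translation_prompt(text, target_lang, domain=None,
                             glossary=None, style=None):
    """Compose a translation prompt carrying optional domain, terminology
    and style requirements supplied by the user."""
    parts = [f"Translate the text in brackets into {target_lang}."]
    if domain:
        parts.append(f"The text belongs to the field of {domain}.")
    if glossary:
        terms = "; ".join(f"{src} -> {dst}" for src, dst in glossary.items())
        parts.append(f"Use this terminology: {terms}.")
    if style:
        parts.append(f"Target style: {style}.")
    parts.append(f"Reply with the translation only.\n[{text}]")
    return "\n".join(parts)
```

The resulting string would replace the fixed "content" currently sent in the chat messages.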
OSError: [WinError 126] The specified module could not be found. Error loading "D:\AI\AAVT_0.8.4_full\env\Lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
Traceback:
File "D:\AI\AAVT_0.8.4_full\env\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
exec(code, module.__dict__)
File "D:\AI\AAVT_0.8.4_full\Chenyme-AAVT.py", line 3, in <module>
from project.media import media
File "D:\AI\AAVT_0.8.4_full\project\media.py", line 5, in <module>
import torch
File "D:\AI\AAVT_0.8.4_full\env\Lib\site-packages\torch\__init__.py", line 148, in <module>
raise err
Local invocation mode
Loading model: D:/BigModel/Chenyme-AAVT-main/models/medium
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
WIN11
0.9
1.12.5
ルッカースをどうもり合わせってやつですね。 そうです。 という訳で楽しんでいただいて、ください。 これに楽しめるって、そう いうことなんでいいの。 でも、明らうは楽しいです。 んん、ごめんなさい。 えっえーっと こんな形のサンドバックですが えぇ バック 実用制はかいむ 申し訳ございません ああ・・・ ああ・・・ ああ・・・ ああ・・・ これからも アキラのやりたいことと好きなこと あと、刺してることで これからも続いていくと思います あと、トラブル! うん! ああ・・・ これからもよろしくお願いします ああ・・・ めちゃくちゃ 下気持ち悪いって事だけわかりますね という訳で 本日も サイズマッコークリをいただき まっかりに ありがとうございました ハラバンもしましたね あぁ またの コリをお待ちしておりますね 必然いたします
[❌ ERROR] Run failed!
"If fbgemm.dll is reported missing, please use Install and choose the repair version!"
"If cudnn_ops_infer64_8.dll is reported missing, please download the relevant DLL from GitHub!"
"For other errors, please read the FAQ, or ask on GitHub or in the group!"
Step 1: enable GPU acceleration
Step 2: add the file
Step 3: recognize
No response
To be a bit bolder:
Is this roughly the implementation idea behind heygen video translation? Of course, I am a rookie and the real process is surely far more complex. The biggest difficulty here is how to identify the time ranges of the different voices; on top of that there are many other problems, such as background-audio removal and recognition-error calibration.
ValueError: [Errno 22] Invalid argument
File "D:\Chenyme_AAVT_0.6.3_FIixbug\env\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 584, in _run_script
exec(code, module.__dict__)
File "D:\Chenyme_AAVT_0.6.3_FIixbug\pages\📽️视频(Video).py", line 116, in <module>
result = get_whisper_result(uploaded_file, output_file, device, models_option,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Chenyme_AAVT_0.6.3_FIixbug\utils\utils.py", line 82, in get_whisper_result
segments, _ = model.transcribe(path_video,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Chenyme_AAVT_0.6.3_FIixbug\env\Lib\site-packages\faster_whisper\transcribe.py", line 294, in transcribe
audio = decode_audio(audio, sampling_rate=sampling_rate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Chenyme_AAVT_0.6.3_FIixbug\env\Lib\site-packages\faster_whisper\audio.py", line 52, in decode_audio
for frame in frames:
File "D:\Chenyme_AAVT_0.6.3_FIixbug\env\Lib\site-packages\faster_whisper\audio.py", line 103, in _resample_frames
for frame in itertools.chain(frames, [None]):
File "D:\Chenyme_AAVT_0.6.3_FIixbug\env\Lib\site-packages\faster_whisper\audio.py", line 90, in _group_frames
for frame in frames:
File "D:\Chenyme_AAVT_0.6.3_FIixbug\env\Lib\site-packages\faster_whisper\audio.py", line 80, in _ignore_invalid_frames
yield next(iterator)
^^^^^^^^^^^^^^
File "av\container\input.pyx", line 212, in decode
File "av\packet.pyx", line 87, in av.packet.Packet.decode
File "av\stream.pyx", line 168, in av.stream.Stream.decode
File "av\codec\context.pyx", line 513, in av.codec.context.CodecContext.decode
File "av\codec\context.pyx", line 416, in av.codec.context.CodecContext._send_packet_and_recv
File "av\error.pyx", line 336, in av.error.err_check
As per the title.
A very nice automation project. I hope a Docker installation method can be added so it can run on a server.
Traceback (most recent call last):
File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 119, in __getattr__
return self[key]
File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 90, in __getitem__
return get_session_state()[key]
File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\streamlit\runtime\state\safe_session_state.py", line 91, in __getitem__
return self._state[key]
File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\streamlit\runtime\state\session_state.py", line 400, in __getitem__
raise KeyError(_missing_key_error_message(key))
KeyError: 'st.session_state has no key "w_model_option". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "E:\software\Chenyme_AAVT_0.5.1\Chenyme_AAVT_0.5.1\pages\📽️视频(Video).py", line 59, in <module>
result = get_whisper_result(uploaded_file, output_file, device, st.session_state.w_model_option,
File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 121, in __getattr__
raise AttributeError(_missing_attr_error_message(key))
AttributeError: st.session_state has no attribute "w_model_option". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization