
localaivoicechat's People

Contributors

f1am3d, koljab

localaivoicechat's Issues

ERROR: AttributeError: 'ForkAwareLocal' object has no attribute 'connection'

log:

Traceback (most recent call last):
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\managers.py", line 802, in _callmethod
    conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\site-packages\RealtimeSTT\audio_recorder.py", line 443, in _audio_data_worker
    audio_queue.put(data)
  File "<string>", line 2, in put
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\managers.py", line 806, in _callmethod
    self._connect()
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\managers.py", line 793, in _connect
    conn = self._Client(self._token.address, authkey=self._authkey)
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\connection.py", line 500, in Client
    c = PipeClient(address)
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\connection.py", line 702, in PipeClient
    _winapi.WaitNamedPipe(address, 1000)
FileNotFoundError: [WinError 2] The system cannot find the file specified
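On Windows, this pair of errors usually means a child process tried to reconnect to a multiprocessing manager whose named pipe no longer exists, typically because the process that owned the shared queue already exited, or because the script lacks an `if __name__ == "__main__":` guard (Windows uses spawn, so children re-import the main module). A minimal sketch of the safe pattern; the names here are illustrative stand-ins, not RealtimeSTT's actual internals:

```python
import multiprocessing as mp

def audio_worker(q):
    # stand-in for RealtimeSTT's _audio_data_worker: pushes captured audio
    q.put(b"audio chunk")

if __name__ == "__main__":  # required on Windows: spawned children re-import this module
    with mp.Manager() as manager:  # keep the manager alive while workers still use its queue
        q = manager.Queue()
        p = mp.Process(target=audio_worker, args=(q,))
        p.start()
        p.join()
        print(q.get())
```

The key points are the `__main__` guard and keeping the `Manager` context open until every worker that holds a proxy to its queue has finished.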

Error when running python ai_voicetalk_local.py

python ai_voicetalk_local.py

ValueError: Model path does not exist: D:\Projekte\LLaMa\text-gen\text-generation-webui\models\zephyr-7b-beta.Q5_K_M.gguf
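This means the hard-coded `model_path` in `ai_voicetalk_local.py` still points at the author's local GGUF file, which isn't on your disk. A small sketch for failing early with a clearer message (`resolve_model_path` is a hypothetical helper, not part of the project):

```python
from pathlib import Path

def resolve_model_path(path_str: str) -> Path:
    """Verify the GGUF file exists before handing it to llama.cpp."""
    p = Path(path_str).expanduser()
    if not p.is_file():
        raise FileNotFoundError(
            f"Model not found: {p}. "
            "Download the .gguf file and update model_path in the script."
        )
    return p
```

Downloading the referenced zephyr GGUF (or any other GGUF model) and updating `model_path` to its actual location resolves the error.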

Error with 'cloning_reference_wav'

All libraries seem to have been installed properly. However, I get this error when trying to run:

C:\Users\USER\code\LocalAIVoiceChat-main>start.bat
cuda not available
llama_cpp_lib: return llama_cpp
Initializing LLM llama.cpp model ...
llama.cpp model initialized
Initializing TTS CoquiEngine ...
Traceback (most recent call last):
  File "C:\Users\USER\code\LocalAIVoiceChat-main\ai_voicetalk_local.py", line 111, in <module>
    coqui_engine = CoquiEngine(cloning_reference_wav="female.wav", language="en", speed=1.0)
  File "C:\Users\USER\AppData\Roaming\Python\Python311\site-packages\RealtimeTTS\engines\base_engine.py", line 11, in __call__
    instance = super().__call__(*args, **kwargs)
TypeError: CoquiEngine.__init__() got an unexpected keyword argument 'cloning_reference_wav'

Please advise
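A `TypeError` for an unexpected keyword argument here suggests the installed RealtimeTTS version no longer accepts `cloning_reference_wav` (the constructor changed between releases). One quick way to check which keywords a constructor actually accepts is `inspect.signature`; the stub class below stands in for the real `CoquiEngine`, and its parameter names are illustrative only:

```python
import inspect

class CoquiEngineStub:
    """Stand-in for RealtimeTTS.engines.CoquiEngine (illustrative signature only)."""
    def __init__(self, voice="", language="en", speed=1.0):
        self.voice = voice

# Which keyword arguments does __init__ really take?
accepted = set(inspect.signature(CoquiEngineStub.__init__).parameters) - {"self"}
print(sorted(accepted))  # ['language', 'speed', 'voice']
```

Running the same inspection against the installed `CoquiEngine` shows which parameter replaced `cloning_reference_wav`; alternatively, pin the RealtimeTTS version the project's requirements specify.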

Voice stuttering, MacBook Pro M1 16GB, what to change?

Love this project! Was playing around with it.
The voice works fine but stutters: it starts correctly ("This is how ..."), then pauses, says "voice x", pauses again, then "sounds like".
What would you recommend to change?

Thanks for your input!

Coqui engine takes breaks mid-sentence to load

The Coqui engine pauses mid-sentence to load, sometimes between words or even in the middle of a word. I tried adjusting the settings, but nothing works. I'm using an i7 10th-gen CPU with an RTX 3060.
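Both this report and the stuttering report above usually come down to the real-time factor: if synthesizing a chunk of audio takes longer than playing it, playback starves between chunks. A hedged sketch for measuring it; `synthesize` is a placeholder for whatever produces one audio chunk and returns its duration in seconds:

```python
import time

def real_time_factor(synthesize, text: str) -> float:
    """RTF = wall-clock synthesis time / audio duration. RTF > 1 means playback will starve."""
    start = time.perf_counter()
    audio_seconds = synthesize(text)  # placeholder: returns duration of generated audio
    return (time.perf_counter() - start) / audio_seconds

# fake engine for illustration: 'synthesis' takes ~0.05 s and yields 1.0 s of audio
rtf = real_time_factor(lambda text: (time.sleep(0.05), 1.0)[1], "hello")
print(f"RTF ~= {rtf:.2f}")  # well below 1.0, so playback keeps up
```

If the measured RTF on your hardware is above 1, the usual levers are a faster/smaller TTS model, GPU synthesis instead of CPU, or larger sentence chunks so pauses fall at sentence boundaries.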

License

If I don't use Coqui, is commercial use allowed? Thanks.

Error: CUDA with multiprocessing

Thanks for this good work.

/home/mypc/miniconda3/envs/VoiceAgent/bin/python /home/mypc/Downloads/LocalAIVoiceChat-main/ai_voicetalk_local.py 
try to import llama_cpp_cuda
llama_cpp_cuda import failed
llama_cpp_lib: return llama_cpp
Initializing LLM llama.cpp model ...
llama.cpp model initialized
Initializing TTS CoquiEngine ...
Downloading config.json to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/config.json...
Downloading model.pth to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/model.pth...
100%|██████████| 4.36k/4.36k [00:00<00:00, 21.9MiB/s]
100%|██████████| 1.86G/1.86G [03:03<00:00, 10.2MiB/s]
Downloading vocab.json to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/vocab.json...
100%|██████████| 335k/335k [00:00<00:00, 579kiB/s]
Downloading speakers_xtts.pth to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/speakers_xtts.pth...
100%|██████████| 7.75M/7.75M [00:00<00:00, 9.87MiB/s]
 > Using model: xtts
Error loading model for checkpoint /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Process Process-1:
Traceback (most recent call last):
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/RealtimeTTS/engines/coqui_engine.py", line 501, in _synthesize_worker
    tts = load_model(checkpoint, tts)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/RealtimeTTS/engines/coqui_engine.py", line 485, in load_model
    tts.to(device)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1152, in to
    return self._apply(convert)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/cuda/__init__.py", line 288, in _lazy_init
    raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/RealtimeTTS/engines/coqui_engine.py", line 506, in _synthesize_worker
    logging.exception(f"Error initializing main coqui engine model: {e}")
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/logging/__init__.py", line 2113, in exception
    error(msg, *args, exc_info=exc_info, **kwargs)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/logging/__init__.py", line 2105, in error
    root.error(msg, *args, **kwargs)
  File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/logging/__init__.py", line 1506, in error
    self._log(ERROR, msg, args, **kwargs)
TypeError: Log._log() got an unexpected keyword argument 'exc_info'

I get the above error while running the test script.
Environment: Ubuntu, Python 3.10, with the latest RealtimeSTT and RealtimeTTS code.
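The traceback names the fix directly: the worker process was forked, and CUDA cannot be re-initialized in a forked child. Using the `spawn` start method (or a spawn context) before any CUDA work avoids it; a minimal sketch, with `worker` standing in for the model-loading subprocess:

```python
import multiprocessing as mp

def worker(q):
    # a spawned child starts a fresh interpreter, so it is safe
    # to initialize CUDA / load the TTS model here
    q.put("initialized")

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # no inherited CUDA state from the parent
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()
```

Equivalently, `mp.set_start_method("spawn")` once at program start works, but a context avoids clashing with libraries that set the global method themselves. (The secondary `TypeError` in the logging call is just RealtimeTTS's error handler tripping over the original exception.)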

use existing llama.cpp install

I've been using llama.cpp for quite a while (M1 Mac). Is there a way I can get ai_voicetalk_local.py to point to that installation instead of reinstalling it here? Sorry, newbie question...
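Note that the script uses the llama-cpp-python binding rather than the standalone llama.cpp CLI, so the binding itself still needs to be installed; what you can reuse is the GGUF model files from your existing checkout by pointing `model_path` at them. A small illustrative helper (hypothetical, not part of the project) for locating reusable models:

```python
from pathlib import Path

def find_ggufs(models_dir: str) -> list[Path]:
    """List GGUF model files reusable from an existing llama.cpp checkout."""
    return sorted(Path(models_dir).expanduser().glob("*.gguf"))
```

Passing one of the returned paths as `model_path` in `ai_voicetalk_local.py` avoids downloading the model a second time.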
