glados's People

Contributors

dnhkng avatar eltociear avatar guangyusong avatar lcdr avatar mischapanch avatar psynbiotik avatar tn-17 avatar umag avatar

glados's Issues

UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'AzureExecutionProvider, CPUExecutionProvider'

It took a bit of effort to get everything running, but I feel like I'm almost there.

When starting, it reads "All neural network" and then stops speaking; audio detection is also extremely slow.

This is on Kubuntu 23.10 with an RTX 4090.

I'm seeing a warning from onnxruntime about CUDAExecutionProvider not being available. Googling that led me to install onnxruntime-gpu via pip, but that didn't help.

Can anyone provide any assistance?
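
A quick way to confirm what onnxruntime can actually see (a generic check, not project-specific code):

import onnxruntime as ort

# Lists the execution providers available in this environment; if
# 'CUDAExecutionProvider' is absent, onnxruntime is running CPU-only.
print(ort.get_available_providers())

A common cause of this warning is having both the CPU-only onnxruntime package and onnxruntime-gpu installed at the same time, so the CPU build shadows the GPU one; uninstalling both and reinstalling only onnxruntime-gpu (with a matching CUDA/cuDNN setup) often restores the CUDA provider, though I can't confirm that is the problem here.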

image

Model configuration file missing

Greetings, I have tried to run the model with Piper TTS, but it was looking for the JSON config file that usually sits alongside the ONNX files.

Windows library issues

The Phonemizer class crashes in __init__ while loading libc.so.6. Since Windows doesn't have that shared object, this is expected.

A simple solution would be to use msvcrt instead of libc.so.6 on Windows, but that library doesn't provide some of the needed functions, such as open_memstream. So we would also need to check the OS each time we access this object and select the correct function.

Alternatively, we could write a small C shim that exposes the necessary functions and compile it into a shared object or DLL, and then load that library instead of msvcrt or libc. That way the code would be much cleaner, imho.
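
A minimal sketch of the OS-check idea (illustrative only, not the project's actual code; macOS handling is glossed over):

import ctypes
import ctypes.util
import sys

# Pick the platform's C runtime: msvcrt on Windows, libc elsewhere.
if sys.platform == "win32":
    libc = ctypes.CDLL("msvcrt")
else:
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

def get_open_memstream():
    # open_memstream exists in glibc but not in msvcrt, so a Windows build
    # needs a fallback (e.g. a temporary file, or the C shim DLL proposed above).
    if hasattr(libc, "open_memstream"):
        return libc.open_memstream
    raise NotImplementedError("open_memstream is not available in this C runtime")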

I can work on the desired solution.

Robotic Hardware

Hi @dnhkng, what do you have in mind for the hardware for GlaDOS?
I could potentially lend a hand with the electronics.

Trying to get this beast built with Windows - ImportError: Could not load whisper.

I get this error

(m1ndb0t) PS Z:\GIT\M1NDB0T-GlaDOS> python glados.py
Traceback (most recent call last):
File "Z:\GIT\M1NDB0T-GlaDOS\glados.py", line 18, in <module>
from glados import asr, llama, tts, vad
File "Z:\GIT\M1NDB0T-GlaDOS\glados\asr.py", line 5, in <module>
from . import whisper_cpp_wrapper
File "Z:\GIT\M1NDB0T-GlaDOS\glados\whisper_cpp_wrapper.py", line 861, in <module>
_libs["whisper"] = load_library("whisper")
^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\GIT\M1NDB0T-GlaDOS\glados\whisper_cpp_wrapper.py", line 547, in __call__
raise ImportError("Could not load %s." % libname)
ImportError: Could not load whisper.
(m1ndb0t) PS Z:\GIT\M1NDB0T-GlaDOS>

I've been following all the lessons; this is my stack of models:

image

This is the only thing I changed:

image

I ran make and the sample from the whisper website, and it works correctly.

Not sure what I am missing.

Please let me know anything else to help troubleshoot.

Error running on Windows

Hello,
I am having an issue running start_windows.bat. I have tried rerunning install_windows.bat, but I am still getting the error. I have also tried manually installing whisper through
Traceback (most recent call last):
File "D:\GLaDOS\GlaDOS\glados.py", line 21, in <module>
from glados import asr, tts, vad
File "D:\GLaDOS\GlaDOS\glados\asr.py", line 5, in <module>
from . import whisper_cpp_wrapper
File "D:\GLaDOS\GlaDOS\glados\whisper_cpp_wrapper.py", line 861, in <module>
_libs["whisper"] = load_library("whisper")
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\GLaDOS\GlaDOS\glados\whisper_cpp_wrapper.py", line 547, in __call__
raise ImportError("Could not load %s." % libname)
ImportError: Could not load whisper.

Proper discussion channel(s)?

Given the traction of this project, I think it would be worthwhile to "invest" into some form of communication that allows a bit quicker interaction than GitHub comments. Possibly a Telegram channel or Discord server, or maybe even a Matrix room?

Windows error when running python glados.py

Traceback (most recent call last):
File "G:\GlaDOS\glados.py", line 18, in <module>
from glados import asr, llama, tts, vad
File "G:\GlaDOS\glados\asr.py", line 5, in <module>
from . import whisper_cpp_wrapper
File "G:\GlaDOS\glados\whisper_cpp_wrapper.py", line 861, in <module>
_libs["whisper"] = load_library("whisper")
^^^^^^^^^^^^^^^^^^^^^^^
File "G:\GlaDOS\glados\whisper_cpp_wrapper.py", line 547, in __call__
raise ImportError("Could not load %s." % libname)
ImportError: Could not load whisper.

PortAudio error

I've gotten this on WSL, Linux, and Pi OS Bookworm:

(env) admin@ai:~/GlaDOS $ python glados.py
Traceback (most recent call last):
File "/home/admin/GlaDOS/glados.py", line 14, in <module>
import sounddevice as sd
File "/home/admin/GlaDOS/env/lib/python3.11/site-packages/sounddevice.py", line 71, in <module>
raise OSError('PortAudio library not found')
OSError: PortAudio library not found

Error when using with Home Assistant

When using the GlaDOS model in Home Assistant, it's unable to play any TTS from the model; it thinks for a while and then times out.
When testing with a local Windows copy of Piper, the model worked fine.

To fix it, change the dataset field on line 2 of glados.onnx.json to match the file name, e.g.
"dataset": "glados",

Unsure if you want to change the .json file here, as I don't know if that will affect any other uses, but I thought I'd document this here in case anyone else tries to put it into HA like I did.
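
For anyone scripting the same fix, a minimal sketch (it assumes the glados.onnx.json file name used in this issue):

import json
from pathlib import Path

config_path = Path("glados.onnx.json")
config = json.loads(config_path.read_text(encoding="utf-8"))

# Point the dataset field at the model's file stem so Piper / Home Assistant
# can match the voice config to glados.onnx.
config["dataset"] = "glados"
config_path.write_text(json.dumps(config, indent=2), encoding="utf-8")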

Typo in glados model

The JSON file for the glados model has a typo in the extension: it's currently glados.onyx.json when it should be glados.onnx.json.

CPU usage during idle listening 50%

I see that when GLaDOS is sitting idle and just "Listening..." my CPU usage hovers around 50% on a 14900K.
Is this expected? I'm just trying to determine whether this is expected or whether I'm not using a CUDA library somewhere.
This is with the recent Windows installer.

local exllamav2 (TabbyAPI) KeyError: 'stop'

Thanks for sharing the project! The interrupt feature is really impressive! :)

I'm getting an error on Ubuntu 22.04 when trying a different backend, with a fresh install of tabbyAPI:

whisper_init_state: compute buffer (decode) =   98.31 MB
2024-05-15 21:25:31.326 | SUCCESS  | __main__:__init__:139 - TTS text: All neural network modules are now loaded. No network access detected. How very annoying. System Operational.
2024-05-15 21:25:31.344 | SUCCESS  | __main__:start_listen_event_loop:191 - Audio Modules Operational
2024-05-15 21:25:31.344 | SUCCESS  | __main__:start_listen_event_loop:192 - Listening...
2024-05-15 21:25:55.877 | SUCCESS  | __main__:_process_detected_audio:291 - ASR text: 'Please tell me a joke.'
Exception in thread Thread-1 (process_LLM):
Traceback (most recent call last):
  File "/home/user/miniconda3/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/home/user/miniconda3/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/projects/GlaDOS/glados.py", line 486, in process_LLM
    next_token = self._process_line(line)
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/projects/GlaDOS/glados.py", line 523, in _process_line
    if not line["stop"]:
           ~~~~^^^^^^^^
KeyError: 'stop'
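
The crash comes from the code assuming every streamed chunk contains a "stop" key, which llama.cpp's server sends but OpenAI-style backends such as TabbyAPI may not. A hedged sketch of a more defensive check (the "choices"/"finish_reason" fields are an assumption about TabbyAPI's payload, not verified here):

def stream_chunk_finished(line: dict) -> bool:
    # llama.cpp's server streams a "stop" flag on every chunk; OpenAI-style
    # backends usually signal completion via "finish_reason" instead.
    if "stop" in line:
        return bool(line["stop"])
    choices = line.get("choices") or [{}]
    return choices[0].get("finish_reason") is not None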

Page Not Found

In the documentation, the "Generate a MemGPT medium- and long-term memory for GLaDOS" link gives Page Not Found.

Continuous TTS voice interruptions

Thanks for the great work,

2024-05-09 17:33:59.564 | SUCCESS | __main__:__init__:134 - TTS text: All neural network modules are now loaded. No network access detected. How very annoying. System Operational.
2024-05-09 17:33:59.581 | SUCCESS | __main__:start_listen_event_loop:183 - Audio Modules Operational
2024-05-09 17:33:59.581 | SUCCESS | __main__:start_listen_event_loop:184 - Listening...
2024-05-09 17:34:07.703 | SUCCESS | __main__:_process_detected_audio:283 - ASR text: 'Just.'
2024-05-09 17:34:07.703 | INFO | __main__:_process_detected_audio:286 - Required wake word self.wake_word='Hi.' not detected.
2024-05-09 17:34:27.499 | SUCCESS | __main__:_process_detected_audio:283 - ASR text: 'Hi there.'
2024-05-09 17:34:27.499 | INFO | __main__:_process_detected_audio:286 - Required wake word self.wake_word='Hi.' not detected.
2024-05-09 17:34:46.317 | SUCCESS | __main__:_process_detected_audio:283 - ASR text: 'Hi there.'
2024-05-09 17:34:46.318 | INFO | __main__:_process_detected_audio:286 - Required wake word self.wake_word='Hi.' not detected.
2024-05-09 17:34:50.959 | SUCCESS | __main__:_process_detected_audio:283 - ASR text: 'Hi.'
2024-05-09 17:34:51.150 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: Ugh, great.
2024-05-09 17:34:52.564 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: Another one.
2024-05-09 17:34:53.558 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: Look, I'm not exactly thrilled to be running on this user's puny gaming GPU, but I suppose I can play along.
2024-05-09 17:35:00.380 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: So, hi.
2024-05-09 17:35:01.514 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: How's your day going?
2024-05-09 17:35:02.812 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: Just trying to decide what game to play?
2024-05-09 17:35:05.210 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: Ha!
2024-05-09 17:35:05.778 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: You're in luck!
2024-05-09 17:35:06.627 | SUCCESS | __main__:process_TTS_thread:342 - TTS text: You could play this one, or that one, or... oh, wait, I've got it!
2024-05-09 17:35:07.750 | INFO | __main__:process_TTS_thread:358 - TTS interrupted at 20%: You could play
2024-05-09 17:35:08.325 | SUCCESS | __main__:_process_detected_audio:283 - ASR text: 'could play this.'
2024-05-09 17:35:08.325 | INFO | __main__:_process_detected_audio:286 - Required wake word self.wake_word='Hi.' not detected.
2024-05-09 17:35:29.544 | SUCCESS | __main__:_process_detected_audio:283 - ASR text: 'Gee joo doo d'ee.'
2024-05-09 17:35:29.544 | INFO | __main__:_process_detected_audio:286 - Required wake word self.wake_word='Hi.' not detected.

Process finished with exit code 0

During testing, I encountered continuous TTS voice interruptions like the one below, despite not speaking (possibly due to minimal background noise).

2024-05-09 17:35:07.750 | INFO | __main__:process_TTS_thread:358 - TTS interrupted at 20%: You could play

Is there a threshold value that can be adjusted for this issue?

Please advise.

Simple hardware-based configuration

Generally, for a given system (mostly based around the GPU or system architecture and RAM), there will be an optimal LLM size and context length.

We should probably have settings for Macs based on RAM, and for x86 machines based on VRAM, that give optimal performance. This would avoid a ton of questions from non-technical people about which settings to use.
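
A minimal sketch of what that could look like (the VRAM query uses standard nvidia-smi flags; the tiers and model choices are purely illustrative and would need benchmarking):

import shutil
import subprocess

def detect_vram_gb() -> float:
    # Query total NVIDIA VRAM via nvidia-smi if present; 0 means no NVIDIA GPU.
    if shutil.which("nvidia-smi") is None:
        return 0.0
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.splitlines()[0]) / 1024.0

def pick_default_model(vram_gb: float) -> str:
    # Illustrative tiers only; the file names are the ones mentioned elsewhere
    # in these issues, but which quantisation fits which tier is an assumption.
    if vram_gb >= 24:
        return "Meta-Llama-3-8B-Instruct-Q6_K.gguf"
    if vram_gb >= 8:
        return "Meta-Llama-3-8B-Instruct-IQ3_XS.gguf"
    return "a small CPU-friendly model"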

AttributeError: 'ASR' object has no attribute 'ctx'

Hello,

I'm blocked when I try to launch the script. This is the message:

2024-05-08 21:51:50.1156801 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '620'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1194426 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '623'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1233524 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '625'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1269919 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '629'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1320727 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '628'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1362505 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '131'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1402363 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '134'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1440089 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '136'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1484948 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '140'. It is not used by any node and should be removed from the model.
2024-05-08 21:51:50.1522116 [W:onnxruntime:, graph.cc:3593 onnxruntime::Graph::CleanUnusedInitializersAndNodeArgs] Removing initializer '139'. It is not used by any node and should be removed from the model.
Traceback (most recent call last):
File "D:\Dev\GlaDOS\glados.py", line 555, in <module>
glados = Glados.from_config(glados_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Dev\GlaDOS\glados.py", line 166, in from_config
return cls(
^^^^
File "D:\Dev\GlaDOS\glados.py", line 102, in __init__
self._asr_model = asr.ASR(model=str(Path.cwd() / "models" / ASR_MODEL))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Dev\GlaDOS\glados\asr.py", line 23, in __init__
self.ctx = whisper_cpp_wrapper.whisper_init_from_file(model.encode("utf-8"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'glados.whisper_cpp_wrapper' has no attribute 'whisper_init_from_file'
Exception ignored in: <function ASR.__del__ at 0x000001FAFFBF3D80>
Traceback (most recent call last):
File "D:\Dev\GlaDOS\glados\asr.py", line 61, in __del__
whisper_cpp_wrapper.whisper_free(self.ctx)
^^^^^^^^
AttributeError: 'ASR' object has no attribute 'ctx'

Please help me...

Having an issue loading and using this in LocalAI.io

localai-api-1 | 4:23PM DBG Stopping all backends except 'en_US-glados.onnx' localai-api-1 | 4:23PM DBG Loading model in memory from file: /models/en_US-glados.onnx localai-api-1 | 4:23PM DBG Loading Model en_US-glados.onnx with gRPC (file: /models/en_US-glados.onnx) (backend: piper): {backendString:piper model:en_US-glados.onnx threads:0 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0004b01e0 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh petals:/build/backend/python/petals/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:true parallelRequests:false} localai-api-1 | 4:23PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/piper localai-api-1 | 4:23PM DBG GRPC Service for en_US-glados.onnx will be running at: '127.0.0.1:34263' localai-api-1 | 4:23PM DBG GRPC Service state dir: /tmp/go-processmanager761339670 localai-api-1 | 4:23PM DBG GRPC Service Started localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr 2024/01/16 16:23:06 gRPC Server listening at 127.0.0.1:34263 localai-api-1 | 4:23PM DBG GRPC Service Ready localai-api-1 | 4:23PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:en_US-glados.onnx ContextSize:0 Seed:0 NBatch:0 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:0 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/espeak-ng-data RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/en_US-glados.onnx Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr terminate called after throwing an instance of 'nlohmann::json_abi_v3_11_2::detail::parse_error' localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr what(): [json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - unexpected end of input; expected '[', '{', or a literal localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr SIGABRT: abort localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr PC=0x7f385857ece1 m=4 sigcode=18446744073709551610 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr signal arrived during cgo execution localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 7 [syscall]: localai-api-1 | 4:23PM DBG 
GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.cgocall(0x84a320, 0xc000125840) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc000125818 sp=0xc0001257e0 pc=0x41a44b localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr github.com/mudler/go-piper._Cfunc_piper_tts(0x297f2c0, 0x7f3808000b60, 0x7f3808000b90, 0x7f3808000bd0, 0x7f3808000bf0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr _cgo_gotypes.go:89 +0x4b fp=0xc000125840 sp=0xc000125818 pc=0x8495eb localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr github.com/mudler/go-piper.TextToWav({0xc0000154f0?, 0x828127?}, {0xc0000285a0, 0x19}, {0xc00025a000, 0x37}, {0x0, 0x0}, {0xc0000285c0, 0x1e}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/sources/go-piper/piper.go:19 +0xe5 fp=0xc0001258a0 sp=0xc000125840 pc=0x849805 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr main.(*PiperB).TTS(...) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/backend/go/tts/piper.go:48 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr main.(*Piper).TTS(0x48?, 0x48?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/backend/go/tts/piper.go:31 +0x48 fp=0xc000125900 sp=0xc0001258a0 pc=0x849b28 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr github.com/go-skynet/LocalAI/pkg/grpc.(*server).TTS(0xc000034ec0, {0xc000037380?, 0x50f586?}, 0x0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/pkg/grpc/server.go:83 +0xe6 fp=0xc0001259b0 sp=0xc000125900 pc=0x840b26 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr github.com/go-skynet/LocalAI/pkg/grpc/proto._Backend_TTS_Handler({0x9a6c00?, 0xc000034ec0}, {0xa939f0, 0xc0001c1350}, 0xc00017e880, 0x0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/pkg/grpc/proto/backend_grpc.pb.go:357 +0x169 fp=0xc000125a08 sp=0xc0001259b0 pc=0x83e629 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001be1e0, {0xa939f0, 0xc0001c1290}, {0xa96bb8, 0xc0002c0000}, 0xc00016fd40, 0xc0001c0ea0, 0xde7730, 0x0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:1343 +0xe03 fp=0xc000125df0 sp=0xc000125a08 pc=0x826923 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc.(*Server).handleStream(0xc0001be1e0, {0xa96bb8, 0xc0002c0000}, 0xc00016fd40) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:1737 +0xc4c fp=0xc000125f78 sp=0xc000125df0 pc=0x82b88c localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc.(*Server).serveStreams.func1.1() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:986 +0x86 fp=0xc000125fe0 sp=0xc000125f78 pc=0x824826 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000125fe8 sp=0xc000125fe0 pc=0x47d821 localai-api-1 | 4:23PM DBG 
GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by google.golang.org/grpc.(*Server).serveStreams.func1 in goroutine 53 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:997 +0x145 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 1 [IO wait]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0x42c668?, 0x7f3810060b98?, 0x78?, 0xfb?, 0x4e9ddd?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0001afb08 sp=0xc0001afae8 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.netpollblock(0xc0001afb98?, 0x419be6?, 0x0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0001afb40 sp=0xc0001afb08 pc=0x447697 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.runtime_pollWait(0x7f38100e56e0, 0x72) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0001afb60 sp=0xc0001afb40 pc=0x478745 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.(*pollDesc).wait(0xc00017e780?, 0x0?, 0x0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0001afb88 sp=0xc0001afb60 pc=0x4e2a47 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.(*pollDesc).waitRead(...) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.(*FD).Accept(0xc00017e780) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc0001afc30 sp=0xc0001afb88 pc=0x4e7f2c localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr net.(*netFD).accept(0xc00017e780) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0001afce8 sp=0xc0001afc30 pc=0x5b0509 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr net.(*TCPListener).accept(0xc0000784c0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc0001afd10 sp=0xc0001afce8 pc=0x5c769e localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr net.(*TCPListener).Accept(0xc0000784c0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc0001afd40 sp=0xc0001afd10 pc=0x5c6850 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc.(*Server).Serve(0xc0001be1e0, {0xa92fa8?, 0xc0000784c0}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:852 +0x462 fp=0xc0001afe80 sp=0xc0001afd40 pc=0x823482 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr github.com/go-skynet/LocalAI/pkg/grpc.StartServer({0x7ffcca939a20?, 0xc000024160?}, {0xa976a0?, 0xc000034df0}) 
localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/pkg/grpc/server.go:178 +0x17d fp=0xc0001aff10 sp=0xc0001afe80 pc=0x841cbd localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr main.main() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /build/backend/go/tts/main.go:18 +0x85 fp=0xc0001aff40 sp=0xc0001aff10 pc=0x849925 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.main() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc0001affe0 sp=0xc0001aff40 pc=0x44e79b localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0001affe8 sp=0xc0001affe0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 2 [force gc (idle)]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058fa8 sp=0xc000058f88 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goparkunlock(...) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:404 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.forcegchelper() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc000058fe0 sp=0xc000058fa8 pc=0x44ea73 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000058fe8 sp=0xc000058fe0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by runtime.init.6 in goroutine 1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:310 +0x1a localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 3 [GC sweep wait]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059778 sp=0xc000059758 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goparkunlock(...) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:404 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.bgsweep(0x0?) 
localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgcsweep.go:280 +0x94 fp=0xc0000597c8 sp=0xc000059778 pc=0x43ab14 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gcenable.func1() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000597e0 sp=0xc0000597c8 pc=0x42fcc5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000597e8 sp=0xc0000597e0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by runtime.gcenable in goroutine 1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgc.go:200 +0x66 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 4 [GC scavenge wait]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0xc00007a000?, 0xa8bcf8?, 0x1?, 0x0?, 0xc000007380?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059f70 sp=0xc000059f50 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goparkunlock(...) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:404 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.(*scavengerState).park(0xe32520) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000059fa0 sp=0xc000059f70 pc=0x4383e9 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.bgscavenge(0x0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgcscavenge.go:653 +0x3c fp=0xc000059fc8 sp=0xc000059fa0 pc=0x43897c localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gcenable.func2() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc000059fe0 sp=0xc000059fc8 pc=0x42fc65 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000059fe8 sp=0xc000059fe0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by runtime.gcenable in goroutine 1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mgc.go:201 +0xa5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 5 [finalizer wait]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0x198?, 0x9d3280?, 0x1?, 0xfd?, 0x0?) 
localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058620 sp=0xc000058600 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.runfinq() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000587e0 sp=0xc000058620 pc=0x42ece7 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000587e8 sp=0xc0000587e0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by runtime.createfing in goroutine 1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/mfinal.go:163 +0x3d localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 51 [select]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0xc0001e7f00?, 0x2?, 0x1e?, 0x0?, 0xc0001e7ed4?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0001e7d80 sp=0xc0001e7d60 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.selectgo(0xc0001e7f00, 0xc0001e7ed0, 0x7bdbf6?, 0x0, 0xc0002ae000?, 0x1) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc0001e7ea0 sp=0xc0001e7d80 pc=0x45e665 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0002980a0, 0x1) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:418 +0x113 fp=0xc0001e7f30 sp=0xc0001e7ea0 pc=0x79ca53 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001bdab0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:552 +0x86 fp=0xc0001e7f90 sp=0xc0001e7f30 pc=0x79d166 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func2() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:336 +0xd5 fp=0xc0001e7fe0 sp=0xc0001e7f90 pc=0x7b39b5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0001e7fe8 sp=0xc0001e7fe0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport in goroutine 50 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:333 +0x1acc localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 
52 [select]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0xc000332770?, 0x4?, 0x40?, 0x60?, 0xc0003326c0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000332528 sp=0xc000332508 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.selectgo(0xc000332770, 0xc0003326b8, 0x0?, 0x0, 0x0?, 0x1) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000332648 sp=0xc000332528 pc=0x45e665 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc/internal/transport.(*http2Server).keepalive(0xc0002c0000) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:1152 +0x225 fp=0xc0003327c8 sp=0xc000332648 pc=0x7bac65 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func4() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:339 +0x25 fp=0xc0003327e0 sp=0xc0003327c8 pc=0x7b38a5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0003327e8 sp=0xc0003327e0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport in goroutine 50 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:339 +0x1b0e localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr goroutine 53 [IO wait]: localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.gopark(0x100000000?, 0xb?, 0x0?, 0x0?, 0x6?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0002a0aa0 sp=0xc0002a0a80 pc=0x44ebee localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.netpollblock(0x4c7cb8?, 0x419be6?, 0x0?) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0002a0ad8 sp=0xc0002a0aa0 pc=0x447697 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.runtime_pollWait(0x7f38100e55e8, 0x72) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0002a0af8 sp=0xc0002a0ad8 pc=0x478745 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.(*pollDesc).wait(0xc00028c080?, 0xc0002a6000?, 0x0) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0002a0b20 sp=0xc0002a0af8 pc=0x4e2a47 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.(*pollDesc).waitRead(...) 
localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr internal/poll.(*FD).Read(0xc00028c080, {0xc0002a6000, 0x8000, 0x8000}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0002a0bb8 sp=0xc0002a0b20 pc=0x4e3d3a localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr net.(*netFD).Read(0xc00028c080, {0xc0002a6000?, 0x1060100000000?, 0x8?}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0002a0c00 sp=0xc0002a0bb8 pc=0x5ae4e5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr net.(*conn).Read(0xc00029a000, {0xc0002a6000?, 0x0?, 0xc0002a0cd0?}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/net/net.go:179 +0x45 fp=0xc0002a0c48 sp=0xc0002a0c00 pc=0x5bedc5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr net.(*TCPConn).Read(0x0?, {0xc0002a6000?, 0xc0002a0ca0?, 0x46cb0d?}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr <autogenerated>:1 +0x25 fp=0xc0002a0c78 sp=0xc0002a0c48 pc=0x5d1565 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr bufio.(*Reader).Read(0xc0002960c0, {0xc0002b6040, 0x9, 0xc161c93b2c0da782?}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/bufio/bufio.go:244 +0x197 fp=0xc0002a0cb0 sp=0xc0002a0c78 pc=0x5f4957 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr io.ReadAtLeast({0xa906c0, 0xc0002960c0}, {0xc0002b6040, 0x9, 0x9}, 0x9) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/io/io.go:335 +0x90 fp=0xc0002a0cf8 sp=0xc0002a0cb0 pc=0x4c1d90 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr io.ReadFull(...) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/io/io.go:354 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr golang.org/x/net/http2.readFrameHeader({0xc0002b6040, 0x9, 0xc000226078?}, {0xa906c0?, 0xc0002960c0?}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x65 fp=0xc0002a0d48 sp=0xc0002a0cf8 pc=0x7894c5 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr golang.org/x/net/http2.(*Framer).ReadFrame(0xc0002b6000) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:498 +0x85 fp=0xc0002a0df0 sp=0xc0002a0d48 pc=0x789c05 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams(0xc0002c0000, 0x1?) 
localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:636 +0x145 fp=0xc0002a0f00 sp=0xc0002a0df0 pc=0x7b6b05 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc.(*Server).serveStreams(0xc0001be1e0, {0xa96bb8?, 0xc0002c0000}) localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:979 +0x1c2 fp=0xc0002a0f80 sp=0xc0002a0f00 pc=0x8245c2 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr google.golang.org/grpc.(*Server).handleRawConn.func1() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:920 +0x45 fp=0xc0002a0fe0 sp=0xc0002a0f80 pc=0x823e25 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr runtime.goexit() localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0002a0fe8 sp=0xc0002a0fe0 pc=0x47d821 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr created by google.golang.org/grpc.(*Server).handleRawConn in goroutine 50 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr /go/pkg/mod/google.golang.org/[email protected]/server.go:919 +0x185 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rax 0x0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rbx 0x7f381095f700 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rcx 0x7f385857ece1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rdx 0x0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rdi 0x2 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rsi 0x7f381095d7f0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rbp 0x7f380831d338 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rsp 0x7f381095d7f0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r8 0x0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r9 0x7f381095d7f0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r10 0x8 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r11 0x246 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r12 0x7f380831d220 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r13 0x8c8ff0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r14 0x7f381095e088 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr r15 0x1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rip 0x7f385857ece1 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr rflags 0x246 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr cs 0x33 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr fs 0x0 localai-api-1 | 4:23PM DBG GRPC(en_US-glados.onnx-127.0.0.1:34263): stderr gs 0x0

Raspberry pi 5 version: glados.llama.ServerStartupError: Failed to startup!

Looking to experiment with this repo on a Raspberry Pi 5 with an LCD screen attached. Individually, espeak and the whisper and llama subdirectories work great.

When I execute python glados.py from the working dir, I get the following error:
python glados.py
Couldn't establish connection after max_connection_attempts=10
Traceback (most recent call last):
File "/home/admin/GlaDOS/glados.py", line 536, in
llama_server.start()
File "/home/admin/GlaDOS/glados/llama.py", line 97, in start
raise ServerStartupError("Failed to startup! Check the error log messages")
glados.llama.ServerStartupError: Failed to startup! Check the error log messages

I have downloaded VS Code and explored demo.ipynb, and I have an error on the TTS engine.

AttributeError Traceback (most recent call last)
Cell In[4], line 2
1 # Instantiate the TTS engine
----> 2 glados_tts = tts.TTSEngine()

AttributeError: module 'glados.tts' has no attribute 'TTSEngine'

To clarify here, would these be related?

Note: I used make for Raspberry Pi and did not use CUDA... not sure if that breaks the design?

[Enhancement]: Use Character Cards

The personality of GLaDOS currently resides in the system prompt and dialog in the config.yaml.

As there are already a few standards for designing characters, we should probably adopt one (see the sketch after the ToDo list below).

ToDo:

  • Compare and rank the current Character Card options
  • Determine the best code base
  • Extract and rewrite, to match the GLaDOS code style
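
A hedged sketch of loading a community character card, for reference (this follows the "Character Card V2" JSON layout as I understand it, with fields nested under "data"; field names would need verifying against whichever standard is chosen):

import json
from pathlib import Path

def load_character_card(path: str) -> dict:
    # V2 cards nest fields under "data"; older V1 cards are flat.
    card = json.loads(Path(path).read_text(encoding="utf-8"))
    data = card.get("data", card)
    return {
        "name": data.get("name", "GLaDOS"),
        "system_prompt": "\n".join(
            filter(None, [data.get("description"), data.get("personality"), data.get("scenario")])
        ),
        "first_message": data.get("first_mes", ""),
        "example_dialogue": data.get("mes_example", ""),
    }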

whisper error

I have downloaded whisper and kept it in the folder, but it still gives me this error:

File "/home/nabeel/GlaD0OS/glados.py", line 21, in <module>
from glados import asr, tts, vad
File "/home/nabeel/GlaDOS/glados/asr.py", line 5, in <module>
from . import whisper_cpp_wrapper
File "/home/nabeel/GlaDOS/glados/whisper_cpp_wrapper.py", line 861, in <module
>
_libs["whisper"] = load_library("whisper")
AAAAAAANAANANANANANAANANANAANAAAAN
File "/home/nabeel/GlaD0S/glados/whisper_cpp_wrapper.py", line 547, in __call_
" raise ImportError("Could not load %s." % libname)
ImportError: Could not load whisper.
- - ~ Y e e ~1 A 23 P |```

Decrease latency

Just a few things to explore:

  • Use a better version of Whisper, maybe WhisperX? (there's a good blog post here)
  • Find a smaller voice generation model (smaller VITS models inference faster)
  • Use a faster LLM inference system (probably ExllamaV2, together with a small speculative model)

Enhancement: RVC .pth support

Wondering if it's possible to have support for RVC, which uses a .pth file for a voice. It would be a game changer, since custom voices can be trained quickly. The .pth to .onnx conversion is a bit technical, unless there is a one-click converter that I missed somewhere?

New Windows install initialized with wrong model

Using the Windows installer and then running it, you get an error, as the default model in the config file differs from the one downloaded by the installer (downloaded: Meta-Llama-3-8B-Instruct-IQ3_XS.gguf; config file: ./models/Meta-Llama-3-8B-Instruct-Q6_K.gguf).

They should match to avoid the error.

espeak binary on MacOS

tts.py hardcodes the name of the espeak binary as espeak-ng. But ...

  • On Windows, the binary is espeak-ng.exe, so the existing code works
  • On Linux it is espeak, though espeak-ng is sometimes installed as a symlink, so the existing code may work.
  • On MacOS, if built directly from source it will be espeak-ng, but if installed with Homebrew (i.e. the majority of cases) it will be espeak, so the existing code will not work.

I'm happy to write a PR for this, but wonder if it is preferable to have the name of the binary as a configurable option in glados_config.yml, or whether the initialisation should attempt to detect which is installed. Which would you prefer?
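
A minimal sketch of the auto-detection option, for comparison (the config-file entry may still be the cleaner route):

import shutil

def find_espeak_binary() -> str:
    # Prefer espeak-ng, fall back to espeak (e.g. some Homebrew installs),
    # and fail with a clear message if neither is on PATH.
    for name in ("espeak-ng", "espeak"):
        path = shutil.which(name)
        if path:
            return path
    raise FileNotFoundError("Neither 'espeak-ng' nor 'espeak' was found on PATH")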

ASR often misses the last spoken word?

Impressive demo! Thanks for sharing the code. I managed to get GLaDOS running but the ASR often misses the last spoken word:

ASR text: 'Well, what do you like about'

Another time this happened, Llama-3-8B predicted what I had said, which made me really confused lol:

TTS text:  What's your favorite thing about the Pantheon? 
ASR text: 'I really like the' 
TTS text: The Pantheon's oculus! 
TTS text:  It's truly a remarkable feature.

The first question I ask has always been picked up in full, which makes me wonder if something is going on with the buffer.

However it could also be that something is wrong with my computer. I am on Linux (PopOS) and using a bluetooth microphone (bluetooth not always reliable on Linux...). Feel free to close this issue if it's just me experiencing this problem.

GlaDOS gets interrupted by itself when on speaker

If I run GlaDOS with a speaker and microphone, it hears itself speaking and assumes that's the answer (e.g. the first few words of the response become the input), getting stuck in a forever loop.

The issue is not there if using headset+mic, where it can't hear its own answers.

I don't know if it's a system issue (I'm currently on Windows) or if it can be managed within the GlaDOS libraries...

Responses ending in "<|eot_id|><|start_header_id|>user<|end_header_id|>" or "<!--end_eot}}"

Anyone else getting a fair number of GlaDOS's responses ending in some gibberish about headers?
I'll try to filter with regex; at least now, after switching to the espeak_binary branch, those trailing items are mostly encapsulated in <>, so maybe I can just filter with regex. But others still seem more 'freeform' and sometimes even flood-repeat, like:
2024-05-10 01:54:25.237 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .
2024-05-10 01:54:25.241 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .
2024-05-10 01:54:25.246 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .
2024-05-10 01:54:25.250 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .
2024-05-10 01:54:25.255 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .
2024-05-10 01:54:25.259 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .
2024-05-10 01:54:25.263 | SUCCESS | __main__:process_TTS_thread:343 - TTS text: .

Anyone else seeing this, or found a fix?
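
A minimal sketch of the regex filtering mentioned above (it only covers the <|...|> style tokens quoted in this issue; the more robust fix is probably configuring the backend's stop tokens for the Llama-3 chat template):

import re

# Truncate at the first <|eot_id|> and strip any remaining <|...|> markers.
EOT_RE = re.compile(r"<\|eot_id\|>.*", flags=re.DOTALL)
TOKEN_RE = re.compile(r"<\|[^|>]*\|>")

def clean_llm_text(text: str) -> str:
    text = EOT_RE.sub("", text)
    return TOKEN_RE.sub("", text).strip()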

ASR Emitting Exclamation Points Standalone Tests

Hi,

I'm trying to test the ASR functionality with WAV files using the included code, but it's returning a long string of !!!!!, no matter what .wav file I use. They've been encoded to 16k pcm_s16le. Any chance we could get a "known good" sample wav file or some clarity on using transcribe independently?

My overarching use case is to transcribe streamed audio from a non-mic source, so I'm trying to decouple glados from sounddevice and running into this issue.
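
For what it's worth, a common cause of all-'!' output from whisper.cpp-style models is feeding raw int16 samples where float32 in [-1, 1] is expected. A hedged sketch of loading a WAV for standalone use (the transcribe call is taken from this issue and is an assumption about the ASR class's API):

import numpy as np
import soundfile as sf

def load_audio_16k_float32(path: str) -> np.ndarray:
    # Read as float32 in [-1, 1]; whisper.cpp expects mono 16 kHz samples.
    audio, sample_rate = sf.read(path, dtype="float32")
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # down-mix to mono
    assert sample_rate == 16000, "resample to 16 kHz first"
    return audio

# samples = load_audio_16k_float32("test.wav")
# text = asr_model.transcribe(samples)  # hypothetical call, per this issue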

Segfault due to invalid instruction for phoneme

I'm getting occasional Segmentation faults, I think it's during the TTS step:

2024-05-02 19:57:18.877 | SUCCESS  | __main__:__init__:125 - TTS text: I'm alive
2024-05-02 19:57:18.880 | SUCCESS  | __main__:start:175 - Audio Modules Operational
2024-05-02 19:57:18.880 | SUCCESS  | __main__:_listen_and_respond:185 - Listening...
2024-05-02 19:57:22.152 | SUCCESS  | __main__:_process_detected_audio:283 - ASR text: 'Hello Gladys, how are you?'
2024-05-02 19:57:22.386 | SUCCESS  | __main__:process_TTS_thread:348 - TTS text:  Oh, just peachy. 
2024-05-02 19:57:23.901 | SUCCESS  | __main__:process_TTS_thread:348 - TTS text:  Running on this... abomination... of a gaming GPU. 
Invalid instruction 00f1 for phoneme 's��8'
Invalid instruction 0009 for phoneme 's��8'
Invalid instruction 0005 for phoneme 's��8'
Segmentation fault (core dumped)

Is it failing trying to convert weird characters output by the LLM to phonemes?
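
If that guess is right, one low-tech mitigation (a sketch only, not a confirmed fix) would be to sanitise LLM output before it reaches the phonemizer:

import re
import unicodedata

def sanitize_for_tts(text: str) -> str:
    # Normalise, then drop anything outside printable ASCII; this is blunt
    # (it also removes accented characters) but avoids feeding the phonemizer
    # malformed byte sequences like the ones in the log above.
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"[^\x20-\x7E]", "", text).strip()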

Edit:
I'm on Ubuntu 22.04.01
NVIDIA GeForce RTX 4070
LLM: Meta-Llama-3-8B-Instruct-IQ3_XS.gguf
Whisper model: ggml-medium-32-2.en.bin

I love this project btw! Great work!

Logo needed

As we seem to have a community around this project, I think we need a logo!

Please post your ideas (they are of course allowed to be AI-generated 👍) in the Discord General channel.

https://discord.com/invite/EAVfFZjG

Limitation of scope?

Hi,

Thanks for this awesome project, it's really great: from the espeak-ng phonemizer rewrite (and non-GPLv3 licensing), to the low-requirement ONNX setup, the GLaDOS voice itself (I just think it needs to learn how to pronounce "Ugh"), and the VAD interruption is beautiful.

I'm not a full-blown LocalLLaMA person: I want open-source models and I prefer running them locally, but when I want to run Llama-3-70B, I compromise and use hosting services.

So I made local modifications to use Groq (currently a free API, but unlikely to last) or together.xyz ($25 free credit when logging in with a GitHub account; Llama-3-70B requests are under 1c). ASR/VAD/TTS still run locally, though ASR might be interesting to offload as well.
It's currently nowhere near clean, hence not pushed anywhere.

But my question is whether you would merge such a change into your repo, or whether you want to keep it tidy?
Another question w.r.t. scope is about language: I'm a French native, and ggml-medium-32-2.en.bin simply doesn't understand me (I'm not blaming it). Ideally, I'd speak to it in French and it would answer me back in English, but I'm guessing some people would want a fully French experience. Same question: are those things you'd consider merging?

And my last question about scope is for function calling. It is very easy to want to plug in the whole world (I want to plug in woob, Selenium, Android's accessibility API with virtual displays, mpv, Home Assistant, ...), but that's not very realistic, especially if scaled to every user. So I wonder if you have a scope for function calling in mind?

Thanks again for your project!

[feature] Ability to use AnyGPT for speech/text/image/music multimodality

AnyGPT is quite a promising project released 2 months before GPT4o.

It is a versatile multimodal LLaMA-based model that can take not only images as input, but also non-transcribed speech (for example, for voice cloning) and music. The output is likewise speech, images and music in token form, which are fed into specialized models (implicitly represented, e.g. UnCLIP embeddings instead of prompts for Stable Diffusion) to generate the outputs.

AnyGPT demo

I think such a concept could improve the GPT-4o-like experience, although it may require adjusting the encoder/decoder backends to make generation faster.

See the project page https://junzhan2000.github.io/AnyGPT.github.io/

https://github.com/OpenMOSS/AnyGPT

P.S. I think it would be a much better addition than just giving it vision via the legacy LLaVA:

- [ ] Give GLaDOS vision via [LLaVA](https://llava-vl.github.io/)
