Comments (3)
I recently played with llama.cpp:
$ ./examples/alpaca.sh
> User prompt: Turn on the light; Available actions: TurnLightOn, TurnLightOff
TurnLightOn
> Available actions: TurnLightOn, TurnLightOff; User prompt: Toggle the light
TurnLightOn
TurnLightOff
> Available actions: TurnLightOn(name), TurnLightOff(name); User prompt: Toggle the bathroom light
TurnLightOn("Bathroom")
I can see how this could work :)
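A handler consuming replies like these would still need to validate them, since the model can emit several lines (as in the "Toggle" example above) or name an action that doesn't exist. Here is a minimal, hypothetical sketch in Python: `build_prompt` and `parse_action` are made-up names, the prompt format just mirrors the session above, and only the first line of the reply is trusted.

```python
import re

def build_prompt(user_prompt: str, actions: list[str]) -> str:
    """Build an instruction in the same shape as the session above (assumed format)."""
    return f"Available actions: {', '.join(actions)}; User prompt: {user_prompt}"

def parse_action(reply: str, action_names: list[str]):
    """Extract 'ActionName' or 'ActionName("arg")' from the model's first line.

    Returns (name, args) or None if the reply names no known action.
    Only the first line is used, so multi-line replies like
    'TurnLightOn\nTurnLightOff' degrade to a single action."""
    line = reply.strip().splitlines()[0] if reply.strip() else ""
    m = re.match(r'(\w+)(?:\((.*)\))?\s*$', line)
    if not m or m.group(1) not in action_names:
        return None
    args = [a.strip().strip('"\'') for a in m.group(2).split(",")] if m.group(2) else []
    return m.group(1), args

print(parse_action('TurnLightOn("Bathroom")', ["TurnLightOn", "TurnLightOff"]))
# -> ('TurnLightOn', ['Bathroom'])
```

Rejecting unknown action names here is what keeps a hallucinated reply from ever reaching the smart-home backend.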
from rhasspy3.
Vicuna seems to perform better:
Below is an instruction that describes a user request. Respond with one of the following categories:
- FindMyPhone
- TurnLightOn
- TurnLightOff
- UnknownAction
User prompt: Make me a sandwich
UnknownAction
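The category-list prompt still depends on the model answering with exactly one of the listed labels. A small guard that collapses anything else to UnknownAction keeps the downstream handler robust; this is a minimal sketch, with `classify` as a hypothetical name.

```python
def classify(reply: str, categories: list[str], fallback: str = "UnknownAction") -> str:
    """Map a free-form model reply onto one of the allowed categories.

    Anything that isn't an exact category name collapses to the fallback,
    so downstream code only ever sees a known intent."""
    answer = reply.strip().splitlines()[0].strip("- ").strip() if reply.strip() else ""
    return answer if answer in categories else fallback

cats = ["FindMyPhone", "TurnLightOn", "TurnLightOff", "UnknownAction"]
print(classify("UnknownAction", cats))            # -> UnknownAction
print(classify("Sorry, I can't do that.", cats))  # -> UnknownAction
```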
from rhasspy3.
This is expected, since Kaldi matches to a particular expected set of sentences and Whisper transcription does not. However, a few things I've noticed:
- Be careful with wording. "Turn on my office light" does not work, "Turn on the office light" does. It might be possible to make the intent templates in Home Assistant more general to deal with variants like this.
- Watch out for homophones and near homophones. Whisper consistently misunderstands "hall light" as "whole light" with my accent, for some reason (weirdly, only when I try to turn it off, turning it on works fine...). If I try to very carefully pronounce it I may get "haul light" which also fails... In the long run, adding alias names for entities in Home Assistant (for example) may work around this.
- Transcription may produce spelling or punctuation variants that may not match your entity names. Ones I have run into are "WeatherFlow" turning into "weather flow" and "multisensor" being output by Whisper as "multi-sensor", both leading to a failure. Generally, naming entities around corporate names and other non-standard vocabulary will probably cause trouble. Again, setting up appropriate entity aliases may help here.
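Until aliases are configured, a cheap normalization layer can absorb the hyphen/spacing/case variants above before matching entity names. A hypothetical sketch — `resolve_entity` and the alias table are illustrative, not a Home Assistant API:

```python
import re

def normalize(name: str) -> str:
    """Collapse case, hyphens, underscores, and whitespace so that
    'multi-sensor', 'Multi Sensor', and 'multisensor' all compare equal."""
    return re.sub(r"[\s\-_]+", "", name).lower()

def resolve_entity(transcribed: str, entities: dict[str, list[str]]):
    """entities maps canonical entity names to alias lists (the kind of
    aliases one might configure by hand). Returns the canonical name or None."""
    wanted = normalize(transcribed)
    for canonical, aliases in entities.items():
        if wanted == normalize(canonical) or any(wanted == normalize(a) for a in aliases):
            return canonical
    return None

entities = {"WeatherFlow": ["weather flow"], "hall light": ["whole light", "haul light"]}
print(resolve_entity("multi-sensor", {"multisensor": []}))  # -> multisensor
print(resolve_entity("whole light", entities))              # -> hall light
```

Normalization handles the spelling/punctuation variants automatically; the homophone cases ("whole light", "haul light") still need explicit aliases, as suggested above.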
In the long run we just need a better system for converting transcriptions to intents. The transformer architecture behind LLMs was, after all, originally designed for translating between languages. So what might work here is an LLM specifically trained to convert "general" transcriptions into the much smaller set of intents understood by a system. Note that small LLMs can run on lower-end hardware, even on CPU alone (see gpt4all for a very cool demo). Of course this would add latency. But what's ALSO interesting is that Whisper includes a language model itself, and it might be possible to fine-tune it (e.g. with a LoRA adapter, which can be done relatively cheaply) to directly target Home Assistant intents (and maybe a set of non-standard-English corporate device names), which would avoid the latency issue.
Something simpler might also work, i.e. an intent recognition system that can find "near matches".
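As a rough illustration of "near matches", Python's stdlib difflib can already rank sentence templates by similarity to a transcript. A real system would want something smarter than character-level similarity, but the shape would be similar; `nearest_intent`, the template table, and the 0.75 cutoff are all made up for this sketch.

```python
from difflib import get_close_matches

def nearest_intent(transcript: str, templates: dict[str, str], cutoff: float = 0.75):
    """Pick the intent whose template sentence is closest to the transcript.

    difflib's similarity ratio is a crude stand-in for a real model, but it
    already absorbs small wording variations like 'my' vs 'the'."""
    best = get_close_matches(transcript.lower(), templates, n=1, cutoff=cutoff)
    return templates[best[0]] if best else None

templates = {
    "turn on the office light": "TurnLightOn",
    "turn off the office light": "TurnLightOff",
}
print(nearest_intent("turn on my office light", templates))  # -> TurnLightOn
```

With this kind of matching, the "Turn on my office light" failure mentioned above would resolve to the nearest template instead of being rejected outright.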
BTW1 it would be really nice if the medium model for Whisper were available with the download script. From what I've read it's a big step up from the small model in terms of transcription accuracy. Unfortunately, I'm too lazy to generate it myself :)
BTW2 this is a very cool project, thank you for working on it! I do hope integration with Home Assistant voice assistants improves; in particular, I run Rhasspy (and other expensive AI things, like Frigate) on a different system (a larger Intel machine...) than where I run Home Assistant (a Raspberry Pi-based HA Yellow), and I want to keep that distributed architecture.
from rhasspy3.