WhisperFusion

Seamless conversations with AI (with ultra-low latency)

Welcome to WhisperFusion. WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech by integrating Mistral, a Large Language Model (LLM), on top of the real-time speech-to-text pipeline. Both the LLM and Whisper are optimized to run efficiently as TensorRT engines, maximizing performance and real-time processing capabilities, while WhisperSpeech is optimized with torch.compile.

Features

  • Real-Time Speech-to-Text: Utilizes WhisperLive, built on OpenAI's Whisper, to convert spoken language into text in real time.

  • Large Language Model Integration: Adds Mistral, a Large Language Model, to enhance the understanding and context of the transcribed text.

  • TensorRT Optimization: Both the LLM and Whisper are optimized to run as TensorRT engines, ensuring high performance and low-latency processing.

  • torch.compile: WhisperSpeech uses torch.compile to speed up inference by JIT-compiling PyTorch code into optimized kernels.

Hardware Requirements

  • A GPU with at least 24 GB of VRAM
  • For optimal latency, the GPU should have FP16 (half-precision) TFLOPS comparable to the RTX 4090. Here are the hardware specifications for the RTX 4090.

The demo was run on a single RTX 4090 GPU. WhisperFusion uses the NVIDIA TensorRT-LLM library for CUDA-optimized versions of popular LLM models. TensorRT-LLM supports multiple GPUs, so it should be possible to run WhisperFusion on multiple GPUs for even better performance.

Getting Started

  • We provide a pre-built TensorRT-LLM docker container that has both Whisper and Phi converted to TensorRT engines, with the WhisperSpeech model pre-downloaded, so you can quickly start interacting with WhisperFusion.
 docker run --gpus all --shm-size 64G -p 6006:6006 -p 8888:8888 -it ghcr.io/collabora/whisperfusion:latest
  • Start the Web GUI
 cd examples/chatbot/html
 python -m http.server
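Once both steps are running, a quick stdlib-only smoke test can confirm that something is listening (assumption: ports 6006 and 8888 are taken from the docker run command above; this only checks TCP reachability, not the protocol):

```python
import socket

def port_open(port: int, host: str = "localhost") -> bool:
    """Return True if a TCP connection to host:port succeeds within 1 second."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Check the two ports exposed by the container.
for port in (6006, 8888):
    print(port, "listening" if port_open(port) else "not listening")
```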

Build Docker Image

  • We provide docker images for CUDA architectures 89 and 90. If you have a GPU with a different CUDA architecture, pass it to the build script. For example, to build for an RTX 3090 with CUDA architecture 86:
 bash build.sh 86-real

This builds the ghcr.io/collabora/whisperfusion:latest image for the RTX 3090.
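The architecture numbers correspond to NVIDIA compute capabilities. A small lookup sketch for common GPUs (assumption: the mapping comes from NVIDIA's published compute-capability tables, and the -real suffix follows the build.sh example above):

```python
# Common GPU -> CUDA architecture flags for build.sh (illustrative subset).
CUDA_ARCHS = {
    "RTX 3090": "86-real",  # Ampere, compute capability 8.6
    "RTX 4090": "89-real",  # Ada Lovelace, compute capability 8.9
    "H100": "90-real",      # Hopper, compute capability 9.0
}

def build_flag(gpu_name: str) -> str:
    """Look up the build.sh argument for a given GPU."""
    return CUDA_ARCHS[gpu_name]

print(build_flag("RTX 3090"))  # 86-real
```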

Contact Us

For questions or issues, please open an issue. Contact us at: [email protected], [email protected], [email protected]

Contributors

makaveli10, zoq, jpc, damianb-bitflipper
