QA-Pilot can chat with a GitHub repository or a compressed file (e.g. xz, zip), using an online or a local LLM.
- Chat with a public GitHub repository (cloned via `git clone`)
- Chat with an uploaded compressed file (directories, e.g. xz, zip)
- Store the chat history
- Select from different LLM providers
- ollama
- openai
- This is a test project to validate the feasibility of a fully local solution for question answering using LLMs and vector embeddings. It is not production-ready and is not meant to be used in production.
Do not use the models to analyze your critical or production data!
Do not use the models to analyze customer data, to ensure data privacy and security!
To deploy QA-Pilot, follow the steps below:
- Clone the QA-Pilot repository:
git clone https://github.com/reid41/QA-Pilot.git
- Install conda for virtual environment management, then create and activate a new virtual environment:
conda create -n QA-Pilot python=3.10.14
conda activate QA-Pilot
- Install the required dependencies (NOTE: you might need to reinstall PyTorch with CUDA support):
pip install -r requirements.txt
- Set up Ollama (see the Ollama website and GitHub) to manage the local LLM models, e.g.:
ollama pull <model_name>
ollama list
- Set up OpenAI by adding the API key to `.env`
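A minimal `.env` for the OpenAI step might look like the following; the exact variable name QA-Pilot reads is an assumption here (`OPENAI_API_KEY` is the conventional name), so check the project's code or sample config:

```
# Hypothetical key name -- verify against QA-Pilot's own .env handling
OPENAI_API_KEY=your-openai-api-key
```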
- Set the related parameters in `config/config.ini`, e.g. model provider, model, variables
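As a rough illustration, a `config/config.ini` for this kind of setup could contain entries like these; the section and key names below are illustrative assumptions, not the project's actual schema, so consult the shipped `config/config.ini` for the real names:

```ini
[model]
; Illustrative keys only -- check the repository's config/config.ini for the real schema
model_provider = ollama   ; or: openai
model = llama2            ; a model already pulled via `ollama pull`
```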
- Run QA-Pilot:
streamlit run qa_pilot.py
- Do not use the URL and upload options at the same time.
- The remove button does not actually delete the local Chroma DB; after stopping the app, remove it manually.
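As a workaround, the on-disk vector store can be deleted by hand after stopping the app. A minimal sketch, assuming the Chroma data is persisted in a `chroma_db/` directory under the project root (the actual path in QA-Pilot may differ):

```shell
# Hypothetical Chroma DB location -- verify the real path before deleting anything.
CHROMA_DIR="./chroma_db"

# Stop the Streamlit app first, then remove the persisted vector store.
rm -rf "$CHROMA_DIR"
```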
- Switch to the New Source Button to add a new project