Dataset Crafting, Efficient Fine-Tuning, and Multi-User Deployment Using Only Free Open-Source Tools
The Vodalus Expert LLM Forge includes several key components and functionalities:
- Data Generation: Uses local language models (LLMs) to generate synthetic data based on Wikipedia content. See `main.py` for implementation details.
- LLM Interaction: Manages interactions with LLMs through `llm_handler.py`, which configures the client and handles messaging with the LLM.
- RAG and Wikipedia Content Processing: Applies retrieval-augmented generation (RAG), processing and searching Wikipedia content to find relevant details to use as ground truth.
- Model Training and Fine-Tuning: Supports training and fine-tuning of MLX models with custom datasets, as detailed in the MLX_Fine-Tuning guide (a short sketch follows this overview).
- Quantizing Models: Provides guides on quantizing models to GGUF format for efficient local execution, as described in the Quantize_GGUF guide (see the second sketch below).
- Interactive Notebooks: Provides Jupyter notebooks for training and fine-tuning models, such as `mlx-fine-tuning.ipynb` and `convert_to_gguf.ipynb`.
- Comprehensive Documentation: Each component is accompanied by detailed guides and instructions to assist with setup, usage, and customization.

While I'm releasing this tool for free, I've also completed an extensive tutorial/course with lots of videos and instructions that guide you through each step of maximizing the potential of this stack. The course is available for purchase at Vodalus LLM Course and is designed to enhance your experience and results with the Vodalus Expert LLM Forge.
For more detailed information on each component, refer to the respective guides and source files included in the repository.
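As a taste of the fine-tuning component, here is a minimal sketch of loading and smoke-testing a fine-tuned MLX model with the `mlx-lm` package. The model id, adapter directory, and prompt are placeholders, and the `mlx-lm` API may differ slightly between versions; the MLX_Fine-Tuning guide and `mlx-fine-tuning.ipynb` cover the actual workflow.

```python
# Minimal smoke test for a fine-tuned MLX model. The model id, adapter
# directory, and prompt are placeholders; the mlx-lm API may differ slightly
# between versions, so treat this as a sketch rather than the guide's workflow.
from mlx_lm import load, generate

# Load a base model plus LoRA adapters produced during fine-tuning
# (omit adapter_path to test the base model on its own).
model, tokenizer = load(
    "mlx-community/Mistral-7B-Instruct-v0.2-4bit",  # example model id
    adapter_path="adapters",                        # example adapter directory
)

response = generate(
    model,
    tokenizer,
    prompt="Explain retrieval-augmented generation in one paragraph.",
    max_tokens=200,
)
print(response)
```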
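Similarly, once a model has been quantized to GGUF per the Quantize_GGUF guide, it can be executed locally. This sketch assumes `llama-cpp-python` as the runtime and a placeholder model path; the guide itself may use the llama.cpp CLI instead.

```python
# Run a quantized GGUF model locally via llama-cpp-python. The model path is
# a placeholder; the Quantize_GGUF guide may use the llama.cpp CLI instead.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,  # context window size
)

out = llm(
    "In one sentence, why does quantization reduce memory use?",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```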
Prerequisites:
- Ensure Python is installed on your system.
- Familiarity with basic command-line operations is helpful.
Setup:
- Clone the repository to your local machine.
- Navigate to the project directory in your command-line interface.
- Run the following commands to create the environment and install dependencies:

```bash
conda create -n vodalus -y
conda activate vodalus
pip install -r requirements.txt
```
Execute the main script to start data generation:

```bash
python main.py
```
Inside `main.py`:
- Imports and Setup: Imports libraries and modules and sets the provider for the LLM.
- Data Generation (`generate_data` function): Fetches Wikipedia content, constructs prompts, and generates data using the LLM.
- Execution (`main` function): Manages the data generation process using multiple workers for efficiency (see the sketch after this list).
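The real logic lives in `main.py`; as an illustration of the multi-worker pattern described above, here is a minimal sketch using a thread pool. `generate_entry` and the topics list are placeholders, not the project's actual API.

```python
# Illustration of the multi-worker pattern: generate_entry and the topics
# list are placeholders -- the real implementation lives in main.py.
import json
from concurrent.futures import ThreadPoolExecutor, as_completed

def generate_entry(topic: str) -> dict:
    # Placeholder: in the real script this fetches Wikipedia ground truth,
    # builds a prompt, and calls the local LLM.
    return {"topic": topic, "response": f"(generated text about {topic})"}

topics = ["Photosynthesis", "Fourier transform", "Stoicism"]

# Worker count is the kind of knob exposed in params.py.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(generate_entry, t) for t in topics]
    with open("dataset.jsonl", "w") as f:
        for fut in as_completed(futures):
            f.write(json.dumps(fut.result()) + "\n")
```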
Inside `llm_handler.py`:
- OpenAI Client Configuration: Sets up the client for interacting with the LLM.
- Message Handling Functions: Includes functions to send messages to the LLM and handle the responses (a sketch of the general pattern follows below).
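For orientation, here is a minimal sketch of the general pattern: an `openai` client pointed at a local OpenAI-compatible server. The base URL, API key, and model name are placeholders; check `llm_handler.py` for the project's actual configuration.

```python
# General pattern for talking to a local LLM through an OpenAI-compatible
# endpoint. Base URL, API key, and model name are placeholders -- see
# llm_handler.py for the project's actual configuration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # placeholder: your local server
    api_key="not-needed-for-local-use",
)

def send_message(system_msg: str, user_msg: str) -> str:
    """Send one system+user exchange and return the model's reply text."""
    resp = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content

print(send_message("You are a helpful assistant.", "Say hello."))
```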
For RAG and Wikipedia content processing:
- Model Loading: Loads the models needed for understanding and processing Wikipedia content.
- Search Function: Implements semantic search to find relevant Wikipedia articles based on a query (see the sketch below).
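As a conceptual sketch of what the search function does, the following uses `sentence-transformers` for semantic search over a toy corpus. The embedding model and corpus handling are assumptions; the repository's implementation may differ.

```python
# Conceptual sketch of semantic search over article text. The embedding model
# and the tiny in-memory corpus are assumptions, not the repository's choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a common small embedder

articles = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The Fourier transform decomposes a signal into constituent frequencies.",
]
corpus_emb = model.encode(articles, convert_to_tensor=True)

query_emb = model.encode("How do plants make food?", convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(articles[hits[0][0]["corpus_id"]])  # best-matching article
```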
- To change the topics, edit `topics.py`.
- To modify system messages, adjust `system_messages.py`.
- Adjust the number of workers and other parameters in `params.py` to optimize performance based on your system's capabilities (a hypothetical example follows below).
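As a purely hypothetical illustration of the kind of knobs such a file exposes (the actual variable names live in `params.py`):

```python
# Hypothetical settings -- check params.py for the actual variable names.
NUM_WORKERS = 4    # more workers speed up generation but use more RAM/VRAM
MAX_TOKENS = 512   # cap on tokens generated per dataset sample
TEMPERATURE = 0.7  # lower values give more deterministic outputs
```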
If this project aids your work, please consider supporting it through a donation on my Ko-fi page: www.ko-fi.com/severian42. Your support helps sustain my further LLM developments and experiments, always with a focus on using those efforts to give back to the LLM community.
Also, if you love this concept and approach but don't want to do it yourself, you can hire me and we'll work together to build your ideal Expert LLM! I also offer 1-on-1 sessions to help with your LLM needs.
Feel free to reach out! You can find the details on my Ko-Fi: www.ko-fi.com/severian42