win4r / graphrag4openwebui

GraphRAG4OpenWebUI integrates Microsoft's GraphRAG technology into Open WebUI, providing a versatile information retrieval API. It combines local, global, and web searches for advanced Q&A systems and search engines. This tool simplifies graph-based retrieval integration in open web environments.

Home Page: https://www.youtube.com/@AIsuperdomain

License: Apache License 2.0

Python 100.00%
aiagents graphrag llms ollama openai openwebui rag

graphrag4openwebui's Introduction

🔥🔥🔥 If you have any questions, please contact me on WeChat: stoeng

🔥🔥🔥 For a video demo of this project, see https://youtu.be/z4Si6O5NQ4c

GraphRAG4OpenWebUI

Integrate Microsoft's GraphRAG Technology into Open WebUI for Advanced Information Retrieval

English | 简体中文

GraphRAG4OpenWebUI is an API interface specifically designed for Open WebUI, aiming to integrate Microsoft Research's GraphRAG (Graph-based Retrieval-Augmented Generation) technology. This project provides a powerful information retrieval system that supports multiple search models, particularly suitable for use in open web user interfaces.

Project Overview

The main goal of this project is to provide a convenient interface for Open WebUI to leverage the powerful features of GraphRAG. It integrates three main retrieval methods and offers a comprehensive search option, allowing users to obtain thorough and precise search results.

Key Retrieval Features

  1. Local Search

    • Utilizes GraphRAG technology for efficient retrieval in local knowledge bases
    • Suitable for quick access to pre-defined structured information
    • Leverages graph structures to improve retrieval accuracy and relevance
  2. Global Search

    • Searches for information in a broader scope, beyond local knowledge bases
    • Suitable for queries requiring more comprehensive information
    • Utilizes GraphRAG's global context understanding capabilities to provide richer search results
  3. Tavily Search

    • Integrates external Tavily search API
    • Provides additional internet search capabilities, expanding information sources
    • Suitable for queries requiring the latest or extensive web information
  4. Full Model Search

    • Combines all three search methods above
    • Provides the most comprehensive search results, meeting complex information needs
    • Automatically integrates and ranks information from different sources
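The four retrieval modes are selected by the model name sent in the request. As a rough sketch (this dispatch logic is illustrative, not the project's actual implementation), a client-side mapping from the model names listed under "Available Models" to a retrieval mode might look like this:

```python
# Hypothetical sketch: routing a request to one of the four retrieval modes
# by OpenAI-style model name. The model identifiers come from the project's
# "Available Models" list; the dispatch helper itself is illustrative.

RETRIEVAL_MODES = {
    "graphrag-local-search:latest": "local",
    "graphrag-global-search:latest": "global",
    "tavily-search:latest": "tavily",
    "full-model:latest": "full",
}

def resolve_mode(model_name: str) -> str:
    """Map an OpenAI-style model name to a retrieval mode."""
    try:
        return RETRIEVAL_MODES[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name}")

print(resolve_mode("full-model:latest"))  # full
```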

Local LLM and Embedding Model Support

GraphRAG4OpenWebUI now supports the use of local language models (LLMs) and embedding models, increasing the project's flexibility and privacy. Specifically, we support the following local models:

  1. Ollama

    • Supports various open-source LLMs run through Ollama, such as Llama 2 and Mistral
    • Can be configured by setting the API_BASE environment variable to point to Ollama's API endpoint
  2. LM Studio

    • Compatible with models run by LM Studio
    • Connect to LM Studio's service by configuring the API_BASE environment variable
  3. Local Embedding Models

    • Supports the use of locally run embedding models, such as SentenceTransformers
    • Specify the embedding model to use by setting the GRAPHRAG_EMBEDDING_MODEL environment variable

This support for local models allows GraphRAG4OpenWebUI to run without relying on external APIs, enhancing data privacy and reducing usage costs.

Installation

Ensure that you have Python 3.8 or higher installed on your system. Then, follow these steps to install:

  1. Clone the repository:

    git clone https://github.com/win4r/GraphRAG4OpenWebUI.git
    cd GraphRAG4OpenWebUI
  2. Create and activate a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows use venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt

    Note: The graphrag package might need to be installed from a specific source. If the above command fails to install graphrag, please refer to Microsoft Research's specific instructions or contact the maintainer for the correct installation method.

Configuration

Before running the API, you need to set the following environment variables. You can do this by creating a .env file or exporting them directly in your terminal:

# Set the TAVILY API key 
export TAVILY_API_KEY="your_tavily_api_key"

export INPUT_DIR="/path/to/your/input/directory"

# Set the API key for LLM
export GRAPHRAG_API_KEY="your_actual_api_key_here"

# Set the API key for embedding (if different from GRAPHRAG_API_KEY)
export GRAPHRAG_API_KEY_EMBEDDING="your_embedding_api_key_here"

# Set the LLM model 
export GRAPHRAG_LLM_MODEL="gemma2"

# Set the API base URL 
export API_BASE="http://localhost:11434/v1"

# Set the embedding API base URL (default is OpenAI's API)
export API_BASE_EMBEDDING="https://api.openai.com/v1"

# Set the embedding model (default is "text-embedding-3-small")
export GRAPHRAG_EMBEDDING_MODEL="text-embedding-3-small"

Make sure to replace the placeholders in the above commands with your actual API keys and paths.
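At startup, the server can pick these variables up from the environment. A minimal sketch of how that might look in Python (the variable names match the exports above; the fallback defaults are illustrative, not necessarily the project's):

```python
# Sketch: reading the configuration above from the environment with fallbacks.
# Defaults shown here are illustrative assumptions.
import os

TAVILY_API_KEY = os.getenv("TAVILY_API_KEY", "")
INPUT_DIR = os.getenv("INPUT_DIR", "./ragtest")
GRAPHRAG_API_KEY = os.getenv("GRAPHRAG_API_KEY", "")
GRAPHRAG_LLM_MODEL = os.getenv("GRAPHRAG_LLM_MODEL", "gemma2")
API_BASE = os.getenv("API_BASE", "http://localhost:11434/v1")
API_BASE_EMBEDDING = os.getenv("API_BASE_EMBEDDING", "https://api.openai.com/v1")
GRAPHRAG_EMBEDDING_MODEL = os.getenv("GRAPHRAG_EMBEDDING_MODEL", "text-embedding-3-small")
# The embedding key falls back to the LLM key when not set separately
GRAPHRAG_API_KEY_EMBEDDING = os.getenv("GRAPHRAG_API_KEY_EMBEDDING", GRAPHRAG_API_KEY)
```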

Usage

  1. Start the server:

    python main-en.py
    

    The server will run on http://localhost:8012.

  2. API Endpoints:

    • /v1/chat/completions: POST request for performing searches
    • /v1/models: GET request to retrieve the list of available models
  3. Integration with Open WebUI: In the Open WebUI configuration, set the API endpoint to http://localhost:8012/v1/chat/completions. This will allow Open WebUI to use the search functionality of GraphRAG4OpenWebUI.

  4. Example search request:

    import requests
    import json
    
    url = "http://localhost:8012/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    data = {
        "model": "full-model:latest",
        "messages": [{"role": "user", "content": "Your search query"}],
        "temperature": 0.7
    }
    
    response = requests.post(url, headers=headers, data=json.dumps(data))
    print(response.json())
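Assuming the server returns a standard OpenAI-style chat-completions payload, the answer text can be pulled out of the JSON like this (the sample response below is fabricated for illustration):

```python
# Sketch: extracting the assistant's answer from an OpenAI-style
# chat-completions response. The sample payload is fabricated.

def extract_answer(response_json: dict) -> str:
    """Return the assistant message content from a chat-completions response."""
    return response_json["choices"][0]["message"]["content"]

sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "GraphRAG combines ..."}}
    ]
}
print(extract_answer(sample))  # GraphRAG combines ...
```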

Available Models

  • graphrag-local-search:latest: Local search
  • graphrag-global-search:latest: Global search
  • tavily-search:latest: Tavily search
  • full-model:latest: Comprehensive search (includes all search methods above)

Notes

  • Ensure that you have the correct input files (such as Parquet files) in the INPUT_DIR directory.
  • The API uses asynchronous programming; make sure your environment supports async operations.
  • For large-scale deployment, consider using a production-grade ASGI server.
  • This project is specifically designed for Open WebUI and can be easily integrated into various web-based applications.
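For the production-grade ASGI deployment mentioned above, one common option (a sketch, not the project's documented setup) is Gunicorn with Uvicorn workers. Because main-en.py contains a hyphen and is therefore not importable as a Python module, the sketch assumes you first copy it to main_en.py and that the FastAPI instance inside is named app:

```shell
# Sketch: production-grade ASGI serving with Gunicorn + Uvicorn workers.
# Assumes the app object in main-en.py is named `app`; the file is copied
# because Python module names cannot contain hyphens.
pip install gunicorn "uvicorn[standard]"
cp main-en.py main_en.py
gunicorn main_en:app -k uvicorn.workers.UvicornWorker -w 4 -b 0.0.0.0:8012
```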

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

Apache-2.0 License

graphrag4openwebui's People

Contributors

win4r

graphrag4openwebui's Issues

Retrying request to /embeddings

I didn't configure TAVILY_API_KEY

export TAVILY_API_KEY=""

export INPUT_DIR="xxx/input/artifacts"

export GRAPHRAG_API_KEY=""

export GRAPHRAG_API_KEY_EMBEDDING=""

export GRAPHRAG_LLM_MODEL="Qwen1.5-14B-Chat-GPTQ-Int4"

export API_BASE="http://172.17.22.174:9997/v1"

export API_BASE_EMBEDDING="http://172.17.22.174:9997/v1"

export GRAPHRAG_EMBEDDING_MODEL="m3e-base"

The LLM and embedding model are served by Xinference, but I always get logs like this:

2024-08-02 15:45:51,474 - openai._base_client - INFO - Retrying request to /embeddings in 0.971267 seconds

404 Client Error: Not Found for url: http://xxx:8012/api/version

INFO: 172.17.0.1:41748 - "POST /ollama/urls/update HTTP/1.1" 200 OK
2024-07-23 17:33:22 ERROR [apps.ollama.main] 404 Client Error: Not Found for url: http://host.docker.internal:8012/api/version
2024-07-23 17:33:22 Traceback (most recent call last):
2024-07-23 17:33:22 File "/app/backend/apps/ollama/main.py", line 315, in get_ollama_versions
2024-07-23 17:33:22 r.raise_for_status()
2024-07-23 17:33:22 File "/usr/local/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
2024-07-23 17:33:22 raise HTTPError(http_error_msg, response=self)
2024-07-23 17:33:22 requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://host.docker.internal:8012/api/version
2024-07-23 17:33:22 INFO: 172.17.0.1:41748 - "GET /ollama/api/version/1 HTTP/1.1" 500 Internal Server Error

When running, the interface retries indefinitely and no result is ever output

INFO: Waiting for application startup.
2024-08-12 11:22:55,049 - main - INFO - Initializing search engines and question generator...
2024-08-12 11:22:55,049 - main - INFO - Setting up LLM and embedder
2024-08-12 11:22:55,235 - main - INFO - LLM and embedder setup complete
2024-08-12 11:22:55,236 - main - INFO - Loading context data
2024-08-12 11:22:55,291 - main - INFO - Number of claim records: 44
2024-08-12 11:22:55,291 - main - INFO - Context data loading complete
2024-08-12 11:22:55,291 - main - INFO - Setting up search engines
2024-08-12 11:22:55,291 - main - INFO - Search engines setup complete
2024-08-12 11:22:55,291 - main - INFO - Initialization complete.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8012 (Press CTRL+C to quit)
2024-08-12 11:23:03,935 - main - INFO - Received chat completion request: model='graphrag-local-search:latest' messages=[Message(role='user', content='who is spike')] temperature=0.7 top_p=1.0 n=1 stream=False stop=None max_tokens=None presence_penalty=0 frequency_penalty=0 logit_bias=None user=None
2024-08-12 11:23:03,936 - main - INFO - Processing prompt: who is spike
local search prompt: who is spike
2024-08-12 11:23:04,554 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-08-12 11:23:04,669 - graphrag.query.structured_search.local_search.search - INFO - GENERATE ANSWER: 1723432983.9361222. QUERY: who is spike
2024-08-12 11:23:04,690 - openai._base_client - INFO - Retrying request to /chat/completions in 0.892320 seconds

local search error

Calling the local model returns:
{
"detail": "object of type 'int' has no len()"
}

Where to enter the endpoint?

From README.md:

Integration with Open WebUI: In the Open WebUI configuration, set the API endpoint to http://localhost:8012/v1/chat/completions. This will allow Open WebUI to use the search functionality of GraphRAG4OpenWebUI.

Where exactly do I have to enter the endpoint? In Connections --> OpenAI API? There you also have to enter an API key, and although GraphRAG4OpenWebUI is up and running, it results in connection failures.

What am I missing here?

Problem with the INPUT_DIR path

There is a problem with the INPUT_DIR path.
Running python main-cn.py produces the following errors:

2024-07-16 18:30:28,682 - main - ERROR - Error loading context data: [Errno 2] No such file or directory: './ragtest/create_final_nodes.parquet'
2024-07-16 18:30:28,682 - main - ERROR - Error during initialization: [Errno 2] No such file or directory: './ragtest/create_final_nodes.parquet'

Is the path where the parquet files are looked up wrong? The latest ragtest directory structure looks like this:
(screenshot of the ragtest directory structure)

Error loading context data: [Errno 2] No such file or directory: '..artifacts/create_final_covariates.parquet'

Sometimes create_final_covariates.parquet is not generated, and the code should handle that case. In main-cn.py, change

covariate_df = pd.read_parquet(f"{INPUT_DIR}/{COVARIATE_TABLE}.parquet")
claims = read_indexer_covariates(covariate_df)

to:

# requires `import os` at the top of main-cn.py
final_covariates_path = f"{INPUT_DIR}/{COVARIATE_TABLE}.parquet"
final_covariates = (
    pd.read_parquet(final_covariates_path)
    if os.path.exists(final_covariates_path)
    else None
)
claims = (
    read_indexer_covariates(final_covariates)
    if final_covariates is not None
    else []
)
