# RAG
Combines search and Large Language Models (LLMs) to generate insights from your data using Retrieval Augmented Generation (RAG).
## About
RAG is a Streamlit application that applies Retrieval Augmented Generation (RAG) with txtai, grounding LLM output in relevant context so that generated content stays factually tied to your data. It supports two retrieval strategies: Vector RAG, which builds context with vector search, and Graph RAG, which builds context with graph path traversal. Users can upload and index data, configure a range of parameters, and query the system to generate answers from the retrieved context, making it possible to put local data to work with LLMs.
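The Vector RAG flow above (retrieve relevant context, then ground the LLM prompt in it) can be sketched as follows. This is a toy illustration, not the application's code: a naive keyword-overlap scorer stands in for txtai's vector search, and the final LLM call is replaced by printing the assembled prompt.

```python
import re

def tokenize(text):
    """Lowercase and split text into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, limit=2):
    """Rank documents by keyword overlap with the query.

    Placeholder for vector search, which would rank by embedding
    similarity instead of shared tokens.
    """
    terms = tokenize(query)
    scored = sorted(
        documents,
        key=lambda doc: len(terms & tokenize(doc)),
        reverse=True,
    )
    return scored[:limit]

def build_prompt(query, context):
    """Assemble a grounding prompt to pass to an LLM."""
    passages = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{passages}\n"
        f"Question: {query}"
    )

documents = [
    "txtai is an all-in-one embeddings database",
    "Streamlit builds interactive data apps in Python",
    "Graph RAG traverses graph paths to collect context",
]

query = "What is txtai?"
context = retrieve(query, documents)
prompt = build_prompt(query, context)

# In the real application, this prompt would be sent to the configured LLM.
print(prompt)
```

The key design point is that the model only sees retrieved passages, which is what keeps answers anchored to the indexed data rather than the model's parametric memory.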
## Key Features
- 371 GitHub stars
- Provides graph visualization to understand graph RAG queries
- Configurable with environment variables to control application behavior (LLM, Embeddings, Context Size, etc.)
- Allows data ingestion from files, URLs, and direct text input
- Supports Vector RAG and Graph RAG methodologies
- Utilizes txtai for embeddings and LLM interactions
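The environment-variable configuration mentioned above might look like the following. This is a hypothetical fragment: the variable names `LLM` and `EMBEDDINGS` are inferred from the feature list, the model and index identifiers are illustrative examples, and the entry point is assumed to be `rag.py` — verify all of these against the project's documentation before use.

```shell
# Hypothetical settings -- check names and supported values in the project docs
export LLM="TheBloke/Mistral-7B-OpenOrca-AWQ"   # LLM used for generation (example)
export EMBEDDINGS="neuml/txtai-wikipedia"       # embeddings index to load (example)

# Launch the Streamlit app (assumes rag.py is the entry point)
streamlit run rag.py
```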
## Use Cases
- Generating answers to questions based on a local knowledge base
- Exploring relationships between concepts using graph-based queries
- Building a custom knowledge assistant using local data and LLMs