
Rag

Enables Retrieval-Augmented Generation (RAG) for Large Language Models (LLMs) by indexing documents and retrieving relevant information from them.

About

Rag lets Large Language Models (LLMs) answer questions grounded in your own documents by indexing them and retrieving the most relevant passages at query time. It splits documents into chunks, generates vector embeddings through providers such as Ollama or OpenAI, and stores them in a local vector store. MCP clients can then query the stored embeddings and pass the retrieved chunks to a downstream LLM, which generates contextually relevant responses.
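
As a concrete illustration of that pipeline, here is a minimal sketch of the chunk-embed-store-query loop. Everything in it is hypothetical: a toy hash-based embedder stands in for a real provider such as Ollama or OpenAI, and the in-memory `VectorStore` stands in for the local vector store; the server's actual API may differ.

```python
# Minimal sketch of the index-then-retrieve pipeline described above.
# All names are hypothetical; a real deployment would call an embedding
# provider (e.g. Ollama or OpenAI) instead of the toy embedder below.
import hashlib
import math

def chunk_text(text: str, chunk_size: int = 200) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(text: str, dims: int = 64) -> list[float]:
    """Stand-in embedder: hash words into a fixed-size, L2-normalized vector."""
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Dot product of normalized vectors equals cosine similarity."""
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """In-memory stand-in for the local vector store."""
    def __init__(self):
        self.entries = []  # list of (embedding, chunk) pairs

    def add(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def query(self, question: str, top_k: int = 2) -> list[str]:
        q = embed(question)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

document = (
    "Rag indexes documents by splitting them into chunks. "
    "Each chunk is embedded and stored in a local vector store. "
    "At query time the most similar chunks are returned to the LLM."
)
store = VectorStore()
for chunk in chunk_text(document, chunk_size=80):
    store.add(chunk)
print(store.query("How does retrieval work?"))
```

Running this prints the stored chunks ranked by cosine similarity to the question; swapping `embed` for a real provider call is the only change a production setup would need conceptually.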

Key Features

  • Indexes documents in various formats (TXT, MD, JSON, CSV)
  • Supports multiple embedding providers (Ollama, OpenAI, etc.)
  • Integrates with the Model Context Protocol (MCP) for seamless AI agent usage (a minimal tool sketch follows this list)
  • Customizable chunk sizes for document processing
  • Uses a local vector store for efficient retrieval
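
To show how such capabilities are typically surfaced to MCP clients, below is a hedged sketch using the FastMCP helper from the official MCP Python SDK. The tool name `query_docs`, its signature, and the placeholder corpus are illustrative assumptions, not this server's actual interface.

```python
# Hypothetical sketch: exposing a RAG query as an MCP tool via the
# official MCP Python SDK (pip install "mcp"). Names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rag")

# Placeholder corpus; a real server would search its local vector store.
CHUNKS = [
    "Rag splits documents into chunks.",
    "Chunks are embedded and stored locally.",
]

@mcp.tool()
def query_docs(question: str, top_k: int = 3) -> list[str]:
    """Return up to top_k indexed chunks relevant to the question."""
    return CHUNKS[:top_k]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

An MCP client connected over stdio could then call `query_docs` and feed the returned chunks into the LLM's prompt as retrieved context.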

Use Cases

  • Integrating RAG capabilities into existing applications
  • Providing contextual information to AI agents for better responses
  • Enabling LLMs to answer questions based on indexed documentation