Empower any LLM, cloud or local, with a RAG-powered private knowledge base and integrated web search, ensuring documents remain on your machine.
This server turns your AI assistant into an expert on your personal or team data by pairing a secure, RAG-powered private knowledge base with integrated web search. Document embeddings are computed entirely locally using Nomic v1.5, so sensitive information never leaves your machine. Setup is zero-ceremony: documents placed in your designated folder are indexed automatically on startup, giving tool-less local LLMs (for example, models served by Ollama) or cloud models like Claude instant access to your specialized knowledge and the live internet.
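The index-on-startup and semantic-search flow described above can be sketched roughly as follows. This is a minimal illustration, not the server's actual code: the Nomic v1.5 model is replaced by a toy bag-of-words embedder, and the `KnowledgeBase` class and its method names are hypothetical.

```python
import math
from collections import Counter
from pathlib import Path

def embed(text):
    # Toy stand-in for a local embedding model (e.g. Nomic v1.5):
    # a sparse bag-of-words vector keyed by lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeBase:
    def __init__(self):
        self.docs = {}  # name -> (text, embedding)

    def index_folder(self, folder):
        # Auto-ingest every text file in the designated folder on startup.
        for path in Path(folder).glob("*.txt"):
            self.add(path.name, path.read_text())

    def add(self, name, text):
        self.docs[name] = (text, embed(text))

    def search(self, query, top_k=3):
        # Rank indexed documents by similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.docs.items(),
                        key=lambda kv: cosine(q, kv[1][1]),
                        reverse=True)
        return [(name, round(cosine(q, emb), 3))
                for name, (_, emb) in ranked[:top_k]]

kb = KnowledgeBase()
kb.add("notes.txt", "RAG combines retrieval with generation for grounded answers")
kb.add("todo.txt", "buy groceries and water the plants")
print(kb.search("retrieval augmented generation", top_k=1))
```

Because every step runs in-process on local data, nothing about the documents or queries has to leave the machine; swapping the toy embedder for a real local model is the only change a production pipeline would need.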
Key Features
01. Integrated semantic knowledge base search and web search
02. Local document embeddings for maximum privacy
03. No Docker required for default local storage
04. Simple and extensible Python codebase
05. Automatic document ingestion from local folders
Use Cases
01. Consolidating research papers and notes for academic study
02. Enhancing local LLMs with private document and web access
03. Creating a searchable knowledge base for team project documentation