
Real-time Web Search RAG

Enhances conversational agents by combining static document knowledge with dynamic real-time web search capabilities.

About

This repository implements a hybrid approach to Retrieval-Augmented Generation (RAG), integrating a local knowledge base with real-time web search. When local documents cannot answer a query, the system queries the web through an MCP server, keeping answers accurate and up to date. Built on LangChain, FAISS, PyTorch, and OpenAI, it provides agentic orchestration that decides per query between RAG and web search, and it cites the sources behind every response. Docker support and an extensible design make it production-ready.
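
As a rough illustration of that decision logic (not the repository's actual interfaces), the sketch below queries a local vector index first and falls back to web search only when the best local similarity score is weak. The `local_search` and `web_search` callables, the score threshold, and the result shapes are assumptions made for the example.

```python
# Minimal sketch of the hybrid retrieval idea: prefer the local knowledge
# base, fall back to web search when local relevance is low.
# `local_search` and `web_search` are hypothetical callables standing in for
# the FAISS index lookup and the MCP web-search tool; scores are assumed to
# be similarities in [0, 1], higher meaning more relevant.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Chunk:
    text: str
    source: str   # file path or URL, kept so answers can cite it
    score: float  # retrieval relevance score


def hybrid_retrieve(
    query: str,
    local_search: Callable[[str, int], List[Tuple[str, str, float]]],
    web_search: Callable[[str, int], List[Tuple[str, str]]],
    k: int = 4,
    min_score: float = 0.75,
) -> List[Chunk]:
    """Return local chunks if any look relevant enough, otherwise web results."""
    local = [Chunk(text, source, score) for text, source, score in local_search(query, k)]
    if local and max(c.score for c in local) >= min_score:
        return local  # the static knowledge base can answer on its own
    # Local documents fall short: route the query to the web instead.
    return [Chunk(text, url, 0.0) for text, url in web_search(query, k)]
```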

Key Features

  • Hybrid RAG Pipeline: Merges local vector search with fresh web/API results.
  • MCP Server Integration: Calls external tools or web APIs when local data is insufficient (see the client sketch after this list).
  • Agentic Orchestration: Dynamically decides between RAG and MCP using advanced logic.
  • Citations & Transparency: Clearly indicates sources for each response (see the citation sketch after this list).
  • Production-Ready: Includes Docker deployment, health checks, error handling, and logging.
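
The MCP call itself might look roughly like the following, assuming the official `mcp` Python SDK and a stdio-based server. The server command, the `web_search` tool name, and its argument names are placeholders rather than the repository's actual configuration.

```python
# Sketch of calling a web-search tool on an MCP server over stdio, using the
# official `mcp` Python SDK. The command, tool name, and arguments below are
# placeholders; the real server and schema depend on the repository's setup.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="python", args=["mcp_server.py"])  # placeholder


async def mcp_web_search(query: str, max_results: int = 4):
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name and argument names are assumptions for illustration.
            result = await session.call_tool(
                "web_search", arguments={"query": query, "max_results": max_results}
            )
            return result.content  # content items returned by the tool


if __name__ == "__main__":
    print(asyncio.run(mcp_web_search("latest FAISS release")))
```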

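Citations can then be attached by numbering the retrieved chunks in the prompt and appending a source list, as in the sketch below. `llm_complete` stands in for the repository's OpenAI call, and the prompt wording is an assumption, not the project's actual template.

```python
# Sketch of attaching citations to a generated answer: each retrieved chunk is
# numbered in the prompt and echoed back in a trailing source list, so every
# response indicates where its information came from.
from typing import Callable, List, Tuple


def answer_with_citations(
    query: str,
    chunks: List[Tuple[str, str]],       # (text, source) pairs from retrieval
    llm_complete: Callable[[str], str],  # prompt -> completion (placeholder)
) -> str:
    context = "\n\n".join(
        f"[{i + 1}] ({src})\n{text}" for i, (text, src) in enumerate(chunks)
    )
    prompt = (
        "Answer the question using only the numbered sources below and cite "
        "them inline as [1], [2], ...\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    answer = llm_complete(prompt)
    sources = "\n".join(f"[{i + 1}] {src}" for i, (_, src) in enumerate(chunks))
    return f"{answer}\n\nSources:\n{sources}"
```
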
Use Cases

  • Developing intelligent systems that dynamically adapt their information retrieval strategy based on query needs.
  • Building advanced AI assistants that combine internal documentation with external, real-time information.
  • Creating conversational agents capable of answering questions from both static knowledge bases and the live web.