About
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG) to enhance LLM applications with domain-specific knowledge. It covers the entire pipeline, from document ingestion and semantic chunking to vector-database integration and advanced retrieval strategies such as hybrid search and reranking. Whether you are building a Q&A system over proprietary data, a documentation assistant, or a research tool, this skill offers production-grade patterns that minimize hallucinations and improve factual accuracy, using industry-standard tools like LangChain and a range of vector stores.
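To make the retrieve-then-generate shape of the pipeline concrete, here is a minimal, self-contained sketch of the core retrieval step. It uses a toy bag-of-words "embedding" and cosine similarity purely for illustration; a real pipeline would substitute an embedding model and a vector store, and the `embed`, `cosine`, and `retrieve` names are illustrative, not part of any library API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: lowercase word counts.
    # A production pipeline would call a real embedding model here.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the query; return the top k.
    # A vector store replaces this linear scan with an ANN index.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Vector stores index embeddings for fast similarity search.",
    "Reranking reorders retrieved chunks with a cross-encoder.",
    "The billing page explains invoice history.",
]
top = retrieve("how does similarity search over embeddings work", chunks, k=1)
# The retrieved chunk is then injected into the prompt as grounding context.
prompt = f"Answer using only this context:\n{top[0]}\n\nQuestion: ..."
```

The same pattern scales up by swapping each piece: real embeddings, an ANN-backed vector store for the linear scan, hybrid search to blend keyword and vector scores, and a reranker applied to the top-k results before they reach the prompt.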