Introduction
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG) within LLM applications to ensure grounded, factual, and domain-specific responses. It covers the entire pipeline from document ingestion and semantic chunking to vector database integration and advanced retrieval patterns like hybrid search and reranking. Ideal for developers building documentation assistants, proprietary knowledge bases, or complex search systems, it offers standardized implementation patterns and best practices to minimize hallucinations and maximize retrieval precision.
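To make the end-to-end flow concrete, here is a minimal, dependency-free sketch of the core stages the skill covers: ingestion, chunking, embedding, similarity-based retrieval, and grounded prompt construction. The function names (`chunk_text`, `embed`, `retrieve`, `build_prompt`) and the toy bag-of-characters embedding are illustrative placeholders, not the skill's actual implementation; in practice the embedding call would hit a real model and the index would live in a vector database.

```python
import math

def chunk_text(text: str, max_words: int = 100, overlap: int = 20) -> list[str]:
    """Split a document into overlapping word windows (a naive stand-in
    for semantic chunking)."""
    words = text.split()
    chunks, step = [], max_words - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + max_words])
        if chunk:
            chunks.append(chunk)
        if start + max_words >= len(words):
            break
    return chunks

def embed(text: str) -> list[float]:
    """Placeholder embedding: a normalized bag-of-characters vector.
    A real pipeline would call an embedding model or API here."""
    vec = [0.0] * 128
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity; vectors are already unit-normalized."""
    return sum(x * y for x, y in zip(a, b))

def build_index(documents: list[str]) -> list[tuple[str, list[float]]]:
    """Ingest documents: chunk each one and store (chunk, embedding) pairs.
    A production system would persist these in a vector database."""
    return [(chunk, embed(chunk)) for doc in documents for chunk in chunk_text(doc)]

def retrieve(index: list[tuple[str, list[float]]], query: str, top_k: int = 3) -> list[str]:
    """Rank stored chunks by similarity to the query embedding and return the top k."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

def build_prompt(query: str, context_chunks: list[str]) -> str:
    """Ground the LLM prompt in retrieved context to discourage hallucination."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer using only the context below. If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    docs = ["Retrieval-Augmented Generation grounds model output in retrieved source documents."]
    index = build_index(docs)
    top_chunks = retrieve(index, "How does RAG reduce hallucinations?")
    print(build_prompt("How does RAG reduce hallucinations?", top_chunks))
```

Advanced patterns such as hybrid search (combining lexical and vector scores) and reranking slot into the `retrieve` step above: the vector search produces candidates, and a second-stage scorer reorders them before prompt assembly.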