Overview
This skill helps developers connect Large Language Models to external knowledge bases by implementing retrieval-augmented generation (RAG) architectures. It provides guidance on document chunking, embedding generation, vector storage, and advanced retrieval patterns such as hybrid search and reranking. Use this skill to reduce hallucinations and enable LLMs to give grounded, source-cited responses based on proprietary or domain-specific data, making it well suited to building production-grade AI search and Q&A applications.