Overview
This skill empowers developers to architect and implement sophisticated Retrieval-Augmented Generation (RAG) pipelines that connect LLMs to external knowledge sources. It provides deep technical guidance on the entire RAG lifecycle, from document preprocessing and optimal chunking strategies to embedding model selection and vector database configuration. By leveraging advanced patterns like hybrid search, cross-encoder reranking, and contextual compression, this skill helps reduce hallucinations and ensures that AI applications provide factually accurate, source-cited responses based on private or domain-specific documentation.