About
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG) in LLM applications, enabling developers to connect AI models to proprietary knowledge bases. It covers the entire pipeline, from document loading and optimized chunking to embedding generation and vector database integration. By applying advanced patterns such as hybrid search, reranking, and contextual compression, it helps developers reduce hallucinations and build high-accuracy Q&A systems, research tools, and documentation assistants with verifiable source citations.
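
As a rough, self-contained illustration of the retrieval loop this pipeline is built on, the sketch below chunks a document, embeds each chunk, and ranks chunks by similarity to a query before assembling a prompt. All names here (`chunk`, `embed`, `cosine`, `retrieve`) are illustrative, and the bag-of-words `embed` is only a toy stand-in for a real embedding model and vector database.

```python
# Minimal RAG retrieval sketch: chunk -> embed -> rank by similarity -> build prompt.
# embed() is a toy bag-of-words stand-in for a real embedding model / vector store.
from collections import Counter
import math


def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (naive fixed-size chunking)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts (placeholder only)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


if __name__ == "__main__":
    document = (
        "API keys can be rotated from the settings page. "
        "Rotating a key immediately invalidates the old one. "
        "Billing questions should be sent to the finance team."
    )
    context = retrieve("How do I rotate an API key?", chunk(document, size=80, overlap=20))
    prompt = "Answer using only this context:\n" + "\n---\n".join(context)
    print(prompt)  # in a real pipeline, this prompt is sent to the LLM for grounded generation
```

In a production pipeline, the toy `embed` would be replaced by an embedding model, the in-memory chunk list by a vector database, and the single similarity ranking could be refined with hybrid search, reranking, and contextual compression, as described above.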