About
This skill provides a comprehensive framework for architecting and implementing high-performance Retrieval-Augmented Generation (RAG) systems. It bridges the gap between LLMs and external knowledge bases with production-ready patterns for document chunking, embedding generation, and multi-stage retrieval. It scales from local Chroma setups to enterprise-scale Pinecone deployments and covers advanced strategies such as hybrid search, reranking, and contextual compression, making it well suited to developers building document Q&A assistants, proprietary research tools, or any AI application that needs grounded, fact-based responses with citations.
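As a minimal sketch of the local-Chroma path described above, the example below splits a document with a simple fixed-size chunker, indexes the chunks in an in-memory Chroma collection using Chroma's default embedding function, and retrieves the top matches for a question so they can be passed to an LLM as context. The chunk size, overlap, collection name, source file, and query are illustrative assumptions, not part of the skill itself.

```python
# Minimal local RAG retrieval sketch using Chroma's default embedding function.
# Chunk size, overlap, the "docs" collection name, and the input file are assumptions.
import chromadb


def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path=...) to persist
collection = client.get_or_create_collection(name="docs")

# Hypothetical source document; replace with your own corpus.
document = open("handbook.txt", encoding="utf-8").read()
chunks = chunk_text(document)
collection.add(
    documents=chunks,
    ids=[f"chunk-{i}" for i in range(len(chunks))],
)

# Retrieve the chunks most relevant to a question; these become the LLM's grounding context.
results = collection.query(query_texts=["What is the refund policy?"], n_results=3)
for doc_id, doc in zip(results["ids"][0], results["documents"][0]):
    print(doc_id, doc[:80])
```

The same pattern extends to the other strategies the skill covers: swapping the default embedding function for a managed model, adding a reranking pass over the retrieved chunks, or pointing the client at a hosted store such as Pinecone instead of a local collection.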