Overview
This skill provides a comprehensive framework for building Retrieval-Augmented Generation (RAG) systems, enabling LLM applications to ground their responses in proprietary or domain-specific data. It covers the entire pipeline, from document chunking and embedding generation to advanced retrieval strategies such as hybrid search and reranking. Whether you are building a documentation assistant, a research tool, or a factual Q&A chatbot, this skill offers production-ready patterns and configurations for popular vector databases like Pinecone, Chroma, and Weaviate, helping reduce hallucinations and improve AI reliability.
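To make the pipeline concrete, here is a minimal, self-contained sketch of the chunk → embed → retrieve flow described above. It is a toy illustration, not the skill's implementation: the "embedding" is a simple bag-of-words counter standing in for a real embedding model (e.g. sentence-transformers or a hosted embedding API), and the in-memory list stands in for a vector database such as Pinecone, Chroma, or Weaviate. All function names here are illustrative.

```python
import math
from collections import Counter

def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping word-based chunks (a simple chunking strategy)."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks

def embed(text):
    """Toy bag-of-words 'embedding'. A real system would call an
    embedding model and store the resulting dense vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, chunks, top_k=2):
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

In a production setup, the same three steps remain, but each is swapped for a robust component: chunking becomes structure-aware (respecting headings and sentence boundaries), embeddings come from a trained model, and retrieval runs against a vector index, often combined with keyword search (hybrid search) and a reranking pass over the candidates.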