About
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG) within LLM applications, enabling developers to connect AI models to external knowledge bases. It covers the entire pipeline from document loading and recursive chunking to advanced retrieval strategies such as hybrid search, multi-query generation, and reranking. By integrating this skill, users can build reliable Q&A systems, documentation assistants, and research tools that return accurate, source-cited responses and reduce model hallucinations by grounding answers in retrieved context.
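To make the pipeline concrete, below is a minimal, self-contained sketch of the core RAG steps the description mentions: recursive chunking, retrieval, and prompt assembly with source citations. All names (`Chunk`, `recursive_chunk`, `retrieve`, `build_prompt`) are hypothetical illustrations, not part of this skill's API or any specific library, and the lexical retriever stands in for the embedding-based hybrid search and reranking a real deployment would use.

```python
# Illustrative-only sketch of a RAG retrieval step; names are hypothetical.
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    source: str  # document identifier used for source citation


def recursive_chunk(text: str, source: str, max_len: int = 60,
                    separators=("\n\n", "\n", ". ", " ")) -> list[Chunk]:
    """Split on the coarsest separator that keeps pieces under max_len,
    recursing with finer separators when a piece is still too long."""
    if len(text) <= max_len:
        return [Chunk(text.strip(), source)] if text.strip() else []
    if not separators:
        # Fall back to a hard character split when no separator fits.
        return [Chunk(text[i:i + max_len], source)
                for i in range(0, len(text), max_len)]
    sep, rest = separators[0], separators[1:]
    chunks: list[Chunk] = []
    for piece in text.split(sep):
        chunks.extend(recursive_chunk(piece, source, max_len, rest))
    return chunks


def retrieve(query: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Toy lexical retriever: rank chunks by word overlap with the query.
    A production pipeline would use embeddings, hybrid search, and a reranker."""
    q_words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.text.lower().split())),
                    reverse=True)
    return scored[:k]


def build_prompt(query: str, retrieved: list[Chunk]) -> str:
    """Assemble grounded, source-cited context for the LLM call."""
    context = "\n".join(f"[{c.source}] {c.text}" for c in retrieved)
    return (f"Answer using only the context below. Cite sources.\n\n"
            f"{context}\n\nQuestion: {query}")


if __name__ == "__main__":
    # Hypothetical document used only to exercise the sketch.
    docs = {"handbook.md": "Employees accrue 1.5 vacation days per month.\n\n"
                           "Unused days roll over for one calendar year."}
    chunks = [c for name, text in docs.items()
              for c in recursive_chunk(text, name)]
    hits = retrieve("vacation days accrue", chunks)
    print(build_prompt("How many vacation days do employees accrue?", hits))
```

The prompt printed at the end shows the grounding pattern: the model is asked to answer only from retrieved, source-tagged chunks, which is what enables the source-cited responses and reduced hallucinations described above.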