Introduction
This skill provides comprehensive guidance and implementation patterns for building Retrieval-Augmented Generation (RAG) systems within LLM applications. It covers the integration of vector databases, embedding models, and advanced retrieval strategies such as hybrid search and reranking, which ground responses in retrieved sources to reduce hallucinations and supply domain-specific knowledge. Whether you are building a documentation assistant, a research tool with source citations, or a Q&A system over private documents, this skill offers the architectures and code snippets needed to produce factual, grounded, and high-performance AI responses.
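
As a rough sketch of the core retrieve-then-ground loop the skill builds on, the toy example below uses a bag-of-words stand-in for a real embedding model and an in-memory list in place of a vector database; every name here (embed, retrieve, build_prompt, the sample corpus) is illustrative and not part of any particular library or of this skill's API.

```python
# Minimal, illustrative RAG sketch: embed the query, rank documents by
# similarity, and place the top results in the prompt as grounding context.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Hypothetical stand-in for an embedding model: a term-frequency vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


corpus = [
    "RAG grounds LLM answers in retrieved documents.",
    "Hybrid search combines keyword and vector retrieval.",
    "Reranking reorders candidates with a cross-encoder.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank the corpus by similarity to the query and keep the top k passages.
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]


def build_prompt(query: str) -> str:
    # The retrieved passages go into the prompt so the LLM answers from them
    # rather than from its parametric memory alone.
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"


print(build_prompt("How does hybrid search work?"))
```

In a production system, the stand-in pieces would be replaced by the components the skill describes: a real embedding model, a vector database for approximate nearest-neighbor search, and optionally a hybrid keyword-plus-vector retriever followed by a reranking step before the prompt is assembled.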