Introduction
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG), enabling LLMs to access external knowledge bases and produce more accurate, grounded responses. It covers the entire pipeline, from document loading and chunking strategies to vector database integration and advanced retrieval patterns such as hybrid search and reranking. By following these implementation patterns, developers can significantly reduce hallucinations, build documentation assistants, and create Q&A systems that reliably cite proprietary sources.
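As a rough illustration of the retrieve-then-generate flow described above, the sketch below (plain Python, no external dependencies) chunks documents into overlapping word windows, indexes them with a placeholder `embed()` function, and retrieves the top-k chunks by cosine similarity to build a grounded prompt. The names `chunk_text`, `embed`, and `retrieve` are hypothetical helpers for illustration only; a real pipeline would swap `embed()` for an actual embedding model and pass the final prompt to an LLM.

```python
import math


def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-based chunks (a simple, common strategy)."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(words), 1), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks


def embed(text: str) -> list[float]:
    """Placeholder embedding: a normalized bag-of-characters vector.
    A real pipeline would call an embedding model here instead."""
    vec = [0.0] * 128
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def retrieve(query: str, index: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query.
    Vectors are unit-normalized, so the dot product equals cosine similarity."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: sum(x * y for x, y in zip(q, item[1])), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]


if __name__ == "__main__":
    docs = ["RAG grounds LLM answers in retrieved context from an external knowledge base."]
    # Index: embed every chunk of every document.
    index = [(c, embed(c)) for doc in docs for c in chunk_text(doc)]
    # Retrieve: fetch the most relevant chunks and assemble a grounded prompt.
    context = "\n\n".join(retrieve("How does RAG reduce hallucinations?", index))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: How does RAG reduce hallucinations?"
    # The prompt would then be sent to an LLM for generation.
    print(prompt)
```

In a production setup, the in-memory `index` list would be replaced by a vector database, and retrieval could combine this dense search with keyword search (hybrid search) before a reranking step, as covered later in this skill.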