Introduction
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG) within LLM applications. It guides developers through the entire pipeline, including document ingestion, semantic chunking, vector database integration, and advanced retrieval strategies like hybrid search and reranking. By following these patterns, developers can create AI systems that provide factually accurate, source-cited responses while significantly reducing hallucinations and improving context-aware performance across proprietary datasets.
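The pipeline above can be sketched in miniature. The following is a minimal, self-contained illustration only: it uses naive sentence chunking, toy bag-of-words "embeddings", and a hybrid score that blends cosine similarity with keyword overlap. All function names and the `alpha` weighting are hypothetical; a real system would use a learned embedding model, a vector database, and a dedicated reranker.

```python
# Toy RAG retrieval sketch: chunk -> embed -> hybrid search.
# Illustrative only; not a production implementation.
from collections import Counter
import math

def chunk(text):
    """Naive sentence-level chunking (real semantic chunking is smarter)."""
    return [s.strip() for s in text.split(".") if s.strip()]

def embed(text):
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_overlap(query, doc):
    """Sparse signal: fraction of query terms present in the chunk."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, chunks, alpha=0.5, top_k=2):
    """Blend dense (cosine) and sparse (keyword) scores; return top_k chunks."""
    qv = embed(query)
    scored = [
        (alpha * cosine(qv, embed(c)) + (1 - alpha) * keyword_overlap(query, c), c)
        for c in chunks
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:top_k]]

corpus = (
    "The vector database stores embeddings for semantic search. "
    "Reranking reorders retrieved chunks by relevance to the query. "
    "Bananas are yellow and grow in tropical climates."
)
chunks = chunk(corpus)
results = hybrid_search("semantic search with embeddings", chunks)
print(results[0])  # the chunk about embeddings and semantic search ranks first
```

In production the cosine step is delegated to a vector store's ANN index and the keyword signal typically comes from BM25, but the blending idea, weighting dense and sparse scores with a tunable `alpha`, carries over directly.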