Introduction
This skill provides a framework for building production-grade LLM applications, offering standardized patterns for structured prompts, few-shot learning, and Chain-of-Thought reasoning. It bridges the gap between raw model access and robust software engineering, with built-in support for Retrieval-Augmented Generation (RAG) pipelines, document chunking, and resilient error handling with exponential backoff. It also supports interleaving multiple skills and composing them through categorical structures so that complex AI workflows remain compositionally coherent.
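
As a rough illustration of the error-handling pattern mentioned above, the sketch below shows a generic retry loop with exponential backoff and jitter. This is not the skill's actual implementation: `call_with_backoff`, `RateLimitError`, and the `client.chat` call in the usage comment are hypothetical placeholders, and the sketch assumes the underlying SDK raises a distinguishable transient error.

```python
import random
import time

# Hypothetical transient error; substitute the real exception type(s)
# raised by whichever provider SDK is in use.
class RateLimitError(Exception):
    pass

def call_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry `fn` on transient failures with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Delay doubles each attempt, capped at max_delay, plus random
            # jitter so concurrent clients do not retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))

# Usage sketch: wrap any flaky call, e.g. a chat-completion request
# (`client` here stands in for whatever LLM client object you use).
# result = call_with_backoff(lambda: client.chat(prompt="..."))
```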