Overview
This skill provides a framework for building production-grade LLM applications using proven prompt engineering strategies. It helps developers improve model controllability and output quality through structured reasoning, dynamic example selection, and iterative refinement workflows. By standardizing how prompts are constructed, from system context through few-shot examples to output formatting, it reduces inconsistent results and cuts token consumption in complex AI workflows.
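As a rough illustration of the standardized prompt construction described above (system context, examples, output format), the sketch below composes those sections into a single prompt string. All names here (`PromptTemplate`, `render`, the field names) are hypothetical, chosen for this example, and are not the skill's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    """Hypothetical container for the standardized prompt sections."""
    system_context: str                      # role and constraints for the model
    examples: list = field(default_factory=list)  # few-shot (input, output) pairs
    output_format: str = ""                  # instructions for the response shape

    def render(self, user_input: str) -> str:
        # Assemble sections in a fixed order so every prompt has the
        # same structure: system context, examples, format, user input.
        parts = [f"System: {self.system_context}"]
        for ex_in, ex_out in self.examples:
            parts.append(f"Example input: {ex_in}\nExample output: {ex_out}")
        if self.output_format:
            parts.append(f"Output format: {self.output_format}")
        parts.append(f"User: {user_input}")
        return "\n\n".join(parts)

template = PromptTemplate(
    system_context="You are a concise technical assistant.",
    examples=[("2+2", "4")],
    output_format="Answer in a single line.",
)
print(template.render("3+5"))
```

Keeping the section order fixed is what makes results reproducible: swapping examples in and out (dynamic example selection) changes only one section while the rest of the prompt stays stable.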