Introduction
This skill provides a comprehensive framework for understanding how context functions within agentic systems. It explores the 'attention budget' constraint of language models and details the anatomy of context, including system prompts, tool definitions, and message history. By mastering techniques like progressive disclosure and context budgeting, developers can optimize token usage, reduce costs, and prevent model degradation during long-horizon tasks. This is an essential resource for anyone designing, debugging, or scaling complex AI-driven workflows. A minimal sketch of what context budgeting can look like in practice follows.
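The sketch below illustrates the general idea of a context budget: reserving tokens for the stable parts of the context (system prompt, tool definitions, response headroom) and trimming message history to fit the remainder. The numbers, names, and the naive most-recent-first trimming strategy are illustrative assumptions, not the skill's prescribed implementation.

```python
# Hypothetical context-budget sketch; all figures and names are assumptions.

TOKEN_BUDGET = 128_000  # assumed total context window

# Fixed reservations for the stable parts of the context.
RESERVED = {
    "system_prompt": 2_000,
    "tool_definitions": 6_000,
    "response_headroom": 8_000,
}


def history_budget(total: int = TOKEN_BUDGET, reserved: dict = RESERVED) -> int:
    """Tokens left over for message history after fixed reservations."""
    return total - sum(reserved.values())


def trim_history(messages: list[str], count_tokens, budget: int) -> list[str]:
    """Keep the most recent messages that fit within the budget.

    A naive stand-in for the compaction and summarization strategies a real
    agent would use; count_tokens is whatever tokenizer the caller supplies.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

For example, with the reservations above, `history_budget()` leaves 112,000 tokens for conversation history, and `trim_history` drops the oldest messages once that allowance is exceeded.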