Overview
This skill provides a specialized framework for managing large-scale Model Context Protocol (MCP) implementations by treating servers as code APIs rather than direct tool calls. It enables agents to progressively load tool definitions, filter large datasets within the execution environment before returning results to the LLM, and persist complex multi-step workflows as reusable skills. By shifting processing to the execution layer and using lazy-loading patterns, it reduces token overhead by up to 98.7%, making it well suited to data-intensive operations and to environments exposing more than 50 tools.
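The core idea, filtering a large dataset inside the execution environment so only a small summary reaches the model's context, can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the MCP server call is stubbed with a local function, and all names (`fetchAllOrders`, `failedOrderSummary`, the `Row` shape) are hypothetical.

```typescript
// Sketch of the "filter in the execution layer" pattern (hypothetical names).
// In a real setup, fetchAllOrders would be a generated wrapper around an
// MCP tool; here it is stubbed so the example is self-contained.

type Row = { id: number; status: string; amount: number };

// Stub standing in for an MCP tool that returns a large dataset.
async function fetchAllOrders(): Promise<Row[]> {
  return Array.from({ length: 10_000 }, (_, i) => ({
    id: i,
    status: i % 100 === 0 ? "failed" : "ok",
    amount: i % 500,
  }));
}

// Filter and aggregate in the execution environment, so only a tiny
// summary object (not 10,000 rows) is returned to the LLM's context.
async function failedOrderSummary(): Promise<{ count: number; total: number }> {
  const rows = await fetchAllOrders();
  const failed = rows.filter((r) => r.status === "failed");
  return {
    count: failed.length,
    total: failed.reduce((sum, r) => sum + r.amount, 0),
  };
}
```

The token savings come from the asymmetry: the 10,000-row payload stays in the execution layer, while the model only ever sees the two-field summary.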