About
This skill provides a standardized framework for tracking interactions between AI agents and human operators, ensuring complete observability into hybrid decision-making processes. It enables developers to monitor when and why agents request help, measure human response times against SLAs, and analyze the impact of human feedback on agent performance. By instrumenting approvals, clarifications, and escalations, teams can identify bottlenecks in hybrid workflows, audit manual overrides, and improve the long-term autonomy and reliability of their AI systems.
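As a rough illustration of what "instrumenting" an agent-to-human interaction might involve, the sketch below records a single approval request with timestamps and an SLA check. It is a minimal, hypothetical example; the names (`InteractionRecord`, the `"approval"` kind, the 300-second SLA) are assumptions for illustration, not the skill's actual API.

```python
# Minimal sketch of recording one agent-to-human interaction.
# All names and defaults here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class InteractionRecord:
    """One agent-to-human request (approval, clarification, or escalation)."""
    kind: str                        # "approval" | "clarification" | "escalation"
    reason: str                      # why the agent asked for help
    requested_at: datetime
    responded_at: Optional[datetime] = None
    response: Optional[str] = None   # e.g. "approved", "rejected", or a free-text answer
    sla_seconds: float = 300.0       # target human response time (assumed default)

    def response_time(self) -> Optional[float]:
        """Seconds between request and response, or None if still pending."""
        if self.responded_at is None:
            return None
        return (self.responded_at - self.requested_at).total_seconds()

    def breached_sla(self) -> bool:
        """True once a response exists and it arrived later than the SLA allows."""
        rt = self.response_time()
        return rt is not None and rt > self.sla_seconds


# Example: the agent asks for approval before a sensitive action; the operator
# responds, and the record can later be aggregated to analyze response times,
# SLA breaches, and the reasons agents request help.
record = InteractionRecord(
    kind="approval",
    reason="delete 12 records flagged as duplicates",
    requested_at=datetime.now(timezone.utc),
)
record.responded_at = datetime.now(timezone.utc)
record.response = "approved"
print(record.response_time(), record.breached_sla())
```

Aggregating such records over time is what would let a team spot bottlenecks (e.g. which reasons trigger the most escalations) and audit manual overrides.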