About
The Claude Code Skill Evaluator is a specialized tool designed to ensure your AI agent capabilities are performant, concise, and compliant with Anthropic's quality standards. It analyzes skill files across ten critical dimensions, including token economy, prompt engineering quality, and anti-pattern detection, and provides actionable feedback. By flagging bloated descriptions, vague instructions, and structural flaws, it helps developers build more reliable and efficient agent behaviors while reducing token consumption.
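Two of the dimensions mentioned above, token economy and vague-instruction detection, could be sketched roughly as follows. This is an illustrative assumption of how such checks might work; the function names, word budget, and vague-word list are hypothetical and not the evaluator's actual interface.

```python
# Hypothetical sketch of two evaluation dimensions; names and thresholds
# are illustrative assumptions, not the evaluator's real API.

def score_token_economy(description: str, word_budget: int = 60) -> float:
    """Score 1.0 when the description fits the budget, decaying as it bloats."""
    words = len(description.split())
    return min(1.0, word_budget / words) if words else 0.0

def detect_vague_instructions(text: str) -> list[str]:
    """Flag hedge phrases that leave the agent without a concrete action."""
    vague = {"somehow", "appropriately", "as needed", "if necessary"}
    return sorted(p for p in vague if p in text.lower())

concise = "Summarize the diff in three bullet points."
print(score_token_economy(concise))   # within budget -> 1.0
print(detect_vague_instructions("Handle errors appropriately, as needed."))
```

A real evaluator would aggregate scores like these across all ten dimensions into a single report, but the idea is the same: each check reads the skill file and emits a score or a list of concrete findings.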