The Claude Code Skill Evaluator is a specialized tool designed to ensure your AI agent capabilities are performant, concise, and compliant with Anthropic's quality standards. It analyzes skill files across ten critical dimensions—including token economy, prompt engineering quality, and anti-pattern detection—to provide actionable feedback. By identifying bloated descriptions, vague instructions, or structural flaws, this skill helps developers build more reliable and efficient agent behaviors, resulting in better LLM performance and reduced token consumption.
Key Features
- 10-point dimensional quality analysis
- Standardized compliance checking for SKILL.md files
- Actionable improvement reports with priority levels
- Automated anti-pattern and red-flag detection
- Token economy and prompt engineering optimization
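To illustrate how a dimensional analysis like this can produce a scored, prioritized report, here is a minimal sketch in Python. Everything in it is hypothetical — the function name, the two sample dimensions (token economy and anti-pattern detection), the word budget, and the vague-term list are assumptions for illustration, not the evaluator's actual rubric.

```python
import re

# Hypothetical vague terms flagged as anti-patterns (illustrative only).
VAGUE_TERMS = {"various", "etc", "somehow", "appropriately"}

def evaluate_skill(skill_md: str) -> dict:
    """Score a SKILL.md body on two illustrative dimensions (0-10 each)."""
    words = re.findall(r"[a-z']+", skill_md.lower())

    # Token economy: lose one point per 100 words past a 200-word budget.
    token_economy = max(0, 10 - max(0, (len(words) - 200) // 100))

    # Anti-pattern detection: lose two points per vague term found.
    vague_hits = sorted(VAGUE_TERMS.intersection(words))
    anti_patterns = max(0, 10 - 2 * len(vague_hits))

    return {
        "scores": {"token_economy": token_economy,
                   "anti_patterns": anti_patterns},
        "flags": [f"vague term: {t}" for t in vague_hits],
        "priority": "high" if min(token_economy, anti_patterns) < 5 else "low",
    }

report = evaluate_skill("Use this skill as needed for various tasks. Etc.")
```

A real evaluator would check far more dimensions, but the shape of the output — per-dimension scores, concrete flags, and an overall priority — matches the kind of actionable report described above.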