The Analyzing Component Quality skill provides a rigorous framework for evaluating the effectiveness of Claude Code agents, skills, commands, and hooks. It moves beyond simple syntax validation to assess high-level attributes such as description clarity, the principle of least privilege in tool permissions, and the specificity of auto-invoke triggers. By generating detailed quality scorecards and providing concrete 'before and after' improvement suggestions, this skill ensures that AI components are secure, user-friendly, and trigger exactly when intended.
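A quality scorecard of this kind can be pictured as a small data structure pairing each scored dimension with its before/after suggestion. The sketch below is illustrative only; the `DimensionScore` and `Scorecard` names and fields are assumptions, not the skill's actual output format:

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    name: str    # e.g. "description clarity"
    score: int   # 1 (poor) to 5 (excellent)
    before: str  # original snippet flagged by the review
    after: str   # concrete suggested improvement

@dataclass
class Scorecard:
    component: str
    dimensions: list[DimensionScore] = field(default_factory=list)

    def overall(self) -> float:
        """Average score across all evaluated dimensions."""
        if not self.dimensions:
            return 0.0
        return sum(d.score for d in self.dimensions) / len(self.dimensions)

card = Scorecard("my-skill")
card.dimensions.append(DimensionScore(
    name="description clarity",
    score=2,
    before="Helps with stuff.",
    after="Analyzes Python test failures and proposes minimal patches.",
))
print(card.overall())  # → 2.0
```

Keeping the before/after text alongside the numeric score is what turns a grade into an actionable review.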
Key Features
- Multi-dimensional quality scoring (1-5 scale)
- Automated scoring via heuristic scripts
- Principle-of-least-privilege security auditing
- Auto-invoke trigger effectiveness analysis
- Concrete, code-level improvement suggestions
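The heuristic scoring mentioned above could work along these lines: simple checks that penalize vague descriptions and overly broad tool grants. This is a minimal sketch under assumed rules; the vague-term list, thresholds, and function names are hypothetical, not the skill's actual heuristics:

```python
import re

# Hypothetical vocabulary of vague phrases that hurt description clarity.
VAGUE_TERMS = {"stuff", "things", "various", "helps with"}

def score_description(desc: str) -> int:
    """Score description clarity 1-5, penalizing vagueness and brevity."""
    score = 5
    lowered = desc.lower()
    if any(term in lowered for term in VAGUE_TERMS):
        score -= 2
    if len(desc.split()) < 8:  # too short to guide auto-invocation
        score -= 1
    if not re.search(r"\b(when|for|to)\b", lowered):  # no usage context
        score -= 1
    return max(score, 1)

def score_least_privilege(tools: list[str]) -> int:
    """Score tool permissions 1-5: fewer, narrower grants score higher."""
    if "*" in tools:  # a wildcard grant is the worst case
        return 1
    return max(5 - max(len(tools) - 3, 0), 1)

print(score_description("Helps with stuff."))   # → 1
print(score_least_privilege(["Read", "Grep"]))  # → 5
```

Rules this simple are cheap to run on every component, and their output maps directly onto the before/after suggestions the scorecard reports.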