The Evaluation Framework skill provides a structured methodology for assessing artifacts through configurable weighted criteria and automated decision logic. By abstracting the process of defining metrics, assigning importance, and establishing clear quality gates, it enables developers to build consistent scoring rubrics for code reviews, project proposals, and system audits. This skill helps teams transition from subjective feedback to data-driven decisions by mapping numerical scores to specific, actionable outcomes like approval, conditional acceptance, or rejection.
Key Features
1. Customizable decision thresholds
2. Consistent scoring methodology
3. Weighted criteria scoring system
4. Reusable evaluation patterns
5. Automated quality gate logic
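The core idea of weighted criteria feeding automated quality gates can be illustrated with a minimal sketch. The names (`Criterion`, `evaluate`) and the threshold values are illustrative assumptions, not the skill's actual API:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One evaluation dimension with a relative weight (hypothetical example)."""
    name: str
    weight: float  # relative importance; normalized by total weight below
    score: float   # raw score on a 0-10 scale

def evaluate(criteria, approve_at=8.0, conditional_at=6.0):
    """Compute the weighted average score and map it to a decision outcome.

    Thresholds are assumed values: >= approve_at -> approve,
    >= conditional_at -> conditional acceptance, otherwise reject.
    """
    total_weight = sum(c.weight for c in criteria)
    weighted = sum(c.weight * c.score for c in criteria) / total_weight
    if weighted >= approve_at:
        outcome = "approve"
    elif weighted >= conditional_at:
        outcome = "conditional"
    else:
        outcome = "reject"
    return weighted, outcome

# Example rubric for a code review
criteria = [
    Criterion("correctness", weight=0.5, score=9.0),
    Criterion("readability", weight=0.3, score=7.0),
    Criterion("test coverage", weight=0.2, score=6.0),
]
score, outcome = evaluate(criteria)
# score = 0.5*9.0 + 0.3*7.0 + 0.2*6.0 = 7.8 -> "conditional"
```

Separating the rubric (criteria and weights) from the gate logic (thresholds) is what makes the pattern reusable: the same `evaluate` function can back different rubrics for proposals, audits, or reviews.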