This skill acts as a critical integrity gatekeeper for Claude Code, preventing premature or false claims of task completion. It mandates a 'Proof Before Assertions' approach, requiring the AI to identify, execute, and analyze relevant verification commands—such as unit tests, linters, or build scripts—before committing code or signaling success. By eliminating guesswork and 'confidence-based' reporting, it ensures high-integrity development, maintains trust between human and AI, and prevents broken code from entering the codebase.
Key Features
1. Comprehensive exit code and failure count analysis for builds and linters
2. Mandatory verification command execution before any status claims
3. Standardized Red-Green-Refactor cycle validation for regression tests
4. Automated rationalization prevention for 'should work' assumptions
5. Independent verification of VCS diffs against agent success reports
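The exit-code gatekeeping described above can be sketched as a small verification loop: run every check, record each exit code, and only report success when all of them pass. This is an illustrative sketch, not the skill's actual implementation; the helper name and command set are assumptions for the example.

```python
import subprocess

def verify_before_claiming(commands):
    """Run each verification command and collect its exit code.
    Success may be claimed only if every command exits with 0."""
    exit_codes = {}
    for name, cmd in commands.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        exit_codes[name] = proc.returncode
    all_passed = all(code == 0 for code in exit_codes.values())
    return all_passed, exit_codes

# Hypothetical verification suite; real projects would invoke their
# own test runner, linter, and build script here.
ok, codes = verify_before_claiming({
    "tests": ["python", "-c", "import sys; sys.exit(0)"],
    "lint":  ["python", "-c", "import sys; sys.exit(0)"],
})
print("VERIFIED" if ok else f"NOT VERIFIED: {codes}")
```

The key design point is that the status claim is derived from observed exit codes rather than from the agent's confidence, so a single non-zero code blocks the success report.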