Enforces a strict discipline of running fresh verification commands and analyzing actual output before claiming any task is finished or successful.
The Verification Before Completion skill establishes a non-negotiable standard for AI-assisted development, preventing 'hallucinated' success reports. By requiring evidence through fresh command execution, it ensures that tests, linters, builds, and requirement checks have actually passed before an agent claims completion. This skill transforms the workflow from assumption-based to evidence-based, significantly reducing regression rates and improving the reliability of subagent delegations by mandating an independent check of all outputs and diffs.
Key Features
1. Mandatory fresh verification command execution
2. Comprehensive checklist for diverse claim types (tests, builds, linters)
3. Independent verification of subagent success reports
4. Structured output analysis for exit codes and failure counts
5. Verification-gated completion claims
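The features above can be sketched as a small verification gate: run a fresh command, then judge success only from the observed exit code and output. This is an illustrative sketch, not the skill's actual implementation; the `verify` helper and the pytest-style `"N failed"` pattern are assumptions made for the example.

```python
import re
import subprocess

def verify(command: list[str]) -> bool:
    """Run a fresh verification command and analyze its actual output.

    Returns True only when the exit code is zero AND the output reports
    no failures -- completion is never claimed on assumption alone.
    """
    result = subprocess.run(command, capture_output=True, text=True)
    output = result.stdout + result.stderr
    # Evidence check 1: the process itself must have exited successfully.
    if result.returncode != 0:
        return False
    # Evidence check 2: scan for an explicit failure count in the output
    # (hypothetical pattern modeled on pytest-style summaries).
    match = re.search(r"(\d+) failed", output)
    if match and int(match.group(1)) > 0:
        return False
    return True

# The completion claim is gated on the verification result.
if verify(["python", "-c", "print('3 passed, 0 failed')"]):
    print("claim: task complete")
```

A subagent's success report would be treated the same way: rather than trusting the report, the supervising agent re-runs `verify` with its own fresh command and inspects the output independently.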
Use Cases
1. Validating that bug fixes actually resolve the original failing scenario
2. Confirming build stability and linting compliance in CI/CD pre-checks
3. Ensuring all unit and integration tests pass before a final submission