Evidence Verification establishes a 'show, don't tell' standard for AI agents, ensuring that no task is marked complete without verifiable proof of success. By implementing production-grade quality assurance protocols, it guides agents to systematically collect test results, coverage metrics, build logs, and deployment health checks. This skill is particularly valuable for maintaining high-integrity codebases, as it replaces verbal assertions with documented evidence such as exit codes and linter outputs. Whether performing code reviews or deploying to production, this skill ensures every action meets rigorous quality standards and provides a structured, audit-ready verification trail.
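The evidence-collection idea described above can be sketched in a few lines: run a command, capture its exit code and output, and record them as structured proof rather than a verbal claim of success. This is a minimal illustration, not this skill's actual implementation; the `collect_evidence` helper and its field names are hypothetical.

```python
import json
import subprocess
import sys
from datetime import datetime, timezone

def collect_evidence(label, command):
    """Run a command and record its exit code and output as evidence,
    replacing a verbal assertion ("tests pass") with documented proof.
    NOTE: hypothetical helper for illustration only."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "label": label,
        "command": command,
        "exit_code": result.returncode,
        "stdout_tail": result.stdout[-2000:],  # keep the final output lines as proof
        "passed": result.returncode == 0,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: treat a trivial command as the "test suite" being verified
evidence = collect_evidence("unit tests", [sys.executable, "-c", "print('2 tests passed')"])
print(json.dumps(evidence, indent=2))
```

Because the record carries the exit code and output verbatim, a reviewer can audit the claim later instead of trusting the agent's summary.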
Key Features
- Production-grade quality gate enforcement and minimum standards
- Automated evidence collection protocols for tests and builds
- Standardized verification reporting templates for consistent documentation
- Multi-stage verification workflows for reviews and deployments
- Structured evidence storage in shared context and log files
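The last two features, standardized reporting and structured evidence storage, could plausibly be combined as an append-only log of evidence entries, with task completion gated on every recorded check passing. The sketch below is an assumption about how such a log might work; the `write_verification_report` function, the JSONL format, and the file path are all hypothetical.

```python
import json
from pathlib import Path

def write_verification_report(entries, path):
    """Append evidence entries to a shared log file so later reviewers
    can audit the verification trail, and report whether every gate passed.
    NOTE: hypothetical sketch, not this skill's actual storage format."""
    log = Path(path)
    with log.open("a") as f:
        for entry in entries:
            f.write(json.dumps(entry) + "\n")  # one JSON record per line (JSONL)
    # A task counts as complete only when every recorded check passed.
    return all(e.get("passed") for e in entries)

entries = [
    {"label": "lint", "exit_code": 0, "passed": True},
    {"label": "tests", "exit_code": 0, "passed": True},
]
print(write_verification_report(entries, "/tmp/verification_log.jsonl"))
```

An append-only log preserves the full history of verification attempts, which is what makes the trail audit-ready rather than a snapshot of the latest run.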