The Define Validation skill empowers Claude Code users to establish rigorous, automated quality gates for their development tasks. It creates a standardized framework for AI self-validation, incorporating essential defaults like 90% test coverage, comprehensive mocking of external dependencies (time, network, filesystem, and LLM calls), and fresh-context sub-agent reviews. Whether you're working with a formal specification or starting from scratch, this skill ensures that every piece of work has clear, actionable criteria that the AI must meet before declaring a task finished, preventing regressions and reducing manual oversight.
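To make the mocking default concrete, here is a minimal sketch (not the skill's own code) of a test that stubs out both time and network access, so it never touches real external dependencies; `fetch_report` is a hypothetical function under test:

```python
# Illustrative only: mocking time and network calls in a test, in the
# spirit of the skill's "mock external dependencies" default.
from unittest import mock
import time
import urllib.request

def fetch_report(url):
    """Hypothetical code under test: stamps a fetched report with the fetch time."""
    body = urllib.request.urlopen(url).read().decode()
    return {"fetched_at": time.time(), "body": body}

def test_fetch_report_is_fully_mocked():
    fake_response = mock.Mock()
    fake_response.read.return_value = b"ok"
    # Patch both external dependencies: the network call and the clock.
    with mock.patch("urllib.request.urlopen", return_value=fake_response), \
         mock.patch("time.time", return_value=1700000000.0):
        report = fetch_report("https://example.com/report")
    assert report == {"fetched_at": 1700000000.0, "body": "ok"}

test_fetch_report_is_fully_mocked()
print("mocked test passed")
```

The same pattern extends to filesystem and LLM calls: patch the boundary, pin the return value, and assert on deterministic output.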
Key Features
- Identifies feature-specific verification steps through proactive probing
- Generates standardized AI validation checklists automatically
- Appends validation logic to existing specs or saves to .lore/validation/
- Integrates fresh-context sub-agent code reviews into the workflow
- Enforces 90%+ test coverage and dependency mocking standards
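As an illustration only (the skill's actual file layout is not documented here), a checklist saved under .lore/validation/ might look something like this; the filename and feature-specific items are invented:

```markdown
<!-- Hypothetical example: .lore/validation/user-auth.md -->
# Validation: user authentication feature

- [ ] All new code paths covered by tests; coverage >= 90%
- [ ] Time, network, filesystem, and LLM calls mocked in every test
- [ ] Feature-specific check: login rejects expired sessions
- [ ] Fresh-context sub-agent review completed with no blocking findings
```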