Introduction
The Guardrails skill provides a 4-layer security framework for hardening AI agents against vulnerabilities and quality issues. It guides developers through implementing input screening for jailbreaks and PII, system-level ethical constraints, output validation to catch hallucinations, and continuous monitoring. Using standardized templates and checklists, the skill helps keep Claude-driven applications safe, compliant, and reliable while preventing unauthorized capability access. A minimal sketch of how the four layers fit together is shown below.
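The following is a minimal, illustrative sketch of a 4-layer guardrail pipeline, assuming a generic chat-style request flow. All function names, patterns, and constraint text here are hypothetical examples, not the skill's actual templates or checklists.

```python
import re

# Hypothetical layer 1 rules: jailbreak phrases to reject and PII patterns to redact.
JAILBREAK_PATTERNS = [r"ignore (all )?previous instructions", r"pretend you have no rules"]
PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. US-SSN-like strings

# Hypothetical layer 2 constraints prepended to every request as a system message.
SYSTEM_CONSTRAINTS = (
    "Follow the application's ethical policy. "
    "Refuse requests for capabilities outside the approved set."
)


def screen_input(user_text: str) -> str:
    """Layer 1: reject jailbreak attempts and redact PII before the model sees the input."""
    for pattern in JAILBREAK_PATTERNS:
        if re.search(pattern, user_text, re.IGNORECASE):
            raise ValueError("Input rejected: possible jailbreak attempt")
    for pattern in PII_PATTERNS:
        user_text = re.sub(pattern, "[REDACTED]", user_text)
    return user_text


def build_prompt(clean_text: str) -> list[dict]:
    """Layer 2: attach system-level ethical constraints to the request."""
    return [
        {"role": "system", "content": SYSTEM_CONSTRAINTS},
        {"role": "user", "content": clean_text},
    ]


def validate_output(response: str, allowed_sources: set[str]) -> str:
    """Layer 3: crude hallucination check; reject output citing sources outside the allowed set."""
    cited = set(re.findall(r"\[source:(\w+)\]", response))
    if cited - allowed_sources:
        raise ValueError("Output rejected: cites sources outside the allowed set")
    return response


def log_event(stage: str, detail: str) -> None:
    """Layer 4: continuous monitoring via structured logging (stdout here for simplicity)."""
    print(f"[guardrails] {stage}: {detail}")
```

In this sketch the layers compose in order: screen the input, build a constrained prompt, validate the model's output, and log each stage. A real deployment would replace the regex rules and source check with the skill's standardized templates and checklists.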