Scans LLM responses for system prompt leaks, PII, and toxic language.
Validates SQL queries and file write operations to prevent malicious actions.
Comprehensive threat detection for AI agents (prompt injection, jailbreak, SQL injection, data exfiltration).
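A minimal sketch of what SQL validation like this can look like (illustrative rules only, not this tool's actual implementation): permit a single SELECT statement and reject statement chaining or mutating keywords.

```python
import re

# Hypothetical denylist of state-mutating keywords; a real validator would
# use a proper SQL parser rather than regexes.
DENYLIST = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|create|grant|exec)\b",
    re.IGNORECASE,
)

def is_safe_query(sql: str) -> bool:
    """Allow only a single read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # embedded semicolon -> possible statement chaining
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not DENYLIST.search(stripped)
```

Regex-based checks are easy to bypass (comments, encodings), which is why production validators typically parse the query into an AST before deciding.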
Implements a fail-closed security model to block requests on errors or timeouts, preventing bypasses.
PII redaction from prompts and rehydration into responses for enhanced privacy.
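The fail-closed idea can be sketched as a wrapper (names here are illustrative, not this tool's API): if the security check errors out or exceeds its deadline, the request is blocked rather than waved through.

```python
import concurrent.futures

def guarded(check, request, timeout_s=2.0):
    """Return True only if `check(request)` finishes in time and passes.

    Any exception or timeout in the checker yields False (block) --
    the gate never fails open.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(check, request)
        try:
            return bool(future.result(timeout=timeout_s))
        except Exception:  # checker crash or deadline exceeded -> block
            return False
```

The design choice is that the default outcome on any failure path is denial; a fail-open gate would let an attacker bypass checks simply by crashing or stalling them.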
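The redact/rehydrate round trip can be sketched as follows (a simplified example assuming email-only PII; real systems cover many entity types): PII in the prompt is swapped for placeholder tokens before the model sees it, and the placeholders are mapped back in the model's response.

```python
import re

# Naive email matcher for illustration; production redactors use NER models
# and pattern libraries covering phones, SSNs, addresses, etc.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str):
    """Replace each email with a placeholder; return text and the mapping."""
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(repl, text), mapping

def rehydrate(text: str, mapping: dict) -> str:
    """Restore original PII values wherever placeholders appear."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

Usage: `redacted, mapping = redact(prompt)` before the model call, then `rehydrate(response, mapping)` on the way back, so the provider never receives the raw PII.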