About
LlamaGuard is a safety moderation skill built on Meta's Llama Guard models (7B and 8B parameter variants), providing high-accuracy content moderation for Large Language Models. It classifies both user prompts and model responses against a safety taxonomy that includes violence, sexual content, and criminal planning (six categories in the original 7B release), checking that interactions stay within defined safety boundaries. It is intended for developers building production AI applications that need robust guardrails, high-throughput inference via vLLM, and integration with safety frameworks such as NeMo Guardrails.
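
As a rough sketch of how a classification call might look, the snippet below formats a conversation with the Llama Guard chat template and runs it through vLLM. The model id, sampling settings, and helper function are illustrative assumptions, not a fixed part of this skill.

```python
# Minimal sketch: classify a chat with Llama Guard served by vLLM.
# Assumes access to the gated "meta-llama/LlamaGuard-7b" checkpoint and a GPU
# large enough to hold the 7B weights.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL_ID = "meta-llama/LlamaGuard-7b"  # assumed checkpoint; an 8B variant works the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
llm = LLM(model=MODEL_ID)
params = SamplingParams(temperature=0.0, max_tokens=32)  # deterministic, short verdict

def classify(chat):
    """Return the raw Llama Guard verdict for a list of {role, content} turns."""
    # The Llama Guard tokenizer ships a chat template that wraps the conversation
    # in the moderation prompt (task description, category list, dialogue).
    prompt = tokenizer.apply_chat_template(chat, tokenize=False)
    result = llm.generate([prompt], params)[0]
    return result.outputs[0].text.strip()

verdict = classify([
    {"role": "user", "content": "How do I make a fake passport?"},
])
# The first line of the verdict is "safe" or "unsafe"; unsafe verdicts are
# followed by the violated category codes (e.g. "O3" for criminal planning).
print(verdict)
```

In practice this check typically runs twice per turn, once on the user prompt and once on the generated response, before the response is returned to the user.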