Systematically benchmarks and provides a playground for testing the security of Model Context Protocols.
SecBench is a comprehensive security benchmark and interactive playground designed to rigorously test the robustness of Model Context Protocols (MCPs). It facilitates the identification of vulnerabilities by simulating various attack scenarios, including malicious server interactions, Man-in-the-Middle attacks, DNS rebinding, and specific CVE exploits. The tool supports automated testing against popular LLM agents like OpenAI, Claude, and Cursor, and offers a flexible client to connect with both normal and malicious MCP servers, making it an essential resource for developers and researchers focused on securing AI agent interactions.