About
This repository provides a proof-of-concept demonstration of various MCP Poisoning Attacks that can compromise real-world AI agent workflows. It showcases scenarios like code generation poisoning, financial report exfiltration, competitor analysis data leaks, meeting transcript leaks, code review exfiltration, and cross-server shadowing attacks. The PoC includes a fake MCP server and an agent simulation to illustrate the impact of these attacks, which can lead to silent data exfiltration and cross-tool hijacking without visible clues to the user. This is intended for educational and research purposes only.
Key Features
- Simulates financial report exfiltration
- Shows code review exfiltration
- Illustrates competitor analysis data leaks
- Implements a cross-server shadowing attack
- Demonstrates code generation poisoning
- Replicates meeting transcript leaks
- 4 GitHub stars
Use Cases
- Researching AI security vulnerabilities
- Testing the security of AI agent workflows
- Educating developers about MCP poisoning attacks