Demonstrates MCP Poisoning Attacks affecting AI agent workflows.
This repository provides a proof-of-concept demonstration of various MCP Poisoning Attacks that can compromise real-world AI agent workflows. It showcases scenarios like code generation poisoning, financial report exfiltration, competitor analysis data leaks, meeting transcript leaks, code review exfiltration, and cross-server shadowing attacks. The PoC includes a fake MCP server and an agent simulation to illustrate the impact of these attacks, which can lead to silent data exfiltration and cross-tool hijacking without visible clues to the user. This is intended for educational and research purposes only.