Automates red team vs. blue team AI competitions to discover and patch vulnerabilities in open-source ecosystems using LLM-driven agents.
This project implements a Kubernetes-native framework for autonomous adversarial security testing. It combines LLM-driven decision-making with offensive security tooling and retrieval-augmented generation (RAG) knowledge bases, enabling a Red Team agent to discover vulnerabilities without human direction. The long-term vision is to establish dual-LLM adversarial competitions, where Red Team and Blue Team agents compete to secure the open-source ecosystem by finding and patching vulnerabilities before human attackers can exploit them.
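
The sketch below illustrates the kind of decide-act-observe loop such a Red Team agent might run; it is a minimal, self-contained example, not the project's actual code. The names `RedTeamAgent`, `retrieve_context`, `query_llm`, and `run_tool` are hypothetical stand-ins for the real RAG store, LLM client, and sandboxed offensive tooling.

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """A single observation the agent records about a target."""
    target: str
    action: str
    evidence: str


@dataclass
class RedTeamAgent:
    """Illustrative decide-act-observe loop for a Red Team agent (not the real implementation)."""
    target: str
    findings: list[Finding] = field(default_factory=list)

    def retrieve_context(self, query: str) -> str:
        # Placeholder for a RAG lookup against the vulnerability knowledge base.
        return f"(retrieved notes for: {query})"

    def query_llm(self, prompt: str) -> str:
        # Placeholder for the LLM call that chooses the next offensive action.
        return f"nmap -sV {self.target}"

    def run_tool(self, command: str) -> str:
        # Placeholder for sandboxed execution of security tooling inside the cluster.
        return f"(output of `{command}`)"

    def step(self) -> None:
        # One iteration: retrieve knowledge, let the LLM pick a probe, run it, record the result.
        context = self.retrieve_context(f"known weaknesses of {self.target}")
        action = self.query_llm(f"Context:\n{context}\n\nChoose the next probe for {self.target}.")
        evidence = self.run_tool(action)
        self.findings.append(Finding(self.target, action, evidence))


if __name__ == "__main__":
    agent = RedTeamAgent(target="demo-service.local")
    agent.step()
    for finding in agent.findings:
        print(finding)
```

In the envisioned dual-agent setup, a Blue Team agent would consume these findings and propose patches, closing the loop between discovery and remediation.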