# PRIMS
A lightweight, open-source Model Context Protocol (MCP) server that lets LLM agents safely execute arbitrary Python code in a secure, ephemeral sandbox.
## About
PRIMS is a lightweight, open-source Model Context Protocol (MCP) server that lets LLM agents safely execute arbitrary Python code. Each execution runs in a secure, throw-away sandbox, which keeps runs isolated and reproducible. The server exposes a single `run_code` tool that creates a virtual environment, installs requested dependencies, handles file mounts, and streams back standard output and error. It can run directly on the host or inside Docker.
## Key Features
- Offers zero-configuration setup for local development or Docker deployment.
- Executes user-supplied Python code in a secure, isolated sandbox.
- Spins up fresh virtual environments for each execution, installing requested pip packages.
- Streams back standard output (stdout) and standard error (stderr) from executed code.
- Supports mounting optional read-only files into the execution environment.
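The execution flow described above (fresh virtual environment per run, captured stdout/stderr) can be sketched with the standard library. This is an illustrative sketch of the pattern, not the PRIMS implementation; the `run_code` function name mirrors the tool the server exposes, but its signature and internals here are assumptions.

```python
# Illustrative sketch (not the actual PRIMS code): execute user-supplied
# Python in a throw-away virtual environment and capture its output.
import subprocess
import sys
import tempfile
import venv
from pathlib import Path


def run_code(code: str, timeout: float = 30.0) -> dict:
    """Run `code` in a fresh, isolated venv; return its stdout and stderr."""
    with tempfile.TemporaryDirectory() as tmp:
        env_dir = Path(tmp) / "venv"
        # Fresh interpreter per execution; skip pip for speed in this sketch.
        venv.create(env_dir, with_pip=False)
        bindir = "Scripts" if sys.platform == "win32" else "bin"
        python = env_dir / bindir / "python"

        script = Path(tmp) / "snippet.py"
        script.write_text(code)

        # Capture both streams, as the server streams them back to the agent.
        result = subprocess.run(
            [str(python), str(script)],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return {"stdout": result.stdout, "stderr": result.stderr}


if __name__ == "__main__":
    print(run_code("print('hello from the sandbox')"))
```

A production server would additionally install requested pip packages into the venv, enforce resource limits, and mount any read-only files before invoking the snippet.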
## Use Cases
- Allowing LLM agents to safely perform dynamic code execution and testing.
- Facilitating automated code debugging and error resolution through LLM integration.
- Providing a secure runtime environment for AI-driven code generation and validation.