LLM Sandbox
Securely executes code generated by Large Language Models in an isolated and portable environment.
About
LLM Sandbox is a portable sandbox runtime for executing code generated by Large Language Models (LLMs), with a strong emphasis on security. It isolates execution inside container backends such as Docker, Kubernetes, and Podman, and supports multiple programming languages, including Python, JavaScript, Java, C++, Go, and R. It also integrates with popular LLM frameworks and can capture artifacts such as plots. In addition, it ships a Model Context Protocol (MCP) server, so AI assistants like Claude Desktop can run code and capture visualizations directly within sandboxed environments.
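A minimal usage sketch is shown below. It assumes the `SandboxSession` context manager and its `run()` method exposed by the `llm_sandbox` package; exact parameter names and the shape of the returned result may differ between releases, and the chosen backend (Docker by default) must be installed and reachable.

```python
from llm_sandbox import SandboxSession

# Code produced by an LLM that we do not want to run on the host directly.
llm_generated_code = """
import math
print(math.sqrt(2))
"""

# Spin up an isolated container, execute the code inside it,
# and tear the container down when the block exits.
with SandboxSession(lang="python") as session:
    result = session.run(llm_generated_code)
    # The result object is assumed to carry the captured console output.
    print(result)
```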
Key Features
- Extensive multi-language support for Python, JavaScript, Java, C++, Go, and R
- Flexible container backend support including Docker, Kubernetes, and Podman
- Seamless integration with popular LLM frameworks like LangChain and OpenAI
- Isolated and secure code execution with customizable policies and resource limits
- Automatic capture of plots and visualizations from executed code (see the artifact-capture sketch after this list)
- 397 GitHub stars
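The sketch below illustrates artifact capture. The `ArtifactSandboxSession` class, the `result.plots` attribute, and the per-plot fields are assumptions about the installed version's API; treat them as placeholders rather than a definitive interface.

```python
import base64
from llm_sandbox import ArtifactSandboxSession  # assumed artifact-capturing session

plotting_code = """
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])
plt.title("Sample plot")
plt.show()
"""

# Run the code in the sandbox and collect any figures it produced.
with ArtifactSandboxSession(lang="python") as session:
    result = session.run(plotting_code)
    # result.plots is assumed to hold captured figures as base64-encoded images.
    for i, plot in enumerate(result.plots):
        with open(f"plot_{i}.png", "wb") as f:
            f.write(base64.b64decode(plot.content_base64))
```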
Use Cases
- Securely executing code generated by Large Language Models
- Enabling AI assistants to perform code interpretation and data analysis in a sandboxed environment
- Integrating a secure code interpreter into LLM-powered applications and agents (a minimal tool-wrapping sketch follows below)
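One way to wire the sandbox into an agent is to wrap it as a LangChain tool. The sketch below assumes `langchain_core` is installed and reuses the `SandboxSession` API from the earlier example; it is an illustration, not the library's official integration path.

```python
from langchain_core.tools import tool
from llm_sandbox import SandboxSession


@tool
def run_python(code: str) -> str:
    """Execute untrusted Python code in an isolated sandbox and return its output."""
    with SandboxSession(lang="python") as session:
        result = session.run(code)
        return str(result)


# The tool can then be bound to a chat model, e.g.:
# llm_with_tools = llm.bind_tools([run_python])
```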