Executes Python code generated by LLMs in a secure, locally hosted environment, leveraging Hugging Face's `LocalPythonExecutor` and MCP for LLM application integration.
This tool provides a secure way to execute Python code generated by Large Language Models (LLMs) locally, addressing the need for a safe, easy-to-set-up Python runtime. By wrapping Hugging Face's `LocalPythonExecutor` in an MCP (Model Context Protocol) server, it lets LLM applications such as Claude Desktop or Cursor access a restricted Python environment without the complexity of Docker or virtual machines. It balances ease of use with security, limiting file I/O operations and restricting importable modules to mitigate the risks of executing untrusted code.
Key Features
1. Limits available Python imports to a predefined safe list.
2. Runs in a `uv`-managed Python virtual environment.
3. Offers safer Python code execution than direct `eval()` calls.
4. Restricts file I/O operations for enhanced security.
5. Exposes a `run_python` tool via MCP.
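The core idea behind these features can be sketched in plain Python: parse the submitted code, reject any import outside an allowlist, then execute it with captured output. This is an illustrative stdlib-only sketch of the technique, not the server's actual implementation (which delegates to Hugging Face's `LocalPythonExecutor`); the `SAFE_IMPORTS` list and the `run_python` function body here are assumptions for demonstration.

```python
import ast
import contextlib
import io

# Illustrative allowlist; the real server's safe list may differ.
SAFE_IMPORTS = {"math", "json", "re", "datetime", "statistics"}

def check_imports(code: str) -> None:
    """Reject code that imports modules outside the allowlist."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        for name in names:
            if name not in SAFE_IMPORTS:
                raise ImportError(f"import of '{name}' is not allowed")

def run_python(code: str) -> str:
    """Run LLM-generated code after the import check, capturing stdout."""
    check_imports(code)
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {"__builtins__": __builtins__})
    return buffer.getvalue()

# Allowed: math is on the safe list.
print(run_python("import math\nprint(math.sqrt(16))"))  # 4.0
```

In the real tool this logic sits behind the MCP `run_python` tool, so a client like Claude Desktop passes code over the protocol rather than calling a function directly.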
Use Cases
1. Providing a secure Python runtime for AI-powered coding assistants.
2. Adding a code interpreter to LLM applications like Claude Desktop.
3. Executing LLM-generated Python code in a controlled environment.
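For the Claude Desktop use case, MCP servers are typically registered in `claude_desktop_config.json`. The entry below is a hypothetical sketch: the server name `python-interpreter`, the repository path, and the entry-point script name are placeholders, not values documented by this project.

```json
{
  "mcpServers": {
    "python-interpreter": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/this/repo", "mcp_server.py"]
    }
  }
}
```

Using `uv run` here matches the project's `uv`-managed virtual environment, so the server starts with its own isolated dependencies.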