LLM Sandbox
Created by tmcarmichael
Augments large language model inference with a Model Context Protocol (MCP) client-server architecture for streamlined testing and agentic evaluation.
About
This sandbox provides a low-friction environment for validating Model Context Protocol (MCP) servers against a working LLM client using a live chat interface. Initially supporting local LLMs like LLaMA 7B for local network testing, it will expand to include cloud inference, enabling the use of more powerful models. It serves as both a reference architecture and a practical development environment, evolving alongside the MCP specification.
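At the core of the client-server interaction described above are JSON-RPC 2.0 messages exchanged between the LLM client and an MCP server. As a minimal sketch, the snippet below builds the kind of `tools/call` request a client sends when the model decides to invoke a tool; the envelope shape follows the MCP specification, while the tool name `echo` and its arguments are hypothetical.

```python
import json

# Hypothetical tool invocation: the "echo" tool and its arguments are
# illustrative; the JSON-RPC 2.0 envelope and "tools/call" method follow
# the Model Context Protocol specification.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "echo",                      # hypothetical tool exposed by the server
        "arguments": {"text": "hello MCP"},  # arguments chosen by the model
    },
}

wire = json.dumps(request)        # serialized over the client-server transport
decoded = json.loads(wire)        # what the server sees on the other end
print(decoded["method"])
print(decoded["params"]["name"])
```

In the sandbox, a message like this would be emitted by the chat client and routed to whichever MCP server is under test.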
Key Features
- Local and cloud LLM inference support
- Agentic evaluation of tool-using model behavior
- Live chat interface for interaction
- Reference architecture for MCP development
- MCP client-server architecture
Use Cases
- MCP server validation
- Agentic workflow testing
- LLM behavior evaluation
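For the MCP server validation use case, one basic check is that a server's `tools/list` response is well-formed before wiring it into the chat client. The sketch below validates the top-level shape of such a response; the response shape follows the MCP specification, but the `echo` tool entry and the `validate_tools_response` helper are hypothetical.

```python
import json

# Hypothetical tools/list response from an MCP server under test; the
# "echo" tool entry is illustrative, the top-level shape follows the spec.
raw = json.dumps({
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "echo",
                "description": "Echo the input text back",
                "inputSchema": {
                    "type": "object",
                    "properties": {"text": {"type": "string"}},
                },
            }
        ]
    },
})

def validate_tools_response(payload: str) -> list[str]:
    """Return the tool names if the response is well-formed, else raise ValueError."""
    msg = json.loads(payload)
    if msg.get("jsonrpc") != "2.0" or "result" not in msg:
        raise ValueError("not a JSON-RPC 2.0 result")
    tools = msg["result"].get("tools", [])
    for tool in tools:
        if "name" not in tool or "inputSchema" not in tool:
            raise ValueError(f"malformed tool entry: {tool}")
    return [tool["name"] for tool in tools]

print(validate_tools_response(raw))
```

A check like this could run as a smoke test before the sandbox exposes a server's tools to the live chat interface.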