LLM Sandbox
Augments large language model inference with an MCP client-server architecture for streamlined testing and agentic evaluation.
About
This sandbox provides a low-friction environment for validating Model Context Protocol (MCP) servers against a working LLM client using a live chat interface. Initially supporting local LLMs like LLaMA 7B for local network testing, it will expand to include cloud inference, enabling the use of more powerful models. It serves as both a reference architecture and a practical development environment, evolving alongside the MCP specification.
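As a rough illustration of the client-server exchange the sandbox exercises, the snippet below builds the JSON-RPC 2.0 `initialize` handshake that an MCP client sends to a server on connect. This is a minimal sketch of the protocol's message shape, not code from this project; the `clientInfo` values and protocol version string are assumptions.

```python
import json

# Hypothetical MCP initialize request (JSON-RPC 2.0). The clientInfo
# name/version and the protocolVersion date are illustrative values.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "llm-sandbox", "version": "0.1.0"},
    },
}

# MCP messages travel as JSON over a transport such as stdio or HTTP;
# serializing the request is the first step of the handshake.
wire_message = json.dumps(initialize_request)
print(wire_message)
```

A server replies with its own capabilities, after which the client can list and call the server's tools; validating that round trip against a live chat session is the core workflow this sandbox targets.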
Key Features
- Local and cloud LLM inference support
- Facilitates agentic evaluation
- Live chat interface for interaction
- Reference architecture for MCP development
- MCP client-server architecture
Use Cases
- MCP server validation
- Agentic workflow testing
- LLM behavior evaluation