Augments large language model inference with an MCP client-server architecture for streamlined testing and agentic evaluation.
This sandbox provides a low-friction environment for validating Model Context Protocol (MCP) servers against a working LLM client through a live chat interface. It initially supports local LLMs such as LLaMA 7B for local-network testing and will expand to cloud inference, enabling the use of more powerful models. It serves as both a reference architecture and a practical development environment, evolving alongside the MCP specification.
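For context on what "validating an MCP server against a client" involves: MCP sessions are JSON-RPC 2.0 exchanges, opened by an `initialize` request from the client. The sketch below builds such a request with only the standard library; the client name, version, and the pinned protocol version string are illustrative assumptions, and the exact fields should be checked against the current MCP specification.

```python
import json

# MCP messages are JSON-RPC 2.0. A client opens a session with an
# "initialize" request that advertises its protocol version and
# capabilities. Field values here are illustrative placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # assumed spec revision
        "capabilities": {},               # no optional client features
        "clientInfo": {"name": "sandbox-client", "version": "0.1.0"},
    },
}

# Serialized form, as it would be written to the server's transport
# (e.g. stdio or HTTP, depending on the server under test).
wire_message = json.dumps(initialize_request)
print(wire_message)
```

A sandbox client would send this message, await the server's `initialize` response, and then exercise the server's advertised tools and resources from the chat loop.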