About
This project implements a Python-based Model Context Protocol (MCP) server that acts as a bridge, allowing large language models (LLMs) to manipulate the parameters of a JUCE synthesizer in real time. It works by exposing structured tools that an LLM can invoke and translating those tool calls into Open Sound Control (OSC) messages sent to a companion JUCE plugin or application. This architecture enables expressive, natural-language control of synthesizers and can be paired with speech-to-text systems for voice-controlled sound design and real-time audio manipulation.
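
To make the flow concrete, here is a minimal sketch of the tool-to-OSC translation, assuming the official MCP Python SDK (`FastMCP`) and the `python-osc` library. The `/param/<name>` OSC address pattern, the port, and the `set_parameter` tool name are illustrative placeholders, not this project's actual values.

```python
# Minimal sketch of the MCP -> OSC bridge idea (not the project's exact API).
from mcp.server.fastmcp import FastMCP
from pythonosc.udp_client import SimpleUDPClient

# UDP client pointed at the JUCE plugin's OSC receiver (illustrative host/port).
osc = SimpleUDPClient("127.0.0.1", 9001)

mcp = FastMCP("juce-synth-bridge")


@mcp.tool()
def set_parameter(name: str, value: float) -> str:
    """Set a normalized synth parameter (0.0-1.0) by sending an OSC message."""
    # Clamp to the normalized range before forwarding to the plugin.
    value = max(0.0, min(1.0, value))
    osc.send_message(f"/param/{name}", value)
    return f"Sent /param/{name} = {value}"


if __name__ == "__main__":
    # Serve the tool over stdio so an MCP-capable LLM client can invoke it.
    mcp.run()
```

An LLM client connected to this server could then satisfy a request like "open up the filter" by calling `set_parameter("cutoff", 0.8)`, which the bridge forwards to the synth as a single OSC message.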