Introduction
Integrate high-speed, offline text-to-speech directly into your Large Language Model (LLM) workflows with this local MCP server. Built on the Supertonic-2 model, it lets LLMs vocalize their responses entirely on your machine, eliminating the need for external services and providing fast, private speech generation. Simply enable the tool in your chat interface and instruct your model to use it for spoken output.
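To make "enable the tool in your chat interface" concrete, here is a minimal sketch of how an MCP-capable client is typically pointed at a local server. The server name (`supertonic-tts`), launch command, and path are illustrative assumptions, not taken from this project; substitute the actual entry point from the installation instructions:

```json
{
  "mcpServers": {
    "supertonic-tts": {
      "command": "node",
      "args": ["/absolute/path/to/server/index.js"]
    }
  }
}
```

Once the client is restarted with this configuration, the TTS tool appears in the chat interface's tool list, and you can prompt the model to speak its replies aloud.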