Provides a simple, experimental server for integrating tools with local AI chat agents.
Designed to be human-readable, this tool integrates various functionalities with local Large Language Models (LLMs). It uses FastMCP to build a straightforward server architecture and relies on Pandoc comments to define tool capabilities for effective LLM interaction. The server includes a basic web-fetching tool as a structural example, and its transport is configurable between standard I/O and a web server, making it adaptable to different experimental setups.
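As a rough sketch of the pattern described above (not the actual FastMCP API), the following stand-in registry illustrates how a tool's documentation comment can serve as the capability description exposed to an LLM; the `tool` decorator, the `TOOLS` dictionary, and the `fetch_url` helper are all hypothetical names invented for this example.

```python
import inspect
from urllib.request import urlopen

# Hypothetical stand-in for a FastMCP-style tool registry: each tool's
# description is taken from its docstring, mirroring how the server
# advertises capabilities to a local LLM client.
TOOLS = {}

def tool(func):
    """Register a function as a tool, using its docstring as its description."""
    TOOLS[func.__name__] = {
        "handler": func,
        "description": inspect.getdoc(func) or "",
    }
    return func

@tool
def fetch_url(url: str, max_bytes: int = 4096) -> str:
    """Fetch a web page and return up to max_bytes of its body as text."""
    # Basic fetcher sketch; a real server would add error handling and limits.
    with urlopen(url, timeout=10) as resp:
        return resp.read(max_bytes).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # The registry is what a client would see: tool names plus descriptions.
    for name, meta in TOOLS.items():
        print(f"{name}: {meta['description']}")
```

Whether the server runs over standard I/O or as a web server, the registry contents stay the same; only the transport that carries the tool list and call results changes.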