Manages and provides prompt templates for large language models, supporting file-based storage, real-time caching, and structured metadata.
The Prompts server acts as a centralized Model Context Protocol (MCP) hub for managing and delivering prompt templates to Large Language Models (LLMs) and other AI clients. Users and LLMs can add, retrieve, list, and delete prompt templates, which are stored as markdown files with YAML frontmatter for structured metadata. An in-memory cache with automatic file change monitoring keeps prompt access fast without requiring a restart when files change on disk.
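The storage and caching model described above can be sketched as follows. This is a minimal illustration, not the server's actual code: the file layout, frontmatter field names (`name`, `description`, `tags`), and the `{{text}}` placeholder are all assumptions, and the frontmatter parser handles only simple `key: value` lines rather than full YAML. The cache rereads a template only when its file's modification time changes, which approximates the server's file change monitoring.

```python
# Sketch: markdown prompt templates with YAML frontmatter, cached in memory
# and refreshed when the file's mtime changes. All names are illustrative.
from __future__ import annotations

import os
import tempfile
from dataclasses import dataclass, field

# Hypothetical template file: YAML frontmatter for metadata, markdown body.
SAMPLE = """---
name: summarize
description: Summarize a document in three bullet points
tags: summarization
---
Summarize the following text in three bullet points:

{{text}}
"""

@dataclass
class PromptTemplate:
    metadata: dict[str, str]
    body: str

def parse_template(text: str) -> PromptTemplate:
    """Split simple `key: value` frontmatter from the markdown body."""
    meta: dict[str, str] = {}
    body = text
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        for line in header.splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return PromptTemplate(meta, body.strip())

@dataclass
class PromptCache:
    directory: str
    _entries: dict[str, tuple[float, PromptTemplate]] = field(default_factory=dict)

    def get(self, name: str) -> PromptTemplate:
        """Return a template, rereading the file only if it changed on disk."""
        path = os.path.join(self.directory, f"{name}.md")
        mtime = os.path.getmtime(path)
        cached = self._entries.get(name)
        if cached is None or cached[0] != mtime:
            with open(path, encoding="utf-8") as f:
                self._entries[name] = (mtime, parse_template(f.read()))
        return self._entries[name][1]

# Demo: write a template file, then read it through the cache.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "summarize.md"), "w", encoding="utf-8") as f:
        f.write(SAMPLE)
    cache = PromptCache(d)
    tpl = cache.get("summarize")
    print(tpl.metadata["description"])  # Summarize a document in three bullet points
```

A production server would typically replace the mtime polling with OS-level file watching so changes are picked up the moment a template is edited, which matches the real-time behavior the description promises.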