This server provides a Model Context Protocol (MCP) interface to AWS Bedrock's managed prompts. It lets users list available prompts, retrieve their detailed configurations (templates, variables, model settings), and invoke them with variable substitution. Key features include batch invocation for parallel processing, streaming responses for real-time output, and version management. The server supports models from multiple families, including Claude, Amazon Titan, Meta Llama, Mistral AI, Cohere, and AI21, making it a versatile tool for integrating Bedrock prompts into applications or AI agents such as Claude Desktop.
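Bedrock managed prompt templates reference their input variables with `{{name}}` placeholders, which the server fills in at invocation time. A minimal sketch of that substitution step (the helper name `substitute_variables` is hypothetical, not part of the server's API):

```python
import re

def substitute_variables(template: str, variables: dict[str, str]) -> str:
    """Replace Bedrock-style {{name}} placeholders with supplied values.

    Raises KeyError if the template references a variable that was not
    provided, mirroring the validation a server might perform before
    sending the rendered prompt to a model.
    """
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing value for prompt variable '{name}'")
        return variables[name]

    # Allow optional whitespace inside the braces: {{ name }} or {{name}}.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", repl, template)

template = "Summarize the following {{genre}} text in {{language}}."
print(substitute_variables(template, {"genre": "technical", "language": "English"}))
# Prints: Summarize the following technical text in English.
```

The rendered string is what ultimately reaches the selected model, so validating that every placeholder has a value before invocation gives callers an early, descriptive error instead of a confusing model response.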
Key Features
- List and access different versions of prompts
- Batch invoke prompts with different inputs in parallel
- Invoke prompts with variable substitution
- Stream real-time output from supported models
- Support for multiple large language models (Claude, Titan, Llama, etc.)