Browse our curated collection of MCP servers for developer tools. Explore 20,552 servers and find the MCP that best fits your needs.
Provides tools for interacting with the Flow blockchain through the Model Context Protocol (MCP).
Build Model Context Protocol (MCP) servers with embedded LLM reasoning capabilities.
Manages model context protocol interactions through a plugin-based architecture, secure vault integration, and real-time web dashboard.
Offers fundamental mathematical operations, such as addition and subtraction, to integrate with AI clients via a Model Context Protocol server.
Provides a foundational backend server for specialized application needs.
Demonstrates building interactive Model Context Protocol (MCP) tools with elicitation capabilities.
Provides seamless access to the LangSmith prompt library, offering more than 1,000 community-vetted AI prompts for integration into AI workflows.
Integrates Snowflake Cortex AI features, including search, analyst, and agent capabilities, into the Model Context Protocol ecosystem for AI clients.
Provides a robust WebSocket-based server for managing AI model requests and responses via the Model Context Protocol.
Facilitates the creation of Model Context Protocol (MCP) servers and their integration with any Large Language Model (LLM) for enhanced AI agent capabilities.
Accesses Jina AI's Reader, Embeddings, and Reranker APIs, providing a comprehensive suite of tools for web content processing, search, and semantic analysis.
Enables read-only PostgreSQL database access and querying for AI assistants.
Synchronizes conversational context between Claude Desktop and Claude Code using a shared GitHub repository.
Converts Excel files to multiple formats such as JSON, CSV, and SQL, with speed, flexibility, and easy customization.
Manages SQL Create, Read, Update, and Delete operations using AI prompts.
Provides a structured HTTP request tool for AI agents, eliminating `curl` complexities across different platforms.
Enables AI agents to autonomously create, build, and deploy full-stack mobile applications through natural language.
Analyzes Ascend PyTorch Profiler performance data for large language models, identifying bottlenecks and optimizing execution.
Provides AI agents with a persistent, shared memory, allowing operational knowledge to be exchanged across sessions, agents, projects, and teams.
Exposes petroleum engineering data and tools to Large Language Models (LLMs) for natural language interaction.