Explore our complete collection of MCP servers that connect AI to your favorite tools.
Provides all-in-one infrastructure for search, recommendations, Retrieval-Augmented Generation (RAG), and analytics via API.
Enables interaction with an Obsidian vault via the Obsidian Local REST API plugin.
Provides AI assistants with comprehensive access to shadcn/ui v4 components, blocks, demos, and metadata via the Model Context Protocol.
Intelligently routes large language model requests to the most suitable models and tools for optimized inference, enhanced security, and improved accuracy.
Serves local large language models natively on Apple Silicon with OpenAI/Ollama-compatible APIs, supporting tool calling, a plugin ecosystem, and a menu bar chat UI.
Accelerates the development and research of complex Retrieval-Augmented Generation (RAG) systems with a low-code, modular framework.
Connects Supabase projects to AI assistants via the Model Context Protocol (MCP).
Provides a catalog of official Microsoft MCP server implementations for AI-powered data access and tool integration.
Enhances an AI model's reasoning by having the model recursively evaluate and refine its responses through self-argumentation.
Exposes MCP stdio-based servers over SSE or WebSockets, enabling remote access and integration.
Converts various file types and web content into Markdown format.
Creates and configures development containers from devcontainer.json files, providing isolated development environments.
Builds a persistent semantic graph from conversations with AI assistants using standard Markdown files.
Facilitates communication between MCP clients and servers by bridging different transport protocols like stdio and SSE.
Connects Ableton Live to Claude AI, enabling prompt-assisted music production and session manipulation.
Aggregates search results from various web search services through a unified metasearch library.
Integrates advanced AI capabilities directly into Zotero, enabling users to chat with PDFs, extract detailed insights, and streamline research workflows.
Provides a Model Context Protocol (MCP) server for accessing and interacting with Grafana instances and their surrounding ecosystem.
Crawls, extracts, and organizes technical documentation into an LLM-ready format, streamlining research and implementation for developers.
Enables AI assistants to search and analyze arXiv papers through a simple Model Context Protocol interface.
Transforms existing API servers and services into Model Context Protocol (MCP) compliant endpoints with zero code changes.
Automates data integration via a stable, self-healing SDK that detects schema drift, retries failures, and remaps fields to maintain continuous data flow without connector maintenance or rewrites.
Provides a Gradio web interface for locally running various Llama 2 models on GPU or CPU across different operating systems.
Automates software development task management for AI agents, facilitating chain-of-thought reasoning and code consistency.
Empowers AI agents and coding assistants with web crawling and retrieval-augmented generation (RAG) capabilities.
Streamlines LLM agent development with an all-in-one library offering tools, prompts, frameworks, and models.
Provides a collection of example UI components and Model Context Protocol (MCP) servers to demonstrate building rich applications for ChatGPT.
Serves, benchmarks, and deploys large language models (LLMs) on various hardware platforms.
Integrates Perplexity's real-time web search, reasoning, and research capabilities into AI assistants.
Provides n8n workflow templates for building various AI agents.
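Several of the servers above speak the stdio or SSE transports that the bridge and proxy entries expose remotely. As a rough illustration of what such a server looks like, here is a minimal sketch using the official MCP Python SDK's FastMCP helper; the server name, the example `add` tool, and the choice of stdio transport are illustrative assumptions, not taken from any listing above.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK is installed).
# It registers a single illustrative tool and serves it over the stdio transport,
# which an SSE or WebSocket bridge like those listed above could expose remotely.
from mcp.server.fastmcp import FastMCP

# The server name is an arbitrary example.
mcp = FastMCP("example-server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum (a placeholder tool for illustration)."""
    return a + b


if __name__ == "__main__":
    # stdio is the default transport for local MCP clients.
    mcp.run(transport="stdio")
```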