Explore our complete collection of MCP servers to connect AI to your favorite tools.
Provides access to various Large Language Models (LLMs) through an MCP server.
Enables running multiple isolated instances of MCP servers with unique configurations and namespaces.
Integrates multiple search engines into AI tools like Cursor and Claude Desktop via the MCP protocol.
Provides safe access to your iMessage database through the Model Context Protocol (MCP).
Connects to Google Cloud services to provide context and tools for interacting with resources.
Generates MP3 files from text using the Kokoro-TTS model and optionally uploads them to S3.
Enables AI agents and assistants to interact with VS Code through the Model Context Protocol.
Enables Large Language Models to interact with DataForSEO API functions and other SEO tools via a stdio-based MCP server.
Enables LLMs to compose and send emails, and to search for attachments within specified directories.
Enables AI assistants to search and purchase products directly from Amazon.
Automates external attack surface mapping by integrating reconnaissance tools and natural language processing.
Enables JavaScript/TypeScript developers to build and integrate AI agents and dynamic tools within the Unifai platform.
Provides a curated list of best practices for building Model Context Protocol (MCP) servers and clients.
Exposes existing Gin endpoints as Model Context Protocol (MCP) tools, enabling instant use by MCP-compatible clients.
Enables large language models to interact with and manage Heroku Platform resources.
Enables AI assistants to control robots via ROS 2 topics using the Model Context Protocol.
Enables an LLM to perform static analysis of Portable Executable (PE) files for malware triage.
Provides a privacy firewall for large language models, automatically detecting and redacting sensitive information from PDF documents before processing.
Enables control of SO-ARM100 series robots via an MCP server for AI agents and direct manual operation.
Enables OpenSCAD 3D modeling and rendering through a production-ready Model Context Protocol (MCP) server.
Provides a generalization-capable memory layer for LLMs and AI agents, abstracting specific experiences into generalized concepts.
Enables managing Google Tag Manager assets through natural language via an LLM connection.
Empowers AI agents to interact with native desktop applications through visual and input simulation capabilities.
Equips AI agents with persistent, context-aware memory and consistent decision-making capabilities through semantic understanding.
Connects QGIS to Claude AI through the Model Context Protocol, enabling AI to directly control GIS operations.
Offloads routine coding and analysis tasks from Claude Code to local or cloud LLM servers, significantly reducing API token costs.
Provides a unified API proxy for accessing multiple music platforms, including NetEase, QQ Music, KuGou, Kuwo, and Qianqian, specifically designed for AI applications.
Traces GPU workloads from Linux kernel events through CUDA API calls to Python source lines using eBPF for comprehensive causal observability.
Enables AI assistants to read and write Bricks Builder sites, allowing natural language control over web design and content.
Makes any repository AI-agent-ready with a single command by generating a token-efficient codebase index.
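Every server in this collection speaks the same underlying protocol: MCP clients and servers exchange JSON-RPC 2.0 messages over a transport such as stdio. As a minimal sketch of what a tool invocation looks like on the wire (the tool name `search_products` and its arguments are hypothetical, chosen for illustration, not taken from any server listed above):

```python
import json

# Sketch of an MCP "tools/call" exchange as JSON-RPC 2.0 messages.
# Tool name and arguments are hypothetical; real tool names come from
# each server's own "tools/list" response and documentation.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_products",
        "arguments": {"query": "usb-c cable"},
    },
}

# Over the stdio transport the client writes one JSON message per line
# to the server's stdin and reads replies from its stdout.
wire = json.dumps(request)

# A conforming server answers with a result carrying content blocks;
# here the reply is simulated for illustration.
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1, "result":'
    ' {"content": [{"type": "text", "text": "3 matches found"}]}}'
)
assert response["id"] == request["id"]  # replies are correlated by id
print(response["result"]["content"][0]["text"])
```

In practice a client library (or an AI assistant such as Claude Desktop or Cursor) handles this framing for you; the sketch only shows the message shape shared by all of the servers above.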