Discover our curated collection of MCP servers for data science & ML. Browse 7389 servers and find the MCP servers that fit your needs.
Enables interaction with Timeplus clusters through an MCP server, providing tools for querying, listing databases and tables, and managing Kafka topics and streams.
Provides AI models with structured access to Trino's distributed SQL query engine via the Model Context Protocol.
Analyzes text documents by counting words and characters.
Enables AI models to securely access and read local files using the Model Context Protocol (MCP).
Enables AI assistants to perform web searches using Perplexity's chat completions API.
Enables AI assistants to manage Tradovate trading accounts through natural language.
Simulates quantum circuits with noise models and integrates with Model Context Protocol (MCP) clients.
Provides Large Language Models transparent access to ODBC-accessible data sources.
Indexes a local codebase to provide context-aware information to AI clients like Cursor or Claude.
Connects AI models to creative applications through a standardized Model Context Protocol.
Integrates Fish Audio's Text-to-Speech API with large language models, enabling natural language-driven speech synthesis through the Model Context Protocol.
Provides access to Reactome pathway and systems biology data through a Model Context Protocol server.
Provides seamless programmatic access to Malaysia's official government open data catalogue.
Provides live, comprehensive company and contact data from Explorium's AgentSource platform, exclusively for use within Claude Desktop.
Enables intelligent document search and retrieval from PDF collections by serving as a Model Context Protocol (MCP) server.
Provides AI-powered applications with structured access to integration packages and artifacts within SAP Cloud Integration tenants.
Searches and downloads academic papers from various platforms, including arXiv, PubMed, and bioRxiv.
Provides large language models (LLMs) with persistent, long-term memory capabilities across conversations using a multi-layered architecture and rapid GPU-accelerated search.
Optimizes LLM agent operations by intelligently suppressing expensive LLM calls through a hierarchical decision-making architecture, drastically reducing costs and latency.
Provides a high-performance, enterprise-ready MCP server for intelligent codebase analysis, featuring fast local indexing for millions of lines of code.