Data Science & ML MCP Servers
Discover our curated collection of MCP servers for data science & ML. Browse 4196 servers and find the right MCPs for your needs.
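Every server in this collection speaks the Model Context Protocol, which is built on JSON-RPC 2.0: a client lists a server's tools and invokes them with `tools/call` requests. As a minimal sketch of what such a request looks like on the wire (the tool name and arguments below are hypothetical, for illustration only):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# A hypothetical call against a weather-style server from this list.
msg = make_tool_call(1, "get_weather", {"city": "Madrid"})
```

The transport (stdio or HTTP) varies by server, but the message shape stays the same, which is why one MCP client can drive any of the servers below.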
ETF Dashboard
Analyze Exchange-Traded Funds (ETFs) using real-time and historical data, offering insights into market trends and sentiment.
Math Solver
Solves mathematical problems using a client-server architecture.
Attendee
Manages and controls AI meeting bots for recording, transcription, and interactive communication within various meeting platforms.
AI Usecases Examples
Provides practical code snippets and examples for integrating various artificial intelligence features and models.
Exa
Enables AI assistants like Claude to perform web searches using the Exa AI Search API.
Weather
Integrates real-time weather data into large language models and Model Context Protocol clients.
Mongo
Enables Large Language Models (LLMs) to interact with MongoDB databases using natural language.
Edgar
Provides a Model Context Protocol (MCP) server for deep financial document analysis and content extraction from the SEC EDGAR database.
Paylocity
Connects Paylocity data to large language models and AI clients through a read-only Model Context Protocol (MCP) server, enabling natural language queries.
Wikipedia
Provides Wikipedia search and content retrieval tools via a production-ready Model Context Protocol (MCP) server.
Morpho API
Enables querying of the Morpho API for market data, vaults, positions, and more within Claude.
Cloudinary Structured Metadata
Manages structured metadata fields, rules, and associated data to enhance asset organization and searchability.
Model Context Provider
Provides a server for managing model context during development and learning.
LLM
Facilitates the creation of MCP (Model Context Protocol) clients and servers for Large Language Models.
Local Project Sync
Connects local code repositories to AI applications, such as Claude, enabling real-time project knowledge synchronization.
Akab
Provides a production-grade A/B testing framework with three levels of scientific rigor for comparing large language model (LLM) outputs.
Memory
Provides a high-performance C++ server for real-time memory scanning and analysis of Windows processes, accessible via Model Context Protocol or HTTP.
LightRAG
Provides a tool-based interface to a locally running LightRAG HTTP API, enabling programmatic interaction with its capabilities.
Aave
Provides a production-ready server for automating Aave V3 DeFi operations on the Base network through the Model Context Protocol.
Quickbase
Facilitates connecting large language models to live Quickbase data through a Model Context Protocol (MCP) server.