Discover Claude skills for web scraping & data collection. Browse 17 skills and find the best capabilities for your AI workflows.
Automates advanced web scraping and data extraction tasks using the Scrapfly toolkit via Rube MCP.
Automates IP geolocation, threat intelligence, and metadata enrichment using the Ipdata.co API through Rube MCP.
Executes autonomous search missions across local codebases and the web to return structured, attributed data.
Conducts deep web research, competitive intelligence, and automated data enrichment using specialized Parallel AI APIs.
Searches Israeli restaurants and manages table availability queries through the Ontopo platform directly from the terminal.
Builds resilient data ingestion pipelines that handle paginated API results with state tracking and historical backfills.
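The core of a state-aware ingestion pipeline like this is a loop that pages through an API while tracking a high-water mark, so the next run resumes where the last one stopped. The sketch below is purely illustrative: `fetch_page`, the `items`/`has_more`/`updated_at` response keys, and the string watermark are all assumptions, not a real API contract.

```python
from typing import Callable, Optional

def ingest_paginated(
    fetch_page: Callable[[int, Optional[str]], dict],
    watermark: Optional[str] = None,
) -> tuple[list[dict], Optional[str]]:
    """Pull every available page, collect records, and return them
    together with the newest timestamp seen (the next watermark)."""
    records: list[dict] = []
    newest = watermark
    page = 1
    while True:
        resp = fetch_page(page, watermark)  # stand-in for the real API client
        for item in resp["items"]:
            records.append(item)
            # advance the watermark as we see newer records
            if newest is None or item["updated_at"] > newest:
                newest = item["updated_at"]
        if not resp.get("has_more"):
            break
        page += 1
    return records, newest

# Fake two-page API used only to demonstrate the loop.
def fake_fetch(page: int, watermark: Optional[str]) -> dict:
    pages = {
        1: {"items": [{"id": 1, "updated_at": "2024-01-01"}], "has_more": True},
        2: {"items": [{"id": 2, "updated_at": "2024-01-02"}], "has_more": False},
    }
    return pages[page]
```

Persisting the returned watermark (to a file or a database row) between runs is what turns this loop into an incremental pipeline; passing an old watermark back in is the basis for historical backfills.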
Downloads YouTube videos, extracts transcripts, and retrieves comprehensive video metadata for local analysis and research.
Extracts structured data from major social media platforms, Google Maps, and websites using optimized Apify actors and automated workflows.
Scrapes and converts web pages into clean, LLM-ready markdown using the Firecrawl API to handle JavaScript and anti-bot protections.
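A scrape request of this kind is typically a single authenticated POST asking for markdown output. The sketch below only assembles the request rather than sending it; the endpoint path and payload shape reflect Firecrawl's v1 scrape API as commonly documented, but should be verified against the current docs before use.

```python
import json

# Assumed v1 endpoint; confirm against the Firecrawl API reference.
FIRECRAWL_ENDPOINT = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_request(target_url: str, api_key: str) -> tuple[str, dict, str]:
    """Assemble endpoint, headers, and JSON body for a markdown scrape."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = json.dumps({"url": target_url, "formats": ["markdown"]})
    return FIRECRAWL_ENDPOINT, headers, payload
```

The returned triple can be handed to any HTTP client (`requests.post(endpoint, headers=headers, data=payload)`); keeping request construction separate from transport makes the payload easy to test offline.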
Enables Claude to search the web for real-time information using multiple search engine providers.
Automates systematic literature reviews and builds structured scholarship databases using the OpenAlex API.
Manages and troubleshoots self-hosted Firecrawl instances for high-performance web-to-markdown scraping.
Retrieves real-time odds and market data from leading prediction platforms like Polymarket and Kalshi.
Ensures uninterrupted research capabilities by delegating web searches to autonomous agents when primary search APIs fail or hit limits.
Analyzes brand websites to extract visual identity, voice, and market positioning into a reusable JSON profile for consistent content creation.
Implements a rigorous cross-verification protocol for stock and ETF data to prevent AI hallucinations through multi-source validation.
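The multi-source validation idea behind this kind of skill reduces to a simple rule: only accept a figure when independent sources agree within a tolerance, and flag everything else for review. This is a minimal sketch of that rule, with an assumed 0.5% relative tolerance; real protocols would also weight sources and log disagreements.

```python
def cross_verify(values: dict[str, float], rel_tol: float = 0.005) -> float:
    """Return the mean of independently sourced readings if they agree
    within `rel_tol` relative tolerance; raise otherwise."""
    readings = list(values.values())
    if len(readings) < 2:
        raise ValueError("need at least two independent sources")
    lo, hi = min(readings), max(readings)
    if (hi - lo) > rel_tol * abs(lo):
        # disagreement beyond tolerance: surface it instead of guessing
        raise ValueError(f"sources disagree: {values}")
    return sum(readings) / len(readings)
```

Raising on disagreement, rather than silently picking one source, is what prevents a hallucinated or stale figure from propagating into downstream analysis.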
Establishes rigorous quality standards and verification methods for market research, patent analysis, and professional data collection tasks.
Searches global patent databases using natural language queries to discover prior art and track innovation landscapes.
Conducts deep technical research and documentation synthesis using Perplexity, Firecrawl, and Context7 MCP servers.
Scrapes and extracts actionable technical documentation from any URL using Firecrawl for immediate implementation.
Parses and validates complex podcast RSS feeds using industry standards like iTunes and Podcasting 2.0 namespaces.
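Parsing these feeds mostly comes down to handling the XML namespaces correctly: iTunes tags live under `http://www.itunes.com/dtds/podcast-1.0.dtd` and Podcasting 2.0 tags under `https://podcastindex.org/namespace/1.0`. A minimal stdlib sketch (the sample feed and field selection are illustrative):

```python
import xml.etree.ElementTree as ET

NS = {
    "itunes": "http://www.itunes.com/dtds/podcast-1.0.dtd",
    "podcast": "https://podcastindex.org/namespace/1.0",
}

def parse_episodes(feed_xml: str) -> list[dict]:
    """Extract per-episode title and iTunes duration from an RSS feed."""
    root = ET.fromstring(feed_xml)
    episodes = []
    for item in root.iter("item"):
        episodes.append({
            "title": item.findtext("title"),
            # namespaced lookup via the prefix map above
            "duration": item.findtext("itunes:duration", namespaces=NS),
        })
    return episodes

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>Demo Podcast</title>
    <item>
      <title>Episode 1</title>
      <itunes:duration>1800</itunes:duration>
    </item>
  </channel>
</rss>"""
```

A production validator would go further, checking required channel fields and Podcasting 2.0 elements such as `podcast:guid`, but the namespace map is the part that most often trips up ad-hoc parsers.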
Orchestrates multi-agent web research sessions to generate hallucination-resistant reports with verified citations.
Validates the credibility of online sources and technical artifacts to ensure high-quality research and prevent information contamination.
Implements academic research strategies and decomposition techniques to scale AI-driven investigations from simple facts to complex multi-faceted queries.
Automates product review collection and sentiment analysis to identify prioritized product improvements and actionable insights.
Automates complex web data extraction using self-learning algorithms to navigate pagination, bypass blocks, and analyze page structures autonomously.
Standardizes multi-source web search protocols to eliminate AI hallucinations when collecting critical macroeconomic and financial data.