Discover Claude Skills for web scraping & data collection. Explore 17 skills and find the perfect capabilities for your AI workflows.
Performs semantic web searches and similar content discovery using the Exa API to retrieve high-quality research data.
Integrates real-time web search and content extraction capabilities into Claude Code using the Tavily API.
Performs neural, semantic web searches and content discovery using the Exa API to find highly relevant data and research.
Empowers Claude to perform advanced web searches, crawl websites, and extract high-quality content using the Tavily AI search engine.
Extracts structured data from complex websites using a robust, three-phase Playwright automation workflow.
Extracts structured requirements and metadata from job descriptions to facilitate automated candidate matching and recruitment analysis.
Integrates high-performance web search, content extraction, and site crawling capabilities using the Tavily API for real-time information retrieval.
Automates documentation collection and structured data extraction using Playwright, BeautifulSoup, and Scrapy templates.
Extracts Twitter/X data using the GraphQL API with automated account rotation and session management.
Integrates advanced vision analysis, real-time web search, and GitHub exploration tools directly into the Claude Code environment.
Aggregates and filters real-time cryptocurrency news from over 50 authoritative sources with automated relevance scoring.
Configures Exa SDKs and handles API authentication for AI-powered search and data retrieval.
Architects scalable Firecrawl integrations using validated monolith, service layer, and microservice patterns.
Optimizes FireCrawl API expenses through intelligent tier selection, usage monitoring, and request efficiency strategies.
Executes secondary search and data retrieval workflows using the Exa API to complement primary AI-driven discovery tasks.
Executes the primary FireCrawl workflow to transform web content into structured data for AI applications.
Optimizes FireCrawl API integrations using advanced caching, request batching, and connection pooling strategies.
Installs and configures the FireCrawl SDK and CLI with secure API authentication for web scraping workflows.
Implements robust error handling and fault-tolerance patterns for stable FireCrawl web scraping integrations.
Diagnoses and resolves common Exa API errors and integration issues to ensure reliable AI search functionality.
Implements production-ready architectural patterns and structured project layouts for FireCrawl web scraping integrations.
Executes optimized secondary FireCrawl data extraction tasks to complement primary web scraping workflows.
Implements resilient FireCrawl API interactions using exponential backoff, jitter, and idempotency patterns.
Audits and optimizes FireCrawl implementations by identifying common anti-patterns and security vulnerabilities.
Bootstraps a minimal FireCrawl integration to verify API connectivity and client initialization.
Downloads high-quality videos and audio from YouTube and other platforms for offline viewing, archiving, and editing.
Streamlines FireCrawl SDK upgrades by identifying breaking changes and automating version migrations across your codebase.
Streamlines the creation of production-ready PySpider web crawlers using standardized templates, advanced scraping patterns, and enterprise-grade best practices.
Performs intelligent web searches via the Zhipu search engine with automated relative date resolution.
Secures and processes FireCrawl webhook notifications with robust signature validation and automated event routing.