Discover Claude Skills for web scraping & data collection. Explore 17 skills and find the perfect capabilities for your AI workflows.
Provides AI-optimized web search results and content extraction using the Tavily API.
Researches any topic from the last 30 days across Reddit, X, and the web to deliver real-time insights and copy-paste prompts.
Conducts real-time web and X/Twitter research using Grok AI for live sentiment and current events analysis.
Performs fast, headless web searches and extracts page content as Markdown directly within the terminal.
Provides real-time web search, AI-grounded answers, and automated content extraction optimized for AI agents.
Enables real-time web research and fact-checking using Google Search grounding within the Claude Code environment.
Downloads high-quality video and audio from over 1,000 platforms with automated metadata extraction and QR code support.
Converts complex websites into LLM-ready markdown or structured data with advanced anti-bot bypass and autonomous AI navigation.
Searches and retrieves life science preprints from the bioRxiv database using customizable parameters for authors, keywords, and dates.
Extracts and converts Twitter/X threads into clean, well-formatted markdown files using automated browser interaction.
Provides standardized architectural patterns and Pydantic models for building robust API documentation scrapers.
Queries the Google Places API to retrieve business locations, venue details, and reviews directly through the CLI.
Accesses USPTO APIs to perform comprehensive patent and trademark searches, analyze prosecution history, and track intellectual property assignments.
Extracts web content from static and dynamic sites and transforms it into structured, high-quality Markdown documentation.
Conducts comprehensive web research and generates detailed reports without requiring external search API keys.
Downloads YouTube videos and audio with customizable quality and format settings directly through the command line.
Implements production-ready architectural patterns and best practices for FireCrawl SDK integrations in TypeScript and Python.
Conducts comprehensive market analysis and trend forecasting across the consumer, technology, healthcare, and finance sectors.
Secures and processes FireCrawl webhook notifications with robust signature validation and automated event routing.
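Webhook signature validation typically means recomputing an HMAC over the raw request body and comparing it to the signature the sender attached. A minimal sketch of that pattern, assuming an HMAC-SHA256 hex digest; the exact header name and digest format are assumptions, so check FireCrawl's webhook documentation for the real scheme:

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, received_sig: str, secret: str) -> bool:
    """Return True if received_sig matches the HMAC-SHA256 of payload.

    Hypothetical helper; digest format (hex) is an assumption here.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, which guards against timing attacks.
    return hmac.compare_digest(expected, received_sig)
```

Always verify against the raw bytes of the request body, not a re-serialized JSON object, since re-serialization can change whitespace and key order and break the comparison.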
Executes a structured, plan-driven implementation workflow that prioritizes context discovery and systematic validation for the kurly-crawler project.
Extracts and analyzes posts, threads, profiles, and media from X (formerly Twitter) directly within your Claude workflow.
Audits and optimizes FireCrawl implementations by identifying common anti-patterns and security vulnerabilities.
Implements resilient FireCrawl API interactions using exponential backoff, jitter, and idempotency patterns.
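Exponential backoff with jitter spaces retries out exponentially while randomizing each delay so many clients don't retry in lockstep. A minimal sketch of the "full jitter" variant; the helper name and defaults are illustrative, not FireCrawl SDK API:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call fn(), retrying on exception with exponential backoff and full jitter.

    Hypothetical helper: wraps any zero-argument callable, e.g. a scrape request.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # Cap grows as base_delay * 2^attempt; sleep a random slice of it.
            cap = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, cap))
```

Idempotency is the complementary half of the pattern: pair each retried request with a stable idempotency key so a retry that follows a timed-out-but-successful call doesn't create duplicate work server-side.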
Executes optimized secondary FireCrawl data extraction tasks to complement primary web scraping workflows.
Automates the retrieval and conversion of online framework documentation into local Markdown files for enhanced AI context.
Enables instant web search capabilities using DuckDuckGo to retrieve real-time documentation, news, and technical resources without API keys.
Integrates real-time web search capabilities using the DuckDuckGo engine to find documentation, news, and technical resources without API keys.
Diagnoses and resolves common Exa API errors and integration issues to ensure reliable AI search functionality.
Implements robust error handling and fault-tolerance patterns for stable FireCrawl web scraping integrations.
Installs and configures the FireCrawl SDK and CLI with secure API authentication for web scraping workflows.