Discover web scraping & data collection Claude skills. Browse 17 skills and find the perfect capability for your AI workflow.
Streamlines the development of Python-based video classification systems with optimized scraping and incremental database management.
Powers Claude with neural, semantic search capabilities to find people, companies, research papers, and GitHub repositories.
Orchestrates multi-layered web research and data synthesis using recursive confidence loops and specialized search engines.
Provides AI-optimized web search results and content extraction using the Tavily API.
Researches any topic from the last 30 days across Reddit, X, and the web to deliver real-time insights and copy-paste prompts.
Conducts real-time web and X/Twitter research using Grok AI for live sentiment and current events analysis.
Performs fast, headless web searches and extracts page content as Markdown directly within the terminal.
Provides real-time web search, AI-grounded answers, and automated content extraction optimized for AI agents.
Automates high-performance web searching, scraping, and site mapping with LLM-optimized markdown output.
Curates specialized AI technology news and technical insights using targeted search strategies and quality filtering rules.
Implements production-ready architectural patterns and structured project layouts for FireCrawl web scraping integrations.
Executes optimized secondary FireCrawl data extraction tasks to complement primary web scraping workflows.
Implements resilient FireCrawl API interactions using exponential backoff, jitter, and idempotency patterns.
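The retry pattern named above (exponential backoff with jitter) can be sketched as follows. This is a minimal illustration, not the skill's actual implementation; the callable passed in stands for any FireCrawl client call.

```python
import random
import time


def retry_with_backoff(call, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` with exponential backoff and full jitter.

    `call` is any zero-argument function wrapping an API request;
    the retry parameters here are illustrative defaults, not the
    skill's documented values.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Exponential backoff capped at max_delay, with full jitter
            # to spread out retries from many concurrent clients.
            delay = random.uniform(0, min(max_delay, base_delay * 2 ** attempt))
            time.sleep(delay)
```

Full jitter (a random delay between zero and the backoff cap) avoids the thundering-herd effect that fixed backoff schedules can cause when many clients fail at once.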
Audits and optimizes FireCrawl implementations by identifying common anti-patterns and security vulnerabilities.
Diagnoses and resolves common Exa API errors and integration issues to ensure reliable AI search functionality.
Bootstraps a minimal FireCrawl integration to verify API connectivity and client initialization.
Implements robust error handling and fault-tolerance patterns for stable FireCrawl web scraping integrations.
Streamlines FireCrawl SDK upgrades by identifying breaking changes and automating version migrations across your codebase.
Streamlines the creation of production-ready PySpider web crawlers using standardized templates, advanced scraping patterns, and enterprise-grade best practices.
Installs and configures the FireCrawl SDK and CLI with secure API authentication for web scraping workflows.
Secures and processes FireCrawl webhook notifications with robust signature validation and automated event routing.
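The signature validation mentioned above typically follows the standard HMAC scheme. The sketch below assumes HMAC-SHA256 over the raw request body with a hex-encoded signature; the exact header name and encoding depend on the FireCrawl webhook documentation.

```python
import hashlib
import hmac


def verify_webhook_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Validate a webhook payload against an HMAC-SHA256 signature.

    `payload` must be the raw, unparsed request body. The hex digest
    format is an assumption; adjust to the provider's actual scheme.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks on the signature.
    return hmac.compare_digest(expected, signature)
```

Comparing with `hmac.compare_digest` rather than `==` matters: a short-circuiting string comparison leaks how many leading bytes matched, which an attacker can exploit to forge signatures byte by byte.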
Implements production-ready architectural patterns and best practices for FireCrawl SDK integrations in TypeScript and Python.
Optimizes FireCrawl API integrations using advanced caching, request batching, and connection pooling strategies.
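One of the caching strategies alluded to above can be sketched as a simple TTL cache keyed by URL, so identical scrape requests within the window are served from memory instead of re-hitting the API. The decorator and TTL value are illustrative, not the skill's actual interface.

```python
import time
from functools import wraps


def ttl_cache(ttl_seconds=300):
    """Cache a single-URL scrape function's results for `ttl_seconds`.

    A hypothetical sketch: keys results by URL in an in-process dict,
    which suffices for one worker but not for distributed crawlers.
    """
    def decorator(fn):
        store = {}

        @wraps(fn)
        def wrapper(url):
            now = time.monotonic()
            hit = store.get(url)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # fresh cache entry: skip the API call
            result = fn(url)
            store[url] = (now, result)
            return result

        return wrapper
    return decorator
```

Since scraping APIs typically bill per request, even a short TTL on hot URLs can cut both cost and latency without noticeably staling the data.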
Enables real-time web research and fact-checking using Google Search grounding within the Claude Code environment.
Executes the primary FireCrawl workflow to transform web content into structured data for AI applications.
Downloads YouTube videos and audio with customizable quality and format settings directly through the command line.
Executes secondary search and data retrieval workflows using the Exa API to complement primary AI-driven discovery tasks.
Optimizes FireCrawl API expenses through intelligent tier selection, usage monitoring, and request efficiency strategies.
Architects scalable FireCrawl integrations using validated monolith, service layer, and microservice patterns.
Performs intelligent web searches via the Zhipu search engine with automated relative date resolution.