Discover web scraping & data collection Claude Skills. Browse 17 skills and find the perfect capability for your AI workflow.
Executes secondary FireCrawl workflows to complement primary data collection and automated web scraping tasks.
Aggregates real-time cryptocurrency news from over 50 authoritative sources with advanced filtering and relevance scoring.
Implements robust rate limiting, exponential backoff, and idempotency patterns for FireCrawl API integrations.
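A minimal sketch of the retry pattern this skill describes, assuming a generic `send` callable in place of the actual FireCrawl call; the function name, parameters, and retry policy here are illustrative, not the skill's real implementation:

```python
import random
import time
import uuid

def request_with_backoff(send, max_retries=5, base_delay=1.0, cap=30.0):
    """Retry `send` with exponential backoff and full jitter.

    `send` is any callable that issues one API request and raises an
    exception on a retryable failure (e.g. HTTP 429 or 5xx).
    """
    # Reusing one idempotency key across retries lets the server
    # deduplicate a request that succeeded but whose response was lost.
    idempotency_key = str(uuid.uuid4())
    for attempt in range(max_retries):
        try:
            return send(idempotency_key)
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Full jitter: sleep a random amount in
            # [0, min(cap, base * 2^attempt)] to avoid retry stampedes.
            delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
            time.sleep(delay)
```

Capping the delay and adding jitter keeps many concurrent clients from retrying in lockstep against a rate-limited API.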
Automates web data extraction using the Scrapling CLI to capture HTML, Markdown, and text from static or dynamic webpages.
Extracts and saves YouTube video subtitles or transcripts to local text files using command-line tools or automated browser interaction.
Integrates vision analysis, real-time web search, and GitHub exploration capabilities into Claude Code workflows.
Generates curated daily technical reports by scraping high-quality news and papers from customizable URL lists.
Enhances Claude with real-time web search, vision-based image analysis, and advanced GitHub repository exploration.
Extracts and organizes brand logos for DeFi vault protocols by identifying homepage links and automating asset retrieval.
Automates the collection and validation of standardized financial data for US publicly traded companies from free public sources.
Performs comprehensive multi-platform internet searches and content retrieval with intelligent tool fallback and zero external dependencies.
Normalizes and merges duplicate data from multiple sources using reputation scoring and semantic hash-based grouping.
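The skill's "semantic hash" likely involves embeddings; this simplified sketch illustrates only the grouping-and-reputation step, using a hash of normalized text as the dedup key (all names and the record shape are assumptions):

```python
import hashlib
import re
from collections import defaultdict

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace so that
    near-identical records hash to the same key."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def merge_duplicates(records, reputation):
    """Group records by a hash of their normalized title, then keep the
    copy from the highest-reputation source in each group."""
    groups = defaultdict(list)
    for rec in records:
        key = hashlib.sha256(normalize(rec["title"]).encode()).hexdigest()
        groups[key].append(rec)
    return [max(g, key=lambda r: reputation.get(r["source"], 0))
            for g in groups.values()]
```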
Fetches Twitter/X post content and metadata into clean Markdown format using the Jina.ai API to bypass JavaScript restrictions.
Transforms web pages into clean, readable Markdown files optimized for AI ingestion and local documentation.
Performs real-time AI web searches with citations using Perplexity models to provide up-to-date information and scientific literature.
Optimizes text parsing workflows by combining efficient Regex patterns with LLM-based validation for high-accuracy, cost-effective data extraction.
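The two-stage pattern named here can be sketched as follows: a cheap regex pass pulls candidates, and an LLM validator (stubbed as a plain callable, since the real prompt is not shown) filters them only when needed, keeping LLM calls and cost to a minimum:

```python
import re

# Deliberately loose email pattern; the expensive validator catches
# the false positives the cheap pass lets through.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def extract_emails(text, llm_validate=None):
    """Return regex-matched emails, optionally filtered by an
    LLM-backed validator callable (hypothetical stub here)."""
    candidates = EMAIL_RE.findall(text)
    if llm_validate is None:
        return candidates
    # Only regex hits are escalated, so the LLM sees a handful of
    # short candidates instead of the whole document.
    return [c for c in candidates if llm_validate(c)]
```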
Aggregates and synthesizes real-world developer perspectives from Hacker News, Reddit, and major technical communities.
Researches technical solutions and gathers cross-platform evidence to inform architecture and implementation decisions.
Transforms browser traffic into production-ready Python API clients through automated HAR analysis and code generation.
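HAR is a standardized JSON format (`log` → `entries` → `request`), so the analysis half of this skill can be sketched faithfully; the code-generation half is omitted, and the function name is illustrative:

```python
import json

def har_to_requests(har_text):
    """Extract (method, url, request-headers) tuples from a HAR log,
    the raw material an API-client generator would work from."""
    log = json.loads(har_text)
    calls = []
    for entry in log["log"]["entries"]:
        req = entry["request"]
        # HAR stores headers as a list of {"name": ..., "value": ...}.
        headers = {h["name"]: h["value"] for h in req["headers"]}
        calls.append((req["method"], req["url"], headers))
    return calls
```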
Scans developer communities like Hacker News, Reddit, and Lobsters to gather real-world experience reports and technology opinions.
Searches GitHub for production-ready implementations, libraries, and code patterns to accelerate development research.
Researches and synthesizes real-world community discussions from the last 30 days across Reddit, X, and the web.
Conducts real-time agentic research and sentiment analysis across X/Twitter to gather developer insights and industry trends.
Enables Claude to search the live web and fetch content from specific URLs to provide up-to-date information.
Equips Claude with high-performance web search capabilities and deep content extraction tools powered by the Tavily API.
Automates the retrieval and normalization of academic paper metadata from arXiv to support research pipelines and literature reviews.
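The arXiv API's export endpoint returns Atom XML, so a retrieval-and-normalization pipeline like this one reduces to building a query URL and flattening `<entry>` elements; this sketch assumes only the public endpoint and standard Atom fields:

```python
import urllib.parse
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def arxiv_query_url(terms, max_results=10):
    """Build a query URL for arXiv's Atom-based export API."""
    q = urllib.parse.urlencode({"search_query": terms,
                                "max_results": max_results})
    return f"http://export.arxiv.org/api/query?{q}"

def parse_entries(atom_xml):
    """Normalize each Atom <entry> into a flat metadata dict,
    collapsing the line-wrapped whitespace arXiv puts in titles."""
    root = ET.fromstring(atom_xml)
    papers = []
    for entry in root.iter(f"{ATOM}entry"):
        papers.append({
            "id": entry.findtext(f"{ATOM}id"),
            "title": " ".join(entry.findtext(f"{ATOM}title").split()),
            "published": entry.findtext(f"{ATOM}published"),
        })
    return papers
```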
Extracts and analyzes Reddit content including posts, comments, subreddits, and user profiles using the public JSON API.
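The public JSON API this skill relies on is simply any Reddit listing URL with `.json` appended; a minimal sketch of the URL builder and response parser (helper names are illustrative, and the network call itself is left to the caller):

```python
def reddit_listing_url(subreddit, listing="hot", limit=25):
    """Any Reddit listing is available as JSON by appending `.json`."""
    return f"https://www.reddit.com/r/{subreddit}/{listing}.json?limit={limit}"

def parse_posts(listing_json):
    """Pull title, score, and permalink from a listing response.

    Listings wrap posts as {"data": {"children": [{"data": {...}}]}}.
    """
    return [
        {
            "title": child["data"]["title"],
            "score": child["data"]["score"],
            "permalink": child["data"]["permalink"],
        }
        for child in listing_json["data"]["children"]
    ]
```

In practice the request also needs a descriptive `User-Agent` header, since Reddit throttles anonymous default clients aggressively.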
Performs real-time web searches, content extraction, and deep research to provide LLM-optimized data for coding and analysis tasks.
Crawls websites and extracts content from multiple pages into structured JSON or local markdown files.
Maps entire website structures and discovers specific URLs rapidly without extracting full page content.