Discover Claude skills for web scraping & data collection. Browse 17 skills and find the capabilities that best fit your AI workflows.
Extracts subtitles and transcripts from YouTube videos directly into local text files using command-line tools or browser automation.
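A minimal sketch of the command-line route, assuming the yt-dlp Python package (pip install yt-dlp); the skill itself may use a different tool chain or browser automation, and the example URL is illustrative only.

```python
import yt_dlp

def download_subtitles(url: str, lang: str = "en") -> None:
    """Fetch uploader or auto-generated subtitles without downloading the video."""
    opts = {
        "skip_download": True,        # subtitles only, no media file
        "writesubtitles": True,       # uploader-provided subtitles
        "writeautomaticsub": True,    # fall back to auto-generated captions
        "subtitleslangs": [lang],
        "subtitlesformat": "vtt",
        "outtmpl": "%(title)s.%(ext)s",
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download([url])

if __name__ == "__main__":
    download_subtitles("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
```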
Captures web content from JavaScript-heavy, login-protected, and multi-page sites using the agent-browser CLI.
Captures web content from JavaScript-rendered pages and authenticated sites using the agent-browser CLI.
Transforms documentation websites into structured, categorized reference files optimized for AI context and offline archives.
Detects and ingests LLM-optimized documentation via the llms.txt standard to accelerate context gathering for autonomous agents.
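A minimal detection sketch, assuming only the requests library. It probes the two conventional locations used by the llms.txt proposal; a real skill would also parse the returned Markdown index and fetch the linked pages.

```python
import requests

def find_llms_txt(base_url: str) -> str | None:
    """Return the llms-full.txt or llms.txt URL for a docs site, if one exists."""
    for path in ("/llms-full.txt", "/llms.txt"):
        url = base_url.rstrip("/") + path
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        if resp.ok and resp.text.strip():
            return url
    return None

print(find_llms_txt("https://docs.anthropic.com"))
```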
Scrapes documentation websites and transforms them into organized, categorized reference files for AI context and offline archives.
Queries the Google Places API to retrieve detailed location information, reviews, and search results directly within the Claude Code environment.
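A minimal sketch of such a query, assuming an API key in a GOOGLE_PLACES_API_KEY environment variable and the legacy Places Text Search endpoint; the skill may target the newer Places API (v1) and richer field masks instead.

```python
import os
import requests

def search_places(query: str) -> list[dict]:
    """Run a text search against the Google Places API and return raw results."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/textsearch/json",
        params={"query": query, "key": os.environ["GOOGLE_PLACES_API_KEY"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for place in search_places("coffee shops near Shibuya"):
    print(place["name"], "-", place.get("formatted_address"))
```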
Enhances Claude with advanced vision analysis, real-time web searching, and deep GitHub repository exploration capabilities.
Aggregates and synthesizes technical opinions and developer reactions from major online communities like Reddit and Hacker News.
Indexes and searches external repositories, documentation, and research papers to provide Claude with high-fidelity context for development tasks.
Enables real-time internet search capabilities using the MiniMax MCP for legal research and general information retrieval.
Performs high-precision semantic search and structured content retrieval using the Exa AI API for deep research and code documentation.
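A minimal sketch of a semantic search call, assuming Exa's REST search endpoint and an API key in an EXA_API_KEY environment variable; field names are illustrative and should be checked against the current Exa API documentation.

```python
import os
import requests

def exa_search(query: str, num_results: int = 5) -> list[dict]:
    """Send a semantic search query to the Exa API and return its result list."""
    resp = requests.post(
        "https://api.exa.ai/search",
        headers={"x-api-key": os.environ["EXA_API_KEY"]},
        json={"query": query, "numResults": num_results},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for hit in exa_search("neural sparse retrieval survey"):
    print(hit.get("title"), hit.get("url"))
```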
Powers Claude with real-time web search and deep content extraction via the Grok API to bypass knowledge cutoff limits.
Scrapes and converts WeChat Official Account articles into clean Markdown files with automated image handling and dynamic content support.
Downloads video and audio content from major platforms like YouTube, Bilibili, and Douyin directly via terminal commands.
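A minimal download sketch using yt-dlp's Python API, which supports YouTube and Bilibili among many other sites; the skill may instead shell out to yt-dlp or use platform-specific downloaders, and the example URL is a placeholder.

```python
import yt_dlp

def download_media(url: str, audio_only: bool = False) -> None:
    """Download the best available media for a URL, optionally audio only."""
    opts = {
        "outtmpl": "%(title)s.%(ext)s",
        # Prefer merged best video+audio, fall back to the best single file.
        "format": "bestaudio/best" if audio_only else "bestvideo+bestaudio/best",
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download([url])

download_media("https://www.youtube.com/watch?v=dQw4w9WgXcQ", audio_only=True)
```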
Automates bulk downloading of Douyin videos and metadata with incremental update support and data visualization.
Provides ethical and legal methodologies for accessing paywalled academic papers, news articles, and geo-restricted content.
Builds modular, production-grade Python data processing workflows for content scraping, AI analysis, and cloud integration.
Extracts web content using a multi-tiered cascade architecture that bypasses anti-bot measures and detects paywalls.
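A minimal sketch of a tiered extraction cascade, assuming only requests and trafilatura; the anti-bot escalation tier and the paywall markers shown here are placeholders for whatever heuristics the skill actually applies.

```python
import requests
import trafilatura

# Hypothetical strings that hint at a paywalled page.
PAYWALL_MARKERS = ("subscribe to continue", "metered-paywall", "piano-offer")

def extract_article(url: str) -> str | None:
    """Tier 1: plain HTTP fetch plus readability-style extraction."""
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=15)
    if resp.ok:
        text = trafilatura.extract(resp.text)
        if text and not any(m in resp.text.lower() for m in PAYWALL_MARKERS):
            return text
    # Tier 2 (placeholder): escalate to a headless browser, proxy, or
    # archived copy when the simple fetch is blocked or paywalled.
    return None
```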
Implements systematic OSINT and journalism workflows to verify claims, evaluate source credibility, and authenticate digital media.
Automates the preservation and retrieval of web pages using Wayback Machine, Archive.is, and local archiving tools for research and legal evidence.
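A minimal sketch using the public Wayback Machine endpoints (the availability API and the Save Page Now URL); a fuller workflow like the one described would layer Archive.is submission and local archiving on top of this.

```python
import requests

def latest_snapshot(url: str) -> str | None:
    """Return the most recent archived copy of a URL, if one exists."""
    resp = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=15
    )
    snap = resp.json().get("archived_snapshots", {}).get("closest", {})
    return snap.get("url")

def save_now(url: str) -> int:
    """Ask the Wayback Machine to capture the page right now."""
    return requests.get("https://web.archive.org/save/" + url, timeout=60).status_code

print(latest_snapshot("https://example.com"))
```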
Refreshes specific event pages to synchronize and update event details within a local media database.
Summarizes and extracts content from URLs, YouTube videos, and local files directly within the terminal.
Downloads and formats YouTube video transcripts with precise timestamps for streamlined content analysis and text extraction.
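A minimal timestamped-transcript sketch, assuming the youtube-transcript-api package and its older static get_transcript interface (newer releases use an instance-based API); the skill may rely on yt-dlp or browser automation instead, and the video ID is illustrative.

```python
from youtube_transcript_api import YouTubeTranscriptApi

def transcript_with_timestamps(video_id: str, lang: str = "en") -> list[str]:
    """Return transcript lines prefixed with mm:ss timestamps."""
    entries = YouTubeTranscriptApi.get_transcript(video_id, languages=[lang])
    lines = []
    for entry in entries:
        minutes, seconds = divmod(int(entry["start"]), 60)
        lines.append(f"[{minutes:02d}:{seconds:02d}] {entry['text']}")
    return lines

print("\n".join(transcript_with_timestamps("dQw4w9WgXcQ")[:10]))
```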
Conducts automated domain research and extracts technical requirements using MCP-powered search and documentation tools.
Performs exhaustive, multi-source research and generates professional-grade documentation with verified citations and deep analysis.
Extracts and cleans the latest .NET development articles from Milan Jovanovic's blog with an optimized, cost-effective workflow.
Monitors Anthropic and Claude Code social feeds to capture real-time updates and actionable technical insights before official documentation is released.
Automates multi-ecosystem documentation scraping, validation, and maintenance workflows using local development configurations.
Automates the gathering of AI industry trends, product launches, and developer insights from multiple high-signal web sources.