Discover Agent Skills for web scraping & data collection. Browse 17 skills for Claude, ChatGPT & Codex.
Automates web content retrieval using a progressive four-tier fallback strategy to bypass bot detection and access restrictions.
Downloads high-quality videos and audio from YouTube and other platforms for offline access and archival.
Automates multi-step information gathering and synthesis using structured planning and parallel subagents.
Implements a four-tier progressive escalation strategy to reliably scrape web content and bypass advanced bot detection.
Conducts deep, multi-faceted web research by orchestrating parallel subagents to plan, gather, and synthesize complex information.
Conducts systematic web research through autonomous subagent delegation and multi-source synthesis.
Automates web content extraction using a four-tier fallback strategy to bypass bot detection and CAPTCHAs.
Converts complex file formats including PDF, Office documents, and media into clean Markdown optimized for LLM processing.
Searches and retrieves life sciences preprints from the bioRxiv database with advanced filtering and PDF download capabilities.
Downloads high-quality video and audio content from YouTube and other platforms directly through your terminal workspace.
Packs external GitHub or local repositories into a token-efficient format for deep context analysis within Claude Code.
Retrieves web content through a four-tier progressive fallback strategy to bypass bot detection and access restrictions.
Implements a four-tier progressive scraping strategy to bypass bot detection and reliably extract web content.
Conducts deep web investigations by delegating tasks to specialized subagents and synthesizing findings into organized reports.
Scrapes and organizes remote job listings from workingnomads.com with advanced filtering and multi-format export capabilities.
Analyzes Reddit community discussions to identify feature requests, user pain points, and emerging use cases for Claude AI and Claude Code.
Captures information from URLs, web searches, and local codebases into stable, reusable Markdown artifacts for AI reasoning.
Captures web content from JavaScript-heavy, login-protected, and multi-page sites using the agent-browser CLI.
Extracts subtitles and transcripts from YouTube videos directly into local text files using command-line tools or browser automation.
Captures web content from JavaScript-rendered pages and authenticated sites using the agent-browser CLI.
Transforms documentation websites into structured, categorized reference files optimized for AI context and offline archives.
Detects and ingests LLM-optimized documentation via the llms.txt standard to accelerate context gathering for autonomous agents.
Scrapes documentation websites and transforms them into organized, categorized reference files for AI context and offline archives.
Queries the Google Places API to retrieve detailed location information, reviews, and search results directly within the Claude Code environment.
Retrieves and parses Yandex Search engine results using the official Yandex Cloud Search API v2.
Aggregates and synthesizes technical opinions and developer reactions from major online communities like Reddit and Hacker News.
Enhances Claude with advanced vision analysis, real-time web searching, and deep GitHub repository exploration capabilities.
Indexes and searches external repositories, documentation, and research papers to provide Claude with high-fidelity context for development tasks.
Enables real-time internet search capabilities using the MiniMax MCP for legal research and general information retrieval.
Performs high-precision semantic search and structured content retrieval using the Exa AI API for deep research and code documentation.
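Several of the skills above describe the same core pattern: a progressive multi-tier fallback that escalates through increasingly heavyweight fetch strategies until one succeeds. A minimal sketch of that control flow is below; the tier functions themselves are hypothetical placeholders, not the implementation any particular skill uses.

```python
from typing import Callable, List, Optional

# Each tier is a callable that takes a URL and returns page content,
# or returns None / raises if it is blocked. Real tiers might be:
# plain HTTP GET -> GET with browser-like headers -> headless browser
# -> a hosted scraping service. These are assumptions for illustration.
Tier = Callable[[str], Optional[str]]

def fetch_with_fallback(url: str, tiers: List[Tier]) -> Optional[str]:
    """Try each fetch strategy in order; return the first non-empty result."""
    for tier in tiers:
        try:
            content = tier(url)
            if content:
                return content
        except Exception:
            # Tier was blocked or errored; escalate to the next one.
            continue
    return None
```

The point of the pattern is cost ordering: cheap, fast tiers run first, and the expensive ones (browser automation, paid APIs) are only invoked when simpler requests are rejected.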