Builds automated, AI-powered data collection agents that scrape, enrich, and store information from any public source using free-tier tools.
This skill enables Claude to architect and implement production-ready data scraping pipelines using a robust three-layer architecture: Collect, Enrich, and Store. It provides implementation patterns for scraping HTML, APIs, and RSS feeds, then uses free-tier AI models like Gemini Flash to summarize and score the data. Designed for zero-cost operation, it utilizes GitHub Actions for scheduling and integrates seamlessly with Notion, Google Sheets, or Supabase for data management. It also features a unique feedback-loop system that allows the agent to learn from user decisions to improve its filtering accuracy over time.
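The three-layer architecture described above can be sketched as three small functions chained together. This is a minimal illustration, not the skill's actual implementation: the `Item` dataclass, the stubbed scoring rule, and the JSON serialization are all assumptions standing in for the real AI enrichment and storage backends.

```python
import json
from dataclasses import dataclass

@dataclass
class Item:
    url: str
    title: str
    summary: str = ""
    score: float = 0.0

def collect(feed: list[dict]) -> list[Item]:
    # Collect layer: normalize raw records (from HTML, an API, or RSS) into Items.
    return [Item(url=r["url"], title=r["title"]) for r in feed]

def enrich(items: list[Item]) -> list[Item]:
    # Enrich layer: in the real pipeline this step would call an AI model
    # (e.g. Gemini Flash); here a summary and score are stubbed locally.
    for item in items:
        item.summary = item.title[:80]
        item.score = 1.0 if "release" in item.title.lower() else 0.5
    return items

def store(items: list[Item]) -> str:
    # Store layer: serialize for a backend such as Notion, Sheets, or Supabase.
    return json.dumps([item.__dict__ for item in items], indent=2)

raw = [{"url": "https://example.com/a", "title": "v2.0 release notes"}]
print(store(enrich(collect(raw))))
```

Keeping the layers as separate functions means any one of them can be swapped (a different source, model, or storage target) without touching the other two.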
Key Features
1. Zero-cost deployment and scheduling via GitHub Actions
2. Automated AI enrichment using Gemini Flash with built-in rate limiting
3. Adaptive feedback system that learns user preferences for improved scoring
4. Seamless storage integration with Notion, Google Sheets, and Supabase
5. Multi-source scraping templates for HTML, REST APIs, and RSS feeds
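Built-in rate limiting for free-tier model calls can be implemented with a simple sliding-window limiter like the sketch below. The `RateLimiter` class, the `enrich_one` helper, and the 15-calls-per-minute budget are illustrative assumptions, not values taken from the skill itself.

```python
import time

class RateLimiter:
    """Sliding-window limiter: at most `max_calls` per `period` seconds."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.calls: list[float] = []  # timestamps of recent calls

    def wait(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have fallen outside the window.
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call in the window expires.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# Assumed free-tier budget; adjust to the actual quota of the model in use.
limiter = RateLimiter(max_calls=15, period=60.0)

def enrich_one(text: str) -> str:
    limiter.wait()
    # Placeholder for the model call (e.g. Gemini Flash via its REST API).
    return text.upper()
```

Throttling client-side like this keeps a scheduled GitHub Actions run inside a free quota instead of failing partway through a batch.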
Use Cases
1. Tracking product prices and sending alerts for significant price drops
2. Aggregating and classifying technical news or GitHub releases by sentiment and topic
3. Monitoring job boards to find and score roles matching a specific resume
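The feedback loop that learns from user decisions can be approximated with per-keyword weights nudged on each accept or reject, as in this sketch. The `FeedbackScorer` class and its learning rate are hypothetical; the skill's actual filtering model is not specified here.

```python
from collections import defaultdict

class FeedbackScorer:
    """Naive feedback loop: keyword weights nudged by accept/reject decisions."""

    def __init__(self, lr: float = 0.1):
        self.weights: dict[str, float] = defaultdict(float)
        self.lr = lr

    def score(self, title: str) -> float:
        # Sum the learned weight of each word; unseen words score zero.
        return sum(self.weights[w] for w in title.lower().split())

    def record(self, title: str, accepted: bool) -> None:
        # Reinforce words from accepted items, penalize words from rejected ones.
        delta = self.lr if accepted else -self.lr
        for w in title.lower().split():
            self.weights[w] += delta

scorer = FeedbackScorer()
scorer.record("rust release notes", accepted=True)
scorer.record("marketing newsletter", accepted=False)
```

Over many decisions, items resembling past accepts rank higher than items resembling past rejects, which is the behavior the feedback-loop feature describes.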