Validates the credibility of online sources and technical artifacts to ensure high-quality research and prevent information contamination.
This skill equips Claude with a rigorous framework for evaluating source quality during deep research tasks, acting as a filter against SEO spam, content farms, and unreliable AI-generated content. It implements a six-tier credibility taxonomy (T1-T6) and the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) to assess academic, technical, and institutional sources, and it provides programmatic methods to verify the health and adoption metrics of software packages, GitHub repositories, and APIs. By front-loading quality checks at the retrieval stage, it reduces the cascading errors that low-quality sources introduce into AI-driven research.
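The tier taxonomy can be pictured as a small ranking routine. This is a minimal sketch: the source only states that tiers run T1 (primary) through T6 (unverified), so the per-tier labels and the tie-breaking on recency below are illustrative assumptions, not the skill's actual definitions.

```python
from enum import IntEnum

class SourceTier(IntEnum):
    """Hypothetical T1-T6 labels; the skill's exact definitions may differ."""
    T1_PRIMARY = 1        # peer-reviewed papers, specs, primary data
    T2_INSTITUTIONAL = 2  # standards bodies, official vendor docs
    T3_EXPERT = 3         # recognized expert writing, conference talks
    T4_COMMUNITY = 4      # maintained wikis, high-signal Q&A threads
    T5_AGGREGATED = 5     # secondhand summaries, tutorials of unclear origin
    T6_UNVERIFIED = 6     # anonymous posts, content farms, SEO pages

def rank_sources(sources):
    """Order candidate sources best-first: lower tier wins, newer breaks ties."""
    return sorted(sources, key=lambda s: (s["tier"], -s["year"]))
```

Ranking best-first lets a research loop fill its context window with T1-T2 evidence before touching anything lower.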
Key Features
1. Programmatic stat verification via npm, PyPI, Crates.io, and GitHub APIs for ground-truth metrics.
2. Six-tier source credibility taxonomy (T1-T6) for ranking evidence quality from primary to unverified.
3. Multi-provider search strategy routing queries to domain-specific sources such as arXiv, PubMed, and GitHub.
4. Technical artifact evaluation for software packages, repositories, and APIs using health and adoption signals.
5. Automated SEO spam and content farm detection using multi-factor red-flag analysis.
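The programmatic stat verification mentioned above can be sketched against the public registry endpoints. The URL patterns below are the real public APIs for npm, PyPI, Crates.io, and GitHub; the helper names and the particular health fields extracted are this sketch's own choices, not the skill's actual interface.

```python
import json
from urllib.request import Request, urlopen

# Public, unauthenticated endpoints for ground-truth package metrics.
ENDPOINTS = {
    "npm":    "https://api.npmjs.org/downloads/point/last-week/{name}",
    "pypi":   "https://pypi.org/pypi/{name}/json",
    "crates": "https://crates.io/api/v1/crates/{name}",
    "github": "https://api.github.com/repos/{name}",  # name = "owner/repo"
}

def stat_url(registry, name):
    """Build the metrics URL for a package on a given registry."""
    return ENDPOINTS[registry].format(name=name)

def fetch_json(url):
    """Fetch and decode a JSON payload (GitHub requires a User-Agent)."""
    req = Request(url, headers={"User-Agent": "source-credibility-check"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

def github_health(repo_json):
    """Extract adoption/health signals from a GitHub /repos payload."""
    return {
        "stars": repo_json.get("stargazers_count", 0),
        "open_issues": repo_json.get("open_issues_count", 0),
        "archived": repo_json.get("archived", False),
        "last_push": repo_json.get("pushed_at"),
    }
```

Usage: `github_health(fetch_json(stat_url("github", "facebook/react")))` returns star count, open-issue count, archived status, and last-push timestamp as maintenance signals.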
Use Cases
1. Evaluating third-party libraries or dependencies for production readiness and long-term maintenance health.
2. Detecting and filtering out AI-generated or SEO-optimized content from search results to improve context window quality.
3. Performing deep technical research where factual accuracy and academic rigor are mission-critical.