Discover Agent Skills for data science & ML. Browse 61 skills for Claude, ChatGPT & Codex.
Orchestrates specialized AI agents to conduct systematic, multi-disciplinary research and synthesis on complex topics.
Conducts rigorous evaluations of claims, evidence, and logical arguments to detect bias and validate research methodologies.
Enables high-performance data manipulation and analysis in Nushell using Polars DataFrames and LazyFrames.
Evaluates methodological quality and potential bias in research studies using standardized frameworks like RoB 2 and ROBINS-I.
Evaluates the robustness of research findings by testing how results change under different analytical assumptions and data conditions.
Implements rigorous randomization procedures for experimental research and unbiased participant allocation.
Facilitates the creation of methodologically sound research studies following NIH rigor standards and experimental best practices.
Prepares submission-ready research manuscripts by automating formatting, reporting guideline compliance, and journal selection workflows.
Implements high-precision search by combining semantic dense vectors with keyword-based sparse retrieval for optimized RAG results.
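A common way to combine dense and sparse retrieval like this is reciprocal rank fusion (RRF). The sketch below is a generic illustration, not this skill's actual implementation; the document IDs and rankings are made up.

```python
# Hybrid retrieval sketch: fuse a dense (semantic) ranking and a sparse
# (keyword) ranking with reciprocal rank fusion (RRF).
def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists of doc IDs into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense_ranking = ["doc_a", "doc_c", "doc_b"]   # e.g. ordered by embedding cosine similarity
sparse_ranking = ["doc_b", "doc_a", "doc_d"]  # e.g. ordered by BM25 keyword score

fused = reciprocal_rank_fusion([dense_ranking, sparse_ranking])
print(fused)  # doc_a ranks high in both lists, so it leads the fused ranking
```

RRF needs only rank positions, so it sidesteps the problem of putting cosine similarities and BM25 scores on a comparable scale.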
Conducts quantitative synthesis of research data by pooling effect sizes across multiple studies to derive summary conclusions.
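The standard way to pool effect sizes is inverse-variance weighting; a fixed-effect version is sketched below as a generic illustration (the study effects and standard errors are invented, and this is not necessarily how the skill itself computes the pooled estimate).

```python
# Fixed-effect meta-analysis sketch: inverse-variance pooling of effect sizes.
import math

def pool_fixed_effect(effects, std_errors):
    """Return the pooled effect and its standard error."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

effects = [0.30, 0.45, 0.25]     # e.g. standardized mean differences from three studies
std_errors = [0.10, 0.15, 0.12]  # their standard errors

pooled, se = pool_fixed_effect(effects, std_errors)
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

A random-effects model (e.g. DerSimonian-Laird) would add a between-study variance term to each weight; the fixed-effect version above is the simplest case.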
Guides the selection, assumption checking, and interpretation of statistical hypothesis tests for rigorous research data analysis.
Accelerates the development of machine learning models and AI systems through expert guidance on MLOps, RAG architectures, and model deployment.
Applies systematic inclusion and exclusion criteria to automate literature screening and ensure PRISMA compliance.
Formulates and refines high-quality research questions using the scientifically recognized FINER criteria.
Calculates statistical power and determines required sample sizes for research studies to ensure experimental rigor and reproducibility.
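For the two-sample case, the textbook sample-size formula under a normal approximation is n = 2(z_{1-α/2} + z_{1-β})² / d² per group. The sketch below illustrates that formula with conventional defaults (d = 0.5, α = 0.05, power = 0.80); it is a generic approximation, not this skill's implementation.

```python
# A-priori sample-size sketch for a two-sided, two-sample comparison,
# using the normal approximation to the t-test.
from statistics import NormalDist
import math

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect Cohen's d at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = 0.80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d**2)

print(n_per_group(0.5))  # a medium effect needs about 63 per group
```

The exact t-based calculation gives a slightly larger n (64 per group for these inputs), which is why dedicated power software iterates on the noncentral t-distribution.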
Interprets and reports complex statistical findings with high precision, focusing on effect sizes, confidence intervals, and p-value accuracy.
Powers tax and benefit microsimulations with a vectorized engine for calculating complex economic policy impacts.
Performs systematic data quality remediation by detecting duplicates, handling outliers, and standardizing datasets for reliable analysis.
Manages annotated data matrices for single-cell genomics and large-scale biological datasets using the Python AnnData framework.
Analyzes survey microdata using weighted pandas DataFrames to calculate inequality, poverty, and distributional metrics.
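A representative distributional metric from survey microdata is the weighted Gini coefficient; the pure-Python sketch below (trapezoid rule under the Lorenz curve) is a generic illustration with invented data, not the skill's own code, which the blurb says uses weighted pandas DataFrames.

```python
# Weighted Gini coefficient sketch: area under the Lorenz curve via trapezoids.
def weighted_gini(values, weights):
    """Gini coefficient of `values`, each observation carrying a survey weight."""
    pairs = sorted(zip(values, weights))          # sort observations by value
    total_w = sum(w for _, w in pairs)
    total_wx = sum(v * w for v, w in pairs)
    cum_wx = 0.0
    area = 0.0                                    # area under the Lorenz curve
    for v, w in pairs:
        prev_wx = cum_wx
        cum_wx += v * w
        area += (w / total_w) * (prev_wx + cum_wx) / (2 * total_wx)
    return 1 - 2 * area

incomes = [12_000, 30_000, 55_000, 110_000]  # illustrative household incomes
weights = [1500, 2200, 1800, 900]            # illustrative survey weights
print(f"weighted Gini = {weighted_gini(incomes, weights):.3f}")
```

With equal incomes the result is 0; with all income concentrated in one of two equally weighted observations it is 0.5, the discrete-population maximum for n = 2.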
Conducts systematic, multi-phase investigations into complex, open-ended data questions using structured decomposition and incremental synthesis.
Implements high-performance persistent memory and reinforcement learning patterns for AI agents using AgentDB and ReasoningBank.
Performs rigorous, systematic comparisons of data segments, cohorts, and time periods to uncover actionable drivers of difference.
Transforms raw datasets into impactful visual narratives using advanced data visualization techniques and narrative design principles.
Conducts systematic exploratory data analysis to uncover hidden patterns, anomalies, and actionable insights in unfamiliar datasets.
Integrates Honcho persistent memory and social cognition into Python or TypeScript codebases to build stateful AI agents.
Implements adaptive learning systems for AI agents to recognize patterns, optimize strategies, and improve autonomously over time.
Automates the end-to-end scientific research lifecycle from initial data analysis and hypothesis generation to the production of publication-ready LaTeX manuscripts.
Transforms high-level requirements into production-ready system prompts for complex single and multi-agent AI systems.
Generates professional, executive-ready Plotly visualizations using Treasure Data's standardized branding and strict formatting requirements.