Discover Claude skills for data science & ML. Browse 61 skills and find the perfect capability for your AI workflow.
Quantifies uncertainty in estimates by generating plausible ranges to enable more reliable data-driven decision making.
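The entry above describes generating plausible ranges around an estimate. One common way to do this is the percentile bootstrap: resample the data with replacement many times and take quantiles of the recomputed statistic. This is a minimal, generic sketch of that technique, not the skill's actual implementation; the function name `bootstrap_ci` and its parameters are illustrative.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for any statistic.

    Resamples `data` with replacement `n_resamples` times, recomputes
    `stat` on each resample, and returns the (alpha/2, 1 - alpha/2)
    quantiles of those estimates as a plausible range.
    """
    estimates = sorted(
        stat(random.choices(data, k=len(data)))  # sample with replacement
        for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

For example, `bootstrap_ci(measurements)` returns a 95% interval for the mean of `measurements`, which is more informative for decision making than the point estimate alone.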
Automates the generation of professional Quarto PDF reports and PowerPoint presentations from CSV survey data.
Initializes a professional Quarto environment for survey data analysis using Python, uv, and Typst PDF pipelines.
Performs systematic qualitative thematic analysis on document collections to extract deep structural insights and categorized themes.
Conducts autonomous, institutional-grade financial analysis using multi-guru perspectives and advanced composite scoring.
Tracks experiment parameters, results, and environment snapshots to ensure full reproducibility and systematic iteration.
Conducts deep technical research and provides implementation guidance for AI-enabled software development including RAG, agentic workflows, and LLM architectures.
Build and deploy production-ready multi-agent systems with MCP integration and automated workflows.
Performs hypothesis-driven statistical analysis and data visualization on datasets, system metrics, and experiment logs.
Conducts comprehensive multi-paper literature reviews with deep gap analysis and automated citation mapping.
Programmatically creates, edits, and optimizes Jupyter and Google Colab notebooks with precise JSON formatting and metadata management.
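Programmatic notebook editing like the entry above describes ultimately means manipulating the notebook's JSON document model. As a rough illustration of the shape of that JSON (a minimal nbformat-4 structure built by hand; the helper `new_notebook` is hypothetical, and real skills would likely use a library such as `nbformat` instead):

```python
import json

def new_notebook(sources):
    """Build a minimal Jupyter notebook (nbformat 4) as a plain dict.

    Each entry in `sources` becomes one code cell with empty outputs,
    which is the baseline structure Jupyter and Colab both read.
    """
    return {
        "nbformat": 4,
        "nbformat_minor": 4,
        "metadata": {},
        "cells": [
            {
                "cell_type": "code",
                "metadata": {},
                "execution_count": None,
                "outputs": [],
                "source": src,
            }
            for src in sources
        ],
    }

# Serialize to the on-disk .ipynb representation.
nb_json = json.dumps(new_notebook(["print('hello')"]), indent=1)
```

Editing an existing notebook is the reverse: `json.load` the file, mutate the `cells` list or `metadata`, and dump it back.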
Standardizes the integration of external machine learning libraries and custom neural network modules within the Haipipe architecture.
Manages a robust four-stage pipeline that converts modular Python scripts into interactive Jupyter notebooks and comprehensive markdown documentation.
Standardizes raw academic and medical data files into structured SourceSet DataFrames for research pipelines.
Orchestrates model lifecycles and provides HuggingFace-style APIs for modular neural network research pipelines.
Provides a foundational architecture map and decision guide for managing neural network pipelines within the HAIPipe research framework.
Standardizes machine learning algorithm implementation through a universal wrapper contract for seamless training, inference, and serialization.
Transforms raw source datasets into temporally-aligned structured record sets for academic research and machine learning.
Transforms temporal RecordSets into event-triggered CaseSets for machine learning feature extraction.
Automates the generation of publication-ready tables and figures from model performance data for academic research.
Orchestrates neural network training, evaluation, and deployment through a model-agnostic pipeline and standardized asset packaging system.
Transcribes audio and video files into text via a specialized Speech2Text API with support for JWT authentication and task polling.
Transforms structured CaseSets into ML-ready datasets with automated splitting, feature engineering, and vocabulary management.
Automates the creation and interactive refinement of LaTeX academic paper drafts from research data and architectural blueprints.
Conducts professional-grade biomedical literature searches and manages citations using the PubMed database.
Standardizes vector memory workflows and MCP tool interactions for persistent codebase intelligence and context management.
Quantifies uncertainty and assesses risk distributions by running thousands of probabilistic scenarios with random variable inputs.
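The core of the Monte Carlo approach described above is simple: draw random inputs, run the scenario model thousands of times, and summarize the resulting outcome distribution. A minimal generic sketch (the `monte_carlo` helper and the `project_cost` model are hypothetical examples, not the skill's API):

```python
import random

def monte_carlo(model, n_scenarios=10_000, seed=None):
    """Run `model` on many random scenarios and summarize the outcomes.

    `model` takes a random.Random instance and returns one scalar outcome.
    Returns the mean plus the 5th, 50th, and 95th percentiles, which
    characterize the risk distribution rather than a single point value.
    """
    rng = random.Random(seed)
    outcomes = sorted(model(rng) for _ in range(n_scenarios))
    pct = lambda p: outcomes[min(int(p * n_scenarios), n_scenarios - 1)]
    return {
        "mean": sum(outcomes) / n_scenarios,
        "p5": pct(0.05),
        "p50": pct(0.50),
        "p95": pct(0.95),
    }

def project_cost(rng):
    # Hypothetical scenario: cost = uncertain hourly rate * uncertain hours.
    rate = rng.gauss(100, 10)        # normally distributed rate
    hours = rng.uniform(80, 120)     # uniformly distributed effort
    return rate * hours
```

Calling `monte_carlo(project_cost)` yields a risk summary such as the 95th-percentile cost, which supports decisions like budgeting a contingency reserve.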
Refines probability estimates and decision-making by systematically updating beliefs as new data or evidence emerges.
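Systematically updating beliefs as evidence arrives is Bayes' rule: multiply each hypothesis's prior by the likelihood of the observed evidence under that hypothesis, then renormalize. A minimal discrete sketch of that update step (the function name and the coin example are illustrative):

```python
def bayes_update(prior, likelihoods):
    """One step of Bayes' rule over a discrete set of hypotheses.

    `prior` maps hypothesis -> P(H); `likelihoods` maps
    hypothesis -> P(evidence | H). Returns the normalized posterior.
    """
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Example: is a coin fair or heads-biased? Observe one heads.
prior = {"fair": 0.5, "biased": 0.5}
posterior = bayes_update(prior, {"fair": 0.5, "biased": 0.9})
```

Feeding each new observation's likelihoods back through `bayes_update` iteratively refines the estimate, which is the belief-revision loop the entry describes.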
Enables seamless model-to-model collaboration by delegating research, code reviews, and architectural validation to Google Gemini.
Verifies mathematical claims and generates Lean 4 formal proofs or counterexamples using the Harmonic Aristotle API.