Discover Data Science & ML Claude skills. Browse 61 skills and find the perfect capabilities for your AI workflows.
Provides deterministic, symbolic mathematical computation using SymPy to ensure absolute accuracy in complex calculations.
Executes deterministic mathematical computations using SymPy to ensure exact symbolic results across algebra, calculus, linear algebra, and statistics.
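A minimal sketch of what exact symbolic computation with SymPy looks like in practice (the variable names here are illustrative, not part of the skill itself): results are exact rationals and closed-form expressions rather than floating-point approximations.

```python
import sympy as sp

x = sp.symbols("x")

# Differentiation, integration, and equation solving are exact and deterministic.
derivative = sp.diff(sp.sin(x) * sp.exp(x), x)     # exp(x)*sin(x) + exp(x)*cos(x)
integral = sp.integrate(x**2, (x, 0, 1))           # Rational(1, 3), not 0.333...
solutions = sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x)  # [2, 3]
```

Because every step is symbolic, the same input always yields the same exact result, which is the point of routing arithmetic to SymPy instead of sampling it from a model.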
Configures a Docker-based MCP server for searching and downloading academic papers from arXiv, PubMed, and other scholarly sources.
Transforms complex business data into actionable insights through advanced analytics, predictive modeling, and strategic KPI frameworks.
Architects sophisticated LLM applications using the LangChain framework for agents, memory management, and complex workflow orchestration.
Optimizes development efficiency by routing complex Rust architecture tasks to Sonnet 4.5 and routine utility work to Haiku 4.5.
Optimizes vector index performance by tuning HNSW parameters, quantization strategies, and memory usage for high-scale search.
Optimizes LLM performance through advanced prompting techniques, constitutional AI, and production-ready prompt system design.
Transcribes audio files into text or JSON format using OpenAI's state-of-the-art Whisper API.
Generates testable, evidence-based scientific hypotheses and structured experimental designs from observations or literature.
Provides expert-level data analysis, machine learning modeling, and statistical insights to drive data-driven decision-making.
Implements comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking frameworks.
Builds and automates production-grade machine learning pipelines, experiment tracking systems, and scalable ML infrastructure across cloud platforms.
Builds production-grade LLM applications, advanced RAG architectures, and autonomous agent systems with a focus on scalability and safety.
Performs declarative causal interventions and mechanistic interpretability experiments on PyTorch models.
Implements Anthropic's Constitutional AI method to train harmless, helpful models through self-critique and automated AI feedback.
Optimizes large-scale language model training using NVIDIA Megatron-Core with advanced 3D and expert parallelism strategies.
Accelerates large language model inference on NVIDIA GPUs using state-of-the-art optimization techniques for maximum throughput and minimal latency.
Implements programmable safety rails and validation for LLM applications to prevent jailbreaks, hallucinations, and PII leaks.
Implements and trains minimalist GPT architectures for educational and research purposes using Andrej Karpathy's clean, hackable codebase.
Simplifies large language model implementation, training, and fine-tuning using clean, production-ready LitGPT architectures.
Streamlines the fine-tuning process for over 100 large language models using the LLaMA-Factory framework and QLoRA techniques.
Optimizes LLM serving and structured generation using RadixAttention prefix caching for high-performance agentic workflows.
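The core idea behind prefix caching can be shown with a toy, pure-Python cache (this sketch is illustrative only; RadixAttention itself operates on token-level KV tensors inside the serving engine, and the class and prompt strings below are hypothetical):

```python
# Toy prefix cache: requests that share a prompt prefix reuse the cached
# state for that prefix instead of recomputing it.
class PrefixCache:
    def __init__(self):
        self.root = {}  # char -> child node; the "_kv" key holds cached state

    def insert(self, prompt: str, kv):
        node = self.root
        for ch in prompt:
            node = node.setdefault(ch, {})
        node["_kv"] = kv

    def longest_prefix(self, prompt: str):
        # Walk the trie as far as the prompt matches, remembering the
        # deepest node that has cached state.
        node, best = self.root, (0, None)
        for i, ch in enumerate(prompt):
            if ch not in node:
                break
            node = node[ch]
            if "_kv" in node:
                best = (i + 1, node["_kv"])
        return best  # (matched prefix length, cached state or None)

cache = PrefixCache()
cache.insert("You are a helpful assistant.", kv="cached-system-prompt-state")
n, kv = cache.longest_prefix("You are a helpful assistant. What is 2+2?")
```

In agentic workflows, many requests share long system prompts and conversation histories, so serving them from a shared radix tree avoids redundant prefill work.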
Deploys high-performance Reinforcement Learning from Human Feedback (RLHF) workflows using Ray and vLLM acceleration for large-scale model alignment.
Facilitates mechanistic interpretability research by providing tools to inspect, cache, and manipulate transformer model activations via HookPoints.
Simplifies large language model alignment using reference-free preference optimization to improve model performance without the overhead of PPO or DPO.
Quantizes large language models to 4/3/2-bit precision without calibration data for faster inference and reduced memory footprint.
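A minimal NumPy sketch of the underlying idea, round-to-nearest uniform quantization (illustrative only: real calibration-free schemes such as HQQ solve for per-group scales and zero-points to minimize reconstruction error, which this toy version does not do):

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 4):
    """Symmetric round-to-nearest quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax             # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.9, -0.45, 0.1, -0.02], dtype=np.float32)
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)                       # approximate reconstruction
```

Storing `q` (4-bit codes) plus one scale instead of 32-bit floats is where the memory savings come from; the per-weight error is bounded by half the quantization step.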
Fine-tunes large language models using LoRA, QLoRA, and other parameter-efficient methods to drastically reduce memory and compute requirements.
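The LoRA idea reduces to a small linear-algebra identity, sketched here in NumPy under assumed toy dimensions (the shapes and names are hypothetical): the frozen weight `W` is augmented with a trainable low-rank correction `B @ A`, so only `r * (d_in + d_out)` parameters are trained instead of `d_in * d_out`.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16      # toy sizes; in practice r << d

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus scaled low-rank correction; only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialized, the adapted layer starts identical to the base model.
assert np.allclose(lora_forward(x), W @ x)
```

QLoRA applies the same update on top of a 4-bit-quantized `W`, which is why the two techniques compose so well for memory-constrained fine-tuning.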
Implements and optimizes Mixture of Experts (MoE) architectures to scale model capacity while reducing training and inference costs.
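The routing step at the heart of MoE can be sketched with top-k gating in NumPy (a simplified illustration with hypothetical names; production MoE layers add load-balancing losses, capacity limits, and expert parallelism):

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x, experts, gate_w, k=2):
    """Route one token to its top-k experts and mix their outputs."""
    logits = gate_w @ x                  # one gating score per expert
    topk = np.argsort(logits)[-k:]       # indices of the k highest-scoring experts
    weights = softmax(logits[topk])      # renormalize over the chosen k
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 4, 8
# Each "expert" is just a distinct linear map in this toy version.
experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, d))
y = moe_forward(rng.normal(size=d), experts, gate_w, k=2)
```

Because only `k` of the `n_experts` experts run per token, capacity scales with the number of experts while per-token compute stays roughly constant.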
Orchestrates distributed machine learning training across clusters to scale PyTorch, TensorFlow, and Hugging Face models.