Discover data science & ML Claude skills. Browse 61 skills and find the perfect capabilities for your AI workflows.
Processes large-scale datasets for machine learning workloads using distributed streaming execution across CPU and GPU clusters.
Manages the machine learning lifecycle by tracking experiments, versioning models, and streamlining production deployments.
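This description matches an MLflow-style tracking workflow; a minimal sketch assuming MLflow (the run name, parameter, metric, and artifact path are all illustrative):

```python
import mlflow

# Illustrative values only; any params/metrics/artifacts can be logged.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 1e-3)   # record experiment configuration
    mlflow.log_metric("val_rmse", 0.42)       # logged per run, comparable in the UI
    mlflow.log_artifact("model.pkl")          # attach the trained model file for versioning
```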
Accelerates large-scale similarity search and clustering for dense vectors using Facebook AI's high-performance library.
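For reference, the core FAISS loop is small; a minimal exact-L2 sketch with synthetic data (dimensions and index type are illustrative):

```python
import numpy as np
import faiss

d = 128
xb = np.random.rand(10_000, d).astype("float32")   # database vectors
xq = np.random.rand(5, d).astype("float32")        # query vectors

index = faiss.IndexFlatL2(d)       # exact L2 search; approximate indexes scale further
index.add(xb)
distances, ids = index.search(xq, 4)   # 4 nearest neighbours per query
```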
Streamlines deep learning development by decoupling research code from engineering boilerplate for automated distributed training and hardware scaling.
Accelerates LLM fine-tuning workflows with Unsloth to achieve up to 5x faster training speeds and 80% reduced memory consumption.
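A minimal sketch of the Unsloth loading-plus-LoRA pattern (the checkpoint id and hyperparameters are illustrative; consult the Unsloth docs for current options):

```python
from unsloth import FastLanguageModel

# Illustrative checkpoint; Unsloth publishes pre-quantized variants on the Hub.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,           # 4-bit base weights cut memory sharply
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                        # LoRA rank; illustrative
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```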
Orchestrates teams of autonomous AI agents to collaborate on complex tasks through role-based delegation and memory.
Extends transformer context windows using RoPE, YaRN, and ALiBi techniques to process documents exceeding 128k tokens.
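These methods all build on rotary position embeddings; as intuition, a minimal single-head RoPE sketch (half-split rotation, one of the standard variants), not any library's exact implementation:

```python
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate feature pairs of x (seq, dim) by position-dependent angles."""
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)       # (half,)
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs        # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # Context-extension tricks (interpolation, YaRN-style rescaling) adjust
    # `freqs` or the positions here rather than the rotation itself.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```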
Implements Meta AI's foundation model for high-precision zero-shot image segmentation using points, boxes, and masks.
Streamlines the fine-tuning of large language models using Axolotl through expert YAML configuration guidance and advanced training techniques.
Decomposes complex neural network activations into sparse, interpretable features to understand and steer model behavior.
Generates state-of-the-art text and image embeddings for RAG, semantic search, and clustering tasks.
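A minimal sketch assuming the sentence-transformers library (the model id and documents are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model
docs = [
    "PagedAttention reduces KV-cache waste.",
    "FAISS accelerates dense vector search.",
]
embeddings = model.encode(docs, normalize_embeddings=True)  # shape (2, 384)
scores = embeddings @ embeddings.T   # cosine similarity, since rows are normalized
```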
Provides high-performance, Rust-optimized text tokenization for NLP research and production-grade machine learning pipelines.
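This describes the HuggingFace tokenizers library; a minimal usage sketch (the tokenizer id is illustrative):

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("bert-base-uncased")   # loads a Rust-backed tokenizer
enc = tok.encode("Rust-backed tokenization is fast.")
print(enc.tokens)   # subword pieces
print(enc.ids)      # corresponding vocabulary ids
```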
Compresses Large Language Models using advanced techniques like Wanda and SparseGPT to reduce memory footprint and accelerate inference speeds.
Optimizes large-scale AI model training using DeepSpeed's ZeRO, pipeline parallelism, and high-performance DeepNVMe I/O handling.
Generates high-quality images and performs advanced image transformations using Stable Diffusion models and the HuggingFace Diffusers library.
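A minimal text-to-image sketch with Diffusers (the checkpoint id and prompt are illustrative; any Stable Diffusion checkpoint works):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # illustrative checkpoint
    torch_dtype=torch.float16,            # half precision to fit consumer GPUs
).to("cuda")

image = pipe("a watercolor fox in a snowy forest").images[0]
image.save("fox.png")
```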
Tracks machine learning experiments and manages model lifecycles with real-time visualization and collaborative tools.
Implements and optimizes Mamba-based Selective State Space Models for high-efficiency sequence modeling and long-context AI research.
Compresses large language models using teacher-student learning techniques to reduce inference costs while maintaining high performance.
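The teacher-student objective at the heart of distillation fits in a few lines; a minimal sketch of the standard soft/hard blended loss (temperature and mixing weight are illustrative):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend KL to softened teacher outputs with the usual cross-entropy loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)   # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```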
Aligns Large Language Models with human preferences using advanced reinforcement learning techniques including SFT, DPO, PPO, and GRPO.
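As one example of this family, a minimal SFT sketch assuming a recent version of the TRL library (model and dataset ids are illustrative, and the API varies across TRL releases):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")   # illustrative dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",              # illustrative base model
    args=SFTConfig(output_dir="sft-out"),
    train_dataset=dataset,
)
trainer.train()   # DPO/PPO/GRPO follow the same Trainer pattern in TRL
```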
Quantizes Large Language Models to 4-bit or 8-bit formats to reduce GPU memory usage by up to 75% with minimal accuracy loss.
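A minimal 4-bit loading sketch using bitsandbytes via transformers (the model id is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16, store in 4-bit
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",               # illustrative checkpoint
    quantization_config=bnb,
    device_map="auto",
)
```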
Generates high-fidelity music and sound effects from text descriptions using Meta's AudioCraft framework.
Optimizes Transformer models using Flash Attention to achieve significant speedups and memory reductions during training and inference.
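PyTorch exposes this through scaled_dot_product_attention, which dispatches to FlashAttention-style fused kernels when shapes and dtypes allow; a minimal sketch with synthetic tensors:

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim); fp16 on GPU enables the fused kernels
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Avoids materializing the full 1024x1024 attention matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```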
Simplifies PyTorch distributed training by providing a unified API for DDP, DeepSpeed, and FSDP with minimal code changes.
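The Accelerate pattern is a three-line change to a standard training loop; a minimal self-contained sketch (the model and data are toy placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()   # picks up DDP/FSDP/DeepSpeed from `accelerate config`
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
loader = DataLoader(data, batch_size=32)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)   # replaces loss.backward()
    optimizer.step()
```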
Enables advanced vision-language capabilities for image understanding, multi-turn visual conversations, and document analysis.
Serves Large Language Models with maximum throughput and efficiency using vLLM's PagedAttention and continuous batching.
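A minimal offline-inference sketch with vLLM (the model id and prompt are illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")   # illustrative checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain PagedAttention in one sentence."], params)
print(outputs[0].outputs[0].text)
```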
Builds complex AI systems using Stanford's declarative programming framework to optimize prompts and create modular RAG systems automatically.
Enables zero-shot image classification and semantic image search by connecting visual concepts with natural language.
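This describes CLIP-style zero-shot classification; a minimal sketch via transformers (the image path and labels are illustrative):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")   # illustrative local image
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)   # zero-shot label scores
```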
Evaluates AI code generation models across multiple programming languages and benchmarks using standardized pass@k metrics.
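The pass@k metric has a standard unbiased estimator, pass@k = E[1 - C(n-c, k) / C(n, k)] over problems with n samples and c correct; a minimal numerically stable sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k)/C(n, k), computed as a stable product."""
    if n - c < k:
        return 1.0   # too few failures to fill k draws: some sample must pass
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

print(pass_at_k(n=20, c=3, k=5))   # e.g. 20 samples, 3 correct, pass@5
```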
Optimizes AI models for efficient local inference using the GGUF format and llama.cpp quantization techniques.
Performs declarative causal interventions and mechanistic interpretability experiments on PyTorch models.