Discover Claude skills for data science & ML. Browse 61 skills and find the best capabilities for your AI workflows.
Deploys and manages high-performance RLHF training pipelines for large-scale language models using Ray and vLLM acceleration.
Optimizes Large Language Model inference for maximum throughput and ultra-low latency on NVIDIA GPUs.
Decomposes neural network activations into interpretable, sparse features using SAELens for deep mechanistic interpretability research.
Implements state-of-the-art vision-language pre-training to enable high-quality image captioning and visual question answering within AI workflows.
Implements language-independent subword tokenization using BPE and Unigram algorithms for robust NLP model training and inference.
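To illustrate the BPE half of that entry: the core of byte-pair encoding is repeatedly merging the most frequent adjacent symbol pair in a frequency-weighted corpus. The sketch below is a minimal, pure-Python illustration of that merge loop (all function names are hypothetical), not the skill's actual implementation.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words:
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get) if pairs else None

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = []
    for symbols, freq in words:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged.append((out, freq))
    return merged

def learn_bpe(corpus, num_merges):
    """Learn BPE merge rules from a {word: frequency} corpus."""
    words = [(list(w), f) for w, f in corpus.items()]
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(words)
        if pair is None:
            break
        merges.append(pair)
        words = merge_pair(words, pair)
    return merges

# Toy corpus: "low" dominates, so its characters merge first.
merges = learn_bpe({"low": 5, "lower": 2, "lowest": 2}, 3)
# merges == [('l', 'o'), ('lo', 'w'), ('low', 'e')]
```

Production tokenizers add byte-level fallback, special tokens, and fast Rust implementations on top of this loop.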
Simplifies PyTorch distributed training across multiple GPUs, TPUs, and nodes with minimal code changes and a unified API.
Generates high-quality sentence, text, and image embeddings for RAG, semantic search, and clustering using state-of-the-art transformer models.
Facilitates high-performance distributed data processing and streaming for large-scale machine learning workloads.
Integrates Weights & Biases into your workflow to track machine learning experiments, visualize training metrics, and manage model artifacts in real-time.
Implements PyTorch-native agentic reinforcement learning workflows using Meta's torchforge library for scalable algorithm experimentation.
Implements and optimizes Selective State Space Models (SSMs) for high-performance sequence modeling and long-context AI applications.
Facilitates causal interventions on PyTorch models using a declarative framework for mechanistic interpretability experiments.
Optimizes large-scale model training using DeepSpeed configurations, ZeRO optimization stages, and high-performance I/O management.
Trains and optimizes Mixture of Experts (MoE) models to scale AI capacity with significantly reduced compute costs.
Implements and manages RWKV architectures for efficient, linear-time AI inference and long-context processing.
Evaluates Large Language Models across 60+ academic benchmarks using standardized prompts and metrics for reproducible research.
Transcribes and translates audio across 99 languages using OpenAI's robust general-purpose speech recognition models.
Compresses large language models to 4-bit precision to enable high-speed inference and deployment on consumer-grade hardware.
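For intuition on what 4-bit compression means: the simplest scheme maps each weight to one of 16 signed integer levels using a per-tensor absmax scale. The sketch below (hypothetical helper names, a deliberately naive round-to-nearest scheme) shows the idea; real quantizers like GPTQ or AWQ use per-group scales and error-compensating rounding instead.

```python
def quantize_4bit(weights):
    """Symmetric absmax quantization to signed 4-bit codes in [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid div-by-zero on all-zero input
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Map 4-bit codes back to approximate float weights."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 0.07]
codes, scale = quantize_4bit(weights)
approx = dequantize_4bit(codes, scale)
# codes == [2, -7, 5, 1]; each approx value is within scale/2 of the original.
```

Two codes fit in one byte, so this stores weights at roughly 8x less memory than float32, at the cost of bounded rounding error per weight.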
Streamlines the fine-tuning of large language models using Axolotl through expert YAML configuration and advanced training pattern guidance.
Generates high-quality music and sound effects from text descriptions using Meta's AudioCraft library.
Optimizes large language model fine-tuning using LoRA, QLoRA, and other parameter-efficient methods to significantly reduce memory and hardware requirements.
Implements Simple Preference Optimization to align Large Language Models without requiring a reference model.
Serves large language models with high throughput and low latency using PagedAttention and continuous batching.
Builds sophisticated LLM applications using agents, chains, and Retrieval-Augmented Generation (RAG) with a unified interface.
Interprets and manipulates neural network internals across local and remote models using the nnsight library and NDIF execution.
Optimizes Large Language Models using 4-bit activation-aware weight quantization to achieve 3x faster inference with minimal accuracy loss.
Evaluates AI code generation models using industry-standard benchmarks and pass@k metrics.
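The pass@k metric mentioned in that entry is conventionally computed with an unbiased estimator: given n sampled generations of which c pass the tests, it estimates the probability that at least one of k samples drawn without replacement is correct. A minimal sketch (the function name is my own):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total generations sampled per problem
    c: number of those that pass the unit tests
    k: budget of samples the user is assumed to draw
    """
    if n - c < k:
        return 1.0  # too few failures to fill k draws, so a success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# 3 of 10 generations correct, user draws 1 -> estimated success rate 0.3
p = pass_at_k(10, 3, 1)
```

Averaging this estimator over all benchmark problems gives the reported pass@k score; computing it from counts rather than raw sampling keeps the estimate's variance low.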
Standardizes and accelerates PyTorch model training with built-in support for distributed computing, logging, and engineering best practices.
Compresses large language models into efficient student models while retaining performance through advanced teacher-student transfer techniques.
Provides high-performance, Rust-based tokenization tools for building and training NLP models with support for BPE, WordPiece, and Unigram algorithms.