Discover Claude skills in the Data Science & ML category. Browse 61 skills to find the perfect capability for your AI workflow.
Generates state-of-the-art text and image embeddings for RAG, semantic search, and clustering tasks.
Curates high-quality datasets for LLM training using GPU-accelerated deduplication, filtering, and PII redaction.
Implements language-independent subword tokenization using BPE and Unigram algorithms for advanced AI model development.
Deploys and optimizes LLM inference on CPU, Apple Silicon, and consumer hardware using GGUF quantization.
Guarantees valid, type-safe JSON and structured outputs from Large Language Models using grammar-based constraints.
Transcribes audio, translates speech to English, and automates multilingual audio processing using OpenAI's Whisper models.
Integrates Salesforce's BLIP-2 framework to enable advanced image captioning, visual question answering, and multimodal reasoning within AI workflows.
Orchestrates teams of autonomous AI agents to collaborate on complex tasks through role-based delegation and memory.
Processes large-scale datasets for machine learning workloads using distributed streaming execution across CPU and GPU clusters.
Extends transformer context windows using RoPE, YaRN, and ALiBi techniques to process documents exceeding 128k tokens.
Streamlines the fine-tuning of Large Language Models using Axolotl through expert YAML configuration guidance and advanced training techniques.
Decomposes complex neural network activations into sparse, interpretable features to understand and steer model behavior.
Compresses Large Language Models using advanced techniques like Wanda and SparseGPT to reduce memory footprint and accelerate inference speeds.
Provides high-performance, Rust-optimized text tokenization for NLP research and production-grade machine learning pipelines.
Optimizes large-scale AI model training using DeepSpeed's ZeRO, pipeline parallelism, and high-performance DeepNVMe I/O handling.
Quantizes Large Language Models to 4-bit or 8-bit formats to reduce GPU memory usage by up to 75% with minimal accuracy loss.
Implements and optimizes Mamba-based Selective State Space Models for high-efficiency sequence modeling and long-context AI research.
Tracks machine learning experiments and manages model lifecycles with real-time visualization and collaborative tools.
Aligns Large Language Models with human preferences using advanced reinforcement learning techniques including SFT, DPO, PPO, and GRPO.
Compresses Large Language Models using teacher-student distillation techniques to reduce inference costs while maintaining high performance.
Optimizes Large Language Models using activation-aware 4-bit quantization to achieve 3x inference speedups and significant memory reduction with minimal accuracy loss.
Evaluates AI code generation models across multiple programming languages and benchmarks using standardized pass@k metrics.
Simplifies PyTorch distributed training by providing a unified API for DDP, DeepSpeed, and FSDP with minimal code changes.
Optimizes Transformer models using Flash Attention to achieve significant speedups and memory reductions during training and inference.
Generates high-fidelity music and sound effects from text descriptions using Meta's AudioCraft framework.
Enables advanced vision-language capabilities for image understanding, multi-turn visual conversations, and document analysis.
Serves Large Language Models with maximum throughput and efficiency using vLLM's PagedAttention and continuous batching.
Builds complex AI systems using Stanford's declarative programming framework to optimize prompts and create modular RAG systems automatically.
Enables zero-shot image classification and semantic image search by connecting visual concepts with natural language.
Optimizes AI models for efficient local inference using the GGUF format and llama.cpp quantization techniques.