Explore the complete collection of Claude skills that extend AI agent capabilities.
Implements and optimizes Selective State Space Models (SSM) for high-performance sequence modeling and long-context AI applications.
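At the core of a selective SSM is an input-dependent linear recurrence. Below is a minimal pure-Python sketch with a scalar hidden state; real implementations (e.g. Mamba) vectorize this and use hardware-aware parallel scans, and all names here are illustrative.

```python
def selective_scan(x, a, b, c):
    """Run h_t = a_t*h_{t-1} + b_t*x_t and emit y_t = c_t*h_t.

    The per-step parameters a_t, b_t, c_t being input-dependent is
    what makes the state space model "selective"."""
    h = 0.0
    ys = []
    for x_t, a_t, b_t, c_t in zip(x, a, b, c):
        h = a_t * h + b_t * x_t   # state update
        ys.append(c_t * h)        # readout
    return ys

print(selective_scan([1.0, 2.0], [0.5, 0.5], [1.0, 1.0], [1.0, 1.0]))
```

Because the recurrence is linear in `h`, long sequences can be processed with a parallel scan instead of a strictly sequential loop, which is where the high-performance kernels come in.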
Trains large language models using advanced reinforcement learning algorithms like GRPO and PPO with the production-ready verl framework.
Implements zero-shot image segmentation using Meta AI's SAM to identify and extract objects via points, boxes, or automatic mask generation.
Implements efficient similarity search and clustering for dense vectors at scale using Facebook AI's high-performance library.
Ensures guaranteed valid JSON, XML, and type-safe code generation from LLMs using constrained token sampling and Pydantic models.
Manages the complete machine learning lifecycle including experiment tracking, model versioning, and deployment using the MLflow framework.
Visualizes machine learning training metrics, model architectures, and performance profiles using Google's TensorBoard toolkit.
Deploys and manages high-performance RLHF training pipelines for large-scale language models using Ray and vLLM acceleration.
Facilitates causal interventions on PyTorch models using a declarative framework for mechanistic interpretability experiments.
Accelerates LLM inference speed by up to 3.6x using speculative decoding, Medusa heads, and lookahead techniques without sacrificing model quality.
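The control flow of speculative decoding can be sketched with greedy verification: a cheap draft model proposes several tokens, the target model checks them in one pass, and the longest agreeing prefix is kept plus one target token. This is a simplified sketch with hypothetical toy models; production systems verify probabilistically with accept/reject sampling.

```python
def speculative_step(target_next, draft_next, prefix, k=4):
    """One round of (greedy) speculative decoding.

    `target_next` / `draft_next` map a token sequence to the next
    token; both are stand-ins for real model calls."""
    # Draft model proposes k tokens autoregressively.
    proposal, seq = [], list(prefix)
    for _ in range(k):
        t = draft_next(seq)
        proposal.append(t)
        seq.append(t)
    # Target verifies: accept until the first disagreement,
    # then emit its own correction.
    accepted, seq = [], list(prefix)
    for t in proposal:
        expected = target_next(seq)
        if expected == t:
            accepted.append(t)
            seq.append(t)
        else:
            accepted.append(expected)  # target's correction
            break
    else:
        accepted.append(target_next(seq))  # bonus token: all k accepted
    return accepted

# Toy models over token ids: target counts up; draft agrees twice, then errs.
target = lambda s: len(s)
draft = lambda s: len(s) if len(s) < 5 else 99
print(speculative_step(target, draft, [0, 1, 2], k=4))
```

The speedup comes from the target model scoring all k draft tokens in a single forward pass instead of k sequential ones.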
Evaluates Large Language Models across 60+ academic benchmarks using standardized prompts and metrics for reproducible research.
Drafts publication-ready ML and AI research papers for top-tier conferences like NeurIPS and ICML directly from your research codebase.
Provides high-performance, Rust-based tokenization tools for building and training NLP models with support for BPE, WordPiece, and Unigram algorithms.
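The BPE algorithm the library implements can be sketched in a few lines: repeatedly merge the most frequent adjacent symbol pair. This toy version trains on a single string; the real `tokenizers` library learns merges over a whole corpus with a fast Rust core.

```python
from collections import Counter

def bpe_merges(word, num_merges):
    """Toy byte-pair encoding: greedily merge the most frequent
    adjacent pair of symbols, num_merges times."""
    symbols = list(word)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges

print(bpe_merges("abababc", 2))  # "ab" then "abab" get merged
```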
Standardizes and accelerates PyTorch model training with built-in support for distributed computing, logging, and engineering best practices.
Implements state-of-the-art vision-language pre-training to enable high-quality image captioning and visual question answering within AI workflows.
Fine-tunes and aligns Large Language Models using Supervised Fine-Tuning and Reinforcement Learning from Human Feedback via the TRL library.
Optimizes large language models for efficient local inference using GGUF format and llama.cpp quantization techniques.
Builds, deploys, and manages continuous AI agents through a visual workflow builder or specialized development toolkit.
Interprets and manipulates neural network internals across local and remote models using the nnsight library and NDIF execution.
Compresses large language models to 4-bit precision to enable high-speed inference and deployment on consumer-grade hardware.
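The idea behind 4-bit compression is mapping floats to a small integer grid plus a scale. A symmetric round-to-nearest sketch is below; real schemes (GPTQ, NF4, AWQ) use calibrated, non-uniform, or activation-aware variants rather than this naive mapping.

```python
def quantize_4bit(weights):
    """Symmetric 4-bit quantization sketch: map floats to integers
    in [-7, 7] with one per-group scale factor."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

w = [0.7, -1.4, 0.35, 0.0]
q, s = quantize_4bit(w)
print(q)  # integer codes, storable in 4 bits each
print(dequantize(q, s))
```

Each weight now needs 4 bits plus a shared scale instead of 16 or 32 bits, which is what makes consumer-GPU deployment feasible.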
Builds sophisticated LLM applications using agents, chains, and Retrieval-Augmented Generation (RAG) with a unified interface.
Optimizes Large Language Models using 4-bit activation-aware weight quantization to achieve 3x faster inference with minimal accuracy loss.
Optimizes large language model fine-tuning using LoRA, QLoRA, and other parameter-efficient methods to significantly reduce memory and hardware requirements.
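LoRA's memory savings come from training only a low-rank update: the frozen weight W is augmented with A @ B where the rank r is tiny. A plain-list sketch of the forward pass (real code uses the `peft` library on GPU tensors; all matrices here are illustrative):

```python
def lora_forward(x, W, A, B, alpha=1.0):
    """LoRA sketch: y = xW + alpha * x(A @ B), where A (d x r) and
    B (r x k) are the only trained matrices, with r << d."""
    def matmul(M, N):
        return [[sum(M[i][t] * N[t][j] for t in range(len(N)))
                 for j in range(len(N[0]))] for i in range(len(M))]
    def add(M, N, s=1.0):
        return [[m + s * n for m, n in zip(rm, rn)] for rm, rn in zip(M, N)]
    base = matmul(x, W)                # frozen path
    delta = matmul(matmul(x, A), B)    # rank-r trainable path
    return add(base, delta, alpha)

x = [[1.0, 2.0]]              # one input of dim d = 2
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (identity)
A = [[1.0], [0.0]]            # d x r with r = 1
B = [[0.5, 0.0]]              # r x k
print(lora_forward(x, W, A, B))
```

For a d x k layer, the trainable parameter count drops from d*k to r*(d + k), which is the source of the memory reduction.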
Extracts and validates structured data from LLM responses using Pydantic for reliable, type-safe outputs and automatic retries.
Integrates Pinecone's managed vector database to power high-performance RAG, semantic search, and recommendation systems.
Facilitates high-performance distributed data processing and streaming for large-scale machine learning workloads.
Generates high-quality music and sound effects from text descriptions using Meta's AudioCraft library.
Evaluates AI code generation models using industry-standard benchmarks and pass@k metrics.
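The pass@k metric has a standard unbiased estimator (from the HumanEval paper): given n samples per problem of which c pass, pass@k = 1 - C(n-c, k) / C(n, k).

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of
    k samples drawn from n (with c passing) solves the task."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 2, 1))  # with 2/10 passing, pass@1 is 0.2
```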
Implements Simple Preference Optimization to align Large Language Models without requiring a reference model.
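What removes the reference model in SimPO is its length-normalized objective: the loss depends only on the policy's own average per-token log-probabilities of the chosen and rejected responses. A sketch of the per-pair loss, with illustrative hyperparameter values:

```python
import math

def simpo_loss(logp_win, len_win, logp_lose, len_lose, beta=2.0, gamma=0.5):
    """SimPO sketch: -log sigmoid(beta*(lp_w/|y_w| - lp_l/|y_l|) - gamma).

    Uses length-normalized sequence log-probs as the implicit reward,
    so no frozen reference model is needed."""
    margin = beta * (logp_win / len_win - logp_lose / len_lose) - gamma
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Preferred response has higher average log-prob -> small loss.
print(simpo_loss(-10.0, 10, -30.0, 10))
```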
Compresses large language models into efficient student models while retaining performance through advanced teacher-student transfer techniques.
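The classic teacher-student transfer objective is a KL divergence between temperature-softened output distributions, scaled by T^2 as in Hinton et al.'s formulation. A minimal sketch over raw logit lists:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Knowledge-distillation sketch: T^2 * KL(teacher || student)
    on temperature-softened distributions."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

print(distill_loss([1.0, 0.0], [1.0, 0.0]))  # identical logits -> zero loss
```

In practice this soft-target term is combined with the ordinary cross-entropy on hard labels, letting the compact student absorb the teacher's full output distribution rather than only its top prediction.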