Discover Claude Skills for data science & ML. Explore 61 skills and find the perfect capabilities for your AI workflows.
Scaffolds and organizes modular AI agent projects using standardized directory layouts and configuration patterns.
Defines robust, type-safe data contracts and Pydantic-based schemas for AI agents using the Atomic Agents framework.
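A minimal sketch of such a data contract, using plain Pydantic v2. Atomic Agents builds its I/O schemas on Pydantic, but the schema names, fields, and validation rules below are illustrative assumptions, not the framework's own definitions:

```python
# Illustrative type-safe agent I/O contract built on plain Pydantic v2.
# Atomic Agents layers its own BaseIOSchema on top of pydantic.BaseModel;
# the exact base class and import path vary by framework version.
from typing import List
from pydantic import BaseModel, Field

class SearchAgentInput(BaseModel):
    """Input contract: what callers must provide."""
    query: str = Field(..., description="Natural-language search query")
    max_results: int = Field(5, ge=1, le=50, description="Result cap")

class SearchAgentOutput(BaseModel):
    """Output contract: what the agent guarantees to return."""
    answers: List[str] = Field(..., description="Ranked answer snippets")
    confidence: float = Field(..., ge=0.0, le=1.0)

# Validation happens at the boundary, so malformed LLM output fails fast:
parsed = SearchAgentOutput.model_validate(
    {"answers": ["Paris is the capital of France."], "confidence": 0.92}
)
```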
Injects dynamic, runtime data into AI agent system prompts to enable context-aware decision making and information sharing.
Orchestrates and configures modular, structured AI agents using the Atomic Agents framework for robust LLM applications.
Generates structured, effective system prompts for AI agents using a modular architecture of background identity, processing steps, and output instructions.
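A hand-rolled sketch of that modular architecture, assembling the three sections named above into one system prompt. The function and section titles are assumptions for illustration; the concrete framework class (e.g. a SystemPromptGenerator) may differ:

```python
# Sketch: assemble a system prompt from background identity, processing
# steps, and output instructions. Section titles are illustrative.
from typing import List

def build_system_prompt(background: List[str], steps: List[str],
                        output_instructions: List[str]) -> str:
    def section(title: str, lines: List[str]) -> str:
        return f"# {title}\n" + "\n".join(f"- {line}" for line in lines)

    return "\n\n".join([
        section("IDENTITY AND PURPOSE", background),
        section("INTERNAL PROCESSING STEPS", steps),
        section("OUTPUT INSTRUCTIONS", output_instructions),
    ])

prompt = build_system_prompt(
    background=["You are a financial news summarizer."],
    steps=["Extract key figures.", "Identify the main market impact."],
    output_instructions=["Respond with exactly three bullet points."],
)
```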
Teaches the core principles of AI context management to optimize agent performance and token efficiency.
Designs and implements robust multi-agent systems using supervisor, swarm, and hierarchical patterns to optimize context management and reasoning.
Optimizes LLM context windows through strategic compaction, observation masking, and partitioning to reduce token costs and improve agent performance.
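One of those strategies, observation masking, can be sketched in a few lines: keep recent tool observations verbatim and replace older ones with short placeholders. The message shape ({"role", "content"} dicts) and helper name are assumptions for illustration:

```python
# Sketch of observation masking: older tool outputs are replaced with a
# placeholder so the context window stays small; recent ones stay verbatim.
from typing import Dict, List

def mask_old_observations(history: List[Dict[str, str]],
                          keep_last: int = 3,
                          placeholder: str = "[observation elided]") -> List[Dict[str, str]]:
    tool_indices = [i for i, m in enumerate(history) if m["role"] == "tool"]
    to_mask = set(tool_indices[:-keep_last]) if keep_last else set(tool_indices)
    return [
        {**m, "content": placeholder} if i in to_mask else m
        for i, m in enumerate(history)
    ]
```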
Optimizes AI agent performance and token usage by implementing advanced context summarization and management strategies for long-running sessions.
Implements production-grade LLM-as-a-judge patterns to evaluate model outputs using structured rubrics, bias mitigation, and pairwise comparison techniques.
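A sketch of one such pattern, pairwise comparison with positional-bias mitigation: each pair is judged twice with the order swapped, and inconsistent verdicts count as ties. The rubric text and the `call_llm` helper are placeholders, not a specific provider's API:

```python
# Pairwise LLM-as-a-judge with position-swap bias mitigation.
# `call_llm` is a stand-in for any chat-completion client.

RUBRIC = ("Compare Response A and Response B for accuracy, completeness, "
          "and clarity. Answer with exactly one letter: A or B.")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

def judge_pair(question: str, resp_a: str, resp_b: str) -> str:
    def verdict(first: str, second: str) -> str:
        prompt = (f"{RUBRIC}\n\nQuestion: {question}\n\n"
                  f"Response A:\n{first}\n\nResponse B:\n{second}")
        return call_llm(prompt).strip().upper()[:1]

    v1 = verdict(resp_a, resp_b)            # original order
    v2 = verdict(resp_b, resp_a)            # swapped order
    v2 = {"A": "B", "B": "A"}.get(v2, v2)   # map back to original labels
    return v1 if v1 == v2 else "tie"        # inconsistent verdicts -> tie
```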
Guides the end-to-end development of LLM-powered applications, from task evaluation and pipeline design to cost estimation and agent architecture.
Implements sophisticated multi-layer memory architectures for AI agents to persist state, track entities, and maintain temporal knowledge across sessions.
Transforms external RDF context into agent mental states to enable cognitive reasoning, explainability, and semantic interoperability in multi-agent systems.
Diagnoses and mitigates performance failures in agentic systems caused by large context windows, attention loss, and information noise.
Designs and implements production-grade Retrieval-Augmented Generation pipelines with advanced chunking, retrieval, and evaluation strategies.
Architects and evaluates robust multi-agent systems using industry-standard patterns and sophisticated orchestration strategies.
Optimizes and deploys PyTorch models to Arm Cortex-M processors using the CMSIS-NN backend.
Generates high-quality music and sound effects from text descriptions using Meta's AudioCraft library.
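A short sketch following AudioCraft's documented MusicGen usage; the checkpoint name, duration, and prompt are placeholders:

```python
# Generate a short clip from a text description with AudioCraft's MusicGen.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio to generate

wav = model.generate(["lo-fi hip hop beat with warm piano"])  # batch of prompts
for idx, one_wav in enumerate(wav):
    # Writes track_0.wav with loudness normalization.
    audio_write(f"track_{idx}", one_wav.cpu(), model.sample_rate,
                strategy="loudness")
```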
Implements Simple Preference Optimization to align Large Language Models without requiring a reference model.
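The core of SimPO is a length-normalized preference loss with no reference-model term; a minimal sketch in PyTorch, with beta and gamma following the paper's notation (reward scale and target margin) and default values chosen for illustration:

```python
# SimPO objective: length-normalized policy log-probs, no reference model.
# logp_* are summed token log-probs of the chosen/rejected responses.
import torch
import torch.nn.functional as F

def simpo_loss(logp_chosen: torch.Tensor, len_chosen: torch.Tensor,
               logp_rejected: torch.Tensor, len_rejected: torch.Tensor,
               beta: float = 2.0, gamma: float = 1.0) -> torch.Tensor:
    # Length-normalized implicit rewards (the key difference from DPO:
    # no reference-model log-probs are subtracted).
    r_chosen = beta * logp_chosen / len_chosen
    r_rejected = beta * logp_rejected / len_rejected
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```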
Facilitates causal interventions on PyTorch models using a declarative framework for mechanistic interpretability experiments.
Optimizes Large Language Models using 4-bit activation-aware weight quantization to achieve 3x faster inference with minimal accuracy loss.
Optimizes large language model fine-tuning using LoRA, QLoRA, and other parameter-efficient methods to significantly reduce memory and hardware requirements.
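A minimal LoRA sketch using Hugging Face PEFT; the base model and hyperparameters are illustrative, and `target_modules` depends on the architecture being adapted:

```python
# Wrap a causal LM with low-rank adapters; only the adapters train.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LoraConfig(
    r=16,                                  # low-rank adapter dimension
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```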
Compresses large language models to 4-bit precision to enable high-speed inference and deployment on consumer-grade hardware.
Implements language-independent subword tokenization using BPE and Unigram algorithms for robust NLP model training and inference.
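A sketch of training and applying such a tokenizer with the SentencePiece library; the corpus path and vocabulary size are placeholders, and BPE versus Unigram is a one-argument switch:

```python
# Train a subword model on raw text (no whitespace pre-tokenization assumed),
# then round-trip a sentence through it.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",      # one sentence per line, any language
    model_prefix="tok",      # writes tok.model and tok.vocab
    vocab_size=8000,
    model_type="unigram",    # or "bpe"
)

sp = spm.SentencePieceProcessor(model_file="tok.model")
text = "Tokenization without whitespace assumptions."
print(sp.encode(text, out_type=str))   # subword pieces
print(sp.decode(sp.encode(text)))      # lossless round-trip
```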
Evaluates AI code generation models using industry-standard benchmarks and pass@k metrics.
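The standard pass@k metric uses the unbiased estimator from the HumanEval paper: given n samples per problem with c correct, pass@k = 1 − C(n−c, k)/C(n, k), computed as a running product for numerical stability:

```python
# Unbiased pass@k estimator (Chen et al., HumanEval).
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k draws (without replacement)
    from n samples containing c correct solutions is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples per problem, 37 correct -> estimate pass@10.
print(round(pass_at_k(n=200, c=37, k=10), 4))
```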
Transcribes and translates audio across 99 languages using OpenAI's robust general-purpose speech recognition models.
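A minimal sketch with the open-source whisper package; the model size and audio path are placeholders, and the language is auto-detected unless specified:

```python
# Transcribe in the source language, then translate the same file to English.
import whisper

model = whisper.load_model("base")

result = model.transcribe("meeting.mp3")
print(result["language"], result["text"][:200])

translated = model.transcribe("meeting.mp3", task="translate")
print(translated["text"][:200])
```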
Facilitates high-performance distributed data processing and streaming for large-scale machine learning workloads.
Compresses large language models into efficient student models while retaining performance through advanced teacher-student transfer techniques.
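The classic form of that transfer is the temperature-scaled distillation loss (Hinton et al.), blending the teacher's soft targets with the hard-label loss; alpha and T below are illustrative hyperparameters:

```python
# Soft-target distillation: KL between temperature-softened distributions
# plus the usual cross-entropy on ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```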
Builds sophisticated LLM applications using agents, chains, and Retrieval-Augmented Generation (RAG) with a unified interface.
Optimizes LLM serving and structured data generation with RadixAttention prefix caching for high-performance agentic workflows.