Discover Agent Skills for data science & ML. Browse 61 skills for Claude, ChatGPT & Codex.
Orchestrates and configures modular, structured AI agents using the Atomic Agents framework for robust LLM applications.
Injects dynamic, runtime data into AI agent system prompts to enable context-aware decision making and information sharing.
Generates structured, effective system prompts for AI agents using a modular architecture of background identity, processing steps, and output instructions.
Defines robust, type-safe data contracts and Pydantic-based schemas for AI agents using the Atomic Agents framework.
Designs and implements robust multi-agent systems using supervisor, swarm, and hierarchical patterns to optimize context management and reasoning.
Optimizes AI agent performance and token usage by implementing advanced context summarization and management strategies for long-running sessions.
Optimizes LLM context windows through strategic compaction, observation masking, and partitioning to reduce token costs and improve agent performance.
Implements production-grade LLM-as-a-judge patterns to evaluate model outputs using structured rubrics, bias mitigation, and pairwise comparison techniques.
Masters the core principles of AI context management to optimize agent performance and token efficiency.
Guides the end-to-end development of LLM-powered applications, from task evaluation and pipeline design to cost estimation and agent architecture.
Implements sophisticated multi-layer memory architectures for AI agents to persist state, track entities, and maintain temporal knowledge across sessions.
Diagnoses and mitigates performance failures in agentic systems caused by large context windows, attention loss, and information noise.
Transforms external RDF context into agent mental states to enable cognitive reasoning, explainability, and semantic interoperability in multi-agent systems.
Architects and evaluates robust multi-agent systems using industry-standard patterns and sophisticated orchestration strategies.
Designs and implements production-grade Retrieval-Augmented Generation pipelines with advanced chunking, retrieval, and evaluation strategies.
Optimizes and deploys PyTorch models to Arm Cortex-M processors using the CMSIS-NN backend.
Generates high-quality music and sound effects from text descriptions using Meta's AudioCraft library.
Implements Simple Preference Optimization to align Large Language Models without requiring a reference model.
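For reference, the SimPO objective this entry refers to can be sketched in a few lines. Unlike DPO, SimPO's implicit reward is the length-normalized average log-probability of a response, so no frozen reference model is needed. This is a minimal, framework-free sketch of the per-pair loss, not the skill's actual implementation; the variable names are illustrative.

```python
import math

def simpo_loss(logp_chosen: float, len_chosen: int,
               logp_rejected: float, len_rejected: int,
               beta: float = 2.0, gamma: float = 0.5) -> float:
    """SimPO pairwise loss for one (chosen, rejected) response pair.

    The implicit reward is the length-normalized sequence log-probability,
    scaled by beta; gamma is a target reward margin. No reference-model
    log-probabilities appear anywhere, which is SimPO's key simplification.
    """
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    margin = reward_chosen - reward_rejected - gamma
    # -log sigmoid(margin): small when the chosen response is clearly preferred
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

In a real fine-tuning loop these log-probabilities come from the policy model's forward pass and the loss is averaged over a batch; the sketch only shows how the margin is formed.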
Implements language-independent subword tokenization using BPE and Unigram algorithms for robust NLP model training and inference.
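The BPE half of this entry follows a well-known merge procedure: repeatedly count adjacent symbol pairs across the corpus and merge the most frequent one. Below is a minimal sketch of that loop on a toy corpus (the classic lowest/newest/widest example), assuming words are pre-split into characters with an end-of-word marker; production tokenizers guard symbol boundaries with regexes rather than plain string replacement.

```python
from collections import Counter

def get_pair_counts(vocab: dict[str, int]) -> Counter:
    """Count adjacent symbol pairs; vocab maps space-joined symbols to word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair: tuple[str, str], vocab: dict[str, int]) -> dict[str, int]:
    """Fuse every occurrence of the pair into a single new symbol."""
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Toy corpus: characters separated by spaces, "</w>" marks end of word.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}
merges = []
for _ in range(5):  # learn 5 merge rules
    best = max(get_pair_counts(vocab), key=get_pair_counts(vocab).get)
    vocab = merge_pair(best, vocab)
    merges.append(best)
print(merges[0])  # → ('e', 's'), the most frequent pair (count 9)
```

Tokenizing new text then just replays the learned merges in order; Unigram tokenization, the other algorithm named here, instead prunes a large seed vocabulary by likelihood and is not shown.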
Optimizes Large Language Models using 4-bit activation-aware weight quantization to achieve 3x faster inference with minimal accuracy loss.
Optimizes large language model fine-tuning using LoRA, QLoRA, and other parameter-efficient methods to significantly reduce memory and hardware requirements.
Compresses large language models to 4-bit precision to enable high-speed inference and deployment on consumer-grade hardware.
Optimizes LLM serving and structured data generation with RadixAttention prefix caching for high-performance agentic workflows.
Compresses large language models into efficient student models while retaining performance through advanced teacher-student transfer techniques.
Evaluates AI code generation models using industry-standard benchmarks and pass@k metrics.
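The pass@k metric named here has a standard unbiased estimator (popularized by the HumanEval benchmark): given n generated samples per problem of which c pass the tests, pass@k is the probability that at least one of k drawn samples passes. A minimal sketch, using only the standard library:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator for one problem.

    n: total samples generated, c: samples that pass the unit tests,
    k: evaluation budget. Computes 1 - C(n-c, k) / C(n, k), the chance
    that a size-k draw without replacement contains a passing sample.
    """
    if n - c < k:
        return 1.0  # too few failing samples: every size-k draw passes
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 50 of which pass
print(pass_at_k(200, 50, 1))  # → 0.25
```

Benchmark scores average this quantity over all problems; computing it combinatorially avoids the high variance of naively sampling k of the n generations.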
Implements and manages RWKV architectures for efficient, linear-time AI inference and long-context processing.
Accelerates LLM inference speed by up to 3.6x using speculative decoding, Medusa heads, and lookahead techniques without sacrificing model quality.
Integrates OpenAI's CLIP model to enable zero-shot image classification, semantic image search, and cross-modal retrieval without task-specific training.
Builds sophisticated LLM applications using agents, chains, and Retrieval-Augmented Generation (RAG) with a unified interface.
Standardizes and accelerates PyTorch model training with built-in support for distributed computing, logging, and engineering best practices.