Discover Agent Skills for data science & ML. Browse 61 skills for Claude, ChatGPT & Codex.
Architects sophisticated LLM applications using LangChain 1.x and LangGraph for stateful agents, complex workflows, and advanced memory management.
Builds sophisticated Retrieval-Augmented Generation (RAG) systems to ground LLM responses in proprietary data and external knowledge bases.
Implements REINFORCE Leave-One-Out (RLOO) estimation for stable policy optimization and reasoning model training.
Accelerates LLM instruction-tuning using Unsloth-optimized SFTTrainer for faster, memory-efficient model adaptation.
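A minimal sketch of what such an Unsloth-accelerated SFT run can look like, assuming the unsloth, trl, and datasets packages and a local JSONL file with a "text" field; the model name, dataset path, and hyperparameters are illustrative, and exact trainer argument names vary across trl versions:

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load a 4-bit base model through Unsloth's optimized kernels (model name is illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumed local instruction dataset with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()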
Architects and implements autonomous AI agent systems using sophisticated planning, memory, and tool-integration strategies.
Integrates real-time sentiment analysis from xAI's Grok with major financial APIs to provide a holistic view of market trends, price action, and fundamental health.
Implements Retrieval-Augmented Generation (RAG) workflows to ground AI responses with external document context and reduce hallucinations.
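A minimal sketch of the retrieval step such a workflow builds on, assuming the sentence-transformers package; the documents and query are placeholders, and a production system would add chunking, a vector store, and prompt assembly:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Embed a small document collection and a query, then retrieve by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")
documents = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Shipping to international addresses takes 7-10 business days.",
]
doc_embeddings = model.encode(documents, normalize_embeddings=True)

query = "How long is the warranty period?"
query_embedding = model.encode([query], normalize_embeddings=True)

# With normalized vectors, the dot product equals cosine similarity.
scores = doc_embeddings @ query_embedding[0]
best = documents[int(np.argmax(scores))]

# The retrieved passage is prepended to the prompt so the LLM answers from context.
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```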
Fine-tunes large language models using PyTorch, HuggingFace, and Unsloth to adapt AI behaviors to specific datasets and tasks.
Imports GGUF models from HuggingFace directly into Ollama for local inference and model management.
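A sketch of one way to script that import, assuming the huggingface_hub package and a local ollama CLI; the repository, file, and model names are illustrative:

```python
import subprocess
from pathlib import Path
from huggingface_hub import hf_hub_download

# Download a GGUF file from the Hugging Face Hub (repo and filename are illustrative).
gguf_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
)

# Write a minimal Modelfile pointing Ollama at the downloaded weights.
Path("Modelfile").write_text(f"FROM {gguf_path}\n")

# Register the model with Ollama, then run it locally.
subprocess.run(["ollama", "create", "mistral-local", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "mistral-local", "Say hello."], check=True)
```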
Provides technical blueprints and implementation patterns for the Transformer architecture to guide LLM development and fine-tuning.
Builds custom plugins for the Semantik search engine to handle document ingestion, embeddings, and AI-powered reranking.
Accelerates machine learning inference using Unsloth and vLLM backends for 2x faster token generation.
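A minimal sketch of offline batch generation with the vllm package, assuming a CUDA GPU; the model name is illustrative:

```python
from vllm import LLM, SamplingParams

# Load a model into vLLM's paged-attention engine (model name is illustrative).
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

# Generate completions for a batch of prompts with shared sampling settings.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)
outputs = llm.generate(["Explain KV caching in one paragraph."], params)

for output in outputs:
    print(output.outputs[0].text)
```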
Evaluates LLM output quality and optimizes prompt templates using Evidently.ai metrics and LLM-as-a-Judge patterns.
Streamlines the development and training of reward models for RLHF pipelines and reasoning-quality scoring.
Builds sophisticated LLM applications using LangChain for prompt management, model chaining, and structured output parsing.
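A minimal sketch of a prompt-to-parser chain in LangChain's expression language, assuming the langchain-openai integration and an OPENAI_API_KEY in the environment; the model name is illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Prompt template -> chat model -> string parser, composed with the | operator.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain composes prompts, models, and parsers into reusable chains."}))
```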
Implements a persistent knowledge graph memory system for Auto-Claude to retain context and learn patterns across development sessions.
Architects and optimizes LLM-powered applications using structured methodologies, pipeline design, and agent-assisted development patterns.
Optimizes large language models for efficient inference and training by reducing memory footprint with 4-bit and 8-bit quantization.
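A minimal sketch of loading a model in 4-bit NF4 precision via transformers and bitsandbytes, assuming a CUDA GPU; the model name is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Configure 4-bit NF4 quantization with bfloat16 compute for the matrix multiplies.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",  # illustrative model name
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```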
Transforms external RDF context into formal Belief-Desire-Intention (BDI) models to enable rational agency and explainable reasoning in AI agents.
Fine-tunes vision-language models like Pixtral and Ministral using Unsloth's FastVisionModel optimizations for faster training.
Optimizes Large Language Models using Direct Preference Optimization to align behavior with preferred response pairs without explicit reward modeling.
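A minimal sketch of a DPO run with TRL's DPOTrainer, assuming a preference dataset with prompt/chosen/rejected columns; the model and dataset names are illustrative, and exact argument names vary across trl versions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference pairs: each row has a prompt, a chosen response, and a rejected one.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=2),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```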
Streamlines parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and Unsloth to optimize memory and training speed.
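A minimal sketch of attaching LoRA adapters with the peft library; the model name and target module list depend on the architecture and are illustrative here:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # illustrative

# Train low-rank adapter matrices instead of the full weight matrices.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```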
Implements Group Relative Policy Optimization for efficient LLM alignment and reinforcement learning from human feedback.
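A small illustration of the core idea behind GRPO: rewards for a group of completions sampled from the same prompt are normalized against that group's own mean and standard deviation instead of a learned value baseline. The numbers and helper function below are purely illustrative:

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    """Normalize each completion's reward against its own group (one prompt, G samples)."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: 4 completions sampled for the same prompt, scored by a reward function.
rewards = np.array([0.2, 0.9, 0.5, 0.4])
advantages = group_relative_advantages(rewards)
print(advantages)  # positive for above-average completions, negative for below-average
```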
Provides foundational expertise in context engineering to optimize AI agent performance and manage token usage effectively.
Persists project knowledge and error-solution patterns across autonomous coding sessions to prevent repetitive mistakes.
Simplifies the selection and optimization of xAI Grok models for development tasks through detailed capability and cost analysis.
Configures and manages multiple Conda environment locations across different research group storage allocations on the UF HiPerGator supercomputer.
Organizes scientific research repositories by decoupling core code from experimental data and notebook outputs.
Provides a clean, Pythonic interface for local LLM inference, chat completions, and model management using the official Ollama library.
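A minimal sketch using the official ollama Python package, assuming an Ollama server is running locally; the model name is illustrative:

```python
import ollama

# Pull the model if it isn't available locally yet (name is illustrative).
ollama.pull("llama3.2")

# Single-shot chat completion against the local Ollama server.
response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain embeddings in two sentences."}],
)
print(response["message"]["content"])
```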
Designs and implements sophisticated multi-agent systems using supervisor, swarm, and hierarchical patterns to solve complex context management challenges.