Discover data science & ml Claude skills. Browse 61 skills and find the perfect capability for your AI workflow.
Evaluates LLM output quality and optimizes prompt templates using Evidently.ai metrics and LLM-as-a-Judge patterns.
Streamlines the development and training of reward models for RLHF pipelines and thinking quality scoring.
Optimizes AI agent context through compression, masking, and strategic partitioning to maximize token efficiency and model performance.
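One context-partitioning tactic behind a skill like this can be sketched in plain Python: keep the system prompt, mask out tool-call noise, and retain only the most recent turns that fit a token budget. The function name and the whitespace word-count approximation are illustrative assumptions; real agents use a model tokenizer.

```python
# Hypothetical sketch: compress an agent's message history under a token
# budget. Tokens are approximated by whitespace word counts here.

def compress_context(messages, budget=50):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] not in ("system", "tool")]
    kept, used = [], sum(len(m["content"].split()) for m in system)
    for m in reversed(rest):  # walk newest turns first
        cost = len(m["content"].split())
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    # System prompt stays first; surviving turns keep their original order.
    return system + list(reversed(kept))
```

Dropping tool output wholesale is the bluntest form of masking; production skills typically summarize it instead.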
Builds sophisticated LLM applications using LangChain for prompt management, model chaining, and structured output parsing.
Architects and optimizes LLM-powered applications using structured methodologies, pipeline design, and agent-assisted development patterns.
Optimizes large language models for efficient inference and training by reducing memory footprint with precision-reduction techniques such as 4-bit and 8-bit quantization.
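The core idea of 8-bit quantization can be shown with a minimal symmetric scheme on plain Python lists. Real LLM quantization (e.g. NF4 in bitsandbytes) adds per-block scales and packed storage; this sketch only illustrates the map-to-int8-and-back round trip.

```python
def quantize_int8(weights):
    """Map floats to int8 values [-127, 127] with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized representation."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The quantization error (`weights` vs `approx`) is what mixed-precision schemes try to keep small for outlier-heavy layers.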
Builds custom plugins for the Semantik search engine to handle document ingestion, embeddings, and AI-powered reranking.
Implements a persistent knowledge graph memory system for Auto-Claude to retain context and learn patterns across development sessions.
Transforms external RDF context into formal Belief-Desire-Intention (BDI) models to enable rational agency and explainable reasoning in AI agents.
Fine-tunes vision-language models like Pixtral and Ministral using Unsloth's FastVisionModel optimizations for faster training.
Optimizes Large Language Models using Direct Preference Optimization to align behavior with preferred response pairs without explicit reward modeling.
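For a single preference pair, the DPO objective reduces to a logistic loss on a reward margin computed from policy and reference log-probabilities. A minimal sketch, assuming you already have the summed log-probs of the chosen and rejected responses under both models:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss: -log sigmoid(beta * ((pi_w - ref_w) - (pi_l - ref_l)))."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Example pair: policy slightly prefers the rejected response, so the
# margin is negative and the loss exceeds log(2).
loss = dpo_loss(logp_chosen=-4.2, logp_rejected=-3.9,
                ref_chosen=-4.0, ref_rejected=-4.0)
```

`beta` controls how strongly the policy is pulled away from the reference; a zero margin gives exactly `log(2)`.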
Streamlines parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and Unsloth to optimize memory and training speed.
Persists project knowledge and error-solution patterns across autonomous coding sessions to prevent repetitive mistakes.
Implements Group Relative Policy Optimization for efficient LLM alignment and reinforcement learning from human feedback.
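GRPO's distinguishing step is replacing a learned value-function baseline with group-relative advantages: each sampled completion's reward is normalized against the mean and standard deviation of its own group. A minimal sketch:

```python
import statistics

def group_advantages(rewards, eps=1e-8):
    """Normalize each reward against its sampling group's statistics."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four completions sampled for one prompt, scored by a reward model.
advantages = group_advantages([1.0, 0.0, 0.5, 0.5])
```

Advantages sum to (approximately) zero by construction, so above-average completions are reinforced and below-average ones are suppressed without a critic network.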
Provides foundational expertise in context engineering to optimize AI agent performance and manage token usage effectively.
Provides a clean, Pythonic interface for local LLM inference, chat completions, and model management using the official Ollama library.
Designs and implements sophisticated multi-agent systems using supervisor, swarm, and hierarchical patterns to solve complex context management challenges.
Simplifies the selection and optimization of xAI Grok models for development tasks through detailed capability and cost analysis.
Configures and manages multiple Conda environment locations across different research group storage allocations on the UF HiPerGator supercomputer.
Diagnoses and mitigates AI agent performance failures caused by long-context attention loss, context poisoning, and context clash.
Builds high-quality fine-tuning datasets from literary works to train AI models in specific authorial voices and writing styles.
Builds and manages fully managed RAG solutions using Amazon Bedrock for semantic search and document-based AI applications.
Optimizes LLM fine-tuning via advanced QLoRA patterns, hyperparameter tuning, and memory-efficient implementation strategies.
Guides the development of high-performance ML and AI applications in Rust using memory-efficient patterns and GPU acceleration.
Designs and implements multi-layered memory architectures for AI agents to ensure long-term state persistence and entity consistency.
Optimizes vector database performance by balancing search latency, recall accuracy, and memory footprint.
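The baseline that every vector index is measured against is an exact brute-force scan. A minimal cosine-similarity search over in-memory vectors (function names are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query, vectors, k=2):
    """Return the top-k (index, score) pairs by cosine similarity."""
    scored = [(i, cosine(query, v)) for i, v in enumerate(vectors)]
    return sorted(scored, key=lambda t: -t[1])[:k]

results = search([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Production databases trade this exact scan (100% recall, linear latency) for approximate indexes such as HNSW or IVF, which is precisely the latency/recall/memory balance described above.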
Manages ComfyUI instances for node-based Stable Diffusion image generation with automated GPU configuration and model management.
Manages FiftyOne dataset visualization and curation environments using Podman Quadlet containers with integrated MongoDB sidecars.
Persists learned solutions and error patterns across autonomous coding sessions to create a self-healing development environment.
Generates and refines structured physics analysis specifications using standardized templates and domain-specific best practices.