Discover data science & ML Claude skills. Browse 61 skills and find the perfect capability for your AI workflows.
Conducts deep-dive information gathering and data synthesis to produce actionable strategic insights and comprehensive reports.
Designs, optimizes, and deploys scalable large language model architectures and high-performance RAG systems.
Transforms vague human intent into structured, production-ready prompt artifacts through iterative clarification.
Executes machine learning examples on remote GPU hosts via SSH by syncing minimal workspaces and launching Docker-based training scripts.
Provides comprehensive technical guidance on Reinforcement Learning from Human Feedback for aligning large language models with human preferences.
Loads and deploys state-of-the-art pretrained models for text, vision, and audio using the Hugging Face library.
Optimizes and runs large language models on Apple Silicon using the native MLX framework for high-performance inference and fine-tuning.
Optimizes LLM output quality by providing domain-specific guidance and structural patterns for crafting high-performance prompts.
Architects sophisticated AI agent systems and LLM workflows using proven patterns like ReAct, prompt chaining, and orchestrator-worker models.
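The ReAct pattern named above interleaves model "thoughts" with tool calls until a final answer is produced. A minimal sketch, with the LLM replaced by a deterministic scripted function and an illustrative `Action: tool[input]` scratchpad format (both are assumptions, not any specific library's API):

```python
"""Minimal ReAct (Reason + Act) loop sketch with a stubbed model."""

def calculator(expression: str) -> str:
    # Toy tool: evaluate a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def scripted_model(scratchpad: str) -> str:
    # Stand-in for the LLM: emits a Thought/Action first, then a Final Answer
    # once a tool observation is present in the scratchpad.
    if "Observation:" not in scratchpad:
        return "Thought: I should compute the product.\nAction: calculator[6 * 7]"
    return "Final Answer: 42"

def react_loop(question: str, model, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):
        reply = model(scratchpad)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]", run the tool, append the observation.
        action = reply.split("Action:")[1].strip()
        tool_name, arg = action.split("[", 1)
        observation = TOOLS[tool_name.strip()](arg.rstrip("]"))
        scratchpad += f"\n{reply}\nObservation: {observation}"
    return "No answer within step budget"

print(react_loop("What is 6 * 7?", scripted_model))  # → 42
```

In a real agent the scripted function would be an LLM call and the loop would also handle malformed actions and unknown tools.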
Matches complex research questions to rigorous experimental designs, sampling strategies, and validity controls using PhD-level frameworks.
Identifies and mitigates performance loss in large-context agent systems using proven architectural patterns and empirical benchmarks.
Optimizes AI agent context windows using advanced compaction, masking, and partitioning techniques to improve performance and reduce token costs.
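Compaction, one of the techniques listed above, keeps the most recent turns verbatim and collapses older history into a summary. A sketch under stated assumptions: the summarizer is stubbed (a real agent would call an LLM) and tokens are approximated by whitespace word count.

```python
"""Context-window compaction sketch: recent turns kept, older turns summarized."""

def count_tokens(text: str) -> int:
    # Crude token estimate (assumption): whitespace-separated words.
    return len(text.split())

def stub_summarize(turns: list[str]) -> str:
    # Stand-in for an LLM summarization call.
    return f"[summary of {len(turns)} earlier turns]"

def compact(history: list[str], budget: int) -> list[str]:
    # Walk backwards, keeping recent turns until the token budget is spent;
    # everything older is replaced by a single summary entry.
    kept, used = [], 0
    for turn in reversed(history):
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    older = history[: len(history) - len(kept)]
    kept.reverse()
    return ([stub_summarize(older)] if older else []) + kept

history = [f"turn {i}: " + "word " * 10 for i in range(20)]
compacted = compact(history, budget=50)
print(len(compacted))  # → 5 (one summary entry + four recent turns)
```

Production variants typically summarize recursively and pin the system prompt outside the budget; this sketch shows only the core keep-recent/summarize-old split.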
Masters the mechanics of LLM context to design efficient, high-performance agent architectures and debugging strategies.
Diagnoses and mitigates performance loss in AI agents by identifying patterns like lost-in-middle, context poisoning, and distraction.
Designs and implements advanced memory architectures to help AI agents persist state, maintain entity consistency, and reason over structured knowledge.
Masters the mechanics, constraints, and optimization strategies of context within AI agent architectures to improve performance and reduce costs.
Designs and implements scalable multi-agent systems to overcome context limitations and manage complex task decomposition.
Designs and implements robust multi-agent systems to overcome context limits and handle complex, parallelizable tasks.
Designs and implements sophisticated memory architectures for AI agents to maintain state, consistency, and long-term knowledge.
Implements sophisticated LLM-as-a-Judge techniques to evaluate, compare, and benchmark AI model outputs with high precision.
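A common LLM-as-a-Judge control is pairwise comparison with position swapping: the judge is asked twice with the answers' slots reversed, and only a consistent verdict counts, which guards against position bias. A sketch with the judge stubbed by a deterministic length heuristic (an assumption; in practice it would be an LLM call returning "A" or "B"):

```python
"""Pairwise LLM-as-a-Judge sketch with position-swap debiasing."""

def stub_judge(prompt: str, answer_a: str, answer_b: str) -> str:
    # Stand-in judge: prefers the longer answer.
    return "A" if len(answer_a) >= len(answer_b) else "B"

def judge_pair(prompt: str, answer_1: str, answer_2: str, judge) -> str:
    # Ask twice with positions swapped; only a consistent verdict counts.
    first = judge(prompt, answer_1, answer_2)   # answer_1 in slot A
    second = judge(prompt, answer_2, answer_1)  # answer_1 in slot B
    if first == "A" and second == "B":
        return "answer_1"
    if first == "B" and second == "A":
        return "answer_2"
    return "tie"  # verdict flipped with position: treat as a tie

verdict = judge_pair("Explain RAG.", "short", "a much longer answer", stub_judge)
print(verdict)  # → answer_2
```

Benchmarks built on this pattern usually aggregate many such verdicts into win rates or Elo-style ratings.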
Implements robust LLM-as-a-Judge evaluation techniques to measure, compare, and optimize the quality of AI-generated outputs.
Provides rigorous, PhD-level critiques of academic manuscripts and research methodologies to ensure high-impact scholarly standards.
Maps LinkML enum permissible values to verified ontology terms and CURIEs using the Ontology Access Kit (OAK).
Bootstraps a production-ready LangChain TypeScript project with optimal configuration for AI agents and LangGraph.
Integrates Hyperspell’s long-term memory and RAG capabilities into your project with automated configuration and SDK setup.
Conducts rigorous statistical analysis, power calculations, and data interpretation using PhD-level methodologies.
Optimizes AI prompts using research-backed frameworks and production-ready templates to ensure high-quality, cost-effective model outputs.
Generates accurate, data-driven charts and business visualizations using Python for professional reports and analysis.
Fine-tunes AI models on 0G's decentralized GPU network using a streamlined CLI and SDK workflow.
Optimizes agent behavior by automatically identifying the active LLM and adjusting execution configurations for maximum cross-model compatibility.