Discover Claude skills in the data science & ml category. Browse 61 skills to find the perfect capability for your AI workflow.
Integrates Hyperspell’s long-term memory and RAG capabilities into your project with automated configuration and SDK setup.
Configures and manages local LLM inference using Mozilla Llamafile to provide offline, OpenAI-compatible AI capabilities.
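Once a llamafile server is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of the request body such a client would send — the default port, endpoint path, and model name below are illustrative assumptions, not details taken from this listing:

```python
import json

def build_chat_request(user_message: str, model: str = "LLaMA_CPP") -> bytes:
    """Build an OpenAI-style chat-completion request body.

    The model name "LLaMA_CPP" and the localhost endpoint noted below are
    assumptions for illustration; check your llamafile's own startup output.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(body).encode("utf-8")

# A client would POST this to e.g. http://localhost:8080/v1/chat/completions
payload = build_chat_request("Hello!")
```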
Optimizes AI prompts using research-backed frameworks and production-ready templates to ensure high-quality, cost-effective model outputs.
Simplifies LLM integration by providing a consistent OpenAI-style Python interface for over 100 cloud and local providers.
Creates, analyzes, and manages complex Excel spreadsheets with a focus on financial modeling standards and formula integrity.
Guides the design and execution of rigorous qualitative studies, thematic analysis, and coding schemes for non-numerical data.
Conducts rigorous statistical analysis, power calculations, and data interpretation using PhD-level methodologies.
Facilitates breakthrough problem-solving by applying first-principles reasoning and cross-domain analogies to overcome complex technical bottlenecks.
Provides PhD-level guidance for selecting, designing, and validating academic and scientific research methodologies.
Facilitates PhD-level academic research by managing specialized methodology skills and enforcing rigorous scientific standards.
Profiles and optimizes Python code to identify bottlenecks, reduce latency, and minimize memory consumption using industry-standard tools.
Orchestrates task delegation to external LLM services by offloading high-token execution while maintaining central reasoning within Claude.
Conducts deep-dive information gathering and data synthesis to produce actionable strategic insights and comprehensive reports.
Designs, optimizes, and deploys scalable large language model architectures and high-performance RAG systems.
Optimizes LLM output quality by providing domain-specific guidance and structural patterns for crafting high-performance prompts.
Loads and deploys state-of-the-art pretrained models for text, vision, and audio using the Hugging Face library.
Optimizes and runs large language models on Apple Silicon using the native MLX framework for high-performance inference and fine-tuning.
Architects sophisticated AI agent systems and LLM workflows using proven patterns like ReAct, prompt chaining, and orchestrator-worker models.
Provides comprehensive technical guidance on Reinforcement Learning from Human Feedback for aligning large language models with human preferences.
Matches complex research questions to rigorous experimental designs, sampling strategies, and validity controls using PhD-level frameworks.
Identifies and mitigates performance loss in large-context agent systems using proven architectural patterns and empirical benchmarks.
Masters the mechanics of LLM context to design efficient, high-performance agent architectures and debugging strategies.
Optimizes AI agent context windows using advanced compaction, masking, and partitioning techniques to improve performance and reduce token costs.
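One common compaction tactic the entry above alludes to is trimming conversation history to a token budget while pinning the system prompt and the most recent turns. A minimal sketch using a naive whitespace token count — an assumption for illustration; a real implementation would use the model's tokenizer:

```python
def compact_history(messages, max_tokens=1000):
    """Keep the system message plus the newest turns that fit the budget.

    Token counts here are naive whitespace splits (an illustrative
    assumption); swap in the model's real tokenizer in practice.
    """
    def count(msg):
        return len(msg["content"].split())

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    budget = max_tokens - sum(count(m) for m in system)
    kept = []
    for msg in reversed(rest):  # walk newest-first, dropping the oldest turns
        if count(msg) > budget:
            break
        kept.append(msg)
        budget -= count(msg)
    return system + list(reversed(kept))
```

Masking and partitioning are complementary: rather than deleting old turns, they hide or shard them so a summary or retrieval step can bring them back on demand.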
Designs and implements advanced memory architectures to help AI agents persist state, maintain entity consistency, and reason over structured knowledge.
Designs and implements robust multi-agent systems to overcome context limits and handle complex, parallelizable tasks.
Diagnoses and mitigates performance loss in AI agents by identifying patterns like lost-in-middle, context poisoning, and distraction.
Designs and implements scalable multi-agent systems to overcome context limitations and manage complex task decomposition.
Masters the mechanics, constraints, and optimization strategies of context within AI agent architectures to improve performance and reduce costs.
Implements robust LLM-as-a-Judge evaluation techniques to measure, compare, and optimize the quality of AI-generated outputs.
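A core mechanical piece of any LLM-as-a-Judge setup is turning the judge model's free-text verdict into a comparable score. A minimal sketch assuming the judge is prompted to end its reply with a line like `Score: 4` — the rubric and output format here are illustrative assumptions:

```python
import re

# Hypothetical judge prompt; the 1-5 rubric and "Score: N" convention
# are assumptions chosen for easy parsing, not a fixed standard.
JUDGE_PROMPT = """Rate the answer below from 1 (poor) to 5 (excellent)
for factual accuracy and helpfulness. End with a line "Score: N".

Answer:
{answer}
"""

def parse_score(judge_output: str):
    """Extract the 1-5 score from the judge's reply; None if absent."""
    match = re.search(r"Score:\s*([1-5])\b", judge_output)
    return int(match.group(1)) if match else None
```

Parsing failures (`None`) are worth tracking separately: a judge that often breaks format is itself a signal to tighten the prompt.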