Discover Claude Skills for data science & ML. Explore 61 skills and find the perfect capabilities for your AI workflows.
Enables memory-efficient fine-tuning of large language models using 4-bit quantization and LoRA adapters.
Implements parameter-efficient fine-tuning using Low-Rank Adaptation (LoRA) to specialize large language models with minimal resource overhead.
Optimizes AI prompts using research-backed frameworks and production-ready templates to ensure high-quality, cost-effective model outputs.
Simplifies LLM interactions by providing a unified Python interface for 100+ AI providers with consistent OpenAI-format syntax.
Configures and manages local LLM inference using Mozilla Llamafile to provide offline, OpenAI-compatible AI capabilities.
Integrates Hyperspell’s long-term memory and RAG capabilities into your project with automated configuration and SDK setup.
Guides the design and execution of rigorous qualitative studies, thematic analysis, and coding schemes for non-numerical data.
Conducts rigorous statistical analysis, power calculations, and data interpretation using PhD-level methodologies.
Facilitates breakthrough problem-solving by applying first-principles reasoning and cross-domain analogies to overcome complex technical bottlenecks.
Provides PhD-level guidance for selecting, designing, and validating academic and scientific research methodologies.
Facilitates PhD-level academic research by managing specialized methodology skills and enforcing rigorous scientific standards.
Creates, analyzes, and manages complex Excel spreadsheets with a focus on financial modeling standards and formula integrity.
Profiles and optimizes Python code to identify bottlenecks, reduce latency, and minimize memory consumption using industry-standard tools.
Orchestrates task delegation to external LLM services by offloading high-token execution while maintaining central reasoning within Claude.
Provides comprehensive technical guidance on Reinforcement Learning from Human Feedback for aligning large language models with human preferences.
Optimizes LLM output quality by providing domain-specific guidance and structural patterns for crafting high-performance prompts.
Loads and deploys state-of-the-art pretrained models for text, vision, and audio using the Hugging Face library.
Architects sophisticated AI agent systems and LLM workflows using proven patterns like ReAct, prompt chaining, and orchestrator-worker models.
Conducts deep-dive information gathering and data synthesis to produce actionable strategic insights and comprehensive reports.
Matches complex research questions to rigorous experimental designs, sampling strategies, and validity controls using PhD-level frameworks.
Designs, optimizes, and deploys scalable large language model architectures and high-performance RAG systems.
Optimizes and runs large language models on Apple Silicon using the native MLX framework for high-performance inference and fine-tuning.
Designs and implements sophisticated memory architectures for AI agents to maintain state, consistency, and long-term knowledge.
Teaches the mechanics, constraints, and optimization strategies of context within AI agent architectures to improve performance and reduce costs.
Implements sophisticated LLM-as-a-Judge techniques to evaluate, compare, and benchmark AI model outputs with high precision.
Designs and implements robust multi-agent systems to overcome context limits and handle complex, parallelizable tasks.
Diagnoses and mitigates performance loss in AI agents by identifying patterns like lost-in-middle, context poisoning, and distraction.
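Several of the fine-tuning skills above (QLoRA, LoRA, MLX) rest on the same low-rank update, W' = W + (alpha / r) · B·A, where only the small A and B matrices are trained. A minimal pure-Python sketch of that update, with illustrative dimensions (real skills would use PEFT or MLX rather than this toy code):

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_update(W, A, B, alpha):
    """Apply a rank-r LoRA delta to a frozen weight matrix W.

    A has shape (r, d_in) and B has shape (d_out, r); only A and B are
    trained, so the trainable parameter count is r * (d_in + d_out)
    instead of d_in * d_out.
    """
    r = len(A)
    scale = alpha / r
    delta = matmul(B, A)  # (d_out, d_in) low-rank update
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2x2 frozen weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # (r=1, d_in=2)
B = [[0.5], [0.25]]        # (d_out=2, r=1)
W_prime = lora_update(W, A, B, alpha=1.0)
print(W_prime)  # [[1.5, 1.0], [0.25, 1.5]]
```

The memory savings the QLoRA skill refers to come from keeping W frozen (and, in QLoRA, 4-bit quantized) while gradients flow only through A and B.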
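The LLM-as-a-Judge entry above typically relies on pairwise comparison with a position swap to control for order bias: the judge is run on both orderings of the two answers, and a verdict counts only if both runs agree. A minimal prompt-construction sketch (the template wording and function name here are illustrative, not a specific skill's API):

```python
# Illustrative pairwise-judge template; real skills would tune this wording.
JUDGE_TEMPLATE = (
    "You are an impartial judge. Given the question and two answers,\n"
    "reply with 'A' or 'B' for the better answer, or 'TIE'.\n\n"
    "Question: {question}\n\nAnswer A:\n{a}\n\nAnswer B:\n{b}\n"
)

def pairwise_prompts(question, answer_1, answer_2):
    """Build both orderings of a pairwise judge prompt.

    Running the judge on both prompts and requiring the verdicts to agree
    (after relabeling) controls for the judge's position bias.
    """
    return (
        JUDGE_TEMPLATE.format(question=question, a=answer_1, b=answer_2),
        JUDGE_TEMPLATE.format(question=question, a=answer_2, b=answer_1),
    )

p1, p2 = pairwise_prompts("What is 2+2?", "4", "Four, i.e. 4.")
```

A "B" verdict on the second prompt corresponds to an "A" verdict on the first; disagreement after relabeling is usually scored as a tie.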
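The last entry names the lost-in-the-middle pattern, where models attend poorly to content buried in the center of a long context. One common mitigation is to reorder retrieved passages so the highest-ranked ones sit at the edges of the prompt; a minimal sketch (the function name is an assumption, not a skill's API):

```python
def edge_order(docs_by_relevance):
    """Reorder documents ranked most-to-least relevant so the strongest
    documents land at the start and end of the context, pushing weaker
    ones toward the middle (a common lost-in-the-middle mitigation)."""
    front, back = [], []
    for i, doc in enumerate(docs_by_relevance):
        # Alternate: even ranks fill the front, odd ranks fill the back.
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]

ranked = ["d1", "d2", "d3", "d4", "d5"]  # d1 = most relevant
print(edge_order(ranked))  # ['d1', 'd3', 'd5', 'd4', 'd2']
```

The top-ranked document ends up first and the second-ranked one last, so the weakest passages occupy the positions the model is most likely to underweight.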