Discover Claude Skills for data science & ML. Explore 61 skills and find the right capabilities for your AI workflows.
Builds robust Retrieval-Augmented Generation (RAG) systems using vector databases and semantic search to ground AI responses in external knowledge.
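The core retrieval step behind a RAG system can be sketched in a few lines. This is a toy illustration, not the skill's actual implementation: a bag-of-words counter stands in for a real sentence-embedding model, and a plain list stands in for a vector database; only the rank-by-cosine-similarity-then-build-prompt flow is representative.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model and store vectors in a vector database.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Vector databases store embeddings for semantic search.",
    "The Eiffel Tower is in Paris.",
]
context = retrieve("how does semantic search work", docs)[0]
# The retrieved context grounds the model's answer.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

Swapping in real embeddings and an approximate-nearest-neighbor index changes the components but not this retrieve-then-prompt shape.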
Orchestrates end-to-end MLOps pipelines from data preparation and model training to production deployment and monitoring.
Implements advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production environments.
Implements comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking.
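Two of the automated metrics commonly used for LLM evaluation, exact match and token-level F1, can be sketched as follows. This is an illustrative sketch of standard metrics, not the skill's own evaluation code.

```python
import re

def tokens(s: str) -> list[str]:
    # Lowercase and strip punctuation so comparisons are lenient.
    return re.findall(r"\w+", s.lower())

def exact_match(pred: str, gold: str) -> float:
    # 1.0 if the normalized token sequences are identical, else 0.0.
    return float(tokens(pred) == tokens(gold))

def token_f1(pred: str, gold: str) -> float:
    # Harmonic mean of token precision and recall between
    # prediction and reference.
    p, g = tokens(pred), tokens(gold)
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    precision = common / len(p)
    recall = common / len(g)
    return 2 * precision * recall / (precision + recall)
```

Averaging these scores over a benchmark set gives a cheap automated signal; human feedback then covers the qualities these surface metrics miss.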
Standardizes the scientific research process by guiding users through rigorous hypothesis formulation, experimental design, and variable mapping.
Architects sophisticated LLM applications using LangChain patterns for agents, memory, and modular chains.
Executes machine learning workloads and local commands on remote NVIDIA GPUs by prefixing them with 'gpu' for instant cloud compute.
Conducts autonomous, multi-source research tasks and generates comprehensive reports using Google's Gemini Deep Research API.
Converts various file formats including PDF, Office docs, and media into clean Markdown for LLM analysis and text processing.
Transforms complex datasets into publication-quality figures and interactive visualizations using established design principles and domain-specific conventions.
Implements high-performance, conceptually dense scientific and numerical analysis code following research workflow standards.
Streamlines scientific research by fetching arXiv paper sources and automating BibTeX citation management via the NASA ADS API.
Validates and analyzes datasets to ensure they are optimized for Unsloth fine-tuning and Chinchilla optimality.
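Chinchilla optimality refers to the compute-optimal scaling result of roughly 20 training tokens per model parameter. A minimal check along those lines might look like this; the ~20:1 ratio is the published rule of thumb, while the function names are illustrative.

```python
def chinchilla_tokens(n_params: int, ratio: float = 20.0) -> int:
    # Compute-optimal token budget: ~20 tokens per parameter.
    return int(n_params * ratio)

def dataset_ratio(n_tokens: int, n_params: int) -> float:
    # >1.0 means the dataset meets or exceeds the Chinchilla budget.
    return n_tokens / chinchilla_tokens(n_params)

# e.g. a 7B-parameter model would want ~140B training tokens
budget = chinchilla_tokens(7_000_000_000)
```

For fine-tuning, the full pre-training budget rarely applies, but the ratio is still a useful sanity check on whether a dataset is large enough for the model being tuned.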
Analyzes single-cell omics data using deep generative models for tasks like batch correction, integration, and differential expression.
Implements high-performance, secure text-to-speech synthesis using Kokoro TTS for real-time voice applications.
Optimizes AI prompts by applying architecture-specific techniques for autoregressive and reasoning-based models.
Analyzes, transcribes, and edits video files using AI-powered frame extraction, local speech-to-text, and multi-modal audio understanding.
Monitors financial markets autonomously to generate AI-driven trading signals using multi-agent consensus and the Stirrup framework.
Automates the lifecycle of LLM fine-tuning workloads on RunPod GPU instances using the Unsloth framework.
Automates the generation of professional model cards and the deployment of fine-tuned LLMs to the Hugging Face Hub.
Transforms complex business data into actionable insights and strategic recommendations using modern BI tools and predictive analytics.
Simplifies the deployment and management of Unsloth fine-tuning jobs on Hugging Face cloud GPUs.
Manages local GPU fine-tuning workflows using Unsloth to optimize LLM training performance and resource utilization.
Generates optimized training notebooks and scripts for fine-tuning LLMs using the Unsloth framework.
Provides a comprehensive C/C++ API reference and implementation patterns for high-performance local LLM inference using llama.cpp.
Provides a comprehensive reference for developing native C extensions and GCL applications within the GreyCat ecosystem.
Synthesizes fragmented research findings into coherent, structured narratives with evidence-based uncertainty quantification.
Provides expert guidance and automated tools for statistical modeling, experiment design, causal inference, and production-grade machine learning pipelines.
Optimizes Large Language Models through 4-bit and 8-bit quantization to enable efficient deployment on resource-constrained hardware.
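The arithmetic behind 8-bit quantization can be shown with a symmetric scale-and-round sketch. This illustrates the general technique only; production quantization (e.g. 4-bit variants with per-group scales) uses more elaborate schemes.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    # Symmetric quantization: map floats onto the int8 range [-127, 127]
    # using a single scale derived from the largest magnitude.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights; error is bounded by scale / 2.
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight is stored in one byte instead of four, which is where the memory savings on constrained hardware come from; the cost is the small rounding error visible in `restored`.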
Integrates local Large Language Models using llama.cpp and Ollama while implementing robust security measures against prompt injection and resource-based attacks.