Discover Claude skills for data science & ML. Browse 61 skills and find the best capabilities for your AI workflows.
Implements a homotopical framework for Artificial Life that treats organisms as morphisms and verifies structural changes at interaction time.
Configures and personalizes your Claude Code Colab environment through interactive setup and automated notebook modification.
Designs and implements sophisticated LLM applications using the LangChain framework for agents, memory management, and modular workflows.
Implements advanced prompt engineering techniques to optimize LLM performance, reliability, and output controllability in production environments.
Implements hierarchical spatial indexing with deterministic GF(3) color derivation for geospatial analysis and visualization.
Implements comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and LLM-as-judge patterns.
Automatically synchronizes the latest LLM model specifications, pricing, and API documentation to ensure optimal architecture decisions.
Implements Schmidhuber's compression progress theory to provide intrinsic curiosity rewards for autonomous AI learning and exploration.
Implements adaptive learning and high-speed memory patterns for self-improving Claude Code agents using AgentDB.
Builds production-ready Retrieval-Augmented Generation (RAG) systems for LLM applications using vector databases and semantic search.
Performs advanced data analysis and business intelligence using specialized SQL patterns for statistical and exploratory insights.
Deploys and optimizes serverless AI models, embedding generation, and RAG architectures directly on Cloudflare’s edge network.
Predicts age, gender, and ethnicity from person data and images to enrich datasets and customer profiles.
Deploys and manages reactive Python notebooks with hot-reloading capabilities for interactive development.
Inspects Marimo notebook execution results and HTML snapshots to debug errors and verify cell outputs.
Streamlines the creation and management of reactive marimo notebooks for interactive data science and analytics workflows.
Provides comprehensive Bayesian meta-analysis templates using Stan and JAGS for advanced biostatistical evidence synthesis.
Implements Reinforcement Learning with Leave-One-Out estimation to stabilize model training and optimize policy performance.
Deploys and manages ComfyUI instances for node-based Stable Diffusion image generation with GPU acceleration and model lifecycle support.
Manages local Ollama inference servers using Podman Quadlet to provide GPU-accelerated LLM capabilities.
Simplifies building LLM-powered applications by providing standardized abstractions for prompt engineering, model orchestration, and structured output parsing.
Streamlines the supervised fine-tuning of Large Language Models using Unsloth for optimized performance and reasoning model development.
Optimizes large language models for efficient inference and training using various precision types and memory estimation techniques.
Aligns AI models with human preferences using Direct Preference Optimization to improve reasoning and response quality without explicit reward models.
Provides a clean, Pythonic interface for interacting with Ollama to handle text generation, chat completions, and model management.
Optimizes LLM fine-tuning using LoRA, QLoRA, and Unsloth to drastically reduce memory requirements and accelerate training cycles.
Evaluates LLM outputs and optimizes prompts using Evidently.ai metrics and LLM-as-a-judge patterns.
Explains and implements core Transformer architecture components for LLM development, fine-tuning, and model analysis.
Streamlines the development of PySpark ETL pipelines and distributed data processing workflows.
Ranks and filters retrieved documents based on vector similarity metrics to optimize RAG pipeline relevance.