Discover Claude skills for data science & ML. Browse 61 skills and find the best capabilities for your AI workflows.
Parses microscopy channel and marker names from KINTSUGI metadata files using automatic format detection.
Resolves CuPy runtime compilation errors on Windows by correctly configuring CUDA NVRTC paths.
Organizes scientific research repositories by decoupling core code from experimental data and notebook outputs.
Configures and manages multiple Conda environment locations across different research group storage allocations on the UF HiPerGator supercomputer.
Standardizes how AI identifies and communicates about specific Jupyter notebook cells using stable, identifiable characteristics instead of volatile cell numbers.
Standardizes scale alignment for multi-modal RNA and protein data integration to ensure accurate cross-modal matching.
Optimizes reinforcement learning reward functions for automated trading to eliminate reward hacking and improve P&L gradient signals.
Diagnoses and resolves bimodal matching score distributions in MaxFuse multi-modal data integration pipelines.
Prevents data shape mismatches and synchronization errors when modifying Jupyter notebook data sources.
Optimizes MaxFuse parameters to prevent RNA cell loss in datasets with extreme protein-to-RNA imbalance ratios.
Optimizes trading regime detection by dynamically adjusting Lotka-Volterra coefficients based on real-time market volatility and momentum.
Implements unified position sizing and sophisticated risk management for trading algorithms using GARCH, the Kelly criterion, and quadratic drawdown scaling.
Navigates complex memory graphs using RLM-style recursive search to find deep context without excessive token usage.
Monitors GPU performance, VRAM usage, and inference metrics for local Ollama models.
Integrates Ollama's local LLM capabilities into Claude Code via direct REST API operations using the Python requests library.
Evaluates Large Language Model application performance using automated metrics, human feedback loops, and LLM-as-judge frameworks.
Integrates Ollama with the OpenAI Python library to enable seamless migration and compatibility with OpenAI-based tools.
Integrates local LLM inference into Python applications using the official Ollama library for text generation, chat, and model management.
Optimizes LLM performance and reliability through advanced prompt engineering techniques like few-shot learning and chain-of-thought.
Imports GGUF models directly from HuggingFace into Ollama for seamless local inference and experimentation.
Builds and orchestrates end-to-end MLOps pipelines from data preparation through model training, validation, and production deployment.
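Several of the Ollama entries above describe driving the local REST API directly. As a minimal sketch of what that looks like (assuming Ollama's documented `/api/generate` endpoint on the default port 11434; the model name is a placeholder), such a call can be prepared with the Python standard library alone:

```python
import json
import urllib.request

# Ollama's default local endpoint for non-streaming text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a non-streaming generate request for Ollama."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request requires a running Ollama server:
# with urllib.request.urlopen(build_generate_request("llama3", "Hello")) as resp:
#     print(json.loads(resp.read())["response"])
```

The skills listed above wrap the same endpoint family with higher-level conveniences (the `requests` library, the official `ollama` package, or the OpenAI-compatible client).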