Discover Claude skills for data science & ML. Browse 61 skills and find the capabilities that best fit your AI workflows.
Optimizes embedding model selection, configuration, and cost estimation for RAG pipelines.
Implements managed Retrieval-Augmented Generation (RAG) using Google File Search and Gemini models for high-accuracy document retrieval and grounding.
Ensures the integrity of machine learning training workflows by validating datasets, model checkpoints, and system dependencies.
Calculates and compares machine learning training and inference costs across major cloud GPU platforms like Modal, Lambda Labs, and RunPod.
Sets up comprehensive monitoring dashboards using TensorBoard and Weights & Biases to track machine learning experiments in real-time.
Configures and optimizes Google Cloud Platform environments for BigQuery ML and Vertex AI training workloads.
Provides production-ready machine learning templates and training workflows for classification, text generation, and financial analysis.
Implements production-ready RAG pipelines and multi-step agent workflows using LangChain, LangGraph, and LangSmith templates.
Manages high-fidelity voice cloning workflows, library organization, and audio optimization for synthetic speech generation.
Integrates Large Language Model chat completions into backend applications using the z-ai-web-dev-sdk for advanced conversational AI and text generation.
Builds complex, stateful, and cyclic multi-actor AI agent workflows using the LangGraph framework.
Implements advanced Retrieval-Augmented Generation (RAG) workflows to build knowledge-grounded LLM applications and semantic search systems.
Configures and personalizes your Claude Code Colab environment through interactive setup and automated notebook modification.
Designs and implements sophisticated LLM applications using the LangChain framework for agents, memory management, and modular workflows.
Implements advanced prompt engineering techniques to optimize LLM performance, reliability, and output controllability in production environments.
Implements comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and LLM-as-judge patterns.
Automatically synchronizes the latest LLM model specifications, pricing, and API documentation to ensure optimal architecture decisions.
Implements adaptive learning and high-speed memory patterns for self-improving Claude Code agents using AgentDB.
Builds production-ready Retrieval-Augmented Generation (RAG) systems for LLM applications using vector databases and semantic search.
Performs advanced data analysis and business intelligence using specialized SQL patterns for statistical and exploratory insights.
Deploys and optimizes serverless AI models, embedding generation, and RAG architectures directly on Cloudflare’s edge network.
Predicts age, gender, and ethnicity from person data and images to enrich datasets and customer profiles.
Deploys and manages reactive Python notebooks with hot-reloading capabilities for interactive development.
Inspects Marimo notebook execution results and HTML snapshots to debug errors and verify cell outputs.
Streamlines the creation and management of reactive marimo notebooks for interactive data science and analytics workflows.
Implements Reinforcement Learning with Leave-One-Out estimation to stabilize model training and optimize policy performance.
Deploys and manages ComfyUI instances for node-based Stable Diffusion image generation with GPU acceleration and model lifecycle support.
Manages local Ollama inference servers using Podman Quadlet to provide GPU-accelerated LLM capabilities.
Simplifies building LLM-powered applications by providing standardized abstractions for prompt engineering, model orchestration, and structured output parsing.
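Many of the skills above (RAG pipelines, vector databases, semantic search) share one core retrieval step: embed the query, rank documents by similarity, and ground the prompt in the top matches. As a rough, self-contained illustration only — not any particular skill's API — here is a toy retriever using bag-of-words vectors and cosine similarity; every name in it is hypothetical:

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; real skills use learned embedding models.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Vector databases store embeddings for semantic search.",
    "GPU pricing varies across cloud platforms.",
    "Semantic search retrieves documents by meaning, not keywords.",
]
context = retrieve("How does semantic search work?", docs)
# Ground the LLM prompt in the retrieved context.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Production skills replace the toy embedding with a real model and the linear scan with a vector index, but the retrieve-then-ground shape stays the same.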