Discover Claude skills for data science & ML. Browse 61 skills and find the best capabilities for your AI workflows.
Designs and implements sophisticated agent memory architectures, from vector stores to temporal knowledge graphs, for cross-session persistence.
Optimizes AI context window usage through strategic compression, observation masking, and partitioning to improve performance and reduce costs.
Provides foundational principles and implementation patterns for managing language model context windows and attention mechanics in AI agent systems.
Optimizes AI agent token usage through advanced context summarization and structured information preservation.
Extracts structured text, metadata, and tables from over 75 document formats using a high-performance Rust core.
Connects Claude to the K-Dense Web platform for advanced, end-to-end scientific research workflows and multi-agent AI collaboration.
Automates advanced quantum chemistry workflows and protein-ligand modeling using a cloud-based Python API.
Performs advanced numerical computing, matrix operations, and scientific visualizations using MATLAB and GNU Octave syntax.
Scaffolds and organizes modular AI agent projects using standardized directory layouts and configuration patterns.
Injects dynamic, runtime data into AI agent system prompts to enable context-aware decision making and information sharing.
Generates structured, effective system prompts for AI agents using a modular architecture of background identity, processing steps, and output instructions.
Defines robust, type-safe data contracts and Pydantic-based schemas for AI agents using the Atomic Agents framework.
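A type-safe data contract of this kind can be sketched with plain Pydantic models; the `SearchQuery`/`SearchResult` schema names and fields below are hypothetical illustrations, not part of the Atomic Agents API:

```python
from pydantic import BaseModel, Field

class SearchQuery(BaseModel):
    """Input contract: what the agent accepts (hypothetical example schema)."""
    query: str = Field(..., min_length=1, description="Free-text search query")
    max_results: int = Field(5, ge=1, le=50, description="Result cap")

class SearchResult(BaseModel):
    """Output contract: what the agent must return."""
    title: str
    url: str
    relevance: float = Field(..., ge=0.0, le=1.0)

# Validation happens at the boundary: malformed data raises immediately,
# so downstream agent code only ever sees well-formed objects.
req = SearchQuery(query="gguf quantization", max_results=3)
```

The value of the contract is that validation errors surface at the agent boundary rather than deep inside tool logic.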
Orchestrates and configures modular, structured AI agents using the Atomic Agents framework for robust LLM applications.
Guides the end-to-end development of LLM-powered applications, from task evaluation and pipeline design to cost estimation and agent architecture.
Designs and implements robust multi-agent systems using supervisor, swarm, and hierarchical patterns to optimize context management and reasoning.
Optimizes AI agent performance and token usage by implementing advanced context summarization and management strategies for long-running sessions.
Implements production-grade LLM-as-a-judge patterns to evaluate model outputs using structured rubrics, bias mitigation, and pairwise comparison techniques.
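One common position-bias mitigation in pairwise LLM-as-a-judge setups is to judge both orderings and only count a win that survives the swap. A minimal sketch, with a stub standing in for the actual LLM call:

```python
from typing import Callable

def pairwise_judge(judge: Callable[[str, str, str], str],
                   prompt: str, answer_a: str, answer_b: str) -> str:
    """Compare two answers, mitigating position bias by judging both orders.

    `judge` returns "first" or "second" for whichever shown answer it prefers.
    A preference only counts if it survives the position swap; otherwise "tie".
    """
    verdict_ab = judge(prompt, answer_a, answer_b)   # A shown first
    verdict_ba = judge(prompt, answer_b, answer_a)   # B shown first
    if verdict_ab == "first" and verdict_ba == "second":
        return "A"
    if verdict_ab == "second" and verdict_ba == "first":
        return "B"
    return "tie"

# Stub judge preferring the longer answer (stands in for a real LLM call).
length_judge = lambda p, x, y: "first" if len(x) > len(y) else "second"
```

A judge that flips its verdict when the order flips is exhibiting position bias, and the aggregation above reports that case as a tie rather than a win.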
Optimizes LLM context windows through strategic compaction, observation masking, and partitioning to reduce token costs and improve agent performance.
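Observation masking of the kind described can be sketched as replacing stale tool outputs with a short placeholder while keeping a recent window verbatim; the message format and `keep_recent` parameter here are illustrative assumptions:

```python
def compact_history(messages: list[dict], keep_recent: int = 4,
                    placeholder: str = "[observation elided]") -> list[dict]:
    """Mask bulky tool observations outside a recent window.

    Older tool-result messages are replaced with a short placeholder so the
    agent keeps the shape of its trajectory without paying for stale tokens.
    """
    cutoff = len(messages) - keep_recent
    out = []
    for i, msg in enumerate(messages):
        if i < cutoff and msg.get("role") == "tool":
            out.append({**msg, "content": placeholder})
        else:
            out.append(msg)
    return out
```

Non-tool messages (user turns, assistant reasoning) are left intact, since they are usually short and carry the decisions the agent must stay consistent with.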
Teaches the core principles of AI context management to optimize agent performance and token efficiency.
Implements sophisticated multi-layer memory architectures for AI agents to persist state, track entities, and maintain temporal knowledge across sessions.
Diagnoses and mitigates performance failures in agentic systems caused by large context windows, attention loss, and information noise.
Transforms external RDF context into agent mental states to enable cognitive reasoning, explainability, and semantic interoperability in multi-agent systems.
Designs and implements production-grade Retrieval-Augmented Generation pipelines with advanced chunking, retrieval, and evaluation strategies.
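The simplest chunking strategy in such a pipeline is fixed-size windows with overlap, so text straddling a boundary survives intact in at least one chunk. A minimal character-based sketch (the sizes are arbitrary defaults, not a recommendation):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so content
    near a chunk boundary appears whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # stride between chunk starts
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Production pipelines typically refine this with token-aware or semantic boundaries, but the overlap logic stays the same.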
Architects and evaluates robust multi-agent systems using industry-standard patterns and sophisticated orchestration strategies.
Optimizes and deploys PyTorch models to Arm Cortex-M processors using the CMSIS-NN backend.
Optimizes large language models for efficient local inference using GGUF format and llama.cpp quantization techniques.
Analyzes and manipulates transformer model internals using mechanistic interpretability techniques like activation patching and circuit analysis.
Deploys high-performance LLM inference on CPU, Apple Silicon, and non-NVIDIA GPUs using GGUF quantization.
Reduces Large Language Model size and accelerates inference using advanced pruning techniques like Wanda and SparseGPT.
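The Wanda criterion scores each weight by its magnitude times the L2 norm of the corresponding input activation over a calibration set, then zeroes the lowest-scoring fraction of each output row. A NumPy sketch of that scoring rule (shapes and the per-row sparsity scheme follow the paper's unstructured setting):

```python
import numpy as np

def wanda_prune(W: np.ndarray, X: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Prune W by the Wanda score |W_ij| * ||X_j||_2, per output row.

    W: (out_features, in_features) weight matrix.
    X: (n_samples, in_features) calibration activations.
    """
    act_norm = np.linalg.norm(X, axis=0)        # ||X_j||_2 per input feature
    score = np.abs(W) * act_norm                # broadcasts across rows
    k = int(W.shape[1] * sparsity)              # weights to drop per row
    pruned = W.copy()
    if k > 0:
        idx = np.argsort(score, axis=1)[:, :k]  # lowest-scoring columns per row
        np.put_along_axis(pruned, idx, 0.0, axis=1)
    return pruned
```

Unlike pure magnitude pruning, a small weight feeding a high-norm activation can survive, which is the point of the activation-aware score.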
Extends Transformer model context windows using advanced positional encoding and interpolation techniques like RoPE, YaRN, and ALiBi.
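At the core of these techniques is RoPE itself: position is encoded by rotating pairs of embedding dimensions through position-dependent angles. A minimal NumPy sketch of the rotation (single head, no caching, pairing the two halves of the dimension):

```python
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply Rotary Position Embedding to x of shape (seq_len, dim).

    Each dimension pair (x1_i, x2_i) is rotated by angle pos * inv_freq_i,
    so relative position shows up as a relative rotation in attention dots.
    """
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))  # per-pair frequency
    angles = np.outer(np.arange(seq_len), inv_freq)      # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=1)
```

Interpolation schemes like YaRN extend the usable context by rescaling these frequencies; since rotation preserves vector norms, position 0 is always left unchanged.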