Browse the complete collection of Claude skills that extend the capabilities of AI agents.
Provides a systematic framework for evaluating the methodology, statistics, and integrity of scientific manuscripts and grant proposals.
Enforces a rigorous four-phase scientific methodology to identify root causes and implement stable software fixes without guesswork.
Implements robust security patterns and vulnerability prevention techniques for Solidity smart contracts.
Enhances the quality, resolution, and clarity of screenshots and images for professional documentation and media.
Accesses official USPTO APIs to perform comprehensive patent and trademark searches, intellectual property analysis, and prosecution history tracking.
Refines rough ideas into fully-formed software designs using structured Socratic questioning and incremental validation.
Implements robust compression encoders and entropy coders by mathematically deriving operations from existing decoder state machines to ensure bit-perfect compatibility.
Optimizes numerical linear algebra computations for finding eigenvalues of small dense matrices through direct LAPACK integration and overhead reduction.
Guides the recovery of Directed Acyclic Graph structures from observational data, the estimation of their parameters, and the implementation of causal interventions.
Automates complex subscription lifecycles, recurring payments, and dunning management workflows for SaaS applications.
Provides comprehensive guidance for building the formally verified CompCert C compiler from source while managing strict dependency versions and memory constraints.
Establishes a mandatory protocol for identifying and applying the correct skill for any given task.
Analyzes chess board images to identify piece positions and calculate optimal moves using systematic image detection and engine-based verification.
Implements distributed model training by partitioning PyTorch layers across multiple GPUs using pipeline parallelism patterns like AFAB and 1F1B.
Configures automated Git-based deployment systems that map specific repository branches to web-accessible directories using post-receive hooks.
Implements PyTorch pipeline parallelism to distribute large language model training across multiple GPUs using All-Forward-All-Backward (AFAB) scheduling.
Retrieves and verifies temporally-accurate data from dynamic machine learning leaderboards to ensure model rankings and benchmark results are current and valid.
Executes complex implementation plans in controlled batches with built-in review checkpoints to ensure accuracy and alignment.
Facilitates safe Terraform state operations including resource imports, address refactoring, and backend migrations with built-in safety workflows.
Automates the creation and maintenance of professional README files for Terraform modules using standardized templates and resource tables.
Facilitates high-quality code reviews by providing actionable frameworks for identifying bugs, ensuring architectural consistency, and delivering constructive feedback.
Generates standardized Terraform module structures with pre-configured core files and best-practice templates for cloud infrastructure.
Facilitates solving complex pattern recognition tasks by combining git workflow management with the analysis and implementation of mathematical grid transformations.
Guides the development of self-interpreting Scheme-like evaluators through incremental implementation and systematic multi-level debugging.
Merges heterogeneous data sources into unified datasets using field mappings and priority-based conflict resolution.
Optimizes LLM inference request grouping and scheduling to minimize operational costs while satisfying latency and padding constraints.
Implements distributed tensor-parallel linear layers in PyTorch to enable training of models that exceed single-device memory limits.
Implements programmatic terminal interfaces to control shell sessions through terminal emulation and pseudo-terminal wrappers.
Validates API endpoints through automated request testing, performance benchmarking, and comprehensive security audits.
Reorganizes large-scale datasets into hierarchical directory structures while enforcing strict file size and item count constraints.