Explore our complete collection of Claude skills to extend the capabilities of AI agents.
Identifies critical implementation paths, entrypoints, and high-traffic files to streamline performance profiling and optimization tasks.
Provides standardized guidance and design patterns for developing effective Claude Code skills that extend agent capabilities with specialized knowledge.
Provides comprehensive guidance and implementation patterns for integrating with Azure DevOps services using official OpenAPI specifications.
Optimizes documentation for human readability and AI RAG performance by improving structure, token efficiency, and information density.
Generates prioritized, deduplicated Markdown reports from aggregated code enhancement findings and automated audits.
Manages and validates cross-platform infrastructure to ensure compatibility between Claude Code, OpenCode, and Codex CLI.
Profiles CPU and memory hot paths to generate flame graphs and evidence-backed performance reports.
Removes AI-generated code smells, debug statements, and unused infrastructure to maintain high-quality repository hygiene.
Validates and optimizes SKILL.md files to ensure high-quality triggers, secure tool access, and efficient discovery within Claude Code (see the SKILL.md sketch at the end of this listing).
Coordinates multiple AI enhancement analyzers in parallel to provide unified project health reports and automated optimizations.
Automates the final verification of code changes by running tests, builds, and requirement checks before shipping.
Improves AI prompts by analyzing structure, clarity, and reliability based on Claude-specific best practices.
Conducts standardized performance benchmarks with strict duration rules to ensure accurate baselines and regression detection.
Synchronizes project documentation and changelogs with current code state to prevent stale or inaccurate technical information.
Audits and enhances AI agent prompt files using prompt engineering best practices and automated fixes.
Streamlines the resolution of GitHub Pull Request comments by fetching, summarizing, and applying fixes directly through the GitHub CLI.
Manages and compares performance test baselines to track version-over-version metrics and environment metadata.
Executes controlled performance experiments to validate hypotheses with scientific rigor and repeatable metrics.
Automates the recording of structured performance investigation notes, evidence, and decision-making logic within the AI state directory.
Automates comprehensive, multi-layered code reviews using specialized agents to detect and fix quality, security, and performance issues.
Analyzes and fixes Claude Code hook scripts to ensure security, proper error handling, and lifecycle compliance.
Generates evidence-based performance hypotheses by analyzing git history and specific execution paths.
Audits and optimizes AI plugin structures, MCP tool schemas, and security patterns for Claude Code and related platforms.
Optimizes project memory files like CLAUDE.md and AGENTS.md to maximize AI instruction-following and context retention.
Optimizes LLM performance and designs production-grade agentic systems using advanced prompt engineering patterns and evaluation frameworks.
Streamlines the development of performant web applications using React, Next.js, and TypeScript through automated scaffolding and expert architectural guidance.
Productionizes machine learning models and builds scalable MLOps systems using industry-leading frameworks and best practices.
Secures applications through automated threat modeling, penetration testing, security auditing, and robust cryptographic implementations.
Automates CI/CD pipeline generation, infrastructure as code scaffolding, and production-grade deployment management across major cloud platforms.
Architects scalable software systems and provides automated tools for diagram generation, dependency analysis, and technical decision-making.
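Several entries above, such as the skill-development guide and the SKILL.md validator, revolve around the standard Claude Code skill layout: a directory containing a SKILL.md file whose YAML frontmatter supplies the skill's name and the description used for discovery, followed by the instructions the agent loads when the skill is invoked. The sketch below is a minimal illustration only; the skill name is hypothetical, and the `allowed-tools` field is shown as an assumed optional restriction on tool access rather than something stated in this catalog.

```markdown
---
name: baseline-compare            # hypothetical skill name, for illustration only
description: Compares performance test baselines across versions. Use when the user asks to track regressions or record benchmark environment metadata.
allowed-tools: Read, Grep, Bash   # assumed optional field limiting which tools the skill may use
---

# Baseline Compare

1. Locate the stored baseline metrics for the previous version.
2. Re-run the benchmark suite and collect fresh metrics.
3. Report deltas and flag any regression beyond the agreed threshold.
```

Validators like the one described above typically check that the description states both what the skill does and when to invoke it, since that text is what the agent matches against when deciding whether a skill applies.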