Explore our collection of Agent Skills to enhance your AI workflow.
Compares files and directories between different Git worktrees or branches to identify code differences and changes.
Generates a diverse range of ideas and solutions by balancing high-probability outcomes with creative sampling of unconventional possibilities.
Applies classic 'The Elements of Style' principles to documentation to ensure clarity, brevity, and professional polish.
Automates the installation and configuration of Codemap CLI to provide AI agents with intelligent codebase visualization and lifecycle hooks.
Creates complex, multi-step agentic workflows that minimize context bloat by delegating tasks to specialized sub-agents via file-based instructions.
Executes complex tasks using a competitive multi-agent generation and evaluation framework to ensure superior, production-grade results.
Systematically repairs failing unit and integration tests after code changes using intelligent agent orchestration.
Applies Anthropic's official authoring guidelines to help developers build concise, well-structured, and discoverable Claude Code skills.
Configures a Docker-based MCP server for searching and downloading academic papers from arXiv, PubMed, and other scholarly sources.
Streamlines the creation of high-performance Claude Code agents using Anthropic-aligned structures, triggering logic, and optimized system prompts.
Systematically identifies the original source of bugs and invalid data by tracing errors backward through the call stack.
Automatically generates comprehensive test coverage for local code changes using specialized AI agents.
Performs systematic Fishbone (Ishikawa) analysis to identify root causes of software failures and process bottlenecks.
Executes complex reasoning tasks through systematic multi-agent exploration, pruning, and solution synthesis.
Generates structured, single-page A3 problem-solving reports with deep root cause analysis and actionable implementation plans.
Conducts a rigorous multi-perspective code review using specialized AI judges to evaluate requirements, architecture, and implementation quality.
Automates the discovery, configuration, and testing of intelligent Claude Code hooks to enforce project standards and security.
Automates the creation of standardized GitHub pull requests using the CLI with support for templates and conventional commit formatting.
Configures the Serena MCP server to enable advanced semantic code retrieval and symbol-based navigation in AI coding agents.
Implements a ruthless self-correction framework to catch flaws, verify dependencies, and ensure production-grade code quality.
Executes coding tasks using a specialized implementation sub-agent and an independent LLM judge to ensure high-quality, verified results.
Applies a rigorous Red-Green-Refactor cycle to verify and bulletproof AI agent skills against rationalization and rule-breaking under pressure.
Curates and persists project-specific insights and patterns into CLAUDE.md to improve long-term agent performance.
Evaluates and scores AI-generated work using a specialized, context-isolated sub-agent to ensure high-quality results.
Transforms GitHub issues into comprehensive technical specifications and implementation plans automatically.
Automates the creation of structured task files and project directory hierarchies for streamlined workflow management.
Transforms draft task specifications into implementation-ready plans through multi-agent analysis and quality-gated verification.
Generates structured, high-quality Claude Code commands using standardized patterns and MCP tool integrations.
Exports open GitHub issues into local markdown files to provide structured project context for AI development.
Conducts iterative Five Whys root cause analysis to uncover systemic issues and propose fundamental solutions.