Explore our collection of Agent Skills to enhance your AI workflow.
Automates the identification of code clones and technical debt using quantitative metrics, and produces structured TDD refactoring plans.
Transforms unstructured notes, logs, and brain dumps into standardized, project-ready Markdown intake files.
Ensures code changes maintain or improve testing coverage by adhering to specific project standards and best practices.
Optimizes backend systems, APIs, and database performance through a disciplined measurement-first workflow.
Ensures accurate implementations by performing targeted web research on evolving APIs, security standards, and documentation.
Configures and manages local LLM inference using Mozilla Llamafile to provide offline, OpenAI-compatible AI capabilities.
Standardizes external API and CLI tool integrations with a focus on security, reliability, and automated retry logic.
Guides the creation of high-quality Model Context Protocol (MCP) servers to enable LLMs to interact with external services and APIs effectively.
Enforces standardized git commit messages using Conventional Commits rules and custom configurations.
Configures and optimizes clang-format settings to enforce consistent code style with minimal disruption to existing projects.
Analyzes and implements chain-specific gas optimizations for Solidity smart contracts to reduce transaction costs across EVM networks.
Simplifies LLM interactions by providing a unified Python interface for 100+ AI providers with consistent OpenAI-format syntax.
Manages external service connections, configurations, and health monitoring through a unified registry pattern.
Designs choreographed emotion sequences and multi-step animations for AI-driven mascot interfaces.
Integrates AI-powered qualitative analysis into Python pandas workflows for intelligent data sorting, deduplication, and merging.
Architects sophisticated AI agent systems and LLM workflows using proven patterns like ReAct, prompt chaining, and orchestrator-worker models.
Executes and interprets tests using the Bun runtime's native test runner directly within the Claude development environment.
Generates high-end government-style UI components using the signature National Design Studio aesthetic.
Optimizes and runs large language models on Apple Silicon using the native MLX framework for high-performance inference and fine-tuning.
Implements rigorous Verification and Validation processes based on CMMI standards to ensure products are built correctly and address the right user needs.
Facilitates contributions to the Agentpack repository through spec-driven development workflows and automated verification.
Optimizes LLM output quality by providing domain-specific guidance and structural patterns for crafting high-performance prompts.
Bridges CMMI process requirements with concrete implementation configurations in GitHub and Azure DevOps.
Automates the end-to-end coding lifecycle by routing tasks through optimized multi-agent workflows and mandatory triple-layered code reviews.
Builds and reviews production-grade frontend interfaces with a focus on accessibility, semantic structure, and distinctive visual design.
Refines post-launch AI feature performance through systematic error analysis, evaluation reviews, and autonomy level adjustments.
Enforces mandatory risk analysis and structured decision documentation to prevent architectural debt and project failures.
Standardizes the progression of AI feature autonomy from human-in-the-loop to autonomous action.
Explains Solidity smart contract logic, design patterns, and potential risks through detailed architectural walkthroughs.
Simplifies Go integration testing by providing lightweight, throwaway Docker container instances for databases, message queues, and cloud services.