Explore our complete collection of Claude Skills that extend the capabilities of AI agents.
Streamlines the creation of interactive UI components and widgets for the OpenAI Apps SDK using the MCP skybridge protocol.
Automates YouTrack project management tasks, including epic health checks, weekly reporting, and Linear issue synchronization.
Provides strategic guidance for evaluating and optimizing AI agents using Langfuse traces and data-driven iteration loops.
Guides developers through their first complete OpenSpec workflow cycle by performing real codebase tasks with narrated instruction.
Evaluates software readiness for alpha release by tracing end-to-end user flows from the UI to the backend to identify functional gaps.
Streamlines the lifecycle of LLM prompts by enabling versioning, deployment, and management within the Langfuse observability platform.
Defines precise optimization goals and regression guardrails for autonomous agent loops using normalized 0-1 scoring.
Conducts systematic production-readiness audits of backend systems across security, reliability, and performance layers.
Automates the iterative refinement of AI agent performance through contract-driven evaluation and multi-lever optimization protocols.
Automates the implementation of captured insights and behavioral improvements into Claude Code plugins.
Automates the import of structured BEADS.md project plans into the bd issue tracker with full dependency mapping.
Manages the complete lifecycle of Langfuse LLM prompts, including version control, deployment labels, and side-by-side version comparisons.
Manages Langdock assistants, knowledge folders, and usage analytics through specialized API integrations and CLI tools.
Transforms work sessions into structured, actionable learnings and testable rules to improve development workflows.
Drafts professional German business emails and project updates tailored for technical teams and AI startups.
Provides deep insights into multi-turn LLM conversations by analyzing and debugging Langfuse trace sessions.
Designs structured evaluation frameworks for AI agents including metrics, datasets, and automated grading strategies.
Provides evidence-based frameworks for setting, tracking, and optimizing personal goals and habits using data-driven behavioral psychology.
Diagnoses barriers to progress by systematically analyzing target selection, strategic algorithms, and execution friction.
Establishes a proactive improvement framework to capture learnings and encode patterns during development, ensuring every task compounds into future efficiency.
Identifies ambiguities in user requirements and generates structured clarification questions to ensure precise project scoping.
Standardizes git commit messages following the Conventional Commits 1.0.0 specification for automated versioning and project clarity.
Provides structured, expert feedback on code changes to identify bugs, security risks, and quality improvements before merging.
Orchestrates end-to-end evaluation cycles for AI agents using Langfuse to identify performance regressions and generate actionable optimization reports.
Manages human annotations and manual scoring workflows for Langfuse LLM traces directly from Claude.
Streamlines the preparation of meeting agendas, stakeholder updates, and project status reports using live data from project management tools.
Streamlines the creation of production-ready Langdock integration actions using standardized patterns for API authentication, request handling, and metadata formatting.
Provides real-time access to the official Claude Code documentation for up-to-date feature discovery and troubleshooting.
Analyzes coding sessions to extract repeatable patterns and transform them into modular Claude Code components like skills, commands, or agents.
Transforms development sessions into structured, actionable learnings and testable rules to continuously improve AI-driven workflows.