Explore our complete collection of Claude Skills that extend the capabilities of AI agents.
Provides evidence-based frameworks for setting, tracking, and optimizing personal goals and habits using data-driven behavioral psychology.
Designs structured evaluation frameworks for AI agents including metrics, datasets, and automated grading strategies.
Provides deep insights into multi-turn LLM conversations by analyzing and debugging Langfuse trace sessions.
Drafts professional German business emails and project updates tailored for technical teams and AI startups.
Transforms work sessions into structured, actionable learnings and testable rules to improve development workflows.
Manages Langdock assistants, knowledge folders, and usage analytics through specialized API integrations and CLI tools.
Manages the complete lifecycle of Langfuse LLM prompts, including version control, deployment labels, and side-by-side version comparisons.
Automates the import of structured BEADS.md project plans into the bd issue tracker with full dependency mapping.
Automates the implementation of captured insights and behavioral improvements into Claude Code plugins.
Automates the iterative refinement of AI agent performance through contract-driven evaluation and multi-lever optimization protocols.
Conducts systematic production-readiness audits of backend systems across security, reliability, and performance layers.
Defines precise optimization goals and regression guardrails for autonomous agent loops using normalized 0-1 scoring.
Streamlines the lifecycle of LLM prompts by enabling versioning, deployment, and management within the Langfuse observability platform.
Evaluates software readiness for alpha release by tracing end-to-end user flows from the UI to the backend to identify functional gaps.
Provides strategic guidance for evaluating and optimizing AI agents using Langfuse traces and data-driven iteration loops.
Automates YouTrack project management tasks, including epic health checks, weekly reporting, and Linear issue synchronization.
Streamlines the creation of interactive UI components and widgets for the OpenAI Apps SDK using the MCP skybridge protocol.
Analyzes and visualizes LLM quality scores, trends, and regressions within the Langfuse observability platform.
Converts rough project concepts into implementation-ready Technical Design Documents optimized for autonomous coding agents.
Diagnoses and resolves LLM workflow issues by performing structured root-cause analysis on Langfuse traces.
Instruments Python LLM pipelines with Langfuse tracing, observability patterns, and performance scoring.
Extracts repeatable work patterns and generates modular component specifications for Claude Code plugins.
Provides standardized UI patterns and implementation guidance for building interactive widgets within the OpenAI Apps SDK.
Extracts and filters Langfuse observability traces to provide surgical debugging insights directly within the Claude Code environment.
Streamlines SwiftUI development by providing best practices, reusable UI patterns, and optimized component architectures.
Manages Linear issues and daily planning workflows directly within Claude Code to streamline task execution and reporting.
Performs adversarial QA audits on bead-based task decompositions to ensure reliable autonomous agent execution.
Ensures implementation accuracy by identifying ambiguous requests and asking targeted, low-friction clarifying questions before starting work.
Evaluates user experience, assesses production readiness, and identifies UI-to-backend gaps through structured auditing workflows.
Provides architectural guidance and implementation patterns for building Model Context Protocol (MCP) servers with the OpenAI Apps SDK.