Explore the complete collection of Claude skills that extend AI agent capabilities.
Provides structured, expert feedback on code changes to identify bugs, security risks, and quality improvements before merging.
Identifies ambiguities in user requirements and generates structured clarification questions to ensure precise project scoping.
Establishes a proactive improvement framework to capture learnings and encode patterns during development, ensuring every task compounds into future efficiency.
Diagnoses barriers to progress by systematically analyzing target selection, strategic algorithms, and execution friction.
Provides evidence-based frameworks for setting, tracking, and optimizing personal goals and habits using data-driven behavioral psychology.
Designs structured evaluation frameworks for AI agents including metrics, datasets, and automated grading strategies.
Provides deep insights into multi-turn LLM conversations by analyzing and debugging Langfuse trace sessions.
Drafts professional German business emails and project updates tailored for technical teams and AI startups.
Transforms work sessions into structured, actionable learnings and testable rules to improve development workflows.
Manages Langdock assistants, knowledge folders, and usage analytics through specialized API integrations and CLI tools.
Manages the complete lifecycle of Langfuse LLM prompts, including version control, deployment labels, and side-by-side version comparisons.
Automates the import of structured BEADS.md project plans into the bd issue tracker with full dependency mapping.
Automates the implementation of captured insights and behavioral improvements into Claude Code plugins.
Implements multi-layered data validation patterns to eliminate structural bugs and ensure system integrity across all execution layers.
Automates the iterative refinement of AI agent performance through contract-driven evaluation and multi-lever optimization protocols.
Conducts systematic production-readiness audits of backend systems across security, reliability, and performance layers.
Defines precise optimization goals and regression guardrails for autonomous agent loops using normalized 0-1 scoring.
Hardens software reliability by implementing multi-layered data validation across system boundaries to eliminate structural bugs.
Streamlines the lifecycle of LLM prompts by enabling versioning, deployment, and management within the Langfuse observability platform.
Evaluates software readiness for alpha release by tracing end-to-end user flows from the UI to the backend to identify functional gaps.
Implements advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production applications.
Implements production-grade safety, error handling, and robust patterns for reliable shell scripting and automation.
Provides strategic guidance for evaluating and optimizing AI agents using Langfuse traces and data-driven iteration loops.
Automates YouTrack project management tasks, including epic health checks, weekly reporting, and Linear issue synchronization.
Streamlines the creation of interactive UI components and widgets for the OpenAI Apps SDK using the MCP skybridge protocol.
Analyzes and visualizes LLM quality scores, trends, and regressions within the Langfuse observability platform.
Converts rough project concepts into implementation-ready Technical Design Documents optimized for autonomous coding agents.
Generates production-ready FastAPI project structures featuring asynchronous patterns, dependency injection, and scalable architecture.
Diagnoses and resolves LLM workflow issues by performing structured root-cause analysis on Langfuse traces.
Instruments Python LLM pipelines with Langfuse tracing, observability patterns, and performance scoring.