Explore the complete collection of Claude skills that extend AI agent capabilities.
Generates production-ready GitHub Actions workflows for automated testing, building, and multi-environment deployment.
Implements high-performance asynchronous Python patterns using asyncio and concurrent programming techniques.
Configures and optimizes ShellCheck static analysis to ensure shell script quality, portability, and security.
Implements robust error handling strategies and resilience patterns across multiple programming languages to build fault-tolerant applications.
Enforces a rigorous protocol requiring empirical evidence and command output before any task is marked as complete or successful.
Refines rough ideas into fully formed technical designs through iterative questioning and alternative exploration.
Configures Prometheus for comprehensive metric collection, infrastructure monitoring, and proactive alerting.
Masters TypeScript's complex type system using generics, conditional types, and mapped types to build robust, type-safe applications.
Profiles and optimizes Python code to eliminate bottlenecks, reduce memory overhead, and accelerate execution using advanced profiling tools and best practices.
Eliminates flaky tests by replacing arbitrary delays with intelligent condition polling for asynchronous state changes.
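As an illustration of the condition-polling approach this entry describes, here is a minimal sketch in Python; the helper name `wait_for` and its parameters are illustrative, not the skill's actual API:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Replaces an arbitrary `time.sleep(N)` with a loop that returns as
    soon as the asynchronous state change is observed, and fails loudly
    (instead of silently passing or hanging) when it never happens.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

In a test, `time.sleep(2); assert job.done` becomes `wait_for(lambda: job.done, timeout=2.0)`, which is both faster on the happy path and more reliable under load.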
Streamlines the creation of distributable Python packages using modern standards like pyproject.toml and automated PyPI publishing workflows.
Implements systematic diagnostic methodologies and profiling tools to identify, isolate, and resolve complex software bugs across any technology stack.
Implements high-performance Python applications using asyncio, concurrent programming patterns, and non-blocking I/O strategies.
Implements comprehensive Python testing strategies using pytest, fixtures, mocking, and test-driven development best practices.
Enforces a strict evidence-based workflow that requires fresh command output before any task can be claimed as complete.
Enforces rigorous verification protocols to ensure all coding tasks are validated with empirical evidence before marking them as complete.
Implements comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and rigorous benchmarking.
Implements an iterative, multi-model code review process that separates coding and auditing tasks between Claude and Codex to ensure production-grade quality.
Automates the generation of professional Quarto PDF reports and PowerPoint presentations from CSV survey data.
Integrates Langdock's AI assistants, knowledge retrieval, and RAG pipelines into custom applications using Python or Node.js.
Establishes a proactive improvement framework to capture learnings and encode patterns during development, ensuring every task compounds into future efficiency.
Transforms development sessions into structured, actionable learnings and testable rules to continuously improve AI-driven workflows.
Applies behavioral psychology and decision science principles to optimize user flows and improve product adoption.
Manages human annotations and manual scoring workflows for Langfuse LLM traces directly from Claude.
Facilitates LLM experiment execution and prompt evaluation using Langfuse datasets and automated LLM-as-judge scoring.
Maps high-level goals to the specific daily behaviors that produce them to create actionable, data-driven tracking systems.
Automates the creation and configuration of Langfuse datasets for LLM evaluation and observability workflows.
Provides deep insights into multi-turn LLM conversations by analyzing and debugging Langfuse trace sessions.
Manages Langdock assistants, knowledge folders, and usage analytics through specialized API integrations and CLI tools.
Generates expertly structured German system prompts and tool integration instructions for Langdock assistants.