Explore our collection of Agent Skills to enhance your AI workflow.
Orchestrates Kanban state machine transitions and task management using provider abstractions.
Standardizes code style and resolves linting errors across multiple file formats using Prettier and ESLint.
Implements a vendor-agnostic, AI-powered software testing intelligence protocol covering 24 distinct test types for any tech stack.
Connects Claude to external REST APIs to manage authentication, execute HTTP requests, and process structured data responses.
Conducts deep developer research, library documentation lookups, and real-time web searches to solve complex coding challenges.
Automates the end-of-session workflow by verifying code integrity, updating progress tracking, and capturing project insights.
Orchestrates a structured five-phase loop to systematically improve AI agent performance through data-driven experimentation and regression testing.
Generates and stress-tests implementation roadmaps through interactive scoping and multi-agent validation protocols.
Orchestrates the end-to-end project planning lifecycle from initial concept to execution-ready roadmaps with interactive checkpoints.
Builds custom UI panels and configuration views for Umbraco backoffice packages using modern web components.
Standardizes the beginning of a coding session by verifying environment health, reviewing progress, and identifying the next priority task.
Preserves mid-session coding progress and context without requiring a full shutdown or heavy verification cycle.
Generates production-ready JavaScript action scripts for Langdock integrations from API documentation or descriptions.
Architects comprehensive evaluation frameworks for AI agents by defining metrics, datasets, and grading strategies.
Orchestrates parallel architectural reviews using real Gemini, Codex, and Cursor-Agent CLIs alongside native Claude sessions for diverse technical feedback.
Configures comprehensive Langfuse datasets with custom evaluation dimensions and LLM-as-judge prompts.
Automates the evaluation, trace analysis, and documentation cycle for Langfuse-integrated AI agents.
Audits application user experience and production readiness through structured lifecycle frameworks and technical gap analysis.
Guides developers through the iterative process of refining, scaling, and optimizing production-ready SaaS applications.
Automates Langdock assistant management, knowledge base operations, and usage data exports through a unified CLI wrapper.
Facilitates LLM agent evaluation and observability by bridging Claude-style commands to Langfuse infrastructure and dataset management.
Establishes a proactive framework that captures learnings and encodes patterns during the development process to create compounding productivity gains.
Guides absolute beginners through the entire process of setting up a production-ready SaaS environment and initial project structure.
Automates the creation of comprehensive evaluation pipelines for AI agents using Langfuse observability and testing frameworks.
Optimizes SaaS applications for search engines using Next.js best practices and automated SEO configurations.
Provides production-ready code templates for Supabase authentication, environment variable validation, and core SaaS UI components.
Provides standardized workflows and implementation patterns for updating, maintaining, and adding features to production-ready SaaS applications.
Implements a professional, two-tier design system using SCSS Modules, CSS Cascade Layers, and HSL semantic tokens.
Implements high-performance data fetching patterns using the stale-while-revalidate strategy for React applications.
Provides a comprehensive framework for building production-ready SaaS admin panels with RBAC, analytics, and essential business logic.
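The stale-while-revalidate data-fetching pattern mentioned above can be sketched in a framework-free way: serve a cached value immediately, and refresh it in the background once it has gone stale. This is a minimal illustration only; the names `swrCache` and `fetchWithSWR` are hypothetical and not part of any listed skill or library.

```typescript
// Minimal stale-while-revalidate cache sketch.
// Hypothetical helper names: `swrCache`, `fetchWithSWR`.
type Entry<T> = { value: T; fetchedAt: number };

const swrCache = new Map<string, Entry<unknown>>();

async function fetchWithSWR<T>(
  key: string,
  fetcher: () => Promise<T>,
  staleAfterMs: number,
): Promise<T> {
  const hit = swrCache.get(key) as Entry<T> | undefined;
  const now = Date.now();
  if (hit) {
    // Serve the cached (possibly stale) value immediately;
    // kick off a background revalidation only if it is stale.
    if (now - hit.fetchedAt > staleAfterMs) {
      fetcher()
        .then((value) => swrCache.set(key, { value, fetchedAt: Date.now() }))
        .catch(() => {
          /* keep serving the stale value if revalidation fails */
        });
    }
    return hit.value;
  }
  // Cache miss: block on the fetch once, then cache the result.
  const value = await fetcher();
  swrCache.set(key, { value, fetchedAt: now });
  return value;
}
```

The key trade-off: callers never wait on a revalidation, so reads stay fast at the cost of briefly serving stale data.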