This production-grade orchestration engine unifies n8n workflows with agentic LLM ecosystems into a secure, deterministic, and scalable automation gateway. It bridges AI coding assistants such as Gemini CLI, Claude Code CLI, and Codex CLI to n8n instances via the Model Context Protocol (MCP), exposing a single entry point to more than 30 specialized tools for workflow management, execution, and AI intelligence. Built on FastAPI and FastMCP, it supports dual transport modes (HTTP/SSE and stdio) and provides context-aware execution, intelligent caching, structured observability, and a fault-tolerant design for resilient, high-performance enterprise automation in AI-native infrastructures.
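On the wire, MCP tool invocations are JSON-RPC 2.0 messages, whichever transport (HTTP/SSE or stdio) carries them. The sketch below builds such a `tools/call` request using only the standard library; the tool name `n8n_list_workflows` and its `active` argument are hypothetical placeholders, not names taken from this server's actual tool catalog.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name for illustration; the real server exposes 30+ tools.
msg = make_tool_call(1, "n8n_list_workflows", {"active": True})
parsed = json.loads(msg)
```

Over stdio the client writes one such message per line to the server process; over HTTP/SSE the same payload travels in the request body.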
Key Features
1. Fault-tolerant design with automatic retries and a workflow validation/auto-fix pipeline.
2. Unifies n8n workflows with agentic LLM ecosystems for secure, scalable automation.
3. Context-aware execution with session memory tracking for AI agents.
4. Comprehensive observability through structured logging, Prometheus metrics, and health checks.
5. Intelligent TTL caching for sub-100ms response times across resources.
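The TTL-caching idea behind the last feature can be sketched in a few lines: each entry carries an expiry timestamp, and expired entries are dropped on read. This is an illustrative minimal version, not the server's actual caching layer.

```python
import time

class TTLCache:
    """Minimal TTL cache sketch: entries expire ttl seconds after insertion."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Lazy eviction: expired entries are purged on access.
            del self._store[key]
            return None
        return value

# Example: cache an (illustrative) workflow listing for 50 ms.
cache = TTLCache(ttl=0.05)
cache.set("workflows", ["wf_1", "wf_2"])
```

Serving repeated reads from memory rather than round-tripping to the n8n API is what makes sub-100ms responses feasible; the trade-off is that reads within the TTL window may return slightly stale data.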
Use Cases
1. Integrating AI coding assistants and LLM agents with n8n workflow automation instances.
2. Building and managing robust, AI-native enterprise automation infrastructures.
3. Enabling secure, deterministic, and scalable execution of complex n8n workflows through intelligent agents.
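The "automatic retries" that back these use cases typically mean retrying transient API failures with exponential backoff. The sketch below shows one common pattern against an n8n REST endpoint, using only the standard library; the exact retry policy (base delay, cap, attempt count) is assumed here, not taken from this server's configuration.

```python
import time
import urllib.error
import urllib.request

def backoff_delays(retries: int, base: float = 0.5, cap: float = 8.0):
    """Exponential backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(base * (2 ** i), cap) for i in range(retries)]

def fetch_with_retries(url: str, api_key: str, retries: int = 3) -> bytes:
    """GET an n8n REST endpoint, retrying transient network errors.

    Assumes API-key auth via the X-N8N-API-KEY header; adjust for your
    deployment. Illustrative sketch, not this server's actual retry code.
    """
    last_err = None
    for delay in backoff_delays(retries):
        try:
            req = urllib.request.Request(url, headers={"X-N8N-API-KEY": api_key})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.read()
        except urllib.error.URLError as err:
            last_err = err
            time.sleep(delay)  # back off before the next attempt
    raise last_err
```

Capping the delay keeps worst-case latency bounded, while the exponential ramp avoids hammering an n8n instance that is briefly overloaded.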