Orchestrates external AI models like Grok, Gemini, and GPT-5 within Claude Code using deterministic Bash commands and the claudish CLI.
This skill provides a standardized framework for integrating non-Claude LLMs into your Claude Code environment. Using the claudish CLI, it enables reliable model invocation with full flag passthrough for settings such as effort level and permission mode. It features automatic context-aware model selection, parallel execution patterns for team workflows, and an error escalation protocol so that failed multi-model requests surface transparently instead of silently degrading. Whether you are cross-validating code or running complex research, this skill keeps your external model integrations predictable and reproducible.
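As a rough sketch of what a deterministic invocation might look like: the flag names below mirror the Claude Code parameters this document mentions (effort, permission mode), but the exact claudish syntax is an assumption, not its documented API; the command is composed and echoed as a dry run rather than executed.

```shell
# Hypothetical invocation sketch -- flag names are assumptions; verify against
# the claudish CLI's own help output before relying on them.
MODEL="grok-4"
PROMPT="Cross-check this function for off-by-one errors."

# Compose the pass-through command as a single string (dry run: echoed, not run).
CMD="claudish --model $MODEL --effort high --permission-mode plan"
echo "$CMD \"$PROMPT\""
```

Because the command is plain Bash with explicit flags, the same invocation can be replayed verbatim, which is what makes the orchestration deterministic.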
Key Features
- Robust error verification and escalation protocol for failed requests
- Automatic context-aware model preference learning and reuse
- Deterministic model invocation via Bash + claudish CLI
- Full flag passthrough for Claude Code parameters like effort and permission-mode
- Parallel execution support for multi-model /team orchestration
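The parallel-execution feature above can be sketched with plain POSIX shell job control. The claudish calls themselves are simulated with echo (the real CLI syntax is not shown in this document), so the focus here is the fan-out/join pattern: launch one background job per model, then wait for all of them before collecting results.

```shell
# Sketch of a parallel multi-model fan-out. run_model is a hypothetical helper;
# in real use its body would be a claudish invocation writing to the same file.
run_model() {
  # Simulated model call: writes a placeholder review for model "$1".
  echo "[$1] review complete" > "review-$1.txt"
}

# Fan out: each model runs as an independent background job.
run_model grok-4 &
run_model gemini-pro &

wait   # join: block until every background review has finished

# Collect all reviews once the jobs are done.
cat review-*.txt
```

Writing each model's output to its own file keeps the parallel jobs from interleaving on stdout, and `wait` guarantees the collection step only runs after every review exists.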
Use Cases
- Cross-validating complex logic or architectural decisions across multiple SOTA LLMs
- Automating multi-step research tasks that require specific external model capabilities
- Running parallel code reviews using Grok and Gemini alongside Claude