Consult LLM
Enables Claude Code to consult powerful external AI models for deeper analysis of complex problems.
About
Consult LLM is an MCP server that augments Claude Code by letting it tap into more powerful reasoning models such as OpenAI's o3, Google's Gemini 2.5 Pro, and DeepSeek Reasoner. It constructs prompts automatically from markdown and code files, incorporates Git diffs as context, and tracks usage with cost estimates, making it a valuable tool for complex problem-solving and code analysis within the Claude Code environment.
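The project's internals aren't shown here, but the core idea of assembling a prompt from file contents plus a Git diff can be sketched roughly as below. This is a minimal TypeScript illustration under assumptions: the `buildPrompt` helper, its section format, and the example file paths are hypothetical, not the server's actual code.

```typescript
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";
import { basename } from "node:path";

// Hypothetical sketch of prompt assembly: a question, the contents of
// selected files, and optionally the current git diff as extra context.
function buildPrompt(question: string, files: string[], includeGitDiff = false): string {
  const sections: string[] = [question.trim()];

  for (const file of files) {
    const content = readFileSync(file, "utf8");
    sections.push(`--- ${basename(file)} ---\n${content}`);
  }

  if (includeGitDiff) {
    // Feed uncommitted changes to the model as additional context.
    const diff = execSync("git diff HEAD", { encoding: "utf8" });
    if (diff.trim().length > 0) {
      sections.push(`--- git diff ---\n${diff}`);
    }
  }

  return sections.join("\n\n");
}

// Example: ask for a review of a pending change with relevant files attached
// (file paths are illustrative).
const prompt = buildPrompt(
  "Review the pending refactor of the config loader for correctness.",
  ["src/config.ts", "README.md"],
  true,
);
console.log(prompt.slice(0, 200));
```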
Key Features
- Comprehensive logging of prompts, responses, and token usage
- Automatic prompt construction from markdown and code files
- Query powerful AI models (o3, Gemini 2.5 Pro, DeepSeek Reasoner) with file context
- Usage tracking with cost estimation (see the sketch after this list)
- Git diff integration to feed code changes as context
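One common way usage tracking with cost estimation works is to count prompt and completion tokens, multiply by per-model prices, and append a log entry per consultation. The sketch below follows that pattern; the price table, log file name, and record shape are placeholder assumptions, not the project's actual values.

```typescript
import { appendFileSync } from "node:fs";

// Placeholder per-million-token prices in USD. Real provider prices vary and
// change over time -- these numbers are assumptions for illustration only.
const PRICES: Record<string, { input: number; output: number }> = {
  "o3": { input: 2.0, output: 8.0 },
  "gemini-2.5-pro": { input: 1.25, output: 10.0 },
  "deepseek-reasoner": { input: 0.55, output: 2.19 },
};

interface UsageRecord {
  timestamp: string;
  model: string;
  promptTokens: number;
  completionTokens: number;
  estimatedCostUsd: number;
}

// Estimate the cost of one consultation and record it in a JSONL log
// (hypothetical log format).
function trackUsage(model: string, promptTokens: number, completionTokens: number): UsageRecord {
  const price = PRICES[model] ?? { input: 0, output: 0 };
  const estimatedCostUsd =
    (promptTokens / 1_000_000) * price.input +
    (completionTokens / 1_000_000) * price.output;

  const record: UsageRecord = {
    timestamp: new Date().toISOString(),
    model,
    promptTokens,
    completionTokens,
    estimatedCostUsd,
  };
  appendFileSync("consult-llm-usage.jsonl", JSON.stringify(record) + "\n");
  return record;
}
```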
Use Cases
- Reviewing and getting feedback on proposed code changes with relevant file context and Git diffs (see the sketch after this list)
- Consulting powerful AI models for refactoring suggestions or architectural advice based on the existing codebase
- Getting deeper analysis from a smarter LLM on complex coding issues identified by Claude Code
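As a concrete, hypothetical illustration of the review use case, the snippet below sends a prompt (built as in the earlier sketch) to an external reasoning model through the OpenAI SDK. The server performs this consultation step itself; the model identifier and prompt text here are assumptions.

```typescript
import OpenAI from "openai";

// Hypothetical consultation call, shown only to illustrate the
// review-with-diff use case; the MCP server handles this internally.
async function consult(prompt: string): Promise<void> {
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  const completion = await client.chat.completions.create({
    // Assumed model identifier; substitute any reasoning model your account can access.
    model: "o3",
    messages: [{ role: "user", content: prompt }],
  });

  console.log(completion.choices[0].message.content);
  console.log("Tokens used:", completion.usage?.total_tokens ?? "unknown");
}

consult("Review the attached diff for correctness and edge cases.").catch(console.error);
```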