- Support for local AI models (Ollama, vLLM, LM Studio) for enhanced privacy and cost control.
- True AI orchestration with cross-model conversation threading and context retention.
- Smart file handling that automatically expands directories and manages token limits based on model capacity.
- Seamless switching between diverse AI models (Gemini, O3, OpenRouter, Ollama) and specialized tools.
- Extended context windows by delegating analysis of large codebases or datasets to high-capacity models.
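In practice, local-model support like the Ollama integration above usually means talking to a local HTTP endpoint. A minimal sketch of building a request for Ollama's `/api/generate` endpoint (the endpoint and fields are from Ollama's public API; the model name `llama3` and the helper function are illustrative assumptions, not part of this project):

```python
import json

# Default local Ollama endpoint (assumption: a local Ollama instance on port 11434)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    `stream=False` asks for a single JSON response instead of streamed chunks.
    """
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_request("llama3", "Summarize this codebase in one sentence.")
print(json.dumps(payload))
```

The payload would then be POSTed to `OLLAMA_URL`; keeping inference on localhost is what gives the privacy and cost-control benefits noted above.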