- Dynamic context control and token optimization for AI coding tools
- Tiered Local AI powered by Ollama for specialized analysis (Nano, Micro, and Base models; see the routing sketch below)
- Semantic caching for instant retrieval of repeated LLM results (see the caching sketch below)
- Surgical code retrieval (e.g., c3_search, c3_read) and file mapping
- Local offloading of heavy non-editing analysis (c3_delegate)
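The tiered local analysis rests on routing each task to an appropriately sized Ollama model. The sketch below illustrates one way such routing could look; the tier-to-model mapping and model names are assumptions for illustration, not the project's actual configuration, and it only requires a local Ollama server with the chosen models pulled.

```python
# Minimal sketch of tier-based routing to local Ollama models.
# The tier -> model mapping below is an illustrative assumption.
import requests

TIER_MODELS = {
    "nano": "qwen2.5-coder:0.5b",   # quick classification / tagging
    "micro": "qwen2.5-coder:1.5b",  # short summaries, simple checks
    "base": "qwen2.5-coder:7b",     # heavier non-editing analysis
}

def run_local(tier: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server using the model for the given tier."""
    model = TIER_MODELS[tier]
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(run_local("nano", "Classify this file as test, config, or source: setup.py"))
```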
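Semantic caching avoids repeating an LLM call when a new prompt is close enough in meaning to one already answered. The following is a minimal sketch of that idea, assuming a local Ollama embedding model; the model name and similarity threshold are illustrative assumptions, not the project's actual settings.

```python
# Minimal sketch of semantic caching: reuse a prior LLM result when a new
# prompt's embedding is similar enough to a cached prompt's embedding.
import math
import requests

OLLAMA_URL = "http://localhost:11434"
EMBED_MODEL = "nomic-embed-text"  # assumption: any local embedding model would do

def embed(text: str) -> list[float]:
    """Fetch an embedding vector from the local Ollama server."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": EMBED_MODEL, "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class SemanticCache:
    """In-memory cache keyed by embedding similarity rather than exact text match."""

    def __init__(self, threshold: float = 0.92):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []  # (embedding, result)

    def get(self, prompt: str) -> str | None:
        """Return a cached result if a stored prompt is similar enough, else None."""
        vec = embed(prompt)
        for cached_vec, result in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return result
        return None

    def put(self, prompt: str, result: str) -> None:
        self.entries.append((embed(prompt), result))
```

A cache hit returns instantly from memory instead of re-running the model, which is where the "instant retrieval of repeated LLM results" comes from.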