- Achieves up to 98.5% token savings on LLM API calls, reducing cost and latency.
- Uses a 4-layer context assembly strategy to mitigate the "Lost in the Middle" problem in LLMs.
- Runs fully locally with zero external dependencies, storing all data in SQLite for security and privacy.
- Fast incremental indexing and hybrid search (BM25 plus optional semantic embeddings via Ollama).
- First-class multi-language support, including Java class sub-chunking and import resolution.
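As a rough illustration of the hybrid-search idea above, the sketch below blends classic BM25 lexical scores with cosine similarity over embeddings, falling back to lexical-only ranking when no embeddings are available (e.g. Ollama is not running). All function names, the normalization step, and the `alpha` blending weight are assumptions for illustration, not this project's actual API.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Classic BM25 over pre-tokenized docs; returns one score per doc."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_rank(query_terms, docs, query_vec=None, doc_vecs=None, alpha=0.7):
    """Return doc indices ranked by a blend of BM25 and embedding similarity.

    The semantic component is optional, mirroring the optional-Ollama design:
    without vectors, ranking degrades gracefully to pure BM25.
    """
    lex = bm25_scores(query_terms, docs)
    top = max(lex) or 1.0
    lex = [s / top for s in lex]  # normalize lexical scores to [0, 1]
    if query_vec is None or doc_vecs is None:
        combined = lex  # lexical-only fallback
    else:
        sem = [cosine(query_vec, v) for v in doc_vecs]
        combined = [alpha * l + (1 - alpha) * s for l, s in zip(lex, sem)]
    return sorted(range(len(docs)), key=lambda i: combined[i], reverse=True)
```

Making the semantic layer optional keeps the tool dependency-free by default: BM25 alone is a strong baseline, and embeddings only refine the ordering when they are present.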