Parallel processing and Redis caching to minimize API costs
Support for both API-based and local LLM backends
Iterative data-driven hypothesis generation and refinement
Literature integration via PDF processing and GROBID extraction
Adaptive refinement focused on challenging edge cases
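A minimal sketch of how the caching-plus-parallelism feature might be wired up. All names here (`cached_completion`, `run_batch`, the `InMemoryCache` stand-in) are illustrative, not the project's actual API; in production the cache would be a `redis.Redis` client with the same `get`/`set(ex=...)` shape.

```python
import hashlib
import json
from concurrent.futures import ThreadPoolExecutor


class InMemoryCache:
    """Stand-in with the same get/set(ex=...) shape as redis.Redis,
    so the sketch runs without a server; swap in redis.Redis(...) to
    share the cache across processes."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, ex=None):
        # `ex` mirrors Redis's TTL argument; ignored by this stand-in.
        self._store[key] = value


def cache_key(model: str, prompt: str) -> str:
    # Deterministic key: identical (model, prompt) pairs hit the cache.
    payload = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
    return "llm:" + hashlib.sha256(payload.encode()).hexdigest()


def cached_completion(cache, model, prompt, call_llm, ttl=86400):
    """Return a cached response when available; otherwise pay for one
    API call (call_llm is the real backend) and store the result."""
    key = cache_key(model, prompt)
    hit = cache.get(key)
    if hit is not None:
        return hit.decode() if isinstance(hit, bytes) else hit
    result = call_llm(model, prompt)
    cache.set(key, result, ex=ttl)
    return result


def run_batch(cache, model, prompts, call_llm, max_workers=8):
    """Fan cached calls out over a thread pool; API calls are I/O-bound,
    so overlapping them cuts wall-clock time while repeats stay free."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(
            lambda p: cached_completion(cache, model, p, call_llm),
            prompts))
```

Keying on a hash of the full request means a repeated prompt never triggers a second billable call, which is where the cost savings come from.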
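For the literature step, GROBID's REST service (e.g. `POST {server}/api/processFulltextDocument` with the PDF as the multipart `input` field) returns TEI XML. The helper below is an assumed post-processing sketch, not the project's own code: it extracts body paragraphs from that TEI output using only the standard library.

```python
import xml.etree.ElementTree as ET

# GROBID emits TEI XML in this namespace.
TEI_NS = "{http://www.tei-c.org/ns/1.0}"


def extract_paragraphs(tei_xml):
    """Pull body paragraphs out of GROBID's TEI output so downstream
    steps can work with plain text instead of raw XML."""
    root = ET.fromstring(tei_xml)
    body = root.find(f".//{TEI_NS}body")
    if body is None:
        return []
    return ["".join(p.itertext()).strip()
            for p in body.iter(f"{TEI_NS}p")]
```

Working from TEI rather than raw PDF text keeps section structure, so the pipeline can feed clean paragraph units to the hypothesis-generation stage.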