About
DonkeyKong addresses the critical problem of LLM hallucination, especially when these models are tasked with tedious data collection. It implements a 'Kong in the Loop' architecture: mechanical 'Donkeys' (distributed Docker workers) handle the initial data gathering, and a local, cost-free LLM ('Kong', served via Ollama) then performs intelligent validation and adversarial review of the collected data to ensure quality and completeness. Only complex analysis or flagged low-confidence items are escalated to expensive cloud LLMs, which significantly reduces operational costs and protects the integrity of results by verifying inputs before any high-cost processing.
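
The sketch below illustrates the local-validation-then-escalate flow in Python, assuming Kong is reached through Ollama's default HTTP endpoint (`http://localhost:11434/api/generate`); the model name, prompt wording, confidence threshold, and `escalate_to_cloud` helper are illustrative assumptions, not DonkeyKong's actual interfaces.

```python
# Minimal sketch of the 'Kong in the Loop' validation step. The model name,
# prompt, threshold, and escalate_to_cloud() are assumptions for illustration.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
LOCAL_MODEL = "llama3"          # assumed local model acting as 'Kong'
CONFIDENCE_THRESHOLD = 0.7      # assumed cutoff below which items are escalated


def kong_review(item: dict) -> dict:
    """Ask the local LLM to adversarially review one collected record."""
    prompt = (
        "You are an adversarial reviewer. Check the record below for missing "
        "fields, contradictions, or hallucinated values. Reply with JSON: "
        '{"valid": bool, "confidence": float, "issues": [str]}.\n\n'
        f"Record: {json.dumps(item)}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": LOCAL_MODEL, "prompt": prompt,
              "format": "json", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])


def escalate_to_cloud(item: dict, review: dict) -> None:
    """Placeholder: only flagged or low-confidence items reach the paid cloud LLM."""
    print(f"escalating item {item.get('id')} (confidence={review['confidence']})")


def process(items: list[dict]) -> None:
    for item in items:
        review = kong_review(item)
        if review["valid"] and review["confidence"] >= CONFIDENCE_THRESHOLD:
            continue                      # cheap local pass: accept as-is
        escalate_to_cloud(item, review)   # expensive path: flagged items only


if __name__ == "__main__":
    process([{"id": 1, "title": "Example record", "price": None}])
```

Because validation happens entirely on the local model, the only per-item cloud cost is incurred for records Kong cannot confidently accept.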