Introduction
LLuna is a security-first research framework for building autonomous AI agents, with a strong emphasis on integrity and verifiable execution. It is engineered to run efficiently with local Small Language Models (SLMs) in the 4B-8B parameter range and integrates the Model Context Protocol (MCP).

The framework's core principles require agents to treat tool outputs as the sole ground truth: an agent may not hallucinate results, fake execution, or infer external state it has not observed. Determinism is prioritized over speculative helpfulness, so the agent cannot stall silently or misrepresent its actions.

Recent updates add significant UI/UX enhancements, including a dark theme, scrollable panels, and expandable tool output, along with recovery of malformed JSON so that imperfect LLM outputs are handled gracefully, further solidifying the framework's reliability for research and development.
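To illustrate what malformed JSON recovery can look like, here is a minimal, hypothetical sketch (not LLuna's actual implementation): it tries a strict parse first, then applies common repairs seen in small-model output, such as stripping Markdown code fences, isolating the outermost object, and removing trailing commas. The function name and repair order are assumptions for illustration.

```python
import json
import re


def recover_json(text: str):
    """Best-effort recovery of a JSON object from imperfect LLM output.

    Hypothetical sketch: strict parse first, then common repairs.
    Returns the parsed object, or None if the text is unrecoverable.
    """
    # 1. Strict parse: the happy path for well-formed output.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass

    # 2. Strip Markdown code fences the model may have wrapped around the JSON.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)

    # 3. Isolate the outermost {...} span, dropping surrounding prose.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        text = text[start:end + 1]

    # 4. Remove trailing commas before a closing brace or bracket.
    text = re.sub(r",\s*([}\]])", r"\1", text)

    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None  # unrecoverable; the caller must handle failure explicitly
```

Returning `None` instead of raising keeps the failure explicit at the call site, which matches the framework's stated preference for determinism over silently guessing at the model's intent.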