Muninn addresses the fundamental challenge of AI assistants forgetting context between sessions by providing a robust, local-first persistent memory engine. It acts as a shared memory layer, allowing users to seamlessly switch between various AI assistants like Claude, Gemini, and Codex without losing vital context. Built for privacy and autonomy, Muninn runs entirely on your machine with zero cloud dependencies or API calls, ensuring your data remains private. It features advanced 4-signal hybrid retrieval, neuroscience-inspired memory management, and efficient LLM-free extraction, enabling intelligent memory decay, merging, promotion, and consolidation without external services.
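The 4-signal retrieval described above blends lexical, semantic, structural, and recency evidence into one ranking score. A minimal sketch of how such a fusion might look is below; the function name, weights, half-life, and normalization are illustrative assumptions, not Muninn's actual API or formula.

```python
import time

def hybrid_score(vector_sim, bm25, graph_hops, last_access_ts,
                 now=None, weights=(0.4, 0.3, 0.2, 0.1), half_life_days=30.0):
    """Blend four retrieval signals into a single ranking score.

    Assumes vector_sim and bm25 are already normalized to [0, 1].
    All names, weights, and the half-life are illustrative, not Muninn's.
    """
    now = now if now is not None else time.time()
    # Graph signal: memories fewer hops away in the knowledge graph score higher.
    graph = 1.0 / (1.0 + graph_hops)
    # Temporal signal: exponential decay of recency with a configurable half-life.
    age_days = (now - last_access_ts) / 86400.0
    temporal = 0.5 ** (age_days / half_life_days)
    w_vec, w_bm25, w_graph, w_time = weights
    return w_vec * vector_sim + w_bm25 * bm25 + w_graph * graph + w_time * temporal
```

With this weighting, a memory that matches on every signal and was just accessed scores 1.0, and the score degrades smoothly as any single signal weakens, rather than dropping out entirely.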
Key Features
1. 4-Signal Hybrid Retrieval (Vector + BM25 + Graph + Temporal)
2. Neuroscience-Inspired Memory Hierarchy
3. 3-Tier LLM-Free Extraction Pipeline
4. Multi-Factor Importance Scoring
5. Background Memory Consolidation
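Multi-factor importance scoring of the kind listed above typically weighs how often a memory is used against how recently it was touched, so that frequently accessed but stale memories can still decay. A minimal sketch under those assumptions follows; the factors, names, and half-life are hypothetical, not Muninn's published formula.

```python
import math
import time

def importance(access_count, last_access_ts, now=None, half_life_days=14.0):
    """Toy multi-factor importance score: frequency x recency.

    Illustrative only: frequency grows logarithmically with access count,
    and recency decays exponentially with a configurable half-life.
    """
    now = now if now is not None else time.time()
    age_days = (now - last_access_ts) / 86400.0
    recency = 0.5 ** (age_days / half_life_days)   # 1.0 when fresh, halves every half_life_days
    frequency = math.log1p(access_count)           # diminishing returns on repeated access
    return frequency * recency
```

A background consolidation pass could then promote memories whose score crosses an upper threshold, merge or demote those that fall below a lower one, and leave the rest untouched.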
Use Cases
1. Providing persistent, cross-session memory for AI agents
2. Maintaining context and knowledge across multiple AI assistants (e.g., Claude, Gemini, Codex)
3. Ensuring completely local and private AI memory storage and processing