Heimdall
Provides AI coding assistants with persistent, cognitive memory of specific codebases, enabling long-term learning and recall across sessions.
About
AI coding assistants typically lack persistent memory, starting each interaction from scratch. Heimdall solves this by giving large language models (LLMs) a growing, cognitive memory tailored to your specific codebase. It indexes project documentation and the project's full git history, allowing the LLM to recall precise solutions, architectural patterns, and development insights over time. Valuable lessons and context from past sessions are retained, making the assistant significantly more effective and reducing the need to re-supply the same context each session.
Key Features
- Isolated & Organized: Provides project-isolated memory spaces to prevent context leakage.
- Automatic Updates: Automatically detects and loads new documentation, and integrates with git hooks so commits update memory in real time (see the hook sketch after this list).
- Efficient Integration: Built on the Model Context Protocol (MCP) for standardized, low-overhead LLM access (see the server sketch after this list).
- Git-Aware Context: Indexes the entire git history, capturing changes, authors, and surrounding context.
- Context-Rich Memory: Learns from documentation, session insights, and development history.
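To illustrate the git-hook integration, here is a minimal sketch of a `post-commit` hook that collects metadata for the commit that just landed and hands it to an indexer. The `index_commit` function is a hypothetical placeholder, not Heimdall's actual CLI or API; in practice the project's own hook installation would handle this step.

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/post-commit hook that forwards the latest commit
# to a memory indexer. "index_commit" is a hypothetical placeholder, not
# Heimdall's actual interface.
import subprocess

def latest_commit() -> dict:
    """Collect hash, author, and subject line of the most recent commit."""
    fmt = "%H%n%an%n%s"
    out = subprocess.run(
        ["git", "log", "-1", f"--pretty=format:{fmt}"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {"hash": out[0], "author": out[1], "message": out[2]}

def index_commit(commit: dict) -> None:
    # Placeholder: hand the commit metadata to the memory system.
    print(f"indexing {commit['hash'][:8]}: {commit['message']}")

if __name__ == "__main__":
    index_commit(latest_commit())
```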
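And to show the shape of the MCP integration, below is a hedged sketch of an MCP server exposing a memory-recall tool, written against the official Python MCP SDK (`mcp` package). The tool name `recall_memory`, its parameters, and the naive keyword lookup are illustrative assumptions only, not Heimdall's actual implementation.

```python
# Minimal sketch of an MCP server exposing a memory-recall tool.
# The tool name, parameters, and in-memory store are assumptions for
# illustration; they are not Heimdall's real API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("heimdall-sketch")

# Hypothetical store of indexed notes (docs excerpts, commit summaries, ...).
MEMORY: list[str] = [
    "2024-03-01: switched auth middleware to JWT; see docs/auth.md",
    "commit a1b2c3d: fixed race condition in job queue by adding a lock",
]

@mcp.tool()
def recall_memory(query: str, limit: int = 3) -> list[str]:
    """Return stored notes that mention any of the query terms."""
    terms = query.lower().split()
    hits = [note for note in MEMORY if any(t in note.lower() for t in terms)]
    return hits[:limit]

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g. Claude Code) can attach.
    mcp.run()
```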
Use Cases
- Maintaining a dynamic, up-to-date knowledge base derived from a project's codebase and development history for AI interaction.
- Improving an LLM's ability to provide context-aware suggestions and solutions based on a project's history and documentation.
- Enhancing AI coding assistants (e.g., Claude Code) with long-term memory for specific projects.