MARM
Provides protocol-based memory management for AI agents to improve response accuracy, reduce drift, and stabilize context over long sessions.
Overview
MARM (Memory Accurate Response Mode) is a comprehensive AI memory ecosystem designed to combat context loss, hallucinations, and conversational drift in large language models. It offers a persistent, intelligent, and cross-platform memory solution for any AI agent. The ecosystem comprises the MARM Protocol for structured AI interaction, the MARM Universal MCP Server providing a stateful backend with semantic search and session management, and the MARM Chatbot for direct web-based interaction. It enables a more consistent and user-controlled LLM experience by integrating a complete protocol layer with intelligent memory systems, semantic search, and structured reasoning.
Key Features
- AI-powered semantic search across all memories using embeddings
- Cross-session memory persistence across different AI agent conversations
- Unified memory layer accessible by any connected LLM (Claude, Qwen, Gemini, etc.)
- Comprehensive suite of 19 Model Context Protocol (MCP) tools for memory, logging, and workflow
- Docker-ready deployment with custom SQLite connection pooling and WAL mode for performance
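The embedding-based semantic search above can be illustrated with a minimal sketch: memories are stored as vectors, and a query vector is ranked against them by cosine similarity. The vectors, store layout, and `semantic_search` helper here are illustrative assumptions, not MARM's actual API or embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy memory store: id -> precomputed embedding (real systems use an
# embedding model; 3-d vectors here are placeholders for readability).
memories = {
    "note-1": [0.9, 0.1, 0.0],
    "note-2": [0.1, 0.9, 0.2],
}

def semantic_search(query_vec, store, top_k=1):
    """Return the ids of the top_k most similar memories."""
    ranked = sorted(
        store.items(),
        key=lambda kv: cosine_similarity(query_vec, kv[1]),
        reverse=True,
    )
    return [mem_id for mem_id, _ in ranked[:top_k]]
```

A query embedding close to `note-1`'s vector (e.g. `[1.0, 0.0, 0.0]`) would rank `note-1` first; production systems replace the linear scan with an indexed vector store, but the ranking principle is the same.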
Use Cases
- Anchoring AI responses to the user's own logic and data to prevent hallucinations and improve accountability
- Maintaining accurate context and user-specific information across long AI conversations and multiple sessions
- Streamlining AI agent workflows by providing structured memory and smart recall for project notes and decisions