MemFabric provides a self-organizing memory system in which Large Language Models (LLMs) manage their own knowledge through plain markdown files. Instead of relying on embedding pipelines or vector databases, the LLM reads descriptive filenames directly to decide which information is relevant, and stores new memories by creating, updating, or reorganizing those files: merging, splitting, renaming, and synthesizing content as its knowledge evolves. Because the model itself drives this organization, the memory structure improves continuously, and performance scales automatically with the capability of the connected LLM, making MemFabric an efficient, adaptable foundation for persistent AI memory.
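The store-and-retrieve loop described above can be sketched in a few lines. This is a minimal illustration, not MemFabric's actual implementation: the directory layout, function names, and the keyword matcher standing in for the LLM's filename-reading step are all assumptions for the example.

```python
from pathlib import Path

# Hypothetical memory directory; MemFabric's real layout may differ.
MEMORY_DIR = Path("memory")

def store_memory(topic: str, content: str) -> Path:
    """Persist a memory as a markdown file with a descriptive filename."""
    MEMORY_DIR.mkdir(exist_ok=True)
    path = MEMORY_DIR / f"{topic.lower().replace(' ', '-')}.md"
    path.write_text(content, encoding="utf-8")
    return path

def relevant_files(query: str) -> list[Path]:
    """Stand-in for the LLM step: the model would scan the filename list
    and decide which files to open. Here, a naive keyword overlap."""
    words = set(query.lower().split())
    return [p for p in MEMORY_DIR.glob("*.md")
            if words & set(p.stem.split("-"))]

store_memory("user coffee preferences", "Prefers oat-milk flat whites.")
store_memory("project deadlines", "Launch slated for Q3.")
print([p.name for p in relevant_files("what coffee does the user like")])
# → ['user-coffee-preferences.md']
```

In the real system, the keyword match is replaced by the connected LLM reasoning over the filename list, which is why retrieval quality tracks model capability.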
Key Features
- No embeddings or vector database required
- Performance scales directly with the intelligence of the connected LLM
- LLM-driven self-organization of memory files
- Stores knowledge as plain markdown files with descriptive filenames
- Memory structure adapts and improves over time through LLM reorganization
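The reorganization feature above (merging, splitting, renaming) can be sketched as a file operation the LLM proposes and the system carries out. The function name, file names, and merge behavior here are illustrative assumptions, not MemFabric's API.

```python
import tempfile
from pathlib import Path

def merge_memories(memory_dir: Path, sources: list[str], merged_name: str) -> Path:
    """Carry out an LLM-proposed reorganization: concatenate several
    memory files into one synthesized file and delete the originals."""
    merged = memory_dir / merged_name
    merged.write_text(
        "\n\n".join((memory_dir / s).read_text(encoding="utf-8") for s in sources),
        encoding="utf-8",
    )
    for s in sources:
        (memory_dir / s).unlink()
    return merged

# Example: two monthly notes files are merged under a new descriptive name.
memory_dir = Path(tempfile.mkdtemp())
(memory_dir / "meeting-notes-jan.md").write_text("Discussed roadmap.", encoding="utf-8")
(memory_dir / "meeting-notes-feb.md").write_text("Approved budget.", encoding="utf-8")
merged = merge_memories(
    memory_dir,
    ["meeting-notes-jan.md", "meeting-notes-feb.md"],
    "meeting-notes-2024.md",
)
print(merged.name)  # → meeting-notes-2024.md
```

Because every memory is an ordinary markdown file, such reorganizations need no index rebuild: the new filename is immediately visible to the LLM on its next retrieval pass.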
Use Cases
- Provide persistent memory for AI chatbots (Claude, ChatGPT, Gemini) across conversations and providers
- Equip open-source computer-use agents with long-term context about users and environments
- Provide a shared memory and namespace for multiple AI agents