- Uses context stuffing instead of embedding-based retrieval for memory, leveraging large LLM context windows.
- Stores all memories in inspectable, version-controllable YAML and Markdown files.
- An intelligent `remember` function categorizes and stores diverse information (facts, feedback, projects, references).
- A comprehensive `recall` function synthesizes answers from all stored memories, with sources and confidence.
- Supports any OpenAI-compatible LLM provider for flexible, cost-effective operation.
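The context-stuffing approach above can be sketched roughly as follows. This is a minimal illustration, not the project's actual API: the function names (`remember`, `build_recall_prompt`), the `memories/` directory layout, and the per-category Markdown files are all assumptions for the example. Instead of embedding and ranking memories, every stored file is concatenated into the prompt wholesale, relying on the model's context window to hold everything.

```python
from pathlib import Path
import datetime

MEMORY_DIR = Path("memories")  # assumed layout: one Markdown file per category


def remember(category: str, text: str, memory_dir: Path = MEMORY_DIR) -> None:
    """Append a timestamped entry to a per-category Markdown file."""
    memory_dir.mkdir(exist_ok=True)
    entry = f"- {datetime.date.today().isoformat()}: {text}\n"
    with open(memory_dir / f"{category}.md", "a", encoding="utf-8") as f:
        f.write(entry)


def build_recall_prompt(question: str, memory_dir: Path = MEMORY_DIR) -> str:
    """Stuff every stored memory file into one prompt (no embedding search)."""
    sections = []
    for path in sorted(memory_dir.glob("*.md")):
        sections.append(f"## {path.stem}\n{path.read_text(encoding='utf-8')}")
    context = "\n".join(sections)
    return (
        "Answer the question using only the memories below. "
        "Cite the source section and state your confidence.\n\n"
        f"{context}\nQuestion: {question}"
    )
```

The resulting prompt would then be sent to any OpenAI-compatible chat-completion endpoint; because the files are plain Markdown, the memory store stays diffable and reviewable in version control.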