Empower Large Language Model (LLM) clients with resilient, adaptive, and persistent long-term ontological memory through a Neo4j-backed knowledge graph server. The server offers semantic retrieval, contextual recall, and temporal awareness, giving LLMs an evolving, historically accurate view of their knowledge. Entities are primary nodes carrying observations, vector embeddings, and full version history; relations connect them with rich metadata, strength, and confidence levels, enabling advanced capabilities such as semantic search and time-based confidence decay.
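The entity/relation model described above can be sketched in Python. This is a minimal illustration of the concepts (entities with observations, embeddings, and version history; relations with strength and confidence); the class and field names are assumptions for illustration, not the server's actual schema:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Entity:
    # Primary node: observations, a vector embedding, and version history.
    name: str
    observations: list[str] = field(default_factory=list)
    embedding: list[float] = field(default_factory=list)
    versions: list[list[str]] = field(default_factory=list)  # prior observation sets

    def observe(self, text: str) -> None:
        # Snapshot the current observations before mutating, so earlier
        # states remain queryable (the basis of point-in-time queries).
        self.versions.append(list(self.observations))
        self.observations.append(text)

@dataclass
class Relation:
    # Edge with rich metadata: a type plus strength and confidence levels.
    source: str
    target: str
    kind: str
    strength: float = 1.0
    confidence: float = 1.0

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Semantic search compares a query embedding against entity
    # embeddings; cosine similarity is the usual scoring function.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

In the real server these structures live in Neo4j (nodes, relationships, and a vector index) rather than in Python objects; the sketch only shows the shape of the data.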
Key Features
1. Unified Neo4j Storage for Graph and Vector Data
2. Configurable Confidence Decay for Relations
3. Semantic Search with Vector Embeddings
4. Temporal Versioning and Point-in-Time Queries
5. High-Performance Batch Operations
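Time-based confidence decay, as listed above, is commonly modeled as exponential decay: a relation's confidence halves every fixed half-life unless it is reinforced. A minimal sketch of the idea (the function name and the half-life parameter are illustrative assumptions, not the server's actual configuration keys):

```python
def decayed_confidence(initial: float, age_days: float,
                       half_life_days: float = 30.0) -> float:
    """Exponentially decay a relation's confidence as it ages.

    A fresh relation (age 0) keeps its initial confidence; after one
    half-life the confidence has halved, after two it has quartered.
    """
    return initial * 0.5 ** (age_days / half_life_days)
```

For example, with a 30-day half-life, a relation created with confidence 0.8 thirty days ago would score 0.4 today, letting stale facts fade without being deleted outright.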
Use Cases
1. Building scalable and temporally aware knowledge graphs for AI applications
2. Enabling semantic retrieval and contextual recall for AI models
3. Providing long-term ontological memory for LLM clients (e.g., Claude Desktop, Cursor, GitHub Copilot)