
Graphiti Pro

Provides an enhanced memory repository service and management platform for building and querying temporally aware knowledge graphs tailored for AI agents.

About

Graphiti Pro builds on the foundational Graphiti framework, offering an advanced memory repository service and a comprehensive management platform for AI agents operating in dynamic environments. Unlike traditional RAG methods, Graphiti continuously integrates user interactions, structured and unstructured data, and external information into a coherent, queryable knowledge graph.

The 'Pro' version significantly enhances the core MCP service with asynchronous parallel processing for memory additions, robust task management tools, and unified configuration. It also broadens AI model compatibility, supporting a range of OpenAI API-compatible LLMs and local models, alongside flexible, separated model configurations for specialized tasks.

A complete web-based management interface rounds out the offering with service control, real-time configuration updates, usage monitoring, and log viewing, making Graphiti Pro well suited to developing interactive, context-aware AI applications.
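As a concrete sketch of the interaction model, the snippet below calls the service's `add_memory` tool through the MCP Python SDK. The endpoint URL and argument names are assumptions modeled on the upstream Graphiti MCP server and may differ in a given deployment:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Assumption: the server exposes an SSE endpoint at this URL.
SERVER_URL = "http://localhost:8000/sse"

async def main() -> None:
    async with sse_client(SERVER_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Queue an episode for ingestion. Graphiti Pro processes
            # add_memory calls asynchronously, so this returns quickly.
            # Argument names follow the upstream Graphiti MCP server.
            result = await session.call_tool(
                "add_memory",
                arguments={
                    "name": "user_preferences",
                    "episode_body": "The user prefers vegetarian restaurants.",
                    "group_id": "user-123",
                    "source": "text",
                    "source_description": "chat message",
                },
            )
            print(result.content)

asyncio.run(main())
```

Because ingestion is queued rather than performed inline, the call returns as soon as the task is accepted; tasks sharing a `group_id` are then processed with up to five in flight at a time.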

Key Features

  • Asynchronous parallel processing for efficient memory additions, handling up to 5 tasks concurrently per group ID.
  • Dedicated task management tools to list, check the status of, wait for, and cancel `add_memory` operations (see the task-tracking sketch after this list).
  • Broad AI model compatibility, supporting DeepSeek, Qwen, and models served through Ollama, vLLM, or any other OpenAI API-compatible endpoint, via the instructor library.
  • Flexible, separated configuration for Large Language Models (LLM), Small Language Models (Small LLM), and embedding models (see the configuration sketch after this list).
  • Integrated web-based management platform with service control, real-time configuration, detailed token usage monitoring, and log viewing.
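The task tools make the asynchronous ingestion pipeline observable. Below is a rough sketch of the intended flow, reusing an initialized `ClientSession` like the one in the earlier example; the tool names `list_tasks`, `get_task_status`, `wait_for_task`, and `cancel_task` are hypothetical placeholders, so check the server's actual tool listing:

```python
import asyncio

from mcp import ClientSession

# Tool names below are hypothetical placeholders; the server's real
# task-management tools may be named differently.

async def track_memory_task(session: ClientSession, task_id: str) -> None:
    # Enumerate queued and running add_memory tasks for a group.
    tasks = await session.call_tool(
        "list_tasks", arguments={"group_id": "user-123"}
    )
    print(tasks.content)

    # Check a single task's status without blocking.
    status = await session.call_tool(
        "get_task_status", arguments={"task_id": task_id}
    )
    print(status.content)

    # Or block until the task finishes, cancelling it on timeout.
    try:
        await asyncio.wait_for(
            session.call_tool("wait_for_task", arguments={"task_id": task_id}),
            timeout=60,
        )
    except asyncio.TimeoutError:
        await session.call_tool("cancel_task", arguments={"task_id": task_id})
```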
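Separated model configuration means the primary LLM, the small LLM used for cheaper auxiliary work, and the embedder can each point at a different provider. The environment variables below are an illustrative guess at the kind of setup involved; the names echo common Graphiti conventions such as `MODEL_NAME` and `SMALL_MODEL_NAME` but are not confirmed for this project:

```python
import os

# Hypothetical configuration sketch: each model role gets its own
# endpoint, key, and model name, so e.g. DeepSeek can handle extraction
# while a local Ollama model serves embeddings.

# Primary LLM for entity and relationship extraction.
os.environ["OPENAI_BASE_URL"] = "https://api.deepseek.com/v1"
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key
os.environ["MODEL_NAME"] = "deepseek-chat"

# Small LLM for lightweight tasks (summaries, deduplication hints).
os.environ["SMALL_MODEL_NAME"] = "qwen2.5:7b"

# Embedding model served locally via an OpenAI-compatible Ollama endpoint.
os.environ["EMBEDDER_BASE_URL"] = "http://localhost:11434/v1"
os.environ["EMBEDDER_MODEL_NAME"] = "nomic-embed-text"
```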

Use Cases

  • Developing and scaling interactive, context-aware AI applications that require dynamic, temporally aware knowledge graphs.
  • Enhancing AI agent capabilities by efficiently integrating diverse data sources into a robust memory repository.
  • Managing and monitoring the performance and configuration of AI memory services across various LLMs and embedding models, including local deployments.
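For the querying side of these use cases, a retrieval call might look like the following. The tool name `search_memory_facts` and its argument names follow the upstream Graphiti MCP server's conventions and are assumptions here:

```python
from mcp import ClientSession

# Reusing an initialized ClientSession `session` as in the earlier example.
# Tool and argument names are assumptions based on the upstream
# Graphiti MCP server.

async def find_facts(session: ClientSession) -> None:
    result = await session.call_tool(
        "search_memory_facts",
        arguments={
            "query": "What restaurants does the user prefer?",
            "group_ids": ["user-123"],
            "max_facts": 5,
        },
    )
    # Each returned fact is an edge from the knowledge graph, annotated
    # with the time interval over which it was valid.
    print(result.content)
```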