- Memory-mapped architecture for true parallel processing and shared data across workers
- AI-native Model Context Protocol (MCP) server for LLM integration (Claude, GPT, etc.)
- Powerful data model supporting standoff annotation, graph traversal, and arbitrary feature annotations
- Advanced structural pattern search capabilities for complex textual queries
- Dramatic performance improvements: 3.5x faster loads and significantly lower memory consumption
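The memory-mapped design mentioned above can be illustrated in miniature: when a data file is memory-mapped read-only, worker processes share the same OS page cache instead of each holding a private copy. This is a generic sketch of the technique, not this project's implementation; the file name, record layout, and helper function are all hypothetical.

```python
import mmap
import os
import struct
from multiprocessing import Pool

# Hypothetical demo file: a flat array of little-endian uint32 token ids.
PATH = "tokens.bin"
RECORD = struct.Struct("<I")

def count_matches(args):
    """Count occurrences of `target` among records [start, end)."""
    start, end, target = args
    # Each worker maps the file itself; the pages are shared via the OS,
    # so memory use stays flat no matter how many workers run.
    with open(PATH, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        hits = 0
        for off in range(start * RECORD.size, end * RECORD.size, RECORD.size):
            (tok,) = RECORD.unpack_from(mm, off)
            if tok == target:
                hits += 1
        mm.close()
        return hits

if __name__ == "__main__":
    # Build a small demo file: ids cycle 0..99, so id 7 appears 100 times.
    n = 10_000
    with open(PATH, "wb") as f:
        for i in range(n):
            f.write(RECORD.pack(i % 100))
    chunks = [(i, min(i + 2_500, n), 7) for i in range(0, n, 2_500)]
    with Pool(4) as pool:
        total = sum(pool.map(count_matches, chunks))
    print(total)  # 100
    os.remove(PATH)
```

Because the mapping is read-only, no locking is needed; each worker scans its own slice of the shared file independently.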
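Standoff annotation, as listed among the data-model features, means annotations reference the text by character offsets rather than being embedded in it, so the source text stays pristine and annotations can overlap freely and carry arbitrary features. A minimal sketch of the idea (the class and method names here are illustrative, not this project's API):

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    begin: int                  # inclusive start offset into the text
    end: int                    # exclusive end offset
    features: dict = field(default_factory=dict)  # arbitrary key/value features

@dataclass
class Document:
    text: str
    annotations: list = field(default_factory=list)

    def annotate(self, begin, end, **features):
        # The text itself is never modified; we only record offsets.
        ann = Annotation(begin, end, features)
        self.annotations.append(ann)
        return ann

    def covered_text(self, ann):
        # Resolve an annotation back to the span of text it points at.
        return self.text[ann.begin:ann.end]

doc = Document("Hello world")
ann = doc.annotate(6, 11, pos="NOUN", lang="en")
print(doc.covered_text(ann))  # world
```

Overlapping spans (e.g. a token annotation inside an entity annotation) pose no problem, because each annotation is just an offset pair plus features.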