- Triple interface: a Skill file for zero-config agent adoption, a CLI for direct shell use, and an MCP server for protocol-native clients.
- Flexible LLM provider configuration via YAML and environment variables, supporting Ollama and any OpenAI-compatible provider.
- Built with hexagonal architecture and dependency injection for maintainability and testability.
- Two core distillation operations: `distill_batch` for compressing a full output and `distill_watch` for identifying relevant deltas between snapshots.
- Interactive terminal UI (`--config-ui`) for simplified first-time setup and configuration.
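To illustrate the YAML-plus-environment-variable configuration, here is a sketch of what such a provider block might look like. The key names and structure are hypothetical, not the project's actual schema:

```yaml
# Hypothetical configuration shape; actual keys may differ.
provider:
  type: openai-compatible   # or: ollama
  base_url: ${LLM_BASE_URL} # overridable via environment variable
  model: llama3.1
  api_key: ${LLM_API_KEY}   # kept out of the file itself
```

Environment-variable interpolation like this keeps secrets out of version control while the YAML file carries the stable defaults.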
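The hexagonal-architecture point can be sketched as a port (an interface the core depends on) and a swappable adapter injected at construction time. All names here (`LLMProvider`, `Distiller`, `FakeProvider`) are illustrative, not the project's real API:

```python
from typing import Protocol

# Port: the core logic depends only on this interface, never on a concrete backend.
class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

# Core service: receives its provider via constructor injection.
class Distiller:
    def __init__(self, provider: LLMProvider) -> None:
        self.provider = provider

    def distill(self, text: str) -> str:
        return self.provider.complete(f"Summarize: {text}")

# Adapter: a stub provider; an Ollama or OpenAI-compatible adapter
# would implement the same interface.
class FakeProvider:
    def complete(self, prompt: str) -> str:
        return f"[distilled] {prompt}"

result = Distiller(FakeProvider()).distill("long build log")
```

Because the core never imports a concrete provider, tests can inject a fake and production code can inject a real backend without touching the distillation logic.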
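The `distill_watch` idea of extracting relevant deltas between snapshots can be approximated with a line diff. This is a minimal sketch of the concept, not the tool's implementation; `watch_delta` is a hypothetical name:

```python
import difflib

def watch_delta(old: str, new: str) -> list[str]:
    """Return only the lines added since the previous snapshot --
    a rough stand-in for the delta a watch-style distillation might
    hand to the LLM instead of the full output."""
    diff = difflib.ndiff(old.splitlines(), new.splitlines())
    return [line[2:] for line in diff if line.startswith("+ ")]

old = "step 1 ok\nstep 2 ok"
new = "step 1 ok\nstep 2 ok\nstep 3 FAILED"
print(watch_delta(old, new))  # → ['step 3 FAILED']
```

Feeding only the delta rather than the whole snapshot is what keeps repeated polling of a long-running process cheap in tokens.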