Track and manage AI agent workflow costs locally, providing offline token counting, budget alerts, and model routing suggestions without proxying API traffic.
This tool provides local-first cost awareness for AI agent workflows: it counts tokens offline with js-tiktoken, so no spend data ever leaves your machine. Unlike proxy-based trackers, it stores all cost metrics locally in SQLite, removing any dependency on third-party servers and keeping data residency under your control. Detailed cost breakdowns, budget alerts, and model routing suggestions help you optimize spending and switch to cheaper models before invoices arrive, with multi-provider pricing for Claude, OpenAI, and Gemini. That makes it a good fit for sensitive data and for teams that want full control over their cost-tracking infrastructure.
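To make the pricing model concrete, here is a minimal sketch of estimating per-call cost from locally counted tokens. The per-million-token prices and model names below are illustrative placeholders, not the tool's actual pricing table, and the token counts are taken as plain numbers rather than produced by js-tiktoken so the sketch stays dependency-free:

```typescript
// Illustrative per-million-token prices in USD (placeholder figures,
// not the tool's real pricing data).
const PRICES_PER_MTOK: Record<string, { input: number; output: number }> = {
  "claude-sonnet": { input: 3.0, output: 15.0 },
  "gpt-4o": { input: 2.5, output: 10.0 },
  "gemini-pro": { input: 1.25, output: 5.0 },
};

// Estimate spend for one call from token counts. In the real tool the
// counts would come from offline tokenization with js-tiktoken.
function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const p = PRICES_PER_MTOK[model];
  if (!p) throw new Error(`no pricing entry for ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

Because every number here is computed locally, the estimate is available before any invoice arrives from the provider.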
Key Features
1. Heuristic model routing suggestions by task type
2. Per-tool cost breakdown
3. Budget alerts for session spend thresholds
4. Offline token counting via js-tiktoken
5. Multi-provider pricing for Claude, OpenAI, and Gemini
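Two of the features above, heuristic routing and budget alerts, can be sketched in a few lines. The task types, model mapping, and threshold logic here are hypothetical illustrations of the idea, not the tool's actual heuristics:

```typescript
type TaskType = "classification" | "summarization" | "code-generation";

// Hypothetical heuristic: route well-bounded tasks to cheaper models and
// reserve larger models for open-ended work. The mapping is illustrative.
function suggestModel(task: TaskType): string {
  switch (task) {
    case "classification":
      return "gemini-pro"; // cheapest placeholder option
    case "summarization":
      return "gpt-4o";
    case "code-generation":
      return "claude-sonnet";
  }
}

// Budget alert: return a message once session spend crosses a
// user-configured threshold, or null while under budget.
function checkBudget(sessionSpendUSD: number, thresholdUSD: number): string | null {
  return sessionSpendUSD >= thresholdUSD
    ? `Session spend $${sessionSpendUSD.toFixed(2)} exceeded budget $${thresholdUSD.toFixed(2)}`
    : null;
}
```

A caller would run `checkBudget` after each cost estimate and surface the returned message to the user when it is non-null.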
Use Cases
1. Optimize and manage costs for AI agent workflows locally.
2. Track token usage and spend across multiple LLM providers (Claude, OpenAI, Gemini).
3. Generate cost reports and allocate spend to projects for internal billing without sending data to third-party proxies.
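The internal-billing use case boils down to grouping locally stored spend records by project. A minimal sketch, assuming a simple record shape (in the tool itself these rows would come from the local SQLite store; here they are plain objects to keep the example self-contained):

```typescript
// Hypothetical row shape for a single recorded API call's cost.
interface SpendRecord {
  project: string;
  model: string;
  costUSD: number;
}

// Aggregate spend by project for internal chargeback reports.
function spendByProject(records: SpendRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r.project, (totals.get(r.project) ?? 0) + r.costUSD);
  }
  return totals;
}
```

Since the aggregation runs entirely on local data, the report never touches a third-party proxy.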