Optimizes token usage for Claude Code and Claude Desktop through intelligent caching and Brotli compression, significantly reducing LLM context window costs.
Token Optimizer is a Model Context Protocol (MCP) server designed to dramatically cut token consumption when working with Claude Code and Claude Desktop. It does this by combining Brotli compression, a persistent SQLite-backed cache, and accurate token counting via tiktoken (the GPT-4 tokenizer). The server provides a suite of tools for compressing and caching text, retrieving optimized content, analyzing optimization potential, and monitoring cache statistics, enabling token reductions of up to 95% or more for more efficient LLM interactions.
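The core idea, compress text losslessly and measure how much smaller the cached payload is, can be sketched as follows. This is a minimal illustration rather than the server's actual code: Python's stdlib `zlib` stands in for Brotli, and a rough 4-characters-per-token heuristic stands in for tiktoken; both substitutions are assumptions for the sake of a self-contained example.

```python
import zlib


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per GPT-4 token.
    # The real server uses tiktoken for exact counts.
    return max(1, len(text) // 4)


def analyze_savings(text: str) -> dict:
    raw = text.encode("utf-8")
    # zlib stands in for Brotli here; both are lossless,
    # so the original text is fully recoverable.
    compressed = zlib.compress(raw, level=9)
    return {
        "original_tokens": estimate_tokens(text),
        "original_bytes": len(raw),
        "compressed_bytes": len(compressed),
        "compression_ratio": round(len(compressed) / len(raw), 3),
    }


if __name__ == "__main__":
    sample = "def greet(name):\n    return f'Hello, {name}!'\n" * 200
    print(analyze_savings(sample))
```

Repetitive content such as boilerplate code or documentation compresses especially well, which is where the largest savings come from.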
Key Features
1. Zero external dependencies for completely offline operation
2. Persistent SQLite-based caching across sessions
3. Token-efficient compression using Brotli
4. Accurate token counting with tiktoken (GPT-4 tokenizer)
5. Comprehensive compression analysis and optimization recommendations
Use Cases
1. Reduce token costs when passing large text content to Claude Code or Claude Desktop
2. Cache frequently used code snippets or documentation for instant retrieval and fewer LLM calls
3. Analyze text to understand potential token savings through compression before submitting it to an LLM