About
Proxy is designed to make large language model tool use significantly more efficient. Acting as an intelligent intermediary, it reduces context window consumption by approximately 95%: instead of exposing a multitude of individual tools, it presents LLMs such as Claude with just two meta-tools, one to navigate a hierarchical tool tree and another to execute any discovered tool. This "progressive disclosure" model not only slashes token usage but also avoids cold-start latency by preloading tools in the background, making tool interactions faster and more cost-effective.
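As a rough illustration of the two-meta-tool pattern, the sketch below models a hierarchical tool tree behind `navigate` and `execute` calls. The names (`ToolNode`, `navigate`, `execute`) and the tree layout are illustrative assumptions, not Proxy's actual API.

```python
# Minimal sketch of the progressive-disclosure pattern described above.
# ToolNode, navigate, and execute are hypothetical names, not Proxy's real API.
from dataclasses import dataclass, field
from typing import Any, Callable, Optional

@dataclass
class ToolNode:
    """A node in the tool tree: either a category (has children) or a leaf tool (has a handler)."""
    name: str
    description: str = ""
    children: dict = field(default_factory=dict)
    handler: Optional[Callable[..., Any]] = None  # set only on leaf tools

# Example tree: many concrete tools are grouped under a few top-level categories.
root = ToolNode("root", children={
    "files": ToolNode("files", "File operations", children={
        "read": ToolNode("read", "Read a file", handler=lambda path: open(path).read()),
    }),
})

def _lookup(path: str) -> ToolNode:
    """Walk the tree along a slash-separated path such as 'files/read'."""
    node = root
    for part in filter(None, path.split("/")):
        node = node.children[part]
    return node

def navigate(path: str = "") -> list:
    """Meta-tool 1: list the entries under a path, so the model can discover tools on demand."""
    return [
        {"name": c.name, "description": c.description, "is_tool": c.handler is not None}
        for c in _lookup(path).children.values()
    ]

def execute(path: str, **kwargs) -> Any:
    """Meta-tool 2: run the leaf tool found at the given path."""
    node = _lookup(path)
    if node.handler is None:
        raise ValueError(f"{path} is a category, not an executable tool")
    return node.handler(**kwargs)

# The model only ever sees `navigate` and `execute`; individual tool
# definitions are revealed progressively as it descends the tree.
print(navigate())         # -> [{'name': 'files', ...}]
print(navigate("files"))  # -> [{'name': 'read', ...}]
```

Because only the two meta-tool schemas occupy the prompt, the per-tool definitions stay out of the context window until the model actually asks for them, which is where the token savings come from.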