About
The Cache Cost Tracking skill provides a framework for monitoring LLM spend and evaluating cache performance in AI applications. Through its Langfuse integration, developers can track costs across multiple cache layers, attribute spending to individual agents in multi-agent workflows, and calculate the real financial savings produced by prompt and response caching. The skill targets production systems where token usage must be optimized and detailed usage reporting is needed for budget management.
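As a rough illustration of the savings calculation and per-agent attribution, the sketch below assumes the Langfuse Python SDK with its v2-style `langfuse.trace()` / `trace.generation()` calls; the `cache_savings` helper, per-token prices, model name, and token counts are hypothetical placeholders rather than part of the skill itself.

```python
from langfuse import Langfuse

# Illustrative per-token prices in USD; real prices depend on the provider and model.
PRICE_INPUT = 3.00 / 1_000_000         # uncached input tokens
PRICE_CACHED_INPUT = 0.30 / 1_000_000  # cache-read input tokens (assumed 90% discount)
PRICE_OUTPUT = 15.00 / 1_000_000       # output tokens


def cache_savings(input_tokens: int, cached_tokens: int, output_tokens: int) -> dict:
    """Compare the actual cost of a call against what it would cost with no cache hits."""
    actual = (
        (input_tokens - cached_tokens) * PRICE_INPUT
        + cached_tokens * PRICE_CACHED_INPUT
        + output_tokens * PRICE_OUTPUT
    )
    uncached = input_tokens * PRICE_INPUT + output_tokens * PRICE_OUTPUT
    return {"actual_cost": actual, "uncached_cost": uncached, "savings": uncached - actual}


langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment

# Attribute the call to a specific agent in a multi-agent workflow via trace metadata.
trace = langfuse.trace(name="research-task", metadata={"agent": "planner"})

# Token counts would normally come from the LLM provider's response; these are made up.
savings = cache_savings(input_tokens=12_000, cached_tokens=10_000, output_tokens=800)

trace.generation(
    name="planner-llm-call",
    model="claude-sonnet",  # illustrative model name
    usage={"input": 12_000, "output": 800},
    metadata={"cache_layer": "prompt", **savings},
)
langfuse.flush()  # make sure buffered events are sent before the process exits
```

The design choice here is to record the computed savings as generation metadata so that per-agent and per-cache-layer spend can be filtered and aggregated in the Langfuse UI; exact field names and aggregation conventions will vary by setup.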