Tracks the inputs and outputs of interactions with large language models, offering insight into application-model communication.
LLM Tracker is a lightweight tool for monitoring the inputs and outputs exchanged between applications and large language models. Built on Traefik, it captures the communication flow that client frameworks often obscure. It currently supports models served by Ollama and DeepSeek, as well as any endpoint compatible with the OpenAI SDK. Features include masking of the tools list in requests, support for both streaming and non-streaming responses, compatibility with MCP, function_call, and chat modes, and escaping of special characters in inputs and outputs for improved readability.
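To illustrate how a Traefik-based interceptor of this kind can work, the sketch below shows a minimal middleware plugin in Go. It is not the project's actual implementation: the package name `llmtracker`, the `MaskTools` option, the `maskTools` helper, and the log format are assumptions made for the example; only the overall shape (Traefik's `CreateConfig`/`New` plugin constructors and a wrapped `ResponseWriter`) follows the standard Traefik middleware-plugin pattern.

```go
package llmtracker

import (
	"bytes"
	"context"
	"encoding/json"
	"io"
	"log"
	"net/http"
)

// Config holds the plugin options (illustrative only).
type Config struct {
	MaskTools bool `json:"maskTools,omitempty"`
}

// CreateConfig returns the default configuration, as required by Traefik plugins.
func CreateConfig() *Config {
	return &Config{MaskTools: true}
}

// Tracker is the middleware that records LLM requests and responses.
type Tracker struct {
	next http.Handler
	name string
	cfg  *Config
}

// New builds the middleware, following Traefik's plugin constructor signature.
func New(ctx context.Context, next http.Handler, cfg *Config, name string) (http.Handler, error) {
	return &Tracker{next: next, name: name, cfg: cfg}, nil
}

func (t *Tracker) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	// Read the request body, then restore it so the upstream model still receives it.
	if body, err := io.ReadAll(req.Body); err == nil {
		req.Body = io.NopCloser(bytes.NewReader(body))
		logged := body
		if t.cfg.MaskTools {
			logged = maskTools(body) // hide the (often very long) tools list
		}
		log.Printf("[llm-tracker] request: %s", logged)
	}

	// Wrap the ResponseWriter so streamed and non-streamed responses are both captured.
	rec := &capture{ResponseWriter: rw}
	t.next.ServeHTTP(rec, req)
	log.Printf("[llm-tracker] response: %s", rec.buf.String())
}

// capture tees response bytes into a buffer while still writing them to the client.
type capture struct {
	http.ResponseWriter
	buf bytes.Buffer
}

func (c *capture) Write(p []byte) (int, error) {
	c.buf.Write(p)
	return c.ResponseWriter.Write(p)
}

// Flush keeps server-sent-event streaming working through the wrapper.
func (c *capture) Flush() {
	if f, ok := c.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
}

// maskTools replaces the "tools" array in an OpenAI-style chat request with a placeholder.
func maskTools(body []byte) []byte {
	var m map[string]any
	if json.Unmarshal(body, &m) != nil {
		return body
	}
	if _, ok := m["tools"]; ok {
		m["tools"] = "[masked]"
	}
	out, err := json.Marshal(m)
	if err != nil {
		return body
	}
	return out
}
```

Capturing the response through a wrapping `ResponseWriter` that also forwards `Flush` is what lets one code path handle both streaming (chunked/SSE) and non-streaming responses without interrupting the stream to the client.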