Performance bottleneck identification using latency and token metrics
Flexible output formatting including pretty-print and raw JSON for automation
Support for OpenInference semantic attributes for standardized observability
Deep trace fetching and span analysis for complex LLM workflows
Experiment result review and dataset exploration for AI benchmarks
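As a rough illustration of the first capability above, bottleneck identification can be done by ranking a trace's spans by latency and relating that to token counts. This is a minimal sketch, not this tool's API: the span dictionaries, timestamp fields, and the OpenInference-style `llm.token_count.total` attribute key are illustrative assumptions.

```python
# Sketch: find the slowest span in a trace and report its latency and
# throughput. Span shape and attribute keys are assumed, not a real API.

def find_bottleneck(spans):
    """Return summary info for the span with the highest latency."""
    def latency_ms(span):
        return span["end_time_ms"] - span["start_time_ms"]

    slowest = max(spans, key=latency_ms)
    # OpenInference-style token attribute; absent on non-LLM spans.
    tokens = slowest.get("attributes", {}).get("llm.token_count.total", 0)
    ms = latency_ms(slowest)
    return {
        "name": slowest["name"],
        "latency_ms": ms,
        "tokens": tokens,
        "tokens_per_sec": (tokens / (ms / 1000)) if ms and tokens else None,
    }

# Hypothetical two-span trace: a retrieval step and an LLM call.
spans = [
    {"name": "retrieve", "start_time_ms": 0, "end_time_ms": 120,
     "attributes": {}},
    {"name": "llm_call", "start_time_ms": 120, "end_time_ms": 2620,
     "attributes": {"llm.token_count.total": 500}},
]
print(find_bottleneck(spans))
```

On this sample trace the LLM call dominates (2500 ms for 500 tokens, i.e. 200 tokens/sec), which is the kind of signal used to separate slow model calls from slow retrieval or tool steps.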