Analyze and debug Claude Code session transcripts to optimize agent performance and identify improvement opportunities.
Session Auditor provides a comprehensive suite of tools for inspecting and analyzing Claude Code session history. It parses JSONL transcripts to generate detailed statistics on token usage, tool calls, and subagent activity. By identifying where an agent struggled, required redirection, or encountered errors, users can derive actionable lessons to refine system prompts and skill definitions. This skill is essential for developers looking to move beyond one-off interactions and build robust, high-performance AI agents with high success rates and efficient token consumption.
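The core parsing step described above can be sketched in a few lines of Python: read the JSONL transcript line by line, tally token usage, and count tool calls. The field names used here (`message`, `usage.input_tokens`, `tool_use` content blocks) are assumptions about the transcript schema, not details confirmed by this document:

```python
import json
from collections import Counter

def summarize_session(path):
    """Tally token usage and tool-call counts from a JSONL transcript.

    Assumption: each line is a JSON object whose "message" carries a
    "usage" dict and a "content" list of typed blocks, including
    {"type": "tool_use", "name": ...} entries.
    """
    tokens = Counter()
    tool_calls = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            msg = entry.get("message") or {}
            usage = msg.get("usage") or {}
            tokens["input"] += usage.get("input_tokens", 0)
            tokens["output"] += usage.get("output_tokens", 0)
            for block in msg.get("content") or []:
                if isinstance(block, dict) and block.get("type") == "tool_use":
                    tool_calls[block.get("name", "?")] += 1
    return tokens, tool_calls
```

Keeping the summary as two `Counter` objects makes it trivial to aggregate across many sessions by simply adding the counters together.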
Key Features
- Deep-dive error analysis including retries and self-corrections
- Detailed metrics for token usage, turn counts, and tool call distribution
- Automatic session resolution across multiple project directories
- Subagent activity tracking to audit complex hierarchical workflows
- Advanced transcript filtering for thinking blocks, tool results, and bash commands
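The transcript-filtering feature above could be approached as a small generator that yields only content blocks of a requested type. The block types (`thinking`, `tool_use`) and the `message.content` layout are illustrative assumptions about the JSONL schema:

```python
import json

def filter_blocks(path, block_type="thinking"):
    """Yield content blocks of a given type from a JSONL transcript.

    Assumption: each line holds a JSON object whose "message.content"
    is a list of dicts tagged with a "type" field such as "thinking",
    "text", or "tool_use".
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            content = (entry.get("message") or {}).get("content") or []
            for block in content:
                if isinstance(block, dict) and block.get("type") == block_type:
                    yield block
```

Because it is a generator, large transcripts stream without being loaded into memory; the same function serves thinking-block, tool-result, or bash-command filters by varying `block_type`.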
Use Cases
- Optimizing token consumption by identifying redundant tool calls and long thinking blocks
- Refining skill instructions based on historical session data and lessons-learned audits
- Debugging complex agent workflows that resulted in unexpected errors or infinite loops