Implements idiomatic Langfuse SDK patterns and best practices for comprehensive LLM observability and tracing.
This skill empowers Claude to architect robust LLM observability by applying standardized Langfuse SDK patterns. It provides structured guidance on implementing singleton client patterns, complex trace lifecycles, and nested span hierarchies while ensuring proper error handling and data flushing. Whether you are leveraging Python decorators for automatic instrumentation or manually tracking sessions and evaluation scores, this skill ensures your AI applications are observable, measurable, and production-ready.
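The singleton client and clean-shutdown pattern mentioned above can be sketched as follows. This is a minimal illustration, not the Langfuse SDK itself: `LangfuseStub` is a hypothetical stand-in for the real client (the actual SDK's client exposes `flush()` and `shutdown()` methods with similar intent), and `get_client` shows the process-wide singleton plus exit-hook idea.

```python
import atexit
import threading

class LangfuseStub:
    """Hypothetical stand-in for a Langfuse-like client (illustration only)."""
    def __init__(self):
        self.events = []       # buffered trace events awaiting export
        self.flushed = False
    def trace(self, name, **kwargs):
        self.events.append(("trace", name))
        return name
    def flush(self):
        # In a real client this would drain the event buffer over the network.
        self.flushed = True
    def shutdown(self):
        self.flush()

_client = None
_lock = threading.Lock()

def get_client():
    """Return a process-wide singleton client, registering shutdown exactly once."""
    global _client
    with _lock:
        if _client is None:
            _client = LangfuseStub()
            # Guarantees buffered events are flushed even on normal interpreter exit.
            atexit.register(_client.shutdown)
        return _client
```

The lock makes lazy initialization safe under concurrent first access, and registering `shutdown` with `atexit` is one common way to ensure no buffered observability data is dropped when the process ends.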
Key Features
- Session and user-level analytics integration
- Numeric evaluation scoring for LLM performance monitoring
- Automated trace lifecycle and nested span management
- Singleton client configuration with clean shutdown logic
- Idiomatic Python decorator implementation for automatic tracing
Use Cases
- Setting up production-grade LLM tracing and monitoring in new projects
- Implementing user feedback loops and evaluation metrics for AI applications
- Debugging complex multi-step AI agent workflows using nested span hierarchies