Overview
This skill provides a comprehensive framework for building LLM-powered applications, prioritizing real-time research over stale training data to ensure compatibility with the latest model versions and API changes. It guides developers through model selection with live benchmarks, architecture patterns ranging from simple single calls to complex agents, and best practices for prompt caching, structured outputs, and observability. By integrating tools like OpenRouter, Langfuse, and Promptfoo, it turns LLM development from experimental prompting into a rigorous, production-ready engineering discipline.
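
As a rough illustration of the simplest pattern referenced above, the sketch below makes a single call through OpenRouter's OpenAI-compatible endpoint and requests a structured JSON reply. The model slug, environment variable name, and prompt are placeholder assumptions for this example, not values prescribed by the skill.

```python
# Minimal sketch: one chat completion routed through OpenRouter with a JSON-structured reply.
# Assumptions: the `openai` Python SDK (v1+) is installed, and OPENROUTER_API_KEY is set.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter exposes an OpenAI-compatible API
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed environment variable name
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # placeholder slug; choose via live benchmarks instead
    messages=[
        {"role": "system", "content": 'Reply only with JSON of the form {"summary": string}.'},
        {"role": "user", "content": "Summarize why prompt caching matters, in one sentence."},
    ],
    response_format={"type": "json_object"},  # structured output, where the model supports it
)

# Parse the structured reply rather than treating it as free-form text.
print(json.loads(response.choices[0].message.content)["summary"])
```

More involved patterns (agents, caching, evaluation with Promptfoo, tracing with Langfuse) build on this same request/response shape and are covered in the sections that follow.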