The LLM Engineering Specialist skill acts as a central hub for advanced AI development, providing structured guidance on the entire lifecycle of Large Language Model integration. It helps developers navigate complex decisions like choosing between RAG and fine-tuning, optimizing inference speeds, managing token-heavy context windows, and implementing robust safety guardrails. By routing to specific, battle-tested patterns, it ensures that Claude applies the most effective strategies for building, evaluating, and deploying production-grade LLM applications.
Key Features
- Intelligent routing for specialized LLM engineering tasks
- Prompt engineering and instruction optimization patterns
- RAG architecture and retrieval strategy guidance
- Fine-tuning strategy selection, including LoRA and QLoRA
- Inference optimization and safety alignment guardrails
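To make the routing idea concrete, here is a minimal sketch of how a request could be dispatched to one of the specialist areas above. This is a hypothetical illustration, not the skill's actual implementation: the `route_task` function and `PATTERNS` table are assumed names, and real routing would likely use an LLM classifier rather than keyword matching.

```python
# Hypothetical keyword-based router mapping a task description to a
# specialist pattern. All names here are illustrative assumptions.

PATTERNS = {
    "rag": ["retrieval", "rag", "vector", "embedding", "chunk"],
    "fine_tuning": ["fine-tune", "lora", "qlora", "adapter"],
    "prompt_engineering": ["prompt", "instruction", "few-shot"],
    "inference": ["latency", "throughput", "quantization", "kv cache"],
    "safety": ["guardrail", "jailbreak", "moderation", "alignment"],
}

def route_task(description: str) -> str:
    """Return the specialist pattern whose keywords best match the task."""
    text = description.lower()
    # Count keyword hits per pattern; substring matching keeps the sketch simple.
    scores = {
        name: sum(kw in text for kw in keywords)
        for name, keywords in PATTERNS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route_task("Reduce latency with quantization and KV cache reuse"))
# → inference
```

In a production system, the router's output would select which guidance pattern (RAG design, fine-tuning strategy, safety review, and so on) is loaded into context before the model tackles the task.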