About
LLM Tuning Patterns provides a research-driven framework for tuning Large Language Model sampling behavior across domains such as formal theorem proving, code generation, and creative exploration. Drawing on APOLLO and Godel-Prover research, this skill offers concrete guidance on parameters such as temperature, top_p, and token limits to preserve reasoning capability. It helps users avoid common pitfalls like truncated chain-of-thought and overly deterministic proof search, and introduces advanced techniques such as parallel sampling and proof-plan-first prompting.
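As a minimal sketch of what per-domain tuning might look like, the snippet below groups sampling parameters into presets for the three domains mentioned above. All preset names and numeric values are illustrative assumptions, not the skill's actual defaults.

```python
# Hypothetical per-domain sampling presets; values are illustrative
# assumptions, not prescriptions from APOLLO or Godel-Prover.
from dataclasses import dataclass


@dataclass(frozen=True)
class SamplingConfig:
    temperature: float  # higher -> more diverse candidate outputs
    top_p: float        # nucleus-sampling cutoff
    max_tokens: int     # generous limit to avoid truncated chain-of-thought
    n_samples: int      # parallel samples drawn per prompt


PRESETS = {
    # Theorem proving: moderate temperature plus many parallel samples,
    # so proof search is not locked into one deterministic path.
    "theorem_proving": SamplingConfig(temperature=0.7, top_p=0.95,
                                      max_tokens=4096, n_samples=16),
    # Code generation: lower temperature for syntactic reliability.
    "code_generation": SamplingConfig(temperature=0.2, top_p=0.9,
                                      max_tokens=2048, n_samples=4),
    # Creative exploration: high temperature, wide nucleus.
    "creative": SamplingConfig(temperature=1.0, top_p=0.98,
                               max_tokens=1024, n_samples=8),
}


def sampling_params(domain: str) -> dict:
    """Return keyword arguments for a generic chat-completion call."""
    cfg = PRESETS[domain]
    return {"temperature": cfg.temperature, "top_p": cfg.top_p,
            "max_tokens": cfg.max_tokens, "n": cfg.n_samples}


print(sampling_params("theorem_proving"))
```

In practice these presets would be a starting point to adjust per model and task; the key design choice is keeping the per-domain knobs in one place so changes stay consistent.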