About
This skill has Claude act as a prompt engineer: it analyzes existing LLM instructions for redundancy, verbosity, and structural inefficiency, then streamlines phrasing around the high-impact directives. Trimming a prompt this way reduces token overhead and response latency, typically without degrading output quality. It is useful for developers and AI engineers who want to scale LLM applications cost-effectively while keeping instructions clear across model architectures.
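The core idea can be sketched as a before/after token comparison. The snippet below is a minimal illustration, not part of the skill itself: the prompts are hypothetical, and `estimate_tokens` uses a crude roughly-4-characters-per-token heuristic; accurate counts require the target model's own tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per 4 characters.

    A stand-in assumption for illustration only; use the target
    model's tokenizer for real measurements.
    """
    return max(1, len(text) // 4)


# A deliberately redundant prompt (hypothetical example).
verbose_prompt = (
    "You are an assistant. As an assistant, you should always make sure "
    "to carefully and thoroughly summarize the text that the user gives "
    "you, making certain that the summary is short, brief, and concise."
)

# The same directive with redundancy removed.
streamlined_prompt = "Summarize the user's text in one concise paragraph."

before = estimate_tokens(verbose_prompt)
after = estimate_tokens(streamlined_prompt)
savings = 1 - after / before
print(f"~{before} tokens -> ~{after} tokens ({savings:.0%} saved)")
```

Per-request savings like this compound across every call in a deployed application, which is where the cost and latency benefits show up.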