Introduction
Response Compression is an efficiency skill that streamlines AI responses by removing decorative elements, unnecessary transitions, and polite filler that consume valuable context window space. By enforcing strict directness guidelines and eliminating "hype" and "hedging" language, it typically saves 150 to 350 tokens per interaction. This makes it a practical utility for developers running high-volume automated workflows, long debugging sessions, or large codebase refactors, where every token counts toward the context limit.
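To make the token-savings idea concrete, below is a minimal, hypothetical Python sketch of the kind of filler stripping and rough token accounting the skill describes. The phrase list, function names, and the four-characters-per-token estimate are illustrative assumptions, not the skill's actual implementation.

```python
import re

# Hypothetical examples of filler phrases; the actual skill defines its own
# directness rules rather than a fixed pattern list.
FILLER_PATTERNS = [
    r"\bGreat question!?\s*",
    r"\bCertainly[,!]?\s*",
    r"\bI hope this helps!?\s*",
    r"\bIt'?s worth noting that\s*",
]

def compress(text: str) -> str:
    """Strip decorative filler and collapse leftover whitespace."""
    for pattern in FILLER_PATTERNS:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    text = re.sub(r"[ \t]{2,}", " ", text)
    return re.sub(r"\n{3,}", "\n\n", text).strip()

def rough_token_estimate(text: str) -> int:
    """Very rough estimate assuming ~4 characters per token."""
    return len(text) // 4

if __name__ == "__main__":
    verbose = (
        "Great question! Certainly, I'd be happy to explain. "
        "It's worth noting that the function returns None on failure. "
        "I hope this helps!"
    )
    concise = compress(verbose)
    saved = rough_token_estimate(verbose) - rough_token_estimate(concise)
    print(concise)
    print(f"~{saved} tokens saved")
```

Running the example prints the trimmed response followed by an approximate count of tokens saved; the real savings per interaction depend on the model's tokenizer and how verbose the uncompressed response was.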