Introduction
This skill provides a systematic diagnostic framework for Apple's Vision and VisionKit frameworks across Apple's platforms (iOS, iPadOS, macOS, tvOS, and visionOS). It helps developers identify and fix common pitfalls such as low confidence scores, threading mistakes that freeze the UI, coordinate-system mismatches between normalized and image space, and API availability issues. By offering structured decision trees and diagnostic code snippets for hand pose, body pose, text recognition, and barcode detection, it significantly reduces the time spent debugging complex computer vision implementations.
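As a rough illustration of the kind of diagnostic snippet involved, here is a minimal Swift sketch that touches three of the pitfalls above: it runs a hand pose request off the main thread, discards low-confidence joints, and converts Vision's normalized coordinates to image-space pixels. The function name `diagnoseHandPose`, the `cgImage` input, and the 0.3 confidence cutoff are illustrative choices, and the code assumes a deployment target of at least iOS 14 / macOS 11, where `VNDetectHumanHandPoseRequest` is available.

```swift
import Vision
import CoreGraphics

/// Illustrative sketch: detect a hand pose, filter low-confidence joints,
/// and convert normalized coordinates to image space. `cgImage` is a
/// placeholder input supplied by the caller.
func diagnoseHandPose(in cgImage: CGImage) {
    // Vision requests can block; keep them off the main thread so the UI stays responsive.
    DispatchQueue.global(qos: .userInitiated).async {
        let request = VNDetectHumanHandPoseRequest()
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])

        do {
            try handler.perform([request])
        } catch {
            print("Vision request failed: \(error)")
            return
        }

        guard let observation = request.results?.first else {
            print("No hands detected — check image orientation and input quality.")
            return
        }

        // Low-confidence joints are a common source of jittery or wrong results;
        // the 0.3 threshold here is an arbitrary example value.
        guard let indexTip = try? observation.recognizedPoint(.indexTip),
              indexTip.confidence > 0.3 else {
            print("Index fingertip confidence too low to trust.")
            return
        }

        // Vision returns normalized coordinates with the origin at the bottom-left;
        // convert to image-space pixels before drawing or hit-testing.
        let imagePoint = VNImagePointForNormalizedPoint(
            indexTip.location, cgImage.width, cgImage.height)
        print("Index tip at \(imagePoint) (confidence \(indexTip.confidence))")
    }
}
```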