Overview
The Prediction Tracking skill provides a rigorous framework for Claude to document, monitor, and evaluate forecasts made by AI researchers and critics. It enables users to record specific claims with mandatory metadata like timeframes and confidence levels, then later audit those claims against emerging evidence. By generating accuracy scores and assessing 'calibration'—the relationship between a predictor's confidence and their actual success rate—this skill helps researchers and developers distinguish signal from noise in a rapidly evolving technological landscape.
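To illustrate the kind of bookkeeping the skill performs, the sketch below shows one plausible way to represent a prediction record and compute a calibration report in Python. The `Prediction` fields and the `calibration_report` helper are hypothetical names chosen for this example, not the skill's actual interface.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional
from collections import defaultdict

@dataclass
class Prediction:
    predictor: str                   # who made the claim
    claim: str                       # the specific, falsifiable statement
    confidence: float                # stated probability, 0.0 to 1.0
    deadline: date                   # timeframe by which the claim resolves
    outcome: Optional[bool] = None   # True/False once audited, None while open

def calibration_report(predictions, bucket_width=0.1):
    """Group resolved predictions by stated confidence and compare each
    bucket's average confidence to its actual hit rate (hypothetical helper)."""
    buckets = defaultdict(list)
    for p in predictions:
        if p.outcome is None:
            continue  # skip claims that have not yet resolved
        bucket = round(p.confidence // bucket_width * bucket_width, 2)
        buckets[bucket].append(p)
    report = {}
    for bucket, preds in sorted(buckets.items()):
        hit_rate = sum(p.outcome for p in preds) / len(preds)
        avg_conf = sum(p.confidence for p in preds) / len(preds)
        report[bucket] = {
            "n": len(preds),
            "stated_confidence": avg_conf,
            "actual_hit_rate": hit_rate,
        }
    return report
```

Under this framing, a well-calibrated predictor's 0.8-confidence bucket should resolve true roughly 80% of the time; large gaps between stated confidence and actual hit rate are what the calibration assessment surfaces.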