Automates iterative code optimization by running autonomous experiments to improve specific performance metrics.
Autoloop turns Claude into an autonomous experimenter that runs unattended experiment cycles to optimize software performance, test coverage, ML models, or code quality. It scouts your codebase to generate a self-contained runner script and protocol, then iteratively modifies code, evaluates each change against defined quality gates and metrics, and uses Git checkpoints to preserve improvements while discarding regressions. It is well suited to 'Karpathy loops' and other hill-climbing optimizations where you want the agent to work through the night improving a single scalar metric.
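The modify-evaluate-checkpoint cycle described above can be sketched as a minimal hill-climbing loop. This is an illustrative sketch, not Autoloop's actual implementation: `propose_change` and `measure_metric` are hypothetical callables standing in for the agent's edit and evaluation steps, and the Git commands shown are one plausible checkpoint/rollback strategy (lower metric assumed better).

```python
import subprocess

def run(cmd):
    """Run a shell command without raising; return (exit_code, stdout)."""
    p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return p.returncode, p.stdout.strip()

def hill_climb(propose_change, measure_metric, iterations=10,
               # Hypothetical defaults: checkpoint improvements as Git
               # commits, discard regressions by restoring the working tree.
               checkpoint=lambda i, s: run(f'git commit -am "autoloop iter {i}: {s}"'),
               rollback=lambda: run("git checkout -- .")):
    """Keep each proposed change only if it improves the scalar metric."""
    best = measure_metric()                 # baseline before any experiment
    for i in range(iterations):
        propose_change(i)                   # experiment: modify the code
        score = measure_metric()            # evaluate the change
        if score < best:                    # improvement: preserve it
            best = score
            checkpoint(i, score)
        else:                               # regression: discard it
            rollback()
    return best
```

Making the checkpoint and rollback steps injectable keeps the loop testable without touching a real repository.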
Key Features
1. Structured experiment logging in results.tsv and embedded progress logs
2. Autonomous 'hill-climbing' loops for unattended optimization
3. Tiered quality gates (tests, linting, benchmarks) to prevent regressions
4. Git-based checkpoint and rollback system for automated safety
5. Automated project scouting to identify build systems and candidate metrics
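Two of the features above, tiered quality gates and results.tsv logging, can be sketched together. This is a hedged illustration, not Autoloop's real gate configuration: the gate commands (`ruff check .`, `pytest -q`, `python bench.py`), the column layout, and the helper names are all assumptions.

```python
import csv
import subprocess
import time

# Hypothetical tiered gates: each entry is (name, shell command) and must
# exit 0 to pass. Cheap checks run before expensive ones, so a lint
# failure short-circuits before the test suite or benchmark runs.
GATES = [
    ("lint", "ruff check ."),
    ("tests", "pytest -q"),
    ("benchmark", "python bench.py"),
]

def passes_gates(gates):
    """Return True only if every gate command exits successfully."""
    for name, cmd in gates:
        result = subprocess.run(cmd, shell=True, capture_output=True)
        if result.returncode != 0:
            return False                    # fail fast on the first gate
    return True

def log_result(path, iteration, metric, passed):
    """Append one experiment row to a tab-separated results file."""
    with open(path, "a", newline="") as f:
        csv.writer(f, delimiter="\t").writerow(
            [iteration, f"{metric:.4f}",
             "pass" if passed else "fail", int(time.time())]
        )
```

Appending one row per experiment keeps the log trivially diffable across overnight runs, and a TSV loads directly into spreadsheet tools for post-hoc analysis.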
Use Cases
1. Iteratively increasing test coverage or resolving complex linting issues
2. Performance tuning of hot-path code modules using benchmark metrics
3. Optimizing ML training scripts or validation loss overnight