Aurelius transforms alignment into an adversarial, incentive-driven process. Rather than trusting centralized judgment, it rewards independent discovery and reproducible scoring. The result is a continuously evolving dataset of alignment failures: built in public, governed by logic, and available to all.