The world has never had a calibrated instrument for the most consequential transition in human history. This one runs in real time.
Three data-driven clocks tracking AGI, the Singularity, and Superintelligence, derived from a public ensemble of capability, compute, economic, and alignment signals. Every movement sourced. Every methodology versioned.
AGI Clock: Artificial General Intelligence
Singularity Clock: Recursive Self-Improvement Inflection
Superintelligence Clock: Artificial Superintelligence
Live Signal Readings
Signals from the Field
Recent Clock Movements
Methodology
What we count to: AGI
Operationally testable. Grounded in economic reality. Falsifiable in both directions. Sidesteps unresolvable debates about consciousness or "real" intelligence.
What we count to: Singularity & ASI
Singularity: The inflection point at which AI-driven improvement to AI systems outpaces human-driven improvement, creating a self-reinforcing acceleration in capability gains. Proxy thresholds: the AI-authored share of frontier research crosses 50%, and the METR task horizon reaches 1,000 hours.
ASI: Cognitive capability qualitatively beyond the best human in essentially every economically relevant domain. Proxy: HLE saturation (95%+) and FrontierMath saturation, or Singularity + 2-year compute/recursion buffer, whichever is later.
How we project
Each clock is a weighted median across its signal ensemble. Benchmark signals use linear extrapolation to defined saturation thresholds. Capability signals (METR task horizon) use exponential extrapolation with empirical doubling periods. Crowd-forecast signals (Metaculus) enter directly as median predictions.
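The two extrapolation rules can be sketched as follows. This is a minimal illustration, not the site's actual code: function names, parameters, and the use of a fixed per-year slope and a fixed doubling period are assumptions for the sake of the example.

```python
import math
from datetime import date, timedelta

def linear_eta(score_now: float, slope_per_year: float,
               threshold: float, today: date) -> date:
    """Linearly extrapolate a benchmark score to its saturation threshold."""
    years_left = (threshold - score_now) / slope_per_year
    return today + timedelta(days=365.25 * years_left)

def exponential_eta(horizon_hours: float, doubling_days: float,
                    target_hours: float, today: date) -> date:
    """Extrapolate an exponentially growing capability signal (e.g. the
    METR task horizon) to a target, given an empirical doubling period."""
    doublings_left = math.log2(target_hours / horizon_hours)
    return today + timedelta(days=doubling_days * doublings_left)
```

For example, a benchmark at 80% gaining 5 points per year reaches a 95% threshold in three years, while a 15-hour task horizon doubling every ~7 months needs about six doublings to reach 1,000 hours.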
Confidence intervals reflect signal disagreement (the 10th–90th percentile of per-signal projections), not statistical uncertainty in any single signal. A wider band means more disagreement: the interval widens when signals diverge and tightens when they converge.
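Combining per-signal projections into a headline date and a band might look like this sketch. The weighted-median and percentile logic follows the description above; the simple index-based percentile is an assumption, as is representing dates as orderable values such as day ordinals.

```python
def clock_reading(signals):
    """Combine per-signal projected dates into one clock reading.

    `signals` is a list of (projected_date, weight) pairs, where dates are
    any orderable values (e.g. day ordinals). The headline date is the
    weighted median; the band is the 10th-90th percentile of per-signal
    projections (signal disagreement, not sampling error).
    """
    signals = sorted(signals)
    total = sum(w for _, w in signals)
    # Weighted median: first date at which cumulative weight reaches half.
    acc = 0.0
    for d, w in signals:
        acc += w
        if acc >= total / 2:
            median = d
            break
    dates = [d for d, _ in signals]
    lo = dates[int(0.10 * (len(dates) - 1))]          # 10th percentile
    hi = dates[round(0.90 * (len(dates) - 1))]        # 90th percentile
    return median, (lo, hi)
```

With four signals projecting day ordinals 100, 200, 300 (double weight), and 400, the weighted median lands on 300 and the band spans the ensemble's spread.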
Cascade ordering
The three clocks must respect causal ordering: ASI cannot precede Singularity, which cannot precede AGI. Minimum buffers are enforced: AGI ≤ Singularity − 365 days ≤ ASI − 730 days.
When independent signal ensembles produce an incoherent ordering, the later clocks are pinned to the earlier clock plus the buffer, and cascade_adjusted=True is recorded so the adjustment is transparent rather than hidden.
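The pinning rule can be sketched as a small function. This is an illustration of the constraint as stated (365-day and cumulative 730-day buffers), with dates as day ordinals; the function name and signature are assumptions, not the site's actual implementation.

```python
def enforce_cascade(agi, singularity, asi,
                    buf_singularity=365, buf_asi=365):
    """Enforce AGI <= Singularity - 365 days <= ASI - 730 days.

    Dates are day ordinals. Later clocks are pinned to the earlier clock
    plus the buffer; the flag records that an adjustment was made.
    """
    cascade_adjusted = False
    if singularity < agi + buf_singularity:
        singularity = agi + buf_singularity
        cascade_adjusted = True
    if asi < singularity + buf_asi:
        asi = singularity + buf_asi
        cascade_adjusted = True
    return agi, singularity, asi, cascade_adjusted
```

For instance, if the Singularity ensemble lands only 100 days after AGI, the Singularity clock is pinned 365 days out and the adjustment is flagged.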
Alignment Deficit blend (v1.2)
The Alignment Deficit gauge is the ratio of capability velocity to safety velocity. As of model v1.2 it blends three independent inputs:
- 60%: structured composite (R&D capex YoY, frontier-lab safety headcount %, interpretability index)
- 20%: arXiv flux (LLM-classified cs.AI/cs.LG/cs.CL papers, trailing 30 days)
- 20%: frontier-release flux (LLM-classified frontier-lab and safety-org RSS, trailing 90 days)
Each flux input is clamped to a sane band [0.25×, 6.0×] to prevent single-week spikes from dominating, and skipped if the sample size is too small. Weights redistribute when an input is unavailable.
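The blend, clamping, and weight redistribution described above can be sketched as follows. The 60/20/20 weights and the [0.25x, 6.0x] clamp come from the text; the function name, signature, and the use of `None` to mark a skipped input are illustrative assumptions.

```python
def alignment_deficit(composite, arxiv_flux=None, release_flux=None,
                      clamp=(0.25, 6.0)):
    """Blend the v1.2 Alignment Deficit inputs.

    60% structured composite, 20% arXiv flux, 20% frontier-release flux.
    Flux inputs are clamped to [0.25x, 6.0x]; an input passed as None
    (e.g. sample size too small) is skipped and the remaining weights
    are renormalized.
    """
    inputs = [(0.60, composite)]
    for weight, flux in ((0.20, arxiv_flux), (0.20, release_flux)):
        if flux is not None:
            inputs.append((weight, min(max(flux, clamp[0]), clamp[1])))
    total_weight = sum(w for w, _ in inputs)
    return sum(w * v for w, v in inputs) / total_weight
```

So a single-week arXiv spike of 10x is clamped to 6x before blending, and if the release flux is unavailable the remaining 60/20 weights renormalize to 75/25.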
Editorial posture
Sober. Transparent. Non-partisan. Quantitative. Continuously updated. We don't use hype language ("imminent," "superhuman," "god-like"). We don't pick a side in the AI-optimist/doomer debate; we present signals and let readers form their own view. Every claim on the site traces to a source URL. Every methodology change bumps the model version and is recorded in the changelog.
Why we version the model
An opaque clock is an opinion in a lab coat. We publish every revision, every weight, every threshold, and every formula on GitHub so you can argue with the data and the method, and reproduce any past reading from a git hash.
That is also why the Recent Clock Movements panel splits into two streams. Signal entries are the world moving the clock: a benchmark advanced, a paper dropped, a gauge input changed. Methodology entries are us moving the instrument: reweighting signals, replacing a formula, widening a cascade buffer.
A methodology revision can shift a clock by years on the day it ships, while the underlying world did not change at all. Mixing the two in a single stream hides that distinction; separating them makes it legible. The current model is v1.4.0, in effect since Apr 15, 2026.
Limitations
These projections are a research instrument, not a forecast you should act on. Specifically:
- Benchmark saturation is a proxy for capability, not proof of AGI. Models can saturate a benchmark and still fail at the real-world work it proxies.
- Log-linear and linear extrapolations break down near benchmark ceilings and at phase transitions; real capability curves are sigmoidal.
- Signal availability changes (for example, Metaculus community predictions are currently gated). Missing signals widen the band or hide disagreement.
- Wide confidence intervals reflect real uncertainty, not rounding. A clock reading near a CI boundary can shift by months in a single ingest cycle.
- The model is opinion, not prediction. Methodology and signal code are public so you can disagree with both.
- The Alignment Deficit is a gauge of research intensity, not a measure of actual alignment; a low reading does not mean systems are safe.
Do not use this site for investment, career, medical, legal, or life decisions.