AI 2027 Scenario Active

This scenario is based on the AI 2027 research team's projections for exponential algorithmic progress:

  • Updated median (May 2025): Superhuman coding by 2029-2030
  • Still substantial probability: Superhuman coding could arrive as early as 2027
  • ASI timeline: Artificial Superintelligence potentially 1-2 years after superhuman coding
  • Exponential algorithmic progress: Self-improving AI systems accelerate R&D 5-8x once superhuman coding is achieved
  • Technical assumptions: AI capability doubling every 5-6 months, compute scaling at 3.4x annually (leading AI labs)
  • Key mechanism: Expert-human-level AI systems automate AI research itself, creating rapid recursive improvement through multiplicative scaling effects (see the worked compounding sketch below)

Note: The team has lengthened its median timeline by ~1.5 years based on recent AI progress. This still represents an aggressive scenario with substantial uncertainty.
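
To make the multiplicative scaling concrete, here is a worked compounding example as a minimal Python sketch. The variable names and the multiplicative combination of capability and compute growth are illustrative assumptions, not the AI 2027 team's published code.

```python
# Worked compounding example (assumed multiplicative combination of
# capability and compute growth, per the scenario's "multiplicative
# scaling effects" mechanism).

doubling_months = 6.0   # capability doubles every ~5-6 months (scenario assumption)
compute_growth = 3.4    # annual compute scaling at leading AI labs

capability_per_year = 2 ** (12.0 / doubling_months)        # ~4.0x per year
effective_per_year = capability_per_year * compute_growth  # ~13.6x per year

print(f"Capability multiplier per year: {capability_per_year:.1f}x")
print(f"Combined effective scale-up:    {effective_per_year:.1f}x")
```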

Complete all sections below for personalized analysis

Step 1: Select Your Role Category

Step 2: Your Seniority Level

Higher seniority levels involve more strategic decision-making and are typically harder to automate

Step 3: AI Displacement Risk Assessment

Your Risk Profile & Timeline

Role: -
Scenario: Custom
AI Readiness: -
Task Adaptability: -
Friction: -
Firm Readiness: -
Personal Adaptability: -

Your Timeline

  • AI Progress Speed: capability doubling time (METR trendline: 7 months). Default: 7 mo
  • Compute Scaling: annual compute growth. Default: 3.4x
  • Industry Automation Pace: speed of automation in your specific industry. Default: 1.0x
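
If you want to experiment with the sliders outside the page, they map onto a small parameter set. A minimal sketch, with field names of my own choosing (not the site's actual code):

```python
from dataclasses import dataclass

@dataclass
class ScenarioInputs:
    """Default slider values from the page; names are illustrative."""
    doubling_months: float = 7.0  # AI Progress Speed (METR trendline doubling time)
    compute_growth: float = 3.4   # Compute Scaling (annual compute multiplier)
    industry_pace: float = 1.0    # Industry Automation Pace (per-industry multiplier)

DEFAULTS = ScenarioInputs()
```
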
How this works

The stacked hazard job loss model

This publication distills a stacked hazard framework for thinking about automation risk. Tune the sliders above to explore how different progress assumptions bend the hazard curve—faster capability compounding or lower friction pulls the timeline forward.

Survival relationship:

$$P_{\text{loss}}(t) = 1 - S(t) = 1 - \exp\!\left(-\int_0^t I_{\text{total}}(s)\, ds\right)$$

Total hazard (stacked framework):

$$I_{\text{total}}(s) = I_{\text{AI}}(s) + I_{\text{macro}}(s) + I_{\text{firm}}(s) + I_{\text{role}}(s) + I_{\text{personal}}(s)$$
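
As a sanity check on the survival relationship: a constant hazard $I_{\text{total}}(s) = \lambda$ gives the closed form $P_{\text{loss}}(t) = 1 - e^{-\lambda t}$, which a numerical integrator should reproduce. A minimal sketch (function names are illustrative):

```python
import numpy as np

def p_loss(hazard, t_grid):
    """P_loss(t) = 1 - exp(-integral of I_total), via trapezoidal integration."""
    rates = np.array([hazard(s) for s in t_grid])
    steps = 0.5 * (rates[1:] + rates[:-1]) * np.diff(t_grid)
    cum = np.concatenate([[0.0], np.cumsum(steps)])  # cumulative hazard at each t
    return 1.0 - np.exp(-cum)

t = np.linspace(0.0, 10.0, 1001)             # years, step 0.01
lam = 0.2                                    # constant hazard per year
numeric = p_loss(lambda s: lam, t)
assert np.allclose(numeric, 1.0 - np.exp(-lam * t))  # matches the closed form
print(f"P_loss at t = 5 years: {numeric[500]:.3f}")  # 1 - e^{-1}, about 0.632
```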

Right now the interactive experience concentrates on the AI hazard term. As you answer the prompts the curve shifts to reflect capability readiness, friction, and adaptability.

AI hazard component:

$$I_{\text{AI}}(s) = \frac{h_{\text{AI}}}{1 + \exp\!\big(-k \left[A(s)\, C(s) - I_0 - b\, H(s) - c\, T_{\text{adapt}}(s)\right]\big)}$$

  • $A(s)$: AI capability growth (exponential, doubling every 5-7 months)
  • $C(s)$: compute scaling effect (3.4-5x/year for leading labs)
  • $H(s)$: organizational and regulatory friction
  • $T_{\text{adapt}}(s)$: how easily tasks can shift to AI
  • $h_{\text{AI}}$: scaling constant for overall intensity
  • $I_0$: baseline adoption threshold
  • $k$: steepness of the adoption curve
  • $b$, $c$: sensitivity to friction and adaptability
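
A minimal sketch of this logistic hazard in Python, with placeholder constants of my own choosing (the calibrated values behind the page are not published here); $A(s)$ compounds via the doubling time and $C(s)$ via annual compute growth:

```python
import math

# Placeholder constants (h_AI, k, I_0, b, c); illustrative, not calibrated.
H_AI, K, I_0, B, C_SENS = 1.0, 1.0, 8.0, 0.5, 0.5

def capability(s, doubling_months=7.0):
    """A(s): exponential capability growth, doubling every `doubling_months`."""
    return 2.0 ** (12.0 * s / doubling_months)

def compute_effect(s, annual_growth=3.4):
    """C(s): compute scaling effect, compounding annually."""
    return annual_growth ** s

def ai_hazard(s, friction=1.0, task_adapt=1.0):
    """I_AI(s) = h_AI / (1 + exp(-k [A(s)·C(s) - I_0 - b·H(s) - c·T_adapt(s)]))."""
    drive = capability(s) * compute_effect(s) - I_0 - B * friction - C_SENS * task_adapt
    return H_AI / (1.0 + math.exp(-K * drive))
```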

Macro, firm, role, and personal hazards stay at zero for now—the layout is ready for those chapters once the research solidifies. Compute and algorithmic improvements compound multiplicatively, reflecting empirical scaling laws.
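
Putting this together under the "other hazards stay at zero" simplification, the displacement dates reported below are just the times where $P_{\text{loss}}(t)$ crosses 50% and 90%. A hypothetical end-to-end sketch, reusing `p_loss` and `ai_hazard` from the sketches above and treating the Industry Automation Pace slider as a multiplicative factor (an assumption):

```python
import numpy as np

years = np.linspace(0.0, 8.0, 801)    # horizon: years from today, step 0.01
industry_pace = 1.0                   # Industry Automation Pace slider (assumed multiplier)
risk = p_loss(lambda s: industry_pace * ai_hazard(s), years)

def crossing(threshold):
    """First grid year at which cumulative risk reaches `threshold` (None if never)."""
    idx = int(np.argmax(risk >= threshold))
    return float(years[idx]) if risk[idx] >= threshold else None

print(f"50% displacement risk: ~{crossing(0.5):.1f} years out")
print(f"90% displacement risk: ~{crossing(0.9):.1f} years out")
```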

50% Displacement Risk: -
90% Displacement Risk: -
Risk by 2030: -
Risk by 2031: -