This scenario is based on the AI 2027 research team's projections for exponential algorithmic progress:
Note: The team has lengthened their median timeline by ~1.5 years based on recent AI progress. This still represents an aggressive scenario with substantial uncertainty.
Higher seniority levels involve more strategic decision-making and are typically harder to automate
How well can AI already perform core tasks in your field?
How much training data exists for AI to learn your role?
How easily can success in your role be measured objectively?
What percentage of your work happens in digital/text format?
Can your work be broken into discrete, measurable tasks?
How standardized are procedures/workflows in your role?
How much does your work require understanding unique organizational context?
How quickly do you get feedback on work quality?
How much of your expertise is documented vs. learned through experience?
How critical are human relationships, empathy, and trust in your role?
How strong are labor protections in your field?
How much does your work require physical presence?
How critical is established personal trust with clients/patients/students?
How prepared is your organization for AI integration?
How cost-sensitive is your employer to labor expenses?
How difficult is it to hire people with your skills?
How modern is your organization's technical infrastructure?
How transferable are your core skills to other roles/industries?
How much could AI amplify (vs. replace) your productivity?
How quickly can you learn and adopt new tools/technologies?
How good are you at your job, relative to your peers?
Capability doubling time (METR trendline: 7 months)
Annual compute growth
Speed of automation in your specific industry
This publication distills a stacked hazard framework for thinking about automation risk. Tune the sliders above to explore how different progress assumptions bend the hazard curve—faster capability compounding or lower friction pulls the timeline forward.
P_loss(t) = 1 − S(t) = 1 − exp(−∫₀ᵗ I_total(s) ds)
I_total(s) = I_AI(s) + I_macro(s) + I_firm(s) + I_role(s) + I_personal(s)
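The survival relationship above can be checked with a minimal numerical sketch. The hazard function and the constant intensity value below are illustrative stand-ins, not the article's calibrated parameters:

```python
import math

def p_loss(hazard, t, dt=0.01):
    """Probability of job loss by time t, given a hazard-rate function
    hazard(s): P_loss(t) = 1 - exp(-integral of hazard from 0 to t).
    The integral is approximated with a left-Riemann sum of step dt."""
    steps = int(t / dt)
    cumulative = sum(hazard(i * dt) * dt for i in range(steps))
    return 1.0 - math.exp(-cumulative)

# Hypothetical constant total intensity of 0.1/year: after 10 years the
# cumulative hazard is 1.0, so P_loss(10) = 1 - e^(-1) ≈ 0.632.
print(round(p_loss(lambda s: 0.1, 10.0), 3))
```

With a constant hazard this reduces to the familiar exponential survival curve; the interactive version simply swaps in the time-varying I_total(s).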
Right now the interactive experience concentrates on the AI hazard term. As you answer the prompts, the curve shifts to reflect capability readiness, friction, and adaptability.
I_AI(s) = h_AI / (1 + exp(−k · [A(s) · C(s) − I₀ − b · H(s) − c · T_adapt(s)]))
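The logistic gate can be sketched as a plain function. All parameter values below (h_ai, k, I0, b, c) are illustrative defaults chosen for the example, not the article's fitted values:

```python
import math

def ai_intensity(A, C, H, T_adapt, h_ai=0.5, k=4.0, I0=1.0, b=0.3, c=0.2):
    """Logistic AI hazard term: intensity ramps from ~0 toward the
    ceiling h_ai as the capability-compute product A*C overtakes the
    threshold I0 plus the friction terms b*H (human-advantage factors)
    and c*T_adapt (adaptation/retraining capacity)."""
    x = A * C - I0 - b * H - c * T_adapt
    return h_ai / (1.0 + math.exp(-k * x))
```

At the crossover point (A·C exactly offsetting the threshold and friction terms) the intensity sits at half its ceiling, h_ai / 2; raising H or T_adapt pushes that crossover later in time.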
Macro, firm, role, and personal hazards stay at zero for now; the layout is ready for those chapters once the research solidifies. Within the AI term, compute and algorithmic improvements compound multiplicatively via the A(s) · C(s) product, reflecting empirical scaling laws.