Researcher, Governance Team · OpenAI · 2024
Lost confidence in OpenAI leadership's ability to handle AGI responsibly. Forfeited approximately $1.7M in vested equity by refusing to sign a non-disparagement agreement.
This report presents a detailed scenario, constructed by a team of forecasters with deep expertise in AI capabilities and risk assessment, projecting how AI systems could evolve from their current capabilities to artificial superintelligence by the end of the decade. The scenario traces a year-by-year progression through increasingly capable AI agents, automated AI research, and recursive self-improvement, grounding each step in specific technical milestones and in the strategic decisions that labs and governments might make along the way. The authors argue that the combination of continued scaling, algorithmic improvements, and the deployment of AI systems as autonomous researchers could compress what might seem like decades of progress into just a few years. The report is notable for its specificity: rather than offering vague warnings about distant risks, it makes concrete predictions about capability thresholds, economic impacts, and geopolitical dynamics that can be checked against reality as events unfold. Lead author Daniel Kokotajlo departed OpenAI over concerns that the company was not taking safety seriously enough, lending personal credibility to the urgency conveyed in the forecast.
Written in 2021 as a speculative exercise on the rationality-focused forum LessWrong, this essay sketches a year-by-year future history from 2022 through 2026, predicting that AI capabilities would advance far more rapidly than mainstream expectations suggested. Kokotajlo forecast that language models would become significantly more capable each year, that AI would begin automating substantial portions of knowledge work, and that the geopolitical implications of these advances would become increasingly acute. What makes the piece remarkable in retrospect is how many of its predictions proved directionally accurate: the essay anticipated the emergence of highly capable chatbots, the acceleration of AI investment, growing public awareness of AI risks, and intensifying competition between major AI laboratories. It exemplifies a tradition of quantitative forecasting in the AI safety community that emphasizes making specific, falsifiable predictions rather than offering vague hand-wringing about the future. Kokotajlo's track record as a forecaster contributed to his credibility when he later raised concerns about safety practices at OpenAI and ultimately resigned, forfeiting significant equity to speak publicly about his worries.
2 of 4 confirmed
AGI could plausibly arrive by 2027
“I left OpenAI because I lost confidence that it would behave responsibly around the time of AGI. I think AGI is coming and that we are not on track to handle it responsibly.”
AI companies will prioritize deployment speed over safety evaluation
“I believe the leaders of the top labs are making reckless decisions in how much autonomy and capability they're giving to their models.”
The Washington Post and multiple other outlets reported on compressed safety-testing timelines at major labs through 2024-2025. Former employees of OpenAI, Google, and xAI described safety evaluations being shortened to meet product deadlines, and OpenAI reportedly shipped GPT-4o after an abbreviated red-teaming period.
AI capabilities will advance faster than mainstream expert predictions
“AI timelines have been consistently shorter than most experts predicted. The pace of progress has surprised nearly everyone.”
The 2024 AI Index Report documented that GPT-4, Claude 3, and Gemini Ultra exceeded capability levels that 2022 expert surveys had predicted would not be reached until 2028-2030. Metaculus and other expert forecasting platforms showed median AI-timeline estimates shortening by 5-10 years between 2022 and 2024.
AI hype will fade as unrealistic expectations fail to materialize
“But the hype begins to fade as the unrealistic expectations from 2022-2023 fail to materialize.”
AI hype accelerated through 2024-2025 rather than fading. Investment in AI companies surged to record levels, major tech companies increased AI spending dramatically, and public attention intensified with the mainstream adoption of ChatGPT, Gemini, and Claude.