Across dozens of departures from OpenAI, Google, xAI, Anthropic, Meta, and Stability AI, the same concerns surface again and again in the researchers' writings. Safety teams are dissolved or absorbed into product work. Deployment timelines are compressed past the point where meaningful evaluation is possible. Researchers who raise objections internally find their concerns deprioritized, and some face retaliation for speaking up.
These are not abstract worries about a distant future. The people who left were senior scientists, alignment leads, and ethics researchers — many of them architects of the safety frameworks their former employers now sideline. Their departures span every major AI laboratory and accelerated sharply through 2024 and into 2026, tracking the industry's pivot from cautious research to aggressive commercialization.
The writings below — peer-reviewed papers, policy reports, public resignation letters, and long-form essays — represent the intellectual foundation behind these warnings. They document what these researchers saw, what they tried to build, and why they ultimately decided they could no longer stay.
Falsifiable claims extracted from researcher statements — tracked against real-world outcomes.
11 predictions total
Resolution: Nov 2024
AI-generated deepfakes and disinformation were widely documented in the 2024 US, Indian, and European elections. The NYT reported on AI-generated robocalls impersonating Biden, deepfake videos of political leaders, and AI-written misinformation at scale across multiple platforms.
Resolution: Nov 2024
Multiple senior safety researchers departed OpenAI through 2024-2025 citing safety deprioritization. The superalignment team was dissolved in May 2024. Jan Leike, Ilya Sutskever, and others left, publicly stating that safety had taken a back seat to product launches.
Resolution: Jun 2025
The Washington Post and multiple outlets reported on compressed safety testing timelines at major labs through 2024-2025. Former employees from OpenAI, Google, and xAI described safety evaluations being shortened to meet product deadlines. OpenAI shipped GPT-4o with abbreviated red-teaming.
Resolution: Jan 2025
The 2024 AI Index Report documented that GPT-4, Claude 3, and Gemini Ultra exceeded capability levels that 2022 expert surveys predicted would not be reached until 2028-2030. Metaculus and other expert forecasting platforms showed median AI timeline estimates shortening by 5-10 years between 2022 and 2024.
Resolution: Jan 2025
AI hype accelerated through 2024-2025 rather than fading. Investment in AI companies surged to record levels, major tech companies dramatically increased AI spending, and public attention intensified with the mainstream adoption of ChatGPT, Gemini, and Claude.