38 safety-motivated departures tracked
OpenAI has seen the largest wave of safety-motivated departures in the AI industry. In early 2021, seven researchers — including Dario Amodei, Daniela Amodei, and Chris Olah — left to co-found Anthropic, citing concerns that scaling was outpacing safety investment. The exodus accelerated in 2024 after the dissolution of the Superalignment team, co-led by Ilya Sutskever and Jan Leike, who said safety had taken "a backseat to shiny products." More than a dozen Superalignment researchers departed within days, and governance researcher Daniel Kokotajlo forfeited roughly $1.7 million in equity by refusing to sign a non-disparagement agreement.

Senior leadership followed: CTO Mira Murati, Chief Research Officer Bob McGrew, and VP of Research Barret Zoph all left on the same day. Miles Brundage, Senior Advisor for AGI Readiness, warned that "neither OpenAI nor any other frontier lab is ready" for AGI. The pattern has continued into 2026 with departures over ChatGPT advertising plans and a Pentagon partnership.
Steven Adler
Safety Researcher · 2024