Ethical AI Departures

Themes & Writings

Across sixty departures from OpenAI, Google, xAI, Anthropic, Meta, and Stability AI, the same concerns surface again and again. Safety teams are dissolved or absorbed into product work. Deployment timelines are compressed past the point where meaningful evaluation is possible. Researchers who raise objections internally find their concerns deprioritized and sometimes face retaliation for speaking up.

These are not abstract worries about a distant future. The people who left were senior scientists, alignment leads, and ethics researchers — many of them architects of the safety frameworks their former employers now sideline. Their departures span every major AI laboratory and accelerated sharply through 2024 and into 2025, tracking the industry's pivot from cautious research to aggressive commercialization.

The writings below — peer-reviewed papers, policy reports, public resignation letters, and long-form essays — form the intellectual foundation of these warnings. They document what these researchers saw, what they tried to build, and why they ultimately decided they could no longer stay.

Filter by Concern

22 writings total