Guodong Zhang
Co-Founder (Head of Imagine Team) · xAI · 2026
Co-founder and head of xAI's Imagine team. Part of a broader co-founder exodus — 9 of 11 original co-founders have now departed. [Safety Deprioritization]
Co-Founder · xAI · 2026
Co-founder. Part of a broader exodus — 9 of 11 original co-founders have now left xAI. [Safety Deprioritization]
VP of Hardware, Robotics Lead · OpenAI · 2026
Resigned over OpenAI's deal with the Pentagon. Opposed military applications of AI, citing concerns about surveillance and lethal autonomy in particular. Her departure triggered a 295% surge in ChatGPT uninstalls. [Military Applications]
Co-Founder · xAI · 2026
Co-founder who departed on Feb 27, 2026. Part of the broader exodus in which 9 of 11 original xAI co-founders have now left. [Safety Deprioritization]
Co-Founder (Reasoning Team Lead) · xAI · 2026
Co-founder who led the reasoning team. Departed as part of the broader xAI co-founder exodus. [Safety Deprioritization]
Co-Founder (Research and Safety Lead) · xAI · 2026
Co-founder who led research and safety. Part of a broader exodus in which 9 of 11 original xAI co-founders departed, amid concerns about Grok's safety failures, including the generation of non-consensual explicit images. The safety team was disbanded entirely after his departure. [Safety Deprioritization]
Safety Researcher · Anthropic · 2026
Departed from Anthropic alongside Mrinank Sharma during the February 2026 safety researcher exodus. Part of a wave of departures expressing concern about the gap between Anthropic's safety commitments and competitive pressures. [Safety Deprioritization]
Safety Researcher · Anthropic · 2026
Departed from Anthropic alongside Mrinank Sharma during the February 2026 safety researcher exodus. Part of a wave of departures expressing concern about the gap between Anthropic's safety commitments and competitive pressures. [Safety Deprioritization]
Head of Safeguards Research · Anthropic · 2026
Warned that 'the world is in peril.' Cited a disconnect between Anthropic's stated safety values and the competitive pressures driving its actual decisions. [Safety Deprioritization]
Researcher · OpenAI · 2026
Resigned over ChatGPT advertising plans. Wrote a New York Times op-ed warning that OpenAI would exploit users' intimate conversational data to serve targeted ads. [Lack of Transparency]
Chief Engineer and Co-Founder · xAI · 2025
Left xAI to focus on AI safety research. Founded Babuschkin Ventures to work on alignment problems independently of capabilities labs. [Safety Deprioritization]
Co-Founder · xAI · 2025
Co-founder who departed to join Morph Labs, citing a desire to pursue independent research. Part of the broader xAI co-founder exodus. [Safety Deprioritization]
Safety Researcher · OpenAI · 2024
Said he's 'pretty terrified' by the pace of AI development. Called the pursuit of AGI a 'very risky gamble with the future of humanity.' [AGI Risk Underestimation]
VP of Research and Safety · OpenAI · 2024
Led an 80+ person safety systems team. Departed after 7 years. Her exit marked the latest in a long string of safety researcher departures. [Safety Deprioritization]
Researcher, AI Governance · OpenAI · 2024
Said it became 'harder for me to trust that my work here would benefit the world.' [Safety Deprioritization]
VP of Research · OpenAI · 2024
Departed the same day as CTO Mira Murati and CRO Bob McGrew. [Safety Deprioritization]
Chief Technology Officer · OpenAI · 2024
Departed amid broader leadership exodus. Said she wanted 'to create the time and space to do my own exploration.' Left the same day as the Chief Research Officer and VP of Research. [Safety Deprioritization]
Chief Research Officer · OpenAI · 2024
Departed the same day as CTO Mira Murati and VP Research Barret Zoph, part of a coordinated senior leadership exodus. [Safety Deprioritization]
Co-Founder · OpenAI · 2024
Left to join Anthropic to 'deepen my focus on AI alignment and to start a new chapter where I can return to hands-on technical work.' Departed less than 3 months after the Superalignment team was dissolved. [Alignment Research Gaps]
Head of Preparedness Team · OpenAI · 2024
Removed from leadership of the Preparedness team without public announcement. Reassigned to AI reasoning work. The Preparedness team was formed in Dec 2023 to evaluate catastrophic risks. [Safety Deprioritization]
Co-Lead, Superalignment Team · OpenAI · 2024
Said 'safety culture and processes have taken a backseat to shiny products' at OpenAI. Resigned the day after Sutskever. Joined Anthropic to continue alignment work. [Safety Deprioritization]
Safety Researcher · OpenAI · 2024
Departed amid the broader safety staff exodus. [Safety Deprioritization]
Safety Researcher · OpenAI · 2024
Departed amid the broader safety staff exodus. [Safety Deprioritization]
Safety Researcher · OpenAI · 2024
Departed amid the broader Superalignment team exodus. [Safety Deprioritization]
Safety Researcher · OpenAI · 2024
Departed amid the broader Superalignment team exodus. [Safety Deprioritization]
Researcher, Policy & Governance · OpenAI · 2024
Departed the AI governance team amid the broader safety staff exodus. [Safety Deprioritization]
Researcher, Superalignment Team · OpenAI · 2024
Left the Superalignment team. Said 'I really didn't want to end up working for the Titanic of AI.' [Safety Deprioritization]
VP and Engineering Fellow · Google · 2023
Resigned to speak freely about the existential risks of AI. Warned of a 10-20% probability of human extinction from AI. Said he regretted his life's work. [AGI Risk Underestimation]
Senior Engineering Manager, ML Ethics Team · Twitter · 2022
Laid off when Elon Musk eliminated the entire ~20-person META (ML Ethics, Transparency, and Accountability) team. Stated publicly: 'The team that was researching and pushing for algorithmic transparency and algorithmic choice... is gone.' [Team Dissolution]
Director of ML Ethics, Transparency, and Accountability · Twitter · 2022
Laid off by Elon Musk during mass Twitter layoffs. Her entire ML Ethics, Transparency, and Accountability (META) team was eliminated. Had been building algorithmic fairness tools. Called the dissolution 'a loss for the industry.' [Team Dissolution]
Director of Responsible Innovation · Meta · 2022
Left before Meta formally disbanded the Responsible Innovation Team in September 2022. The team had advised product teams on potential harms across societal issues. He was Meta's first-ever Director of Responsible Innovation. [Team Dissolution]
Research Scientist · OpenAI · 2021
Left to co-found Anthropic. Became Chief Science Officer focused on scaling laws and safety. [Safety Deprioritization]
Head of Policy · OpenAI · 2021
Left to co-found Anthropic. Concerned about governance and policy gaps in AI development. [Safety Deprioritization]
Research Scientist · OpenAI · 2021
Left to co-found Anthropic. Became Chief Architect focused on safe scaling. [Safety Deprioritization]
Research Scientist (Interpretability) · OpenAI · 2021
Left to co-found Anthropic to focus on AI interpretability and safety research. [Alignment Research Gaps]
Research Scientist (GPT-3 Lead Author) · OpenAI · 2021
Left to co-found Anthropic over safety direction concerns at OpenAI. [Safety Deprioritization]
VP of Safety & Policy · OpenAI · 2021
Departed with Dario Amodei to co-found Anthropic. Concerned about the pace of scaling without proportional safety investment. [Safety Deprioritization]
VP of Research · OpenAI · 2021
Left over disagreements about scaling AI without adequate safety research. Co-founded Anthropic to pursue a safety-first approach to AI development. [Safety Deprioritization]