Guodong Zhang
Co-Founder (Head of Imagine Team) · xAI · 2026
Co-founder and head of xAI's Imagine team. Part of broader co-founder exodus — 9 of 11 original co-founders have now departed.
Category: Safety Deprioritization
60 profiles · 11 departures in the last 90 days
Co-Founder · xAI · 2026
Co-founder. Part of broader exodus — 9 of 11 original co-founders have now left xAI.
Category: Safety Deprioritization
VP of Hardware, Robotics Lead · OpenAI · 2026
Resigned over OpenAI's deal with the Pentagon. Opposed military applications of AI, citing particular concerns about surveillance and lethal autonomy. Her departure triggered a 295% surge in ChatGPT uninstalls.
Category: Military Applications
Co-Founder · xAI · 2026
Co-founder who departed on Feb 27, 2026. Part of the broader exodus where 9 of 11 original xAI co-founders have now left.
Category: Safety Deprioritization
Co-Founder (Reasoning Team Lead) · xAI · 2026
Co-founder who led the reasoning team. Departed as part of broader xAI co-founder exodus.
Category: Safety Deprioritization
Co-Founder (Research and Safety Lead) · xAI · 2026
Co-founder who led research and safety. Part of a broader exodus where 9 of 11 original xAI co-founders departed, amid concerns about Grok's safety failures including generation of non-consensual explicit images. The safety team was completely disbanded after his departure.
Category: Safety Deprioritization
Safety Researcher · Anthropic · 2026
Departed from Anthropic alongside Mrinank Sharma during the February 2026 safety researcher exodus. Part of a wave of departures expressing concern about the gap between Anthropic's safety commitments and competitive pressures.
Category: Safety Deprioritization
Safety Researcher · Anthropic · 2026
Departed from Anthropic alongside Mrinank Sharma during the February 2026 safety researcher exodus. Part of a wave of departures expressing concern about the gap between Anthropic's safety commitments and competitive pressures.
Category: Safety Deprioritization
Head of Safeguards Research · Anthropic · 2026
Warned 'the world is in peril.' Cited a disconnect between Anthropic's stated safety values and competitive pressures driving actual decisions.
Category: Safety Deprioritization
Researcher · OpenAI · 2026
Resigned over ChatGPT advertising plans. Wrote a New York Times op-ed warning that OpenAI would exploit users' intimate conversational data to serve targeted ads.
Category: Lack of Transparency
VP of Product Policy · OpenAI · 2026
Fired after opposing ChatGPT's 'adult mode' that would allow sexually explicit conversations. Led the product policy team that develops safeguards. Said 'The allegation that I discriminated against anyone is absolutely false.'
Category: Whistleblower Retaliation
Chief Engineer and Co-Founder · xAI · 2025
Left xAI to focus on AI safety research. Founded Babuschkin Ventures to work on alignment problems independently from capabilities labs.
Category: Safety Deprioritization
Co-Founder · xAI · 2025
Co-founder who departed to join Morph Labs, citing desire to pursue independent research. Part of the broader xAI co-founder exodus.
Category: Safety Deprioritization
Safety Researcher · OpenAI · 2024
Said he's 'pretty terrified' by the pace of AI development. Called the pursuit of AGI a 'very risky gamble with the future of humanity.'
Category: AGI Risk Underestimation
VP of Research and Safety · OpenAI · 2024
Led an 80+ person safety systems team. Departed after 7 years. Her exit marked the latest in a long string of safety researcher departures.
Category: Safety Deprioritization
Researcher, AI Governance · OpenAI · 2024
Said it became 'harder for me to trust that my work here would benefit the world.'
Category: Safety Deprioritization
Senior Advisor for AGI Readiness · OpenAI · 2024
Said 'Neither OpenAI nor any other frontier lab is ready' for AGI. Noted OpenAI placed increasingly restrictive limits on what he could publish. The AGI Readiness team was disbanded after his departure.
Category: AGI Risk Underestimation
VP of Research · OpenAI · 2024
Departed the same day as CTO Mira Murati and CRO Bob McGrew.
Category: Safety Deprioritization
Chief Technology Officer · OpenAI · 2024
Departed amid broader leadership exodus. Said she wanted 'to create the time and space to do my own exploration.' Left the same day as the Chief Research Officer and VP of Research.
Category: Safety Deprioritization
Chief Research Officer · OpenAI · 2024
Departed the same day as CTO Mira Murati and VP Research Barret Zoph, part of a coordinated senior leadership exodus.
Category: Safety Deprioritization
Research Scientist · OpenAI · 2024
Left over concerns about copyright law violations in ChatGPT's training data. Said 'If you believe what I believe, you have to just leave the company.' Became a whistleblower. Found dead in his apartment on November 26, 2024.
Category: Copyright and Data Ethics
Co-Founder · OpenAI · 2024
Left to join Anthropic to 'deepen my focus on AI alignment and to start a new chapter where I can return to hands-on technical work.' Departed less than 3 months after the Superalignment team was dissolved.
Category: Alignment Research Gaps
Head of Preparedness Team · OpenAI · 2024
Removed from leadership of the Preparedness team without public announcement. Reassigned to AI reasoning work. The Preparedness team was formed in Dec 2023 to evaluate catastrophic risks.
Category: Safety Deprioritization
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Category: Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Category: Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Category: Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Category: Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Category: Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Category: Team Dissolution
Co-Lead, Superalignment Team · OpenAI · 2024
Said 'safety culture and processes have taken a backseat to shiny products' at OpenAI. Resigned the day after Sutskever. Joined Anthropic to continue alignment work.
Category: Safety Deprioritization
Co-Founder and Chief Scientist · OpenAI · 2024
Co-led the Superalignment team and was involved in the attempted board ouster of Sam Altman in Nov 2023. After the board crisis resolved in Altman's favor, Sutskever departed and founded Safe Superintelligence Inc. (SSI).
Category: Alignment Research Gaps
Safety Researcher · OpenAI · 2024
Departed amid the broader safety staff exodus.
Category: Safety Deprioritization
Safety Researcher · OpenAI · 2024
Departed amid the broader safety staff exodus.
Category: Safety Deprioritization
Safety Researcher · OpenAI · 2024
Departed amid the broader Superalignment team exodus.
Category: Safety Deprioritization
Safety Researcher · OpenAI · 2024
Departed amid the broader Superalignment team exodus.
Category: Safety Deprioritization
Researcher, Policy & Governance · OpenAI · 2024
Departed the AI governance team amid broader safety staff exodus.
Category: Safety Deprioritization
Researcher, Superalignment Team · OpenAI · 2024
Fired after writing an internal board memo about cybersecurity vulnerabilities. Later published the influential 'Situational Awareness' essay arguing AGI is imminent and labs are unprepared.
Category: Whistleblower Retaliation
Researcher, Governance Team · OpenAI · 2024
Lost confidence in OpenAI leadership's ability to handle AGI responsibly. Forfeited approximately $1.7M in vested equity by refusing to sign a non-disparagement agreement.
Category: AGI Risk Underestimation
Researcher, Superalignment Team · OpenAI · 2024
Left the Superalignment team. Said 'I really didn't want to end up working for the Titanic of AI.'
Category: Safety Deprioritization
Vice President of Audio · Stability AI · 2023
Resigned because he disagreed with Stability AI's position that training generative AI models on copyrighted works constitutes 'fair use.' Wrote: 'Companies worth billions of dollars are training generative AI models on creators' works without permission... this is not acceptable.'
Category: Copyright and Data Ethics
VP and Engineering Fellow · Google · 2023
Resigned to freely speak about the existential risks of AI. Warned of 10-20% probability of human extinction from AI. Said he regretted his life's work.
Category: AGI Risk Underestimation
Senior Engineering Manager, ML Ethics Team · Twitter · 2022
Laid off when Elon Musk eliminated the entire ~20-person META (ML Ethics, Transparency, and Accountability) team. Stated publicly: 'The team that was researching and pushing for algorithmic transparency and algorithmic choice... is gone.'
Category: Team Dissolution
Director of ML Ethics, Transparency, and Accountability · Twitter · 2022
Laid off by Elon Musk during mass Twitter layoffs. Her entire ML Ethics, Transparency, and Accountability (META) team was eliminated. Had been building algorithmic fairness tools. Called the dissolution 'a loss for the industry.'
Category: Team Dissolution
Director of Responsible Innovation · Meta · 2022
Left before Meta formally disbanded the Responsible Innovation Team in September 2022. The team had advised product teams on potential harms across societal issues. He was Meta's first-ever Director of Responsible Innovation.
Category: Team Dissolution
Software Engineer, Ethical AI Team · Google · 2022
Departed alongside Alex Hanna to join Timnit Gebru's DAIR Institute as Director of Research. The dismissals of Gebru and Margaret Mitchell weighed heavily on his decision to leave.
Category: Whistleblower Retaliation
Research Scientist, Ethical AI Team · Google · 2022
Resigned from Google's Ethical AI team after the firings of Timnit Gebru and Margaret Mitchell. Said the company was not serious about ethical AI research. Joined Gebru's DAIR Institute.
Category: Whistleblower Retaliation
Product Manager, Civic Integrity Team · Meta · 2021
Leaked tens of thousands of internal Facebook documents to the SEC and Wall Street Journal, revealing that Meta knew its algorithms amplified hate speech and harmed teen mental health. Testified before Congress. The Civic Integrity team was dissolved after the 2020 election.
Category: Whistleblower Retaliation
Co-Lead, Ethical AI Team · Google · 2021
Fired after searching company emails for evidence of discrimination related to Timnit Gebru's ouster. Google cited violations of security policies.
Category: Whistleblower Retaliation
Software Engineer · Google · 2021
Resigned citing Google's mistreatment of Timnit Gebru. Part of the wave of protest resignations following the Ethical AI team leadership firings.
Category: Whistleblower Retaliation
Engineering Director, User Safety · Google · 2021
Resigned after 16 years at Google in protest of Timnit Gebru's firing, which she said 'extinguished my desire to continue as a Googler.' Added: 'We cannot say we believe in diversity, and then ignore the conspicuous absence of many voices from within our walls.'
Category: Whistleblower Retaliation
Research Scientist (Alignment) · OpenAI · 2021
Left to found the Alignment Research Center (ARC) to focus on theoretical alignment research outside the constraints of a capabilities lab.
Category: Alignment Research Gaps
Research Scientist · OpenAI · 2021
Left to co-found Anthropic. Became Chief Science Officer focused on scaling laws and safety.
Category: Safety Deprioritization
Head of Policy · OpenAI · 2021
Left to co-found Anthropic. Concerned about governance and policy gaps in AI development.
Category: Safety Deprioritization
Research Scientist · OpenAI · 2021
Left to co-found Anthropic. Became Chief Architect focused on safe scaling.
Category: Safety Deprioritization
Research Scientist (Interpretability) · OpenAI · 2021
Left to co-found Anthropic to focus on AI interpretability and safety research.
Category: Alignment Research Gaps
Research Scientist (GPT-3 Lead Author) · OpenAI · 2021
Left to co-found Anthropic over safety direction concerns at OpenAI.
Category: Safety Deprioritization
VP of Safety & Policy · OpenAI · 2021
Departed with Dario Amodei to co-found Anthropic. Concerned about the pace of scaling without proportional safety investment.
Category: Safety Deprioritization
VP of Research · OpenAI · 2021
Left over disagreements about scaling AI without adequate safety research. Co-founded Anthropic to pursue a safety-first approach to AI development.
Category: Safety Deprioritization
Co-Lead, Ethical AI Team · Google · 2020
Forced out after co-authoring a paper on the harms of large language models ('On the Dangers of Stochastic Parrots'). Google objected to publication and demanded she retract or remove her name.
Category: Ethical AI Research Suppression
AI Researcher, Open Research Group Founder · Google · 2019
Resigned citing retaliation for organizing the 2018 Google Walkout protesting sexual misconduct handling. Said Google forced her to 'abandon her work' on AI ethics. Wrote: 'It's clear Google isn't a place where I can continue this work.' Co-founded the AI Now Institute at NYU.
Category: Whistleblower Retaliation