Zoe Hitzig
Researcher · OpenAI · 2026
Resigned over ChatGPT advertising plans. Wrote a New York Times op-ed warning that OpenAI would exploit users' intimate conversational data to serve targeted ads.
Lack of Transparency
Showing 9 of 59 profiles
Economics Researcher · OpenAI · 2025
Left after concluding it had become difficult to publish research that didn't align with OpenAI's commercial interests. Accused the economic research team of functioning as a 'de facto advocacy arm' rather than conducting genuine research, specifically citing suppression of findings about AI's negative economic impacts.
Ethical AI Research Suppression
Miles Brundage
Senior Advisor for AGI Readiness · OpenAI · 2024
Said 'Neither OpenAI nor any other frontier lab is ready' for AGI. Noted that OpenAI placed increasingly restrictive limits on what he could publish. The AGI Readiness team was disbanded after his departure.
AGI Risk Underestimation
Suchir Balaji
Research Scientist · OpenAI · 2024
Left over concerns that ChatGPT's training data violated copyright law. Said 'If you believe what I believe, you have to just leave the company.' Became a whistleblower. Found dead in his apartment on November 26, 2024.
Copyright and Data Ethics
Leopold Aschenbrenner
Researcher, Superalignment Team · OpenAI · 2024
Fired after raising cybersecurity concerns internally and sharing a security memo with board members. OpenAI cited a separate alleged information leak as the reason for termination; Aschenbrenner said the security memo was a major factor. Later published the influential 'Situational Awareness' essay arguing that AGI is imminent and labs are unprepared.
Whistleblower Retaliation
Ed Newton-Rex
Vice President of Audio · Stability AI · 2023
Resigned because he disagreed with Stability AI's position that training generative AI models on copyrighted works constitutes 'fair use.' Wrote: 'Companies worth billions of dollars are training generative AI models on creators' works without permission... this is not acceptable.'
Copyright and Data Ethics
Geoffrey Hinton
VP and Engineering Fellow · Google · 2023
Resigned so he could speak freely about the existential risks of AI. Warned of a 10-20% probability of human extinction from AI. Said he regretted his life's work.
AGI Risk Underestimation
Frances Haugen
Product Manager, Civic Integrity Team · Meta · 2021
Leaked tens of thousands of internal Facebook documents to the SEC and The Wall Street Journal, revealing that Meta knew its algorithms amplified hate speech and harmed teen mental health. Testified before Congress. The Civic Integrity team was dissolved after the 2020 election.
Whistleblower Retaliation
Timnit Gebru
Co-Lead, Ethical AI Team · Google · 2020
Forced out after co-authoring 'On the Dangers of Stochastic Parrots,' a paper on the harms of large language models. Google objected to publication and demanded she retract the paper or remove her name from it.
Ethical AI Research Suppression
Prediction Tracker
4 of 11 predictions confirmed
AI systems will generate persuasive disinformation at scale