Toby Pohlen
Co-Founder · xAI · 2026
Co-founder who departed on Feb 27, 2026. Part of the broader exodus in which 9 of 11 original xAI co-founders have now left.
Safety Deprioritization
Co-Founder (Reasoning Team Lead) · xAI · 2026
Co-founder who led the reasoning team. Departed as part of the broader xAI co-founder exodus.
Safety Deprioritization
Co-Founder (Research and Safety Lead) · xAI · 2026
Co-founder who led research and safety. Part of a broader exodus in which 9 of 11 original xAI co-founders departed, amid concerns about Grok's safety failures, including the generation of non-consensual explicit images. The safety team was completely disbanded after his departure.
Safety Deprioritization
Safety Researcher · Anthropic · 2026
Departed from Anthropic alongside Mrinank Sharma during the February 2026 safety researcher exodus. Part of a wave of departures expressing concern about the gap between Anthropic's safety commitments and competitive pressures.
Safety Deprioritization
Safety Researcher · Anthropic · 2026
Departed from Anthropic alongside Mrinank Sharma during the February 2026 safety researcher exodus. Part of a wave of departures expressing concern about the gap between Anthropic's safety commitments and competitive pressures.
Safety Deprioritization
Head of Safeguards Research · Anthropic · 2026
Warned that 'the world is in peril.' Cited a disconnect between Anthropic's stated safety values and the competitive pressures driving its actual decisions.
Safety Deprioritization
VP of Product Policy · OpenAI · 2026
Fired while leading the product policy team that develops safeguards. Had opposed ChatGPT's planned 'adult mode' feature. OpenAI cited a discrimination allegation from a colleague as the reason for termination; Beiermeister denied the allegation and said it was retaliation for raising safety concerns.
Whistleblower Retaliation
Safety Researcher · OpenAI · 2024
Said he is 'pretty terrified' by the pace of AI development. Called the pursuit of AGI a 'very risky gamble with the future of humanity.'
AGI Risk Underestimation
Prediction Tracker
4 of 11 predictions confirmed
AI systems will generate persuasive disinformation at scale