Showing 18 of 60 profiles

Igor Babuschkin
Chief Engineer and Co-Founder · xAI · 2025
Left xAI to focus on AI safety research. Founded Babuschkin Ventures to work on alignment problems independently from capabilities labs.
Safety Deprioritization

John Schulman
Co-Founder · OpenAI · 2024
Left to join Anthropic to 'deepen my focus on AI alignment and to start a new chapter where I can return to hands-on technical work.' Departed less than 3 months after the Superalignment team was dissolved.
Alignment Research Gaps

Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution

Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution

Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution

Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution

Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution

Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution

Jan Leike
Co-Lead, Superalignment Team · OpenAI · 2024
Said 'safety culture and processes have taken a backseat to shiny products' at OpenAI. Resigned the day after Sutskever. Joined Anthropic to continue alignment work.
Safety Deprioritization

Ilya Sutskever
Co-Founder and Chief Scientist · OpenAI · 2024
Co-led the Superalignment team and was involved in the attempted board ouster of Sam Altman in Nov 2023. After the board crisis resolved in Altman's favor, Sutskever departed and founded Safe Superintelligence Inc. (SSI).
Alignment Research Gaps

William Saunders
Researcher, Superalignment Team · OpenAI · 2024
Left the Superalignment team. Said 'I really didn't want to end up working for the Titanic of AI.'
Safety Deprioritization

Paul Christiano
Research Scientist (Alignment) · OpenAI · 2021
Left to found the Alignment Research Center (ARC) to focus on theoretical alignment research outside the constraints of a capabilities lab.
Alignment Research Gaps

Jared Kaplan
Research Scientist · OpenAI · 2021
Left to co-found Anthropic. Became Chief Science Officer focused on scaling laws and safety.
Safety Deprioritization

Research Scientist · OpenAI · 2021
Left to co-found Anthropic. Became Chief Architect focused on safe scaling.
Safety Deprioritization

Chris Olah
Research Scientist (Interpretability) · OpenAI · 2021
Left to co-found Anthropic to focus on AI interpretability and safety research.
Alignment Research Gaps

Tom Brown
Research Scientist (GPT-3 Lead Author) · OpenAI · 2021
Left to co-found Anthropic over safety direction concerns at OpenAI.
Safety Deprioritization

Daniela Amodei
VP of Safety & Policy · OpenAI · 2021
Departed with Dario Amodei to co-found Anthropic. Concerned about the pace of scaling without proportional safety investment.
Safety Deprioritization

Dario Amodei
VP of Research · OpenAI · 2021
Left over disagreements about scaling AI without adequate safety research. Co-founded Anthropic to pursue a safety-first approach to AI development.
Safety Deprioritization

11 departures in the last 90 days