Lilian Weng
VP of Research and Safety · OpenAI · 2024
Led an 80+ person safety systems team. Departed after 7 years. Her exit marked the latest in a long string of safety researcher departures.
Safety Deprioritization
Miles Brundage
Senior Advisor for AGI Readiness · OpenAI · 2024
Said 'Neither OpenAI nor any other frontier lab is ready' for AGI. Noted OpenAI placed increasingly restrictive limits on what he could publish. The AGI Readiness team was disbanded after his departure.
AGI Risk Underestimation
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution
Researcher, Superalignment Team · OpenAI · 2024
Departed after the Superalignment team was dissolved.
Team Dissolution
Jan Leike
Co-Lead, Superalignment Team · OpenAI · 2024
Said 'safety culture and processes have taken a backseat to shiny products' at OpenAI. Resigned the day after Sutskever. Joined Anthropic to continue alignment work.
Safety Deprioritization
Ilya Sutskever
Co-Founder and Chief Scientist · OpenAI · 2024
Co-led the Superalignment team and was involved in the attempted board ouster of Sam Altman in Nov 2023. After the board crisis resolved in Altman's favor, Sutskever departed and founded Safe Superintelligence Inc. (SSI).
Alignment Research Gaps
Safety Researcher · OpenAI · 2024
Departed amid the broader safety staff exodus.
Safety Deprioritization
Safety Researcher · OpenAI · 2024
Departed amid the broader safety staff exodus.
Safety Deprioritization
Safety Researcher · OpenAI · 2024
Departed amid the broader Superalignment team exodus.
Safety Deprioritization
Safety Researcher · OpenAI · 2024
Departed amid the broader Superalignment team exodus.
Safety Deprioritization
Prediction Tracker
4 of 11 predictions confirmed
AI systems will generate persuasive disinformation at scale