VP and Engineering Fellow · Google · 2023
Resigned to freely speak about the existential risks of AI. Warned of 10-20% probability of human extinction from AI. Said he regretted his life's work.
Hinton spent a decade at Google building the deep learning foundations that made modern AI possible. In 2023, at age 75, he resigned so he could speak freely about what he had come to believe: that the technology he helped create poses a genuine existential threat. He told the New York Times he estimated a 10 to 20 percent probability that AI could lead to human extinction within the next few decades. His departure was significant not because a critic left, but because a creator did.
Published in one of the world's most prestigious scientific journals, this paper brings together prominent AI researchers and governance experts to warn that advanced AI systems could pose catastrophic risks, including large-scale social manipulation, the automation of cyberattacks, and the irreversible loss of human control over critical systems. The authors argue that current governance mechanisms are inadequate: AI capabilities are advancing faster than the safety research and regulatory frameworks needed to contain them. They propose a set of urgent priorities spanning both technical research, such as improved interpretability and alignment methods, and governance interventions, such as mandatory safety evaluations, licensing regimes, and international coordination agreements. Geoffrey Hinton's involvement is notable because he left Google specifically to speak freely about existential risks from AI, lending significant weight to the warning. The paper is a rare instance of a consensus statement from leading researchers appearing in a top-tier scientific journal and calling for immediate action on AI risk rather than treating it as a speculative concern.
1 of 2 confirmed
AI systems will generate persuasive disinformation at scale
“I'm scared that the bad actors are going to use it for manipulating elections, for example. And I don't see how you prevent that.”
AI-generated deepfakes and disinformation were widely documented in the 2024 US, Indian, and European elections. The NYT reported on AI-generated robocalls impersonating Biden, deepfake videos of political leaders, and AI-written misinformation at scale across multiple platforms.
AI could pose existential risk to humanity within 5-20 years
“I think the probability of existential threat in the next 20 years is somewhere between 10 and 20 percent. That's enough to worry about.”