News

Godfather of AI says risks are moving faster than guardrails

Geoffrey Hinton, the deep-learning pioneer widely known as the “Godfather of AI,” is again urging policymakers and industry leaders to treat safety as a first-order problem after a recent YouTube interview reignited debate over how fast frontier systems are advancing. In a new clip from The Diary of a CEO titled “Godfather of AI WARNS: ‘You Have No Idea What’s Coming,’” Hinton argues that model capabilities are compounding at a pace that voluntary pledges and traditional software guardrails cannot match, with near-term implications for jobs, security, and public trust.

The thrust of Hinton’s argument is that scale alone (more data, more compute, larger models) is producing qualitatively new behaviors that product teams cannot reliably predict in the lab. If general-purpose models continue to gain planning, tool use, and autonomy, he warns, they will begin to influence high-stakes decisions across business and politics in ways their creators did not anticipate. That reframes the next phase of AI as a governance and engineering challenge rather than a product sprint, which is why he pushes for “safety proven in advance” instead of promises to fix harms after deployment.

On the economy, Hinton’s outlook is decidedly unsentimental. He expects the productivity upside to be real but uneven: value is likely to concentrate among a small set of model owners while a broad class of white-collar roles faces automation pressure. He has argued that without deliberate policy, AI’s gains will widen inequality, with “a few people becoming much richer while most people become poorer.” Those concerns track with recent coverage of his remarks as well as his own interviews this year.

Misuse sits at the center of his near-term worries. The same systems that write code and synthesize research can scale phishing, supercharge social engineering, and generate convincing deepfakes that undermine confidence in elections and media. Hinton also flags harder edge cases that concern national security officials, including model-assisted guidance that could lower barriers to biological or chemical threats. While these scenarios sound apocalyptic, his pragmatic point is simpler: powerful tools need proven brakes before they are shipped at scale.

Timelines are what make his warning urgent. In multiple interviews, Hinton has framed the possibility of systems surpassing human competence not in centuries but within years to a couple of decades. Earlier this year he publicly estimated a 10 to 20 percent chance that AI could ultimately “take control” from humans, a non-trivial figure that helped push existential risk into mainstream policy discussions. The new clip does not dwell on precise odds, but it reinforces the same planning horizon and the need to act before capabilities harden into critical infrastructure.

What he wants to see next is a two-track response. First, treat safety research as a core engineering requirement: red-team programs with teeth, interpretability work that can surface deceptive behavior, rigorous pre-deployment evaluation suites, and independent audits that measure dangerous capabilities rather than just leaderboard benchmarks. Second, build policy that matches the cross-border reality of frontier models, including incident reporting, certification regimes, and mechanisms to pause or throttle risky rollouts until safeguards are demonstrated. Hinton remains skeptical that purely voluntary industry pledges can keep pace with capability growth.

Hinton’s credibility here is not incidental. His career spans foundational breakthroughs in neural networks and representation learning, and since leaving Google to speak more freely about risk, he has become a central voice translating lab-level concerns into public policy language. Whether readers agree with his tail-risk estimates or not, the case laid out in the recent YouTube interview is consistent: progress remains breathtaking, but without rigorous controls, transparent oversight, and real accountability, society is running a high-consequence experiment in public. The debate will continue, and the stakes, he argues, are already clear.