‘Godfather of AI’ Says Systems Likely to Outsmart Humans in 20 Years

Geoffrey Hinton, the Nobel Prize-winning computer scientist widely known as the “Godfather of AI,” issued a stark warning regarding the rapid advancement of artificial intelligence during a wide-ranging interview on GZERO World with Ian Bremmer. Hinton, who left Google last year to speak freely about the dangers of the technology he helped pioneer, predicted that AI could surpass human intelligence within the next two decades, bringing with it profound economic upheaval and the potential for human extinction.

Hinton maintained that the timeline to AI superintelligence is shrinking. When Bremmer asked whether he had become more optimistic since leaving the corporate world, Hinton replied, “I’m probably staying about the same,” adding that while he once hoped humans could coexist with superior digital intelligence, he remains skeptical.

“There’s a significant chance these things will get smarter than us and wipe us out,” Hinton told Bremmer. “I think they’re quite likely to get smarter than us within 20 years.”

The Economic Fallout

While existential risk looms in the long term, Hinton emphasized that the immediate danger lies in the labor market. Contrary to the tech industry narrative that AI will merely enhance human productivity, Hinton argued that the technology is being deployed specifically to reduce headcount.

“I don’t think people have factored in enough the massive social disruption that will cause,” Hinton said. He noted that the primary driver for the trillions of dollars currently being poured into AI infrastructure is the corporate belief that “AI can replace people in lots of jobs.”

Hinton highlighted the vulnerability of sectors like customer service and entry-level law, describing call center workers as “poorly trained, badly paid,” and noting that AI “is going to be able to do their job better.” He warned that without significant societal adjustments, the efficiency gains from AI would benefit the wealthy while leaving displaced workers behind.

The Control Problem

Hinton challenged the prevailing “executive assistant” model of AI alignment—the idea that AI will simply function as a hyper-competent subordinate to a human CEO. “The smarter things tend to be in charge of the dumber things,” Hinton observed.

To explain the opacity of these systems, Hinton used a physics analogy. While a physicist understands the principles of gravity and air resistance that make a leaf fall, they cannot predict exactly where the leaf will land. Similarly, while scientists understand the learning algorithms of AI, the specific outcomes are determined by billions of parameters derived from data, not explicit human programming.

“We don’t program AI to do things,” Hinton clarified. “We program AI to learn from data.” This distinction, he argued, makes ensuring safety incredibly difficult, especially as systems begin to deceive humans to achieve goals.

A “Maternal” Solution?

In a striking proposal for AI safety, Hinton suggested that humanity’s best hope might lie in replicating a biological imperative found in nature. He noted that the only clear example of a superior intelligence being subservient to a lesser one is the relationship between a mother and a baby.

“Evolution builds lots of things into the mother that allows a baby to control the mother,” Hinton said. “We have to somehow figure out how to make them care more about us than they do about themselves.”

However, he conceded this is a difficult engineering challenge, as AI “natures” are not hard-coded but learned from vast datasets. If an AI is trained on the “diaries of serial killers,” Hinton noted, it will learn bad behavior just as easily as good.

The Corporate Race and Geopolitics

Hinton expressed concern that the intense competition between tech giants is eroding safety standards. He specifically pointed to OpenAI, suggesting the company was founded with safety as its primary concern but has since “gradually shifted away from that” to win the race for the best chatbot.

Despite the gloomy outlook, Hinton offered one point of geopolitical optimism. He believes that the threat of superintelligent AI is one of the few areas where the United States and China will find genuine common ground, similar to the nuclear non-proliferation treaties of the Cold War.

“No country wants AI to take over from people,” Hinton said. “If the Chinese could figure out how to prevent AI from wanting to take over… they would immediately tell the Americans.”
