Google DeepMind Co-Founder Doubles Down: 50% Chance of AGI by 2028

In a candid new interview, Shane Legg, the co-founder and Chief AGI Scientist at Google DeepMind, has doubled down on a prediction he made over a decade ago: that there is a 50-50 chance humanity will achieve Artificial General Intelligence (AGI) by 2028.

Speaking on Google DeepMind: The Podcast with host Professor Hannah Fry, Legg offered a sobering and expansive look at the current trajectory of artificial intelligence, suggesting that human intelligence will not be the “upper limit” of what creates future value in the global economy.

Legg, who coined the term AGI alongside Ben Goertzel in the early 2000s, defines “minimal AGI” as an artificial agent capable of performing the cognitive tasks that a typical human can do. While acknowledging that current systems like large language models are “uneven”—possessing superhuman knowledge in some areas while lacking basic reasoning in others—he believes the gap is closing rapidly.

“Is human intelligence going to be the upper limit of what’s possible? I think absolutely not,” Legg stated during the interview. He noted the physical limitations of the human brain, such as energy consumption and the slow speed of electrochemical signaling, compared to the potential of machine intelligence. “Instead of 100 hertz on the channel, you can have 10 billion hertz on the channel. And instead of electrochemical wave propagation at 30 meters per second, you can be at the speed of light.”
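The figures Legg quotes imply enormous headroom. A quick back-of-envelope calculation (using his round numbers, with the speed of light as the upper bound he mentions) shows the raw factors involved:

```python
# Illustrative arithmetic based on the figures quoted in the interview.
neuron_rate_hz = 100            # ~100 Hz, typical neuron signaling rate
chip_rate_hz = 10e9             # "10 billion hertz on the channel"
nerve_speed_mps = 30            # electrochemical propagation, ~30 m/s
light_speed_mps = 299_792_458   # speed of light in m/s

rate_factor = chip_rate_hz / neuron_rate_hz       # signaling-rate advantage
speed_factor = light_speed_mps / nerve_speed_mps  # propagation-speed advantage

print(f"Signaling rate: {rate_factor:.0e}x faster")
print(f"Propagation speed: {speed_factor:.1e}x faster")
```

By these numbers, machine substrates could signal roughly a hundred million times faster and propagate signals roughly ten million times faster, which is the basis for Legg's claim that human intelligence is nowhere near the physical ceiling.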

The Coming Economic Shift

While technological optimism remains high in Silicon Valley, Legg warned that society is largely unprepared for the structural disruptions that AGI will bring. Unlike previous industrial revolutions that mechanized physical labor, this shift targets cognitive labor—the very engine of the modern knowledge economy.

“This is actually something which is going to structurally change the economy and society,” Legg said. “We need to think about how do we structure this new world.”

Legg pointed out that highly paid, “elite cognitive work” involving complex reasoning—such as law, coding, and mathematics—may be automated faster than physical trades. “In a few years… where prior you needed a hundred software engineers, maybe you need twenty,” he hypothesized, noting that the remaining engineers would be far more productive using advanced AI tools. Conversely, he suggested physical roles like plumbing would remain safe for longer due to the difficulties of robotics—a phenomenon known as Moravec’s paradox.

Safety and “System 2” Thinking

Addressing the critical issue of AI safety, Legg advocated for a shift in how models process information. He drew a parallel to Daniel Kahneman’s concept of “System 1” (instinctive) and “System 2” (deliberative) thinking. Current AI often relies on rapid pattern matching, but future safe AGI must be able to pause and reason ethically before acting.

“It’s often not sufficient just to go with your gut instinct,” Legg explained regarding ethical decision-making. “You actually need to sit down and think about it… If we can make that reasoning really, really tight… I think it should, in principle, actually be able to become more ethical than people.”

A New Epoch for Humanity

Despite the risks, Legg framed the arrival of AGI as a potential “golden age” if navigated correctly, citing the ability to advance science, cure diseases, and solve complex global problems. However, he stressed that the window for societal preparation is narrowing.

When pressed on whether he stands by his long-held forecast of a 50% chance of AGI arriving by 2028, Legg remained firm. “Yes,” he confirmed, noting that while the exact arrival of “superintelligence” might take longer, the threshold for machines matching human cognitive abilities is imminent.

As for the philosophical debate over whether these machines will ever be truly sentient, Legg suggested that public perception will matter more than the scientific reality.

“Some people will think they are conscious, and some people will think they are not. That is certainly going to happen,” Legg said.
