Thinking Robots by 2030: Breakthrough or Big Tech Hype?

Mon Jul 28 2025

Key points

  • Experts say true AGI remains decades away, if ever
  • Big tech may redefine AGI to claim success
  • Generative AI mimics intelligence but lacks real understanding
  • Race for superintelligence could threaten human existence

ISLAMABAD: Nowadays, many people use artificial intelligence (AI) chatbots for everything from dinner suggestions to combating loneliness, but could humanity be on the verge of creating machines capable of independent thought — and potentially outsmarting their creators?

Some major tech firms claim that such a breakthrough, known as artificial general intelligence (AGI), is only a few years away. However, sceptics urge caution and warn against buying into the hype.

“Whenever you hear someone talking about AGI, just picture the tooth fairy or Father Christmas,” said Ed Zitron, host of the tech podcast Better Offline and creator of the Where’s Your Ed At? newsletter. “These are all fictional ideas, AGI included. The difference is that business folk are pouring billions into it because they have nowhere else to invest,” he told the media.

While experts disagree on the precise definition of AGI, it is generally understood as an AI that matches or surpasses human intelligence, with the ability to learn and operate autonomously. Such intelligence could be embedded in a robotic body capable of performing a wide variety of tasks.

Achieving AGI

Demis Hassabis, CEO of Google’s AI lab DeepMind, recently stated his company aims to achieve AGI by 2030. “My timeline has been consistent since DeepMind’s founding in 2010 — a roughly 20-year mission — and remarkably, we are on track,” he told the media in May.

According to CBC News, Zitron remains unconvinced, suggesting Hassabis is “directly incentivised” to promote his company’s progress, and highlighted uncertainty over the profitability of AI chatbots like Google’s Gemini or OpenAI’s ChatGPT.

“None of these companies are really making money from generative AI, so they need a new magic trick to keep investors happy,” he said.

AGI has long been predicted but never realised

AI specialist Melanie Mitchell points out that forecasts of intelligent AI have been made since the 1960s — and they have consistently proven inaccurate. “AGI or its equivalent is always ten years away, and perhaps it always will be,” said Mitchell, a professor at the Santa Fe Institute, specialising in AI, machine learning, and cognitive science.

She noted there is no universal agreement on what capabilities define a functioning AGI, but stressed it should not be confused with large language models (LLMs) like ChatGPT or Claude, which are types of generative AI.

LLMs have been trained on vast amounts of human-generated text — from websites, books, and other media — enabling them to produce very human-like language, she explained.

Zitron emphasised that distinction, arguing “generative AI is not intelligence; it’s drawing on a large database of information” fed to it by humans.

AGI could be “a race to disaster”

He defines AGI as “a conscious computer, something that can think and act entirely on its own,” with the ability to learn independently. “We do not understand how human consciousness works,” he said. “How on earth are we meant to replicate that in computers? The truth is, we don’t know.”

Mitchell fears that without a clear, widely accepted definition, big tech companies may simply “redefine AGI into existence.” “They might say, ‘This is AGI,’ and claim success, without it having any deeper significance,” she warned.

Outside the tech sector, some believe AGI is achievable. “If our brain is a biological computer, then it must be possible to build machines that think at a human level,” said Max Tegmark, MIT professor and president of the Future of Life Institute, a non-profit that addresses risks from emerging technologies.

“There is no law of physics preventing us from doing it better,” he added.

Creating thinking machines a “suicide race”

Tegmark suggests it is hubristic to claim AGI is impossible, just as many once believed human flight could never be achieved. Early inventors tried to mimic the rapid wingbeats of small birds without success; the breakthrough came with understanding bird wings better and designing machines that glide instead.

“We are seeing something similar now: today’s advanced AI systems are far simpler than brains, but we have discovered a different way to create thinking machines.”

He would not be surprised if AGI arrives within two to five years — but cautions that does not mean we should build robots that outthink humans.

He described these intelligent machines as a new species, potentially threatening humanity’s place in the natural order, “because the smarter species tends to take control.”

“The race to build superintelligence is a suicide race — but it’s one we don’t need to run,” he said.

“We can still develop incredible AI that cures cancer and provides wonderful tools, without creating superintelligence.”
