Key points
- Experts say true AGI remains decades away, if ever
- Big tech may redefine AGI to claim success
- Generative AI mimics intelligence but lacks real understanding
- Race for superintelligence could threaten human existence
ISLAMABAD: Nowadays, many people use artificial intelligence (AI) chatbots for everything from dinner suggestions to combating loneliness, but could humanity be on the verge of creating machines capable of independent thought, and potentially outsmarting their creators?
Some major tech firms claim that such a breakthrough, known as artificial general intelligence (AGI), is only a few years away. However, sceptics urge caution and warn against buying into the hype.
AGI stands for Artificial General Intelligence, a hypothetical type of AI that possesses human-level cognitive abilities. It refers to a machine that can understand, learn, and apply knowledge across a wide range of tasks, just like a human… pic.twitter.com/4ZmnMhosAW
– DOM (@Domenclature) July 27, 2025
"Whenever you hear someone talking about AGI, just picture the tooth fairy or Father Christmas," said Ed Zitron, host of the tech podcast Better Offline and creator of the Where's Your Ed At? newsletter. "These are all fictional ideas, AGI included. The difference is that business folk are pouring billions into it because they have nowhere else to invest," he told the media.
While experts disagree on the precise definition of AGI, it is generally understood as an AI that matches or surpasses human intelligence, with the ability to learn and operate autonomously. Such intelligence could be embedded in a robotic body capable of performing a wide variety of tasks.
Achieving AGI
Demis Hassabis, CEO of Google's AI lab DeepMind, recently stated his company aims to achieve AGI by 2030. "My timeline has been consistent since DeepMind's founding in 2010 (a roughly 20-year mission) and, remarkably, we are on track," he told the media in May.
According to CBC News, Zitron remains unconvinced, suggesting Hassabis is "directly incentivised" to promote his company's progress, and highlighted uncertainty over the profitability of AI chatbots like Google's Gemini or OpenAI's ChatGPT.
"None of these companies are really making money from generative AI … so they need a new magic trick to keep investors happy," he said.
AGI has long been predicted but never realised
AI specialist Melanie Mitchell points out that forecasts of intelligent AI have been made since the 1960s, and they have consistently proven inaccurate. "AGI or its equivalent is always ten years away, and perhaps it always will be," said Mitchell, a professor at the Santa Fe Institute specialising in AI, machine learning, and cognitive science.
She noted there is no universal agreement on what capabilities define a functioning AGI, but stressed it should not be confused with large language models (LLMs) like ChatGPT or Claude, which are types of generative AI.
LLMs have been trained on vast amounts of human-generated text, from websites, books, and other media, enabling them to produce very human-like language, she explained.
Zitron emphasised that distinction, arguing "generative AI is not intelligence; it's drawing on a large database of information" fed to it by humans.
AGI could be 'a race to disaster'
He defines AGI as "a conscious computer… something that can think and act entirely on its own," with the ability to learn independently. "We do not understand how human consciousness works," he said. "How on earth are we meant to replicate that in computers? The truth is, we don't know."
I appreciate when google put out their definition of AGI, makes it clear cut.
No personal definitions of agi that border onto asi or word games with 'superintelligence'.
The goal posts for agi keep moving. pic.twitter.com/CAeLnNCtqs
– Jimmy Apples (@apples_jimmy) December 18, 2023
Mitchell fears that without a clear, widely accepted definition, big tech companies may simply "redefine AGI into existence." "They might say, 'This is AGI,' and claim success, without it having any deeper significance," she warned.
Outside the tech sector, some believe AGI is achievable. "If our brain is a biological computer, then it must be possible to build machines that think at a human level," said Max Tegmark, MIT professor and president of the Future of Life Institute, a non-profit that addresses risks from emerging technologies.
"There is no law of physics preventing us from doing it better," he added.
Creating thinking machines: a 'suicide race'
Tegmark suggests it is hubristic to claim AGI is impossible, just as many once believed human flight could never be achieved. Early inventors tried to mimic the rapid wingbeats of small birds without success; the breakthrough came with understanding bird wings better and designing machines that glide instead.
"We are seeing something similar now: today's advanced AI systems are far simpler than brains, but we have discovered a different way to create thinking machines."
One of the biggest crowd-pullers at WAIC 2025 was Unitree's G1, the world's first affordable humanoid combat robot. @UnitreeRobotics set up a live arena where robots sparred with each other, and yes, even humans could step into the ring to test their reflexes against G1. This… pic.twitter.com/kpcKFXhnUx
– Reborn (@reborn_agi) July 28, 2025
He would not be surprised if AGI arrives within two to five years, but cautions that does not mean we should build robots that outthink humans.
He described these intelligent machines as a new species, potentially threatening humanity's place in the natural order, "because the smarter species tends to take control."
"The race to build superintelligence is a suicide race, but it's one we don't need to run," he said.
"We can still develop incredible AI that cures cancer and provides wonderful tools, without creating superintelligence."