Google Predicts AI May Permanently Destroy Humanity

New study suggests AI could achieve human-like intelligence by 2030

Tue Apr 08 2025

Key points

  • Study categorises risks of advanced AI into four types
  • Risks include misuse, misalignment, mistakes and structural risks
  • UN-like umbrella organisation needed to oversee AGI: DeepMind CEO

ISLAMABAD: A new research paper by Google DeepMind has raised alarms over the breakneck advancement of AI, predicting that it could achieve human-like intelligence by 2030 and “permanently destroy mankind”.

“Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm,” the 145-page study highlighted.

Co-authored by DeepMind co-founder Shane Legg, the paper says, “In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead, it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm.”

The study outlines preventive measures that Google and other AI companies should take to reduce the threat posed by AGI.

Risk types

The study categorises the risks of advanced AI into four types: misuse, misalignment, mistakes and structural risks.

Misuse refers to people intentionally using AI for harm; misalignment to systems developing unintended harmful behaviour; mistakes to unexpected failures caused by design or training flaws; and structural risks to conflicting incentives between multiple parties, such as countries, companies or even multiple AI systems.

The study went on to highlight DeepMind’s risk mitigation strategy, which centres on preventing misuse, that is, stopping people from deliberately using AI to harm others.

DeepMind CEO warns

In February, Demis Hassabis, CEO of DeepMind, said that AGI, meaning systems as smart as or smarter than humans, will start to emerge in the next five to 10 years. He also batted for a UN-like umbrella organisation to oversee AGI’s development.

“I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international, research-focused, high-end collaboration on the frontiers of AGI development to try and make that as safe as possible,” said Mr Hassabis.

“You would also have to pair it with a kind of an institute like the IAEA, to monitor unsafe projects and sort of deal with those. And finally, some kind of supervening body that involves many countries around the world that input how you want to use and deploy these systems. So, a kind of like UN umbrella, something that is fit for purpose for that, a technical UN,” he added.
