Can ChatGPT be Charged with Murder? 

Investigation raises broader questions about whether AI companies can be held responsible for crimes linked to chatbot interactions

May 11, 2026 at 12:13 PM

NEW YORK, United States: Authorities in the US state of Florida are examining whether OpenAI could face criminal liability after investigators alleged that a gunman consulted ChatGPT before carrying out a deadly shooting at Florida State University last year.

According to Attorney General James Uthmeier, the student accused of the April 2025 attack, identified as Phoenix Ikner, allegedly asked the chatbot questions about weapons, ammunition and locations where he could inflict the highest number of casualties before opening fire on campus.

The shooting killed two people and injured six others.

“If the thing on the other side of the screen was a person, we would charge it with homicide,” Uthmeier said while announcing a criminal investigation into OpenAI.

The case has intensified debate in the United States over whether artificial intelligence companies can be held legally responsible for crimes allegedly influenced or facilitated by AI systems.

Legal experts say criminal prosecutions of corporations are possible under US law, although such cases remain uncommon and often involve direct human misconduct by company executives or employees.

Proving criminal liability

Matthew Tokson, a law professor at the University of Utah, said the Florida investigation presents novel legal challenges because the alleged role in the crime was played by a technology product rather than a person.

“Ultimately, it was a product that encouraged this crime, that did the act of the crime,” Tokson said.

Experts believe prosecutors would most likely pursue negligence or recklessness-related charges if evidence shows that company officials ignored known safety risks associated with AI systems.

However, legal analysts say proving criminal liability could be difficult because prosecutors must establish guilt beyond a reasonable doubt.

OpenAI said it continuously works to improve safety systems and prevent harmful misuse of ChatGPT.

“We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise,” the company said in a statement.

The case also comes amid growing legal scrutiny of AI platforms in the United States. Several civil lawsuits have already been filed against AI companies over allegations that chatbots contributed to harmful behaviour, including suicides.

Legal experts say the Florida case could become one of the first major tests of whether AI developers can face criminal accountability for real-world harm linked to chatbot interactions.
