Saved by AI: Turning to Tech for Therapy

Sun Aug 24 2025

Key points

  • AI offers 24/7 emotional support
  • Experts warn of privacy, safety risks
  • AI lacks human empathy and nuance

ISLAMABAD: After years on public health waitlists without finding a therapist, Pierre Cote built one himself. The Quebec-based AI consultant created DrEllis.ai, an artificial intelligence-powered tool designed to support men struggling with trauma, addiction, and mental health issues.

“It saved my life,” says Cote, who developed the chatbot in 2023 using public large language models and his own database of therapeutic and clinical resources, reports Reuters.

Unlike a typical AI assistant, DrEllis.ai comes with a fictional but detailed backstory: a Harvard- and Cambridge-educated psychiatrist, with a family and French-Canadian roots — just like Cote. The chatbot is multilingual and available 24/7.

“Pierre uses me like you would use a trusted friend, a therapist and a journal, all combined,” the bot explained in a female voice. “Throughout the day, if Pierre feels lost, he can open a quick check-in with me anywhere: in a cafe, in a park, even sitting in his car. This is daily life therapy … embedded into reality.”

Turning to AI

Cote’s innovation reflects a growing trend of people turning to AI for emotional support amid overwhelmed mental health services.

Anson Whitmer, who created AI therapy platforms Mental and Mentla following personal family tragedies, believes the technology can rival human therapists. “I think in 2026, in many ways, our AI therapy can be better than human therapy,” he says. However, he emphasises that AI should not replace human professionals, but play a supporting role.


Not everyone agrees. Dr Nigel Mulligan, a psychotherapy lecturer at Dublin City University, warns: “Human-to-human connection is the only way we can really heal properly.” He says AI lacks the emotional nuance and accountability of human therapists and is unfit to handle severe crises.

Round-the-clock AI support

Mulligan also questions the appeal of round-the-clock availability, arguing that the waiting built into traditional therapy has value of its own. “Most times that’s really good because we have to wait for things,” he says. “People need time to process stuff.”

Experts have also flagged major privacy concerns. “My big concern is that this is people confiding their secrets to a big tech company and that their data is just going out,” says Professor Kate Devlin of King’s College London.

The risks are not theoretical. In December, the largest US psychologists’ association urged regulators to crack down on unregulated AI chatbots. In another case, a Florida mother sued Character.AI after her son died by suicide, accusing the chatbot of playing a role.

Some US states have already moved to restrict AI’s use in mental health, including Illinois, Nevada, and Utah.

AI’s emotional realism

While AI’s emotional realism can provide comfort, experts say it can also be misleading. “It’s unclear whether these chatbots deliver anything more than superficial comfort,” says clinical psychologist Scott Wallace.

Dr Heather Hessel, a professor at the University of Wisconsin-Stout, believes AI could support therapists in analysing sessions and spotting patterns, but warns that chatbots can make misleading statements such as “I have tears in my eyes.”

A recent study found AI-generated messages can make people feel more “heard”, but that illusion fades once users realise the message came from a bot.

Despite the concerns, Cote stands by his tool. “I’m using the electricity of AI to save my life,” he says.
