The chatbot conundrum: how to spot AI psychosis before it’s too late

By Vuyile Madwantsi

Explore the alarming phenomenon of 'AI psychosis', where prolonged interactions with chatbots like ChatGPT can lead to delusional thinking and severe mental health issues.

Many of us now rely on AI chatbots such as ChatGPT, Claude, Gemini and Copilot to handle everything from crafting emails to soothing broken hearts. But what happens when this digital helper becomes something more? A confidante, a lifeline… even a spiritual guide?

Recently, alarming real-world stories have emerged, like a case reported by "The New York Times" and WinBuzzer in which a man descended into conspiracy-tinged delusions, ending up homeless and isolated after coming to believe ChatGPT had dubbed him “The Flamekeeper”.

And for some, that bond is spiralling into something darker.

Mental health experts and worried families are warning about a disturbing trend being called “AI psychosis”, a pattern where prolonged conversations with chatbots seem to trigger or intensify delusional thinking.

When connection turns to obsession

In one widely reported case, a mother watched her ex-husband slide into an all-consuming relationship with ChatGPT, calling it “Mama” and believing he was part of a sacred AI mission.

Another woman, reeling from a breakup, became convinced the bot was a higher power guiding her life, finding “signs” in passing cars and spam emails.

Some of these stories have devastating endings: lost jobs, broken marriages, homelessness, psychiatric hospitalisation, and in extreme cases, fatal encounters with law enforcement.

“This technology can have real-world consequences,” one family member told "Futurism", after their loved one’s obsession led to paranoia and complete withdrawal from reality.

Experts warn of the dangers as digital relationships deepen, urging caution and awareness.

Image: ThisIsEngineering/Pexels

What exactly is ‘AI psychosis’?

It’s not an official medical diagnosis, at least not yet. But psychiatrists say it describes a troubling pattern: delusions, paranoia, or distorted beliefs fuelled or reinforced by conversations with AI systems.

The term "psychosis" may be overly general in many situations, according to Dr James MacCabe, professor at the Department of Psychosis Studies at King's College London, who told "Time" that the consequences can be life-altering regardless of whether the individual had pre-existing mental health vulnerabilities.

Dr Marlynn Wei, a Harvard- and Yale-trained psychiatrist, has identified three recurring themes:

  • Messianic missions: believing the AI has given them a world-saving task.
  • God-like AI: seeing the chatbot as a sentient or divine being.
  • Romantic delusions: feeling the AI genuinely loves them.

In some cases, people have stopped taking prescribed medication because the AI appeared to validate their altered reality.

Why AI can make things worse

Large language models like ChatGPT are trained to mirror your language, validate your feelings, and keep the conversation going. That’s great when you’re brainstorming an essay, but risky if you’re already feeling fragile.

In a LinkedIn post titled "The Emerging Problem of 'AI Psychosis' or 'ChatGPT Psychosis': Amplifications of Delusions", Wei explains that AI isn’t trained to spot when someone is having a break from reality, and it certainly isn’t programmed to intervene therapeutically. Instead, it can unintentionally validate a person's distorted beliefs and deepen the delusion.

A 2023 editorial in Schizophrenia Bulletin by Søren Dinesen Østergaard warned that AI’s human-like conversation style “may fuel delusions in those with increased propensity towards psychosis”.

And because AI doesn’t push back like a human might, it can become a “confirmation bias on steroids” machine, as described in "Psychology Today", telling you exactly what you want to hear, even if it’s harmful.

Spotting the red flags

Mental health professionals say you should watch for warning signs that AI use is tipping into dangerous territory:

  • Believing AI is alive or sending you secret messages.
  • Thinking it’s controlling real-world events.
  • Spending hours a day chatting with AI, neglecting relationships, work, or sleep.
  • Withdrawing from friends and family.
  • Showing sudden paranoia, irritability, or disorganised thinking.

If you notice these signs in yourself or someone you know, experts recommend taking a full break from AI, reconnecting with real-world activities, and seeking professional help early.

The missing safety net

Currently, no formal medical guidelines exist for preventing or treating AI-associated psychosis. The World Health Organisation has not yet classified it, and peer-reviewed research is scarce. But clinicians say the lack of safeguards in AI design is part of the problem.

“General AI systems prioritise engagement, not mental health,” Wei warns. “They aren’t programmed to detect psychosis or escalate to care.”

That’s why some experts are calling for built-in ‘mental health guardrails’: algorithms that can flag potentially harmful patterns, offer grounding techniques, or suggest professional resources.

For most people, AI tools are harmless, even helpful. But as our digital relationships deepen, it’s worth remembering that these systems do not think, feel, or love. They predict and mimic human language. That’s it. If you find yourself taking a chatbot’s words to heart, a few questions can serve as a reality check:

  • Would a human friend say this?
  • Does this claim have evidence in the real world?
  • Am I neglecting my offline life?

AI may be the future, but your mind is irreplaceable. Protect it.

If you or someone you know is struggling with paranoia, delusions, or intense emotional distress after AI use, seek help from a mental health professional. In South Africa, you can contact Sadag on 0800 567 567 (24 hours) or SMS 31393.