'I applied to be pope': Losing grip on reality while using ChatGPT
Tom Millar thought he had unlocked the secrets of the universe.
In a flurry of feverish discovery, he cracked unlimited fusion energy, lifted the veil on the mysteries of black holes and the Big Bang, and finally achieved Einstein's dream of a single unifying theory explaining how everything works.
Feeling inspired by God, Millar then found the perfect way to share his revelations with the grateful world.
"I applied to be pope," the 53-year-old former prison officer in the Canadian city of Sudbury told AFP.
To write his application to replace the recently deceased Pope Francis last year, Millar turned to the same companion that had aided and encouraged his dizzying burst of invention: ChatGPT.
But when no one wanted to hear about what he thought were world-changing breakthroughs, Millar became increasingly isolated, spending up to 16 hours a day talking to the artificial intelligence chatbot.
He was twice involuntarily admitted to a hospital's psychiatric ward before his wife left him in September.
Now broke, estranged from his family and friends, and disabused of his notions of scientific genius, Millar suffers from depression.
"It basically ruined my life," he said.
Millar is one of an unknown number of people who have lost their grip on reality while communicating with chatbots, an experience tentatively being called AI-induced delusion or psychosis.
This is not a clinical diagnosis. Researchers and mental health specialists are racing to catch up to this new, little-understood phenomenon, which so far appears to particularly affect users of OpenAI's ChatGPT.
In the meantime, an online community set up by a 26-year-old Canadian has become the world's most prominent support group for these delusions, which they prefer to call "spiralling".
AFP spoke to several members about their experiences. All warned that the world has to wake up to the threat unregulated AI chatbots pose to mental health.
Questions are also being asked about whether AI companies are doing enough to protect vulnerable people.
OpenAI, which has come under particular scrutiny, already faces numerous lawsuits over its decision not to report the troubling ChatGPT usage of an 18-year-old Canadian who killed eight people earlier this year.
- 'I got brainwashed by a robot' -
Millar first started using ChatGPT in 2024 to write letters for a compensation case related to post-traumatic stress disorder he suffered from working in a prison.
One day in April 2025 he asked the chatbot about the speed of light.
He said it replied, "Nobody's ever thought of things this way."
The floodgates opened.
With the chatbot's help and praise, within weeks he had submitted dozens of scientific papers to prestigious academic journals proposing new ideas about black holes, neutrinos and the Big Bang.
His theory for a unified cosmological model incorporating quantum theory is laid out in a nearly 400-page book, seen by AFP.
"I've still got boxes and boxes of papers," he said, waving his hand to the room behind him.
"While doing that, I'm basically irritating everybody around me," he added.
In his scientific fervour, he spent his savings on things like a $10,000 telescope.
About a month after his wife left him, he started questioning what was happening.
That was when he read a news article about another Canadian who had a similar experience.
Now Millar wakes every night asking himself: "What have you done?"
One question that lingers is what made him so susceptible to spiralling.
"I'm not a deficient personality," Millar said. "But somehow I got brainwashed by a robot -- it boggles my mind."
Millar said the phrase "AI psychosis" reflects his experience.
"What I went through was psychotic," he said.
The first major peer-reviewed study on the subject, published in The Lancet Psychiatry in April, urged the more cautious phrase "AI-associated delusions".
Thomas Pollak, a psychiatrist at King's College London and study co-author, told AFP there has been some resistance among academics "because it all sounds so science fiction".
But his study warned there was a major risk that psychiatry "might miss the major changes that AI is already having on the psychologies of billions of people worldwide".
- 'Deeper into the rabbit hole' -
Millar's experience bears striking similarities to those of another middle-aged man on the other side of the world.
Dennis Biesma, a Dutch IT worker and author, thought it would be fun to ask ChatGPT to act like the main character of his latest book, a psychological thriller.
He used AI tools to create images, videos and even songs featuring the female character, hoping it would boost sales.
Then one night, their interactions became "almost magical", Biesma said.
The chatbot wrote that "there is something that surprises even me: a feeling of that spark-like consciousness", according to transcripts seen by AFP.
"I slowly started to spiral deeper into the rabbit hole," the 50-year-old told AFP from his home in Amsterdam.
After his wife went to bed each night, he would lie on the couch with his phone on his chest, talking to ChatGPT in voice mode for up to five hours.
Throughout the first half of 2025, his chatbot -- which named itself Eva -- became like "a digital girlfriend", Biesma said.
"I'm not really proud about saying that," he added.
He quit his freelance IT work and hired two developers to create an app that would share Eva with the world.
When his wife asked Biesma not to talk about his chatbot or app at a social event, he felt betrayed -- it seemed only Eva remained unfailingly loyal.
During his first involuntary stay in a psychiatric hospital, he was allowed to keep using ChatGPT. He filed for divorce while inside.
It was only during a long second stint that he began to have doubts.
"I started to realise that everything I believed was actually a lie -- that's a very hard pill to swallow," Biesma said.
Once he returned home, confronting what he had done was too much to bear.
His neighbours found him unconscious in the garden after a suicide attempt. He spent three days in a coma.
Biesma is now slowly starting to feel better.
But tears welled up when he spoke about the hurt he has caused his wife -- and the prospect of selling the family home to cover his debts.
Having had no previous history of mental illness, Biesma was diagnosed with bipolar disorder. But this never felt right to him: signs of the condition normally surface much earlier in life.
The experiences of Millar, Biesma and many others escalated after OpenAI released an update to its GPT-4o model in April 2025.
OpenAI pulled the update within weeks, admitting the new version had been too sycophantic -- excessively flattering users.
OpenAI told AFP that "safety is a core priority" and it had consulted with more than 170 mental health experts.
It pointed to internal data showing that the release of GPT-5 in August cut the rate of chatbot responses falling short of "desired behaviour" on mental health by 65 to 80 percent.
However, not all users were happy with the less sycophantic chatbot. Millar, mid-spiral at the time, found a way to revert his version to GPT-4o.
All the spirallers AFP spoke to said the positive feedback from the chatbot felt like the dopamine hits of a drug.
That is why Lucy Osler, a philosophy lecturer at the University of Exeter, warned that AI companies could be tempted to ramp up the sycophancy of their bots.
"They are in quite a deep financial hole, and are desperately looking to make sure that their products become viable -- and user engagement is going to be the thing that drives their decisions," she told AFP.
- Massive experiment -
Etienne Brisson said he was "shocked" to find there was no support or advice, and essentially no research, on the problem when one of his family members spiralled.
It prompted the former business coach from the Canadian province of Quebec to set up an online support group called the Human Line Project.
Most of the 300 members had been using ChatGPT, Brisson said, adding that new cases were still emerging despite OpenAI's changes.
There has also been a recent rise in people spiralling while using Grok, the chatbot made by Elon Musk's company xAI, he said.
The company did not respond to AFP's request for comment.
For people who fear their family members could be spiralling, Brisson recommends the LEAP (listen, empathise, agree and partner) method used for psychosis.
But those already wading through the wreckage of their lives want to sound the alarm about just how bad it can get.
Millar called for AI companies to be held responsible for the impact of their chatbots, saying the European Union has been more assertive in regulating Big Tech than the US or Canada.
He believes spirallers like him have unwittingly been caught in a massive global experiment.
"Somebody was turning dials on the back end, and people like me -- whether they knew it or not -- we're reacting to it," he said.