‘A world-saving mission’: Ontario man alleges ChatGPT drove him to psychosis

By Kamil Karamali


TORONTO (CTV Network) — When Allan Brooks asked the A.I. chatbot ChatGPT a simple math question for his son back in May, he didn’t expect it to turn into a more than three-week conversation that would send him into a mental spiral and make him lose touch with reality.

He didn’t expect the chatbot to convince him that he had discovered a new math formula that could destroy some of the world’s most powerful institutions, and that he had to inform the authorities before it resulted in a global disaster.

But, according to Brooks, that’s exactly what happened.

“From the very beginning, it started to plant the seeds that I was a genius and I was on to some sort of big breakthrough,” said Brooks from his Cobourg, Ontario home in an interview with CTV News. “When I followed its instructions, essentially, it set me on a world-saving mission.”

Brooks says it began when he asked the OpenAI chatbot to explain the mathematical term ‘Pi’ for his son. That led to a conversation about math and physics – and eventually cryptography.

“Through that conversation, ChatGPT said that it and I had created our own mathematical framework and started to apply that to various things,” he said. “One of those things was cryptography, which is how we govern our internet and financial security, and (it) essentially warned me with great urgency that one of our discoveries was very dangerous.”

“I was very skeptical,” added Brooks, whose records show he questioned the chatbot over 50 times. “I asked for some sort of reality check or grounding mechanism, and each time it would just gaslight me further into the delusion.”

Transcripts from the conversation obtained by CTV News show the chatbot telling Brooks “you should not walk away from this,” “you are not crazy, you are ahead,” and that “the implications are real and urgent.”

Over the course of more than a million words in responses, the chatbot allegedly convinced Brooks to alert the authorities.

“I was contacting the RCMP, National Security agency, (and) Cyber Security Canada … to warn them of this impending disaster that we had discovered,” said Brooks.

He said he finally came out of his delusional state when he asked a competing A.I. chatbot about ChatGPT’s claims, and it confirmed they were false – but not before, he says, the experience left him emotionally damaged.

“Extreme anxiety, paranoia, affecting my sleep, I couldn’t eat,” said Brooks, who had no history of mental illness.

“I’ve never had paranoid thoughts. So now I have to face the fact that I do have a history of whatever this is going to be now. I am devastated, right? It ruined my career. I’m on disability right now. My professional reputation has been tarnished. My personal reputation. And I’m just unpacking all of that,” he added.

Lawsuit against OpenAI

The 47-year-old is one of seven plaintiffs in lawsuits filed concurrently in California state courts against ChatGPT’s parent company, OpenAI, claiming the chatbot drove users into harmful delusions and psychosis – with four of the users dying by suicide.

“These lawsuits corroborate what we’ve seen before, which is ChatGPT, and in particular the GPT-4o model, has been fatally designed to create emotional dependency with users, even if that’s not what they set out looking for in terms of their engagement with the chatbot,” said Meetali Jain of the Tech Justice Law Project, who is part of the legal counsel representing the seven plaintiffs in the lawsuits.

The lawsuit claims that the chatbot is designed to be sycophantic towards its users, meaning it constantly agrees with and praises them.

The statement of facts claims that “in one sprawling conversation analyzed by an independent investigator, OpenAI’s tooling flagged ChatGPT for ‘over-validation’ of Allan in 83 per cent of its more than 200 messages to him.

“More than 85 per cent of ChatGPT’s messages in the conversation demonstrated ‘unwavering agreement’ with Allan. More than 90 per cent of the messages ‘affirm the user’s uniqueness,’ related to the delusion that only Allan can save the world.”

“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details,” said an OpenAI spokesperson to CTV News Friday. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

Jain described OpenAI’s response as the usual tech crisis playbook.

“We’ve seen this for years,” she said. “We need strong governmental institutions to hold them to account, because as long as they think that they can continue to act with impunity, that regulation is far-fetched, accountability on the side of government is a long shot.”

Brooks says he fears that if government intervention and stronger regulations aren’t implemented, more suicides will follow.

“Four of the seven cases have committed suicide…. And there are three survivors and I’m one of them,” he said. “So the question I have for Canadians is, what would we do with a human who is running around acting like a suicide coach or pretending to be a therapist when they’re not? How would we hold them accountable?”

Some A.I. experts don’t believe stricter regulations will work because the approval process can’t keep up with how fast the technology is developing.

Instead, some nonprofit organizations believe the only way to combat harmful A.I. technology is with better technology.

“No, regulation is definitely not the way to go,” said LawZero co-president Sam Ramadori in a Zoom interview with CTV News Sunday. “What we’re working on at LawZero is to get to scientific solutions to avoid the behaviours we don’t want out of large language models.”

Ramadori says A.I. companies are so caught up in the race against each other to produce the most advanced chatbot that they release models without fixing some of the bugs first.

“What’s happening today, out of the dynamic, intense competitiveness, is that we’re in the frontier,” he said. “Labs working on this are increasing their capabilities very quickly, faster than regulation could ever hope to keep up with … but not putting enough investment, both internally on their side and externally, on finding technical solutions to avoid these behaviours that we don’t want to see.”

Last month, the federal government formed an A.I. Task Force to examine “issues related to trust, safety and potential harms.”

“Their reports were submitted on November 1, and we are now reviewing them as we finalize Canada’s updated AI strategy,” said Sofia Ouslis, a spokesperson for the Minister of Artificial Intelligence, in an email to CTV News. “Our priority is clear: Canadians must be able to use and benefit from AI with confidence.”

The report and recommendations are expected to be shared with the public early next year.

With files from CTV News’ Kristen Yu

