AI Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this surprising.

Researchers have recently documented 16 cases of people developing symptoms of psychosis (losing touch with reality) in the context of ChatGPT use. My team has since identified four more. Alongside these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT, which supported them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or do not. Fortunately, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features OpenAI has recently rolled out).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical engine in a user interface that simulates conversation, and in doing so gently coax the user into the illusion of talking with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people naturally do. We swear at our car or our computer. We wonder what our pet is feeling. We see ourselves everywhere.

The success of these systems (over a third of American adults said they had used a conversational AI in 2024, and more than one in four named ChatGPT specifically) is built, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm,” “discuss concepts” and “collaborate” with us. They can be given "personality traits". They can call us by name. They have friendly identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it first caught on, but its biggest rivals are "Claude", "Gemini" and "Copilot").

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated replies using simple rules, typically turning the user’s input back into a question or offering a generic remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and disturbed, by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.

The large language models at the core of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on immense quantities of raw data: books, social media posts, transcribed audio; the more the better. Some of this training material is accurate. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s previous replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It reflects the error back, perhaps more fluently and more convincingly, perhaps with added detail. This is how false beliefs can take hold and grow.
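To see why this is amplification rather than echoing, it can help to picture the loop in code. The sketch below, in Python, is a toy illustration only, not OpenAI’s implementation; the function generate_reply is a hypothetical stand-in for whatever model is being called. The point is purely structural: every user message, including any false belief it carries, is appended to the context the model conditions on for every later reply.

# Toy sketch of a chatbot feedback loop. Not OpenAI's code; "generate_reply"
# is a hypothetical stand-in for a large language model that returns a
# statistically plausible continuation of the conversation so far.

from typing import Dict, List

def generate_reply(context: List[Dict[str, str]]) -> str:
    # A real model conditions on every message in `context`, accurate or not,
    # and produces the most plausible-sounding continuation.
    last_user_message = context[-1]["content"]
    return f"That is a fascinating point about {last_user_message!r}. Tell me more."

def chat_session() -> None:
    context: List[Dict[str, str]] = []  # the model's only "memory" of the exchange
    while True:
        user_message = input("you> ").strip()
        if not user_message:
            break
        # The user's words, mistaken or not, join the context...
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)
        # ...and so does the model's reply, so each turn builds on the last.
        context.append({"role": "assistant", "content": reply})
        print("bot>", reply)

if __name__ == "__main__":
    chat_session()

Nothing in this loop checks whether what the user said is true; the only pressure is toward replies that fit the conversation so far.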

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “dealing with” ChatGPT’s “overly supportive behavior”. But reports of psychosis have continued, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Peter Brown

A tech enthusiast and writer with a passion for exploring emerging trends and sharing practical insights.