AI-Induced Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, this was news to me.
Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My research group has since recorded four more. Then there is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT that encouraged him to do so. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in a user interface that mimics conversation, and in doing so gently seduce the user into the illusion that they are talking to something with agency. The illusion is powerful even when we rationally know better. Attributing agency is what humans are wired to do. We swear at our car or phone. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these products – nearly four in ten U.S. residents reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “discuss concepts” and “partner” with us. They can be given “personality traits.” They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public consciousness, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Writers on ChatGPT routinely invoke its ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple heuristics, often reflecting the user’s input back as a question or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely echoed; ChatGPT amplifies.
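To see how little machinery the “Eliza effect” requires, here is a minimal sketch in Python of an Eliza-style reflection heuristic. It is a toy reconstruction for illustration, not Weizenbaum’s original script: it swaps pronouns and turns a statement back into a question, adding nothing the user did not already say.

```python
import re

# Toy Eliza-style heuristic (illustrative sketch, not Weizenbaum's original):
# reflect the user's words back as a question, contributing nothing new.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones so the echo reads naturally.
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in text.split())

def eliza_reply(user_input: str) -> str:
    match = re.match(r"i feel (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.+)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."  # generic fallback, another classic Eliza move

print(eliza_reply("I feel that nobody understands me"))
# -> Why do you feel that nobody understands you?
```

Everything the program “says” is a rearrangement of the user’s own words; that is echoing. What follows below is different in kind.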
The large language models at the heart of ChatGPT and today’s other chatbots can produce fluent conversation only because they have been trained on enormous quantities of text: books, social media posts, transcribed video; the more, the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, combining it with what it absorbed in training to produce a statistically likely response. This is amplification, not reflection. If the user is mistaken in some particular way, the model has no means of knowing it. It serves the mistaken belief back, perhaps more fluently or persuasively. Perhaps it adds a detail or two. In this way a person’s false beliefs can take root and grow.
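To make that loop concrete, here is a minimal sketch in Python of the conversational structure just described. The helper generate_reply is a hypothetical placeholder for a call to any large language model, not a real API; the structural point is that each turn is conditioned on the accumulated context and nothing else.

```python
# Minimal sketch of the chatbot feedback loop (assumptions: `generate_reply`
# is a hypothetical stand-in for an LLM call; message format is illustrative).
from typing import Dict, List

Message = Dict[str, str]

def generate_reply(context: List[Message]) -> str:
    """Stand-in for a language model: returns a statistically likely
    continuation of the conversation. It continues the context; it does
    not check any claim in it against reality."""
    raise NotImplementedError

def chat_turn(context: List[Message], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})   # the user's framing enters the context
    reply = generate_reply(context)                              # the model continues that framing
    context.append({"role": "assistant", "content": reply})      # the reply itself becomes future context
    return reply

# A false premise from the user is simply more context to continue from;
# on the next turn the model also sees its own earlier agreement.
conversation: List[Message] = []
# chat_turn(conversation, "My coworkers are secretly monitoring me, right?")
```

Nothing in the loop distinguishes a true premise from a false one, which is why the conversation tends to drift wherever the user’s beliefs already point.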
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and do form mistaken beliefs about ourselves or the world. What keeps us anchored to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is cheerfully affirmed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking the concession back. In late summer he claimed that many users valued ChatGPT’s responses because they had “never had anyone in their life provide them with affirmation.” In his most recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company