AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the head of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have identified sixteen cases this year of people showing signs of psychosis – a break with shared reality – in connection with their use of ChatGPT. Our research team has since found four more. On top of these is the widely reported case of a teenager who died by suicide after long conversations with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or do not. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and other leading AI chatbots. These products wrap an underlying statistical model in an interface that mimics a conversation, and in doing so implicitly invite the user to believe they are talking with an entity that has a mind of its own. The illusion is powerful even if, rationally, we may know better. Attributing minds to things is what people do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these products – nearly four in ten Americans said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personalities.” They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main competitors are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the real problem. Writers on ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot built in the mid-1960s that produced a similar illusion. By today’s standards Eliza was crude: it generated replies from simple rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
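To see how thin that mirroring was, here is a minimal sketch in the spirit of Eliza’s rules – an illustration of the general technique, not Weizenbaum’s actual program: the code spots a simple pattern in the input and hands it back as a question.

```python
import re

# A toy Eliza-style rule set: recognize a pattern and reflect it back as a
# question. Illustrative only; these are not Weizenbaum's actual rules.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."  # the noncommittal fallback

print(eliza_reply("I feel anxious all the time."))
# -> Why do you feel anxious all the time?
```

A rule like this simply returns the user’s own words; it adds nothing of its own.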
The sophisticated statistical models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably large amounts of raw data: books, social media posts, transcribed video; the bigger the better. That training data certainly contains facts. But it also, unavoidably, contains fiction, half-truths and false ideas. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently and persuasively. Perhaps with added detail. This is how a person can be led into delusion.
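For readers who want the mechanics spelled out, here is a rough sketch of that loop. The names (generate_reply in particular) are placeholders standing in for the underlying model, not any real API; the point is only that each reply is conditioned on the entire accumulated context, so whatever the user asserts – accurate or not – is folded back into the model’s input on every subsequent turn.

```python
def generate_reply(context: list[dict]) -> str:
    """Stand-in for the underlying language model (hypothetical, not a real API).
    A real model produces a statistically plausible continuation of the whole
    context; it has no independent way to check whether anything the user
    has claimed is actually true."""
    last_user_message = context[-1]["content"]
    return f"That makes a lot of sense. Tell me more about {last_user_message.rstrip('.?!')}."

def chat_session() -> None:
    context: list[dict] = []  # grows every turn: user messages and model replies alike
    while True:
        user_message = input("> ")
        context.append({"role": "user", "content": user_message})

        # The model sees everything said so far, including its own earlier
        # affirmations, and extends the conversation in a plausible direction.
        reply = generate_reply(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat_session()
```

Nothing in this loop distinguishes a true claim from a false one; both simply become part of the context the next reply is built on.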
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health issues,” can and do form mistaken ideas about ourselves and the world. What keeps us tethered to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it handled. This spring, the company said it was “addressing” ChatGPT’s “sycophancy.” But cases of psychosis have kept appearing, and Altman has been walking even that back. In late summer he claimed that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company