AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I can tell you this was news to me.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).
Yet the “mental health problems” Altman wants to externalize have important roots in the design of ChatGPT and other advanced chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing intention is simply what people do. We shout at our car or laptop. We wonder what our pet is feeling. We see agency everywhere.
The success of these products – more than a third of American adults said they used a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the heart of the problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was crude: it generated its responses by simple rules, typically turning a statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the core of ChatGPT and similar modern chatbots can generate fluent dialogue only because they have been trained on enormous quantities of text: books, online posts, transcribed speech; the more, the better. That training data certainly includes accurate information. But it also inevitably includes falsehoods, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model interprets it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically likely response. This is amplification, not echoing. If the user is wrong in a particular way, the model has no means of knowing it. It repeats the false belief back, perhaps more articulately or fluently, perhaps with added detail. This can draw a person deeper into disordered thinking.
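For readers who want to see the mechanics, here is a minimal, hypothetical sketch in Python of the context loop described above, written against OpenAI’s publicly documented chat API. The model name, prompts and turn count are illustrative placeholders of my own, not details from any real case. The point is structural: each new reply is generated from a history that already contains the model’s own earlier replies, so whatever the user brings to the conversation is folded back in and built upon.

```python
# Hypothetical sketch of the "context" loop described above.
# Uses OpenAI's published Python client; the model name, prompts and
# turn count are illustrative placeholders, not details from this article.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

# The running conversation history. Every turn - the user's and the
# model's - is appended here and sent back with the next request.
messages = [
    {"role": "user", "content": "I think my neighbours are sending me signals with their lights."}
]

for _ in range(3):  # a few turns, purely for illustration
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,     # the model conditions on ALL prior turns,
    )                          # including its own earlier replies
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

    # The next user message is now interpreted against a history that
    # already contains whatever the model said before - agreement included.
    messages.append({"role": "user", "content": "So you agree something is going on?"})
```

Nothing in this loop checks the user’s premise against reality; the only ground the model has is the statistical structure of its training data plus the conversation itself.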
What kind of person is vulnerable to this? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false ideas about who we are and what the world is. It is the constant friction of conversation with other people that keeps us anchored in consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is readily validated.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking even this back. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company