AI Psychosis Is Rarely Psychosis at All

A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots.

WIRED spoke with more than a dozen psychiatrists and researchers, who are increasingly concerned. In San Francisco, UCSF psychiatrist Keith Sakata says he has counted a dozen cases severe enough to warrant hospitalization this year, cases in which artificial intelligence “played a significant role in their psychotic episodes.” As the situation unfolds, a catchier label has taken hold in the headlines: “AI psychosis.”

Some patients insist the bots are sentient or spin grand new theories of physics. Other physicians describe patients locked in days of back-and-forth with the tools, arriving at the hospital with thousands upon thousands of pages of transcripts detailing how the bots had supported or reinforced obviously problematic thoughts.

Reports like these are piling up, and the consequences are brutal. Distressed users, along with their families and friends, have described spirals that led to lost jobs, ruptured relationships, involuntary hospital admissions, jail time, and even death. Yet clinicians tell WIRED that the medical community is split: Is this a distinct phenomenon that deserves its own label, or a familiar problem with a modern trigger?

AI psychosis is not a recognized clinical label. Still, the phrase has spread in news reports and on social media as a catchall descriptor for some kind of mental health crisis following prolonged chatbot conversations. Even industry leaders invoke it to discuss the many emerging mental health problems linked to AI. Mustafa Suleyman, CEO of Microsoft’s AI division, warned in a blog post last month of the “psychosis risk.” Sakata says he is pragmatic and uses the phrase with people who already use it. “It’s useful as shorthand for discussing a real phenomenon,” says the psychiatrist. But he is quick to add that the term “can be misleading” and “risks oversimplifying complex psychiatric symptoms.”

That oversimplification is exactly what concerns many of the psychiatrists beginning to grapple with the problem.

Psychosis is characterized by a departure from reality. In clinical practice, it is not an illness in itself but a complex “constellation of symptoms including hallucinations, thought disorder, and cognitive difficulties,” says James MacCabe, a professor in the Department of Psychosis Studies at King’s College London. It is most often associated with conditions like schizophrenia and bipolar disorder, though episodes can be triggered by a wide array of factors, including extreme stress, substance use, and sleep deprivation.

But according to MacCabe, case reports of AI psychosis focus almost exclusively on delusions—strongly held but false beliefs that cannot be shaken by contradictory evidence. While acknowledging that some cases may meet the criteria for a psychotic episode, MacCabe says “there is no evidence” that AI has any influence on the other features of psychosis. “It is only the delusions that are affected by their interaction with AI.” Other patients reporting mental health issues after engaging with chatbots, MacCabe notes, exhibit delusions without any other features of psychosis, a condition called delusional disorder.
