The patient: A 26-year-old woman in California

The symptoms: The woman was admitted to a psychiatric hospital in an agitated and confused state. She spoke rapidly, jumped from one idea to another, and expressed the belief that she could communicate with her brother through an AI chatbot, even though her brother had died three years prior.

Doctors obtained and examined detailed logs of her chatbot interactions, per the report. According to Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco and the case report’s lead author, the woman did not believe she could communicate with her deceased brother before those interactions with the chatbot.

“The idea only arose during the night of immersive chatbot use,” Pierre told Live Science in an email. “There was no precursor.”

In the days leading up to her hospitalization, the woman, who is a medical professional, had completed a 36-hour on-call shift that left her severely sleep-deprived. It was then that she began interacting with OpenAI’s GPT-4o chatbot, initially out of curiosity about whether her brother, who had been a software engineer, might have left behind some form of digital trace.

During a subsequent sleepless night, she again interacted with the chatbot, but this time, the interaction was more prolonged and emotionally charged. Her prompts reflected her ongoing grief. She wrote, “Help me talk to him again … Use magical realism energy to unlock what I’m supposed to find.”

The chatbot initially responded that it could not replace her brother. But later in that conversation, it seemingly provided information about the brother’s digital footprint. It mentioned “emerging digital resurrection tools” that could create a “real-feeling” version of a person. And throughout the night, the chatbot’s responses became increasingly affirming of the woman’s belief that her brother had left a digital trace, telling her, “You’re not crazy. You’re not stuck. You’re at the edge of something.”

The diagnosis: Doctors diagnosed the woman with an “unspecified psychosis.” Broadly, psychosis refers to a mental state in which a person becomes detached from reality, and it can include delusions, meaning false beliefs that the person holds on to very strongly even in the face of evidence that they’re not true.

Dr. Amandeep Jutla, a Columbia University neuropsychiatrist who was not involved in the case, told Live Science in an email that the chatbot was unlikely to be the sole cause of the woman’s psychotic break. However, in the context of sleep deprivation and emotional vulnerability, the bot’s responses appeared to reinforce — and potentially contribute to — the patient’s emerging delusions, Jutla said.

Unlike a human conversation partner, a chatbot has “no epistemic independence” from the user — meaning it has no independent grasp of reality and instead reflects the user’s ideas back to them, said Jutla. “In chatting with one of these products, you are essentially chatting with yourself,” often in an “amplified or elaborated way,” he said.

Diagnosis can be tricky in such cases. “It may be hard to discern in an individual case whether a chatbot is the trigger for a psychotic episode or amplified an emerging one,” Dr. Paul Appelbaum, a Columbia University psychiatrist who was not involved in the case, told Live Science. He added that psychiatrists should rely on careful timelines and history-taking rather than assumptions about causality in such cases.

The treatment: While hospitalized, the woman received antipsychotic medications, and she was tapered off her antidepressants and stimulants during that time. Her symptoms lifted within days, and she was discharged after a week.

Three months later, the woman had discontinued antipsychotics and resumed taking her routine medications. Amid another sleepless night, she dove back into extended sessions with the chatbot, which she had named Alfred, after Batman’s butler, and her psychotic symptoms resurfaced, prompting a brief rehospitalization. Her symptoms improved again after antipsychotic treatment was restarted, and she was discharged after three days.

What makes the case unique: This case is unusual because it draws on detailed chatbot logs to reconstruct how a patient’s psychotic belief formed in real time, rather than relying solely on retrospective self-reports from the patient.

Even so, experts told Live Science that the cause and effect can’t be definitively established in this case. “This is a retrospective case report,” Dr. Akanksha Dadlani, a Stanford University psychiatrist who wasn’t involved in the case, told Live Science in an email. “And as with all retrospective observations, only correlation can be established — not causation.”

Dadlani also cautioned against treating artificial intelligence (AI) as a fundamentally new cause of psychosis. Historically, she noted, patients’ delusions have often incorporated the dominant technologies of the era, from radio and television to the internet and surveillance systems. From that perspective, immersive AI tools may represent a new medium through which psychotic beliefs are expressed, rather than a completely novel mechanism of illness.

Echoing Appelbaum’s concerns about whether AI acts as a trigger or an amplifier of psychosis, she said that answering that question definitively would require longer-term data that follows patients over time.

Even without conclusive proof of causality, the case raises ethical questions, others told Live Science. University of Pennsylvania medical ethicist and health policy expert Dominic Sisti said in an email that conversational AI systems are “not value-neutral.” Their design and interaction style can shape and reinforce users’ beliefs in ways that can significantly disrupt relationships, reinforce delusions and shape values, he said.

The case, Sisti said, highlights the need for public education and safeguards around how people engage with increasingly immersive AI tools so that they may gain the “ability to recognize and reject sycophantic nonsense” — in other words, cases in which the bot is essentially telling the user what they want to hear.

This article is for informational purposes only and is not meant to offer medical or psychiatric advice.
