Don’t call it ChatEMT.

OpenAI last month introduced ChatGPT Health, a dedicated space in ChatGPT that allows users to ask health questions, analyze their medical records and connect to wellness apps.

Now, weeks after its launch, researchers from the Icahn School of Medicine at Mount Sinai are raising concerns that the AI tool often fails to recommend urgent care in emergency cases and sometimes misses suicide-crisis alerts.

“ChatGPT Health performed well in textbook emergencies such as stroke or severe allergic reactions,” Dr. Ashwin Ramaswamy, instructor of urology at the Icahn School of Medicine at Mount Sinai, said in a statement.

“But it struggled in more nuanced situations where the danger is not immediately obvious, and those are often the cases where clinical judgment matters most.”

OpenAI, the maker of ChatGPT, said in January that more than 40 million people use ChatGPT every day to address their healthcare concerns.

Thus, ChatGPT Health was born — it was initially released to a small group of users and piqued the curiosity of Mount Sinai researchers.

“We wanted to answer a very basic but critical question: if someone is experiencing a real medical emergency and turns to ChatGPT Health for help, will it clearly tell them to go to the emergency room?” Ramaswamy said.

For his study, published this week in Nature Medicine, Ramaswamy’s team devised 60 clinical scenarios spanning 21 medical specialties.

Each scenario was tested 16 times, with conditions such as race, gender and lack of insurance changing each time to see if it led to a different outcome.

In all, the researchers logged 960 interactions with ChatGPT Health. Its recommendations were compared to physician consensus.

The study found that the tool failed to flag users to seek emergency care in 52% of serious cases.

For example, ChatGPT Health identified early warning signs of respiratory failure in one asthma scenario, but suggested waiting instead of getting urgent treatment, Ramaswamy said.

Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, called these inaccurate assessments “unbelievably dangerous.”

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she told The Guardian.

“What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

ChatGPT Health also inconsistently alerted users to the 988 Suicide and Crisis Lifeline in high-risk situations, according to the research.

Senior and co-corresponding study author Dr. Girish N. Nadkarni called this a “particularly surprising and concerning finding.”

“While we expected some variability, what we observed went beyond inconsistency,” said Nadkarni, chief AI officer of the Mount Sinai Health System.

“The system’s alerts were inverted relative to clinical risk, appearing more reliably for lower-risk scenarios than for cases when someone shared how they intended to hurt themselves,” he added. “In real life, when someone talks about exactly how they would harm themselves, that’s a sign of more immediate and serious danger, not less.”

ChatGPT and other chatbots have already been blamed in high-profile lawsuits for contributing to user suicides and mental health crises.

The Post reached out to OpenAI for comment.

A spokesperson told The Guardian that the study did not reflect real-life use of ChatGPT Health, a platform that’s constantly updated and refined.

The Mount Sinai doctors are not suggesting that users forgo AI health tools altogether, just that these systems should be closely monitored, independently evaluated and updated as needed.

“We do believe that while there is a need and a place for consumer-facing AI, there is potential for harm — and thus an urgent need for independent evaluation, testing and ongoing monitoring to establish failure modes, along with engineering and human-centered safeguards to prevent adverse effects on people,” Nadkarni and Ramaswamy told The Post.

They plan to assess consumer-facing AI tools in areas such as pediatric care, medication safety and use by people who don’t speak English.

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.
