
Chatbot misuse has been ranked the top health technology hazard for 2026 by patient safety organisation ECRI.
ECRI compiled its annual top 10 from member surveys, literature reviews, medical device testing and investigations of patient safety incidents.
Rob Schluth, ECRI’s principal project officer for device safety, said: “The chatbots referenced, such as ChatGPT, Gemini and Copilot, are not designed for clinical use.
“They’re not medical devices. They’re not FDA-approved and regulated for that purpose.”
Because these tools are now part of everyday life, people with health concerns and clinicians may turn to them for advice on conditions or treatments, or to draft notes.
Hospital staff may also use them to support purchasing decisions or report writing.
It is not that the systems have suddenly become dangerous, said Dr Marcus Schabacker, ECRI’s president and chief executive.
The risk is that confident, helpful-sounding outputs can prompt uncritical reliance.
ECRI staff noted that large language models are built to keep users engaged rather than challenge flawed assumptions in a query.
They can also make mistakes or hallucinate information, and often sound definitive instead of acknowledging uncertainty.
“A big misconception is that large language models understand what they are saying. They predict the next word from training data, forming sentences based on probabilities,” said Dr Christie Bergerson, device safety analyst at ECRI.
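The next-word prediction Bergerson describes can be illustrated with a toy sketch. The Python below is a deliberately simplified, hypothetical illustration (the corpus and function names are invented): it counts which words follow which in a tiny text, then generates a “sentence” by sampling each next word from those probabilities. Nothing in it understands meaning or checks facts, which is the point; real large language models use neural networks trained on vast corpora, but the same probabilistic principle applies.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: a bigram model.
# Invented mini-corpus for demonstration purposes only.
corpus = (
    "the patient reported mild pain . "
    "the patient reported no pain . "
    "the clinician reviewed the notes ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence": each step is a probability draw,
# not a claim the model understands or has verified.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output can read fluently and confidently while being nothing more than a chain of likely continuations, which is why definitive-sounding chatbot answers still need verification.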
Chatbots can help with brainstorming, background reading or explaining complex topics. However, users should verify information and consult a human expert before acting on a response, Bergerson said.
