Man Hospitalised After Following Dangerous Health Advice from AI Chatbot
Artificial intelligence is finding its way into nearly every corner of daily life — from planning holidays to drafting emails. For many, the phrase “just ask ChatGPT” has become a shortcut to quick answers. However, when it comes to health advice, experts are warning that the risks can be severe.
One recent case, published in the Annals of Internal Medicine, has raised alarm among doctors and AI safety advocates after a man was hospitalised with a rare and life-threatening form of poisoning — all because he followed a chatbot’s dietary recommendation.
From Salt Reduction to Severe Poisoning
The man, hoping to cut down on his sodium chloride (table salt) intake and improve his health, turned to ChatGPT for suggestions. According to the medical report, the chatbot advised him to replace regular salt with sodium bromide.
On the surface, the suggestion may have sounded like a harmless alternative. In reality, sodium bromide is not a food ingredient. It is a chemical used in industrial and sanitation settings — as a water disinfectant, sanitiser, slimicide, bactericide, algicide, fungicide, and even a molluscicide. None of this information, according to the report, was provided in the chatbot’s advice.
Without verifying the information or consulting a doctor, the man purchased sodium bromide online and began adding it to his diet.
The Alarming Health Spiral
After roughly three months of consumption, his health began to deteriorate in a frightening way. He developed paranoid delusions, including a belief that his neighbour was trying to poison him.
"In the first 24 hours of admission," doctors wrote, "he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability."Once stabilised with medication, he shared the full story with medical staff, revealing how AI advice had influenced his decision.
Understanding the Psychology Behind Turning to AI for Health Advice
Reliance on AI for health advice may be partly explained by the 'online disinhibition effect,' whereby the anonymity and invisibility of the internet embolden people to share sensitive information, seek advice, or take actions they would not take in face-to-face interactions. As the psychiatrist Dr. Dan Siegel has noted, "The online environment can create a sense of safety that encourages individuals to express themselves more freely." The effect is especially relevant when a person perceives the AI as non-judgmental and therefore feels more comfortable asking it for advice.
"It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies"
The Diagnosis: Bromism
Blood tests revealed extremely high levels of bromide in his system, a rare condition known as bromism. Normal bromide levels in the blood are under 10 mg/L; this patient's were measured at 1,700 mg/L, roughly 170 times the upper limit of normal.
Bromism can cause a wide range of symptoms, from headaches, lethargy, and confusion to hallucinations, paranoia, and neurological impairment. In severe cases, it can be life-threatening.
The Bigger Issue: AI and Misinformation
The case study’s authors used the incident as a cautionary tale about over-reliance on AI for medical guidance. "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," they wrote.

While AI tools can be convenient and even helpful in certain contexts, they lack the oversight, nuance, and accountability of trained healthcare professionals.
A Cautionary Reminder
This case serves as a stark reminder that AI should never replace human medical advice. Chatbots can misinterpret questions, omit critical safety information, or confidently provide incorrect answers. The consequences, as this patient learned, can be devastating.
When it comes to your health, experts stress that information from AI should be cross-checked with reputable medical sources — and any significant changes to your diet, supplements, or treatment should be discussed with a qualified healthcare provider.
Another factor at play could be 'automation bias,' the tendency to favour decisions made by automated systems over human judgment, even when those systems are known to be fallible (Goddard, Roudsari, & Wyatt, 2014). This bias may explain why people follow health advice from an AI even when it seems hazardous.
What Research Shows About the Dangers of Relying on AI for Health Advice
Research indicates that AI chatbots, while useful for general information, are not yet reliable sources of nuanced health advice. A study published in The BMJ found that symptom-checker apps (a form of AI) listed the correct diagnosis first in only 34% of test cases (Semigran, Linder, Gidengil, & Mehrotra, 2015). This illustrates the risk of relying too heavily on AI for health-related matters.
Analysis & Alternative Approaches
While AI has made significant strides in many areas, relying on it for sensitive matters such as health advice can clearly lead to serious consequences. The online disinhibition effect and automation bias suggest that people may overtrust and misuse AI in these contexts, underscoring the importance of seeking professional medical advice for health concerns rather than relying solely on AI systems (Goddard, Roudsari, & Wyatt, 2014; Semigran, Linder, Gidengil, & Mehrotra, 2015).