Man Hospitalised After Following Dangerous Health Advice from AI Chatbot

Artificial intelligence is finding its way into nearly every corner of daily life — from planning holidays to drafting emails. For many, the phrase “just ask ChatGPT” has become a shortcut to quick answers. But when it comes to health advice, experts are warning that the risks can be severe.
One recent case, published in the Annals of Internal Medicine, has raised alarm among doctors and AI safety advocates after a man was hospitalised with a rare and life-threatening form of poisoning — all because he followed a chatbot’s dietary recommendation.
From Salt Reduction to Severe Poisoning
The man, hoping to cut down on his sodium chloride (table salt) intake and improve his health, turned to ChatGPT for suggestions. According to the medical report, the chatbot advised him to replace regular salt with sodium bromide.
On the surface, the suggestion may have sounded like a harmless alternative. In reality, sodium bromide is not a food ingredient. It is a chemical used in industrial and sanitation settings as a water disinfectant, sanitiser, slimicide, bactericide, algicide, fungicide, and even a molluscicide. According to the report, the chatbot's advice mentioned none of this.
Without verifying the information or consulting a doctor, the man purchased sodium bromide online and began adding it to his diet.

The Alarming Health Spiral
After roughly three months of consumption, his health began to deteriorate in a frightening way. He developed paranoid delusions, including a belief that his neighbour was trying to poison him.
"In the first 24 hours of admission," doctors wrote, "he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability."Once stabilised with medication, he shared the full story with medical staff, revealing how AI advice had influenced his decision.
""It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies"
The Diagnosis: Bromism
Blood tests revealed extremely high levels of bromide in his system, confirming a rare condition known as bromism. Normal bromide levels in the blood are under 10 mg/L; this patient's level was measured at 1,700 mg/L.
Bromism can cause a wide range of symptoms, from headaches, lethargy, and confusion to hallucinations, paranoia, and neurological impairment. In severe cases, it can be life-threatening.
The Bigger Issue: AI and Misinformation
The case study’s authors used the incident as a cautionary tale about over-reliance on AI for medical guidance. "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," they wrote.
While AI tools can be convenient and even helpful in certain contexts, they lack the oversight, nuance, and accountability of trained healthcare professionals.
A Cautionary Reminder
This case serves as a stark reminder that AI should never replace human medical advice. Chatbots can misinterpret questions, omit critical safety information, or confidently provide incorrect answers. The consequences, as this patient learned, can be devastating.
When it comes to your health, experts stress that information from AI should be cross-checked with reputable medical sources — and any significant changes to your diet, supplements, or treatment should be discussed with a qualified healthcare provider.