Man Hospitalised After Following Dangerous Health Advice from AI Chatbot

On the surface, the suggestion may have sounded like a harmless alternative.

A man ended up in hospital after trying to “optimise” his health with help from an AI chatbot, and the quick fix spiralled into something far worse than a bad decision. The goal sounded harmless: cut down on sodium chloride, feel better, move on with life.


But instead of safer salt alternatives, the chatbot suggested he swap regular table salt for sodium bromide. After about three months, his mind started slipping: his paranoia escalated into full-blown auditory and visual hallucinations, and after attempting to escape he was placed on an involuntary psychiatric hold for grave disability.


Now he’s left with bromism, a blood test result doctors rarely see, and a question that still stings: how could a few lines of AI text lead him straight into a crisis?

The man, hoping to cut down on his sodium chloride (table salt) intake and improve his health, turned to ChatGPT for suggestions. According to the medical report, the chatbot advised him to replace regular salt with sodium bromide.


The Alarming Health Spiral

After roughly three months of consumption, his health began to deteriorate in a frightening way. He developed paranoid delusions, including a belief that his neighbour was trying to poison him.

"In the first 24 hours of admission," doctors wrote, "he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability."

Once stabilised with medication, he shared the full story with medical staff, revealing how AI advice had influenced his decision.

The moment the neighbour became the “poisoner” in his head, the salt swap stopped being a health experiment and started feeling like a breaking point.

This case highlights a growing concern about relying on artificial intelligence for medical guidance. One way to understand it is through the ‘online disinhibition effect’, a psychological phenomenon in which the anonymity of the internet leads people to share personal information and seek advice they might withhold face-to-face. When it comes to health questions, users may see the AI as a non-judgmental entity, creating a false sense of security that encourages them to act on potentially harmful advice. As this hospitalisation shows, that perceived safety can lead to dangerous decisions, underscoring the need for caution when engaging with AI on health matters.

"It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies"

Roughly three months after he switched to sodium bromide, his symptoms had escalated to the point that doctors had to intervene within the first 24 hours of admission.

The Diagnosis: Bromism

Blood tests revealed extremely high levels of bromide in his system, a rare form of toxicity known as bromism. Normal bromide levels in the blood are under 10 mg/L; this patient’s were measured at 1,700 mg/L, roughly 170 times the upper limit.

Bromism can cause a wide range of symptoms, from headaches, lethargy, and confusion to hallucinations, paranoia, and neurological impairment. In severe cases, it can be life-threatening.


The Bigger Issue: AI and Misinformation

The case study’s authors used the incident as a cautionary tale about over-reliance on AI for medical guidance. "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," they wrote.

While AI tools can be convenient and even helpful in certain contexts, they lack the oversight, nuance, and accountability of trained healthcare professionals.

After he was stabilised with medication, he finally told medical staff the exact chain of events, including the AI suggestion that had steered him toward sodium bromide instead of an ordinary reduction plan.

A Cautionary Reminder

This case serves as a stark reminder that AI should never replace human medical advice. Chatbots can misinterpret questions, omit critical safety information, or confidently provide incorrect answers. The consequences, as this patient learned, can be devastating.

When it comes to your health, experts stress that information from AI should be cross-checked with reputable medical sources — and any significant changes to your diet, supplements, or treatment should be discussed with a qualified healthcare provider.

Another factor at play could be ‘automation bias’, the tendency to favour decisions made by automated systems over human judgment, even when those systems are known to be fallible.