Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
I could’ve told you that for free, no need for a study
People always say this on stories about “obvious” findings, but it’s important to have verifiable studies to cite in arguments for policy, law, etc. It’s kinda sad that it’s needed, but formal investigations are a big step up from just saying, “I’m pretty sure this technology is bullshit.”
I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health. But a study that’s been replicated by multiple independent groups makes it way easier to argue to a committee.
Yeah you’re right, I was just making a joke.
But it does create some silly situations like you said
I figured you were just being funny, but I’m feeling talkative today, lol
As neither a chatbot nor a doctor, I have to assume that subarachnoid hemorrhage has something to do with bleeding a lot of spiders.
https://en.wikipedia.org/wiki/Subarachnoid_hemorrhage
https://en.wikipedia.org/wiki/Arachnoid_mater

it is one of the protective membranes around the brain and spinal cord, and it is named after its resemblance to spider webs, so - close enough
can confirm, this is where spiders live inside your body
also pee is stored in the balls
I’m going to crack it wide open to kill every spider in my body
Anyone who has knowledge about a specific subject says the same: LLMs are constantly incorrect and hallucinate.
Everyone else thinks it looks right.
That’s not what the study showed though. The LLMs were right over 98% of the time…when given the full situation by a “doctor”. The problem was normal people trying to self-diagnose who didn’t know which details were important.
Which is why studies are incredibly important. Even with the text of the study right in front of you, you assumed a conclusion the study didn’t actually reach.
So in order to get decent medical advice from an LLM, you just need to be a doctor and tell it what’s wrong with you.
Yes, that was the conclusion.
“but have they tried Opus 4.6/ChatGPT 5.3? No? Then disregard the research, we’re on the exponential curve, nothing is relevant”
Sorry, I opened Reddit this week