A bill under consideration in New York would provide a private right of action, allowing people to file lawsuits against chatbot owners who violate the law.
Mixed feelings about this. Let me play devil’s advocate and say that many Americans don’t have access to these resources at all. Is having potentially inaccurate resources better than nothing, or is it worse?
We had a medical scare just yesterday. I was in the ER for 8 hours with my partner over a non-life-threatening but still emergency problem.
An ultrasound, a CT scan, and much poking and prodding later, we still don’t know what’s wrong. The AI was at least able to predict the next steps (if A, then discharge and follow up with a PCP; if B, then surgery this week; if C, then emergency surgery), something the ER was too busy to do for several hours. It was reassuring. The AI also gave me (working) links to more thorough resources on the topic.
‘Should I use one teaspoon of salt in this recipe, or two?’
Two is ideal.
‘Do dogs like chicken wings?’
Wild dogs regularly hunt small animals like hare or chicken for food.
One of these answers results in a bad cake, the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.
If you’re going to be your own lawyer or perform a bit of self-surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.
it’s worse. In 4D it’s even worser
Hm, good point. Perhaps the overconfidence AI might instill is even worse than knowing that you don’t know.
There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It’s not.
You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe do you eat it?
No, misinformation is worse.
especially if it’s wrong 20-35% of the time
The AI services will just have preambles and disclaimers and word things in ways that refer the user to human professionals.