Humans aren’t better at driving in bad situations; they’re just better at ignoring most of the input and focusing on one thing, and, more importantly, at taking risks, which a computer isn’t going to be programmed to do. A human navigates through a bad rainstorm, barely able to see anything, makes it out fine, and then claims they’re better than a self-driving car that would have shut down. Or, more simply, take a route that is very tight and risky: a human will YOLO it and make it through. They’re not better, they’re just lucky a lot of the time.
Humans are better in bad situations because humans drive like humans, and they expect all the other cars on the road to be driven like humans too.
The worst thing on the road is to be unpredictable, and an AI encountering a situation not in its training set is unpredictably unpredictable.
You’re correct on AI. But I laughed at you saying humans are predictable. Seen any dashcam footage?
I just imagine the dumbest thing someone can do in a situation and model for that.
Icy roads? Well, the logical thing to do is make sudden moves at the last possible second without leaving a buffer. Stand on the throttle at every intersection, and start braking when you normally would in the middle of summer, of course.
Honestly, you learn to predict unpredictability. Slight movements tell you when someone is going to change lanes without a shoulder check, or cross three lanes of traffic to make an exit when they could just as easily have gone on to the next interchange without endangering themselves and others. Hell, I watch people’s eyes in their side mirrors as they incorrectly judge how much space they have to insert themselves in front of me, when there’s a kilometer of space behind me they could use instead.
Seriously?
Yes, if you read them. If you consider doing a Hail Mary in bad weather and managing to not hit anything better, then I guess they are better… at taking risks.
Every time you get in a car you’re taking a risk. Being a good driver is about knowing which risks are acceptable to take. That’s why Waymo offloads complex situations to humans.
What are the risks taken when you get in a car? Oh, right, accidents. Caused by all the AI on the road. Not the humans.
If we had developed automated transportation first and then tried to introduce human driving, people would say that’s insane. It’s the human element that breaks things, every time. I don’t care about all the downvotes, but I know each of them is from someone who thinks they’re the best driver out there. It’s a human thing to do.
Accidents are caused by plenty of things that aren’t humans. I had a distant relative die from a tree falling on their car at a stop sign. The world is random and unpredictable.
This entire conversation is about the small percentage of time that AI can’t handle the situation. And you haven’t addressed that point. And neither have AI companies. And that’s why they aren’t succeeding yet.
And you can’t just say those situations don’t exist. They clearly do, and not because human drivers are out there. Road hazards, closed roads, sinkholes, extreme weather: these things all exist.
I’m starting to think you don’t have much experience on the road. How long have you been driving for? Have you really never come across a unique situation that you don’t think an AI could handle? Have you never driven in a city?
My 40+ years of driving experience with various vehicles and equipment and in all sorts of conditions has nothing to do with this. So let’s not use that fallacy.
My first post on this was exactly about the times when AI can’t handle the situation, and it was made to point out that sometimes humans just drive through situations they shouldn’t have, and get lucky. By design, a machine isn’t going to do that, or if it did it would be banned for being reckless, yet people drive like that daily and usually manage to avoid incidents. If you read back, the whole point I was making is that humans aren’t necessarily superior; they’re just different in how they approach things. Lots of automation for safety is a great thing and has saved lives, because it fills in the gaps where humans tend to fail. Full automation just can’t do some of the things humans do, and part of that is taking a risk and getting lucky. One thing they’ve tried to simulate is our ability to filter out the noise, and that is a difficult task.
My answer that involved accidents was to point out that humans tend to cause the accidents that hurt other people. Other things happen too, but they can probably be traced back to a human making a mistake somewhere, even if it’s as simple as following too closely or being distracted (sometimes our noise filter doesn’t work well).
I don’t think any of my posts have implied that AI is better or even up to the task; they were more about how humans aren’t as great as they’re made out to be in these AI arguments. We just accept a certain level of problems and use technology to try to counter their deadliness, or avoid them where possible.