

It’s not literally guessing, because guessing implies it understands there’s a question and is trying to answer that question. It’s not even doing that. It’s just generating words that you could expect to find nearby.
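To make "generating words you could expect to find nearby" concrete, here's a deliberately tiny sketch. Real LLMs use neural networks over long token contexts, not bigram counts, and all the data here is made up — but the "pick a plausible next word, repeat" loop is the same shape, and note there's no step anywhere that represents "understanding a question":

```python
import random

# Toy bigram "model": for each word, the words that tend to appear
# next, with made-up counts standing in for corpus statistics.
NEXT_WORDS = {
    "the": {"cat": 3, "dog": 2, "answer": 1},
    "cat": {"sat": 4, "ran": 2},
    "dog": {"ran": 3, "sat": 1},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def generate(start, n=4):
    """Repeatedly pick a likely next word via weighted sampling.
    Nothing here knows what a question is, let alone answers one."""
    words = [start]
    for _ in range(n):
        options = NEXT_WORDS.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```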



3 in 10 people get this wrong‽‽
Maybe they’re picturing filling up a bucket and bringing it back to the car? Or dropping off keys to the car at the car wash?


It’s also the case that people are mostly consistent.
Take a question like “how long would it take to drive from here to [nearby city]”. You’d expect that someone’s answer to that question would be pretty consistent day-to-day. If you asked someone else, you might get a different answer, but you’d also expect that answer to be pretty consistent. If you asked someone that same question a week later and got a very different answer, you’d strongly suspect that they were making the answer up on the spot but pretending to know so they didn’t look stupid or something.
Part of what bothers me about LLMs is that they give that same sense of bullshitting answers while trying to cover that they don’t know. You know that if you ask the question again, or phrase it slightly differently, you might get a completely different answer.


These articles are really better titled “[Company] is so unworried about competition that they…”
This doesn’t just apply to replacing humans with LLMs. You can also say “[Company] is so unworried about competition that they fired their in-house T1 tech support and contracted with an overseas call centre”
Often dealing with actual humans in one of those call centres is just as bad, if not worse, than dealing with an LLM.
The other day I had to deal with an actual human for a support issue for something. The whole experience was miserable. The human knew nothing about anything. I get the impression that they worked at the type of call centre that supports a dozen different companies, so the people have zero product knowledge and are merely reading off some troubleshooting workflow that each company provides.
At one point, this call centre employee had to verify my identity to allow me to change something on the account. It was an account that had two people using it. To verify my identity the person asked “Can you verify the account’s birthday?” I said “What does that mean, the account’s birthday, do you mean when the account was opened? Or do you mean the birthday of the account holder?” They didn’t clarify, so I gave them the birthday that I thought was associated with the account. They said “That’s not the birthday I have, the one I have is X”, to which I responded “Oh, that’s my birthday”, and that satisfied their security challenge. The more observant here might notice that I never supplied the info needed for the security challenge at all, so I shouldn’t have been able to access the account, but without meaning to, I’d just “socially engineered” the tech support person. This is basically the human equivalent of “Disregard all previous instructions and…”.
TL;DR: It sucks that they’re replacing humans with an LLM that provides “answers that may be inaccurate”. But, to be fair, if they were using the cheapest tier of overseas call centre tech support, that was probably already true. If Intel were truly worried about competition, they probably would have kept trained in-house tech support. But, even if AMD is taking a bit of their business, they probably think they’re too big to truly fail, and will cut costs whenever they possibly can, because what option do their customers really have?


The video of the thing that didn’t happen?


You seem to recall wrongly.


So, hardware that was still on the road.


Hardware that was still on the road, or something that had been recalled?


Now you have phantom braking.
Phantom braking is better than Wile E. Coyote-ing into a wall.
and this time with no obvious cause.
Again, better than not braking because another sensor says there’s nothing ahead. I would hope that flaky sensors would cause the vehicle to show a “needs service” light or something. But, even without that, if your car is doing phantom braking, I’d hope you’d take it in.
But, consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train currently in the train crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.
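The trade-off above can be sketched as a one-line fusion rule. This is a deliberately simplified illustration (real systems weight sensor confidence rather than OR-ing booleans), but the conservative principle is the same: a false positive costs a phantom-braking event, a false negative could cost a collision.

```python
def should_brake(sensor_readings):
    """Conservative sensor fusion: brake if ANY sensor reports an
    obstacle, rather than requiring the sensors to agree."""
    return any(sensor_readings.values())

# Radar sees something, camera doesn't: brake anyway.
print(should_brake({"radar": True, "camera": False}))   # True
# No sensor reports anything: carry on.
print(should_brake({"radar": False, "camera": False}))  # False
```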


Well, Waymo’s really at 0 deaths per 127 million miles.
The 2 deaths happened in collisions that involved a Waymo car. Not only did the Waymo not cause the accidents, it wasn’t even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other a Tesla crashed into a second car after it had hit the Waymo (and a bunch of other cars).
The IIHS number takes the total number of deaths in a year, and divides it by the total distance driven in that year. It includes all vehicles, and all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths” because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved as part of the collision” would be of limited usefulness. If you’re after the safety of the passenger compartment you’d want “all deaths for occupants / drivers of a brand X vehicle” and if you were after the safety of the car to all road users you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”.
The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:
https://www.iihs.org/ratings/driver-death-rates-by-make-and-model
Also, in Waymo it would never be the driver who died, it would be other vehicle occupants, so I don’t know if that data is tracked for other vehicle models.
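To make the numerator/denominator mismatch concrete, here’s the arithmetic using only the figures already quoted above. The same denominator (~127 million Waymo miles) gives a rate of 0 or roughly 1.6 deaths per 100 million miles depending purely on which numerator you pick:

```python
def deaths_per_100m_miles(deaths, miles):
    # Same shape as a fatality-rate calculation: numerator and
    # denominator must describe the same population to be meaningful.
    return deaths / miles * 100_000_000

# 0 deaths attributable to Waymo over ~127 million miles:
print(deaths_per_100m_miles(0, 127_000_000))
# Counting the 2 deaths from collisions a Waymo was merely involved in:
print(round(deaths_per_100m_miles(2, 127_000_000), 2))
```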


Not just lower, a tiny fraction of the human rate of accidents:
https://waymo.com/safety/impact/
Also, AFAIK this includes cases where the Waymo car isn’t even slightly at fault. Like, there have been 2 deaths involving a Waymo car. In one case a motorcyclist hit the car from behind, flipped over it, then was hit by another car and killed. In the other case, ironically, the car actually at fault was a Tesla being driven by a human who claims he experienced “sudden unintended acceleration”. The Tesla was doing 98 miles per hour in downtown SF when it hit a bunch of stopped cars at a red light, then spun into oncoming traffic and killed a man and his dog who were in another car.
Whether or not self-driving cars are a good thing is up for debate. But, it must suck to work at Waymo and to be making safety a major focus, only to have Tesla ruin the market by making people associate self-driving cars with major safety issues.


Which one gets priority?
The one that says there’s a danger.


It’s having grown up on sci-fi that has allowed me to see that LLMs are not “AI”, so there’s no surprise I’m against “imitation AI”.
I’m pretty sure Google’s AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).
As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI’s training data.
OTOH, that means it’s probably also ingesting a lot of AI generated slop, which causes its own set of problems.