• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: October 20th, 2024





  • The last US hijacking was in 1990, when a hijacker claimed to have a bomb that later turned out to be fake. Before that, in 1987, a man threatened to start a fire using a cigarette lighter and a packet of chemicals. There was one in 1983 and a couple in 1980, but the majority happened prior to 1973, when basic security checkpoints were instituted.

    There were no notable hijackings in the US between 1990 and 2001.

    The reason 9/11 was so successful is that people expected it to follow the historical script: the hijackers make a small threat, get the plane diverted, and no one dies. Back then, a hijacking was seen the way an unruly flier is today - a little scary, but not much more than an inconvenience.

    After 9/11, people realized that planes could be used as guided missiles by dedicated actors. The goal was no longer to get attention, but to plow a jet loaded with fuel into any structure in the US. Everyone realized that allowing an attacker to take control of the aircraft was a potential death sentence for everyone on board, not to mention any targets on the ground.

    To counter this threat, they instituted two positive reforms: bulletproof, locking cockpit doors, and armed air marshals. Pilots would no longer leave the cockpit to respond to a threat in the cabin - which had previously allowed attackers to take control of the plane, directly or indirectly - and an air marshal on board can eliminate any actual threat to passengers.

    Hijackings didn’t stop in response to TSA security theater. There was already a drastic reduction after basic, minimal security measures were introduced at airports in 1973; by the 1980s they were rare, and after 1990 they vanished entirely.

    TSA security theater also didn’t stop casual hijackings, since many earlier hijackings used the threat of a fake bomb or fire - something enhanced screening does nothing to prevent. Instead, it was the stakes of hijacking that escalated, meaning any casual threat is treated as the worst-case scenario and dealt with as such. Any would-be hijacker knows they can’t get to the cockpit, and that even if no air marshal is on board to subdue them, the passengers will assume they’re all going to die and attack them.

    Ironically, TSA security theater doesn’t actually do what it was intended to do - stop another 9/11-style attack. There are many documented instances of screening failing to do its job: firearms and knives frequently make their way onto planes, and the failure rates should be absolutely terrifying to anyone who believes the TSA is actually protecting us from hijackings. All it does is inconvenience travelers and make simpletons feel safer, while costing us civil liberties and taxpayer dollars.

    The actual, effective reforms were cheap and invisible. The TSA screening at the airports is a bullshit waste of time and money. If anyone wanted to cause a mass-casualty event with a bomb, they’d detonate it in the middle of a crowded TSA security line rather than try to board a plane.






  • I feel like that’s exactly the point of the title - you can generate a ton of code, but if you care at all about the quality or the overall architecture, it’s made your job harder. It makes the easy stuff easier and the hard stuff harder - exactly why I typically hate ORMs.

    Incidentally, I say the title rather than the article, because I’m not going to waste my precious remaining life knowingly consuming AI output. That’s a hella long article that was probably generated off a few bullet points, and if the “author” can’t be bothered to actually write it, then I’ve got better things to do than read it.

    It’s ironic to me because I’ve gotten so angry in the past at people who shallowly react and comment based solely on a title without reading the article, but these articles are usually so bland and devoid of meaningful insight that you can glean most of the idea from the headline alone.


  • This is more specific to Tesla than to self-driving in general, as Musk decided that additional sensors (like the LiDAR and RADAR on other self-driving vehicles) are a problem. Publicly, he’s said it’s because of sensor contention - that if the RADAR and cameras disagree, the car gets confused.

    Of course, that raises the opposite problem: when the camera or image recognition is wrong, there’s nothing to tell the car otherwise - see the Tesla drivers decapitated by trailers the car didn’t see. Additionally, I assume Teslas have accelerometers, so either the self-driving model is ignoring potential collisions or it’s still doing sensor fusion.

    Not to mention we humans use multiple senses when driving; this is one reason steering wheels still mostly use mechanical linkages - we can “feel” the road, detect when the wheels lose traction, and feel the inertia as we take a corner too fast. On a related tangent, the Tesla Cybertruck uses steer-by-wire instead of a mechanical linkage.

    This is why many (including myself) believe Tesla has a much worse safety record than Waymo. I’ve seen enough drunk and distracted drivers to believe that a robot will eventually drive better than a human. Don’t get me wrong, I still have concerns about the technology, but Musk and Tesla have a history of ignoring safety concerns - see the number of deaths related to his desire for non-mechanical door handles that hide the mechanical backup.
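    The sensor-contention argument above can be sketched with a toy example (purely illustrative - this is nobody's actual implementation, and all names and numbers here are made up): fuse two range estimates by inverse-variance weighting, and flag the case where the sensors disagree so badly that neither should be trusted on its own.

```python
# Toy camera/RADAR fusion sketch - illustrative only, not any real vehicle's code.
# Each sensor reports a distance estimate (meters) with an uncertainty (std dev).

def fuse(camera_m, camera_sigma, radar_m, radar_sigma, disagree_sigmas=3.0):
    """Inverse-variance weighted fusion of two range estimates.

    Returns (fused_distance, disagreement_flag). If the estimates differ
    by more than `disagree_sigmas` combined standard deviations, a sane
    system would treat the conflict itself as a signal (e.g. brake or
    hand back control) rather than silently trusting one sensor.
    """
    combined_sigma = (camera_sigma**2 + radar_sigma**2) ** 0.5
    disagreement = abs(camera_m - radar_m) > disagree_sigmas * combined_sigma

    # Weight each reading by the inverse of its variance: the more
    # certain sensor dominates the fused estimate.
    w_cam = 1.0 / camera_sigma**2
    w_rad = 1.0 / radar_sigma**2
    fused = (w_cam * camera_m + w_rad * radar_m) / (w_cam + w_rad)
    return fused, disagreement

# Hypothetical trailer scenario: the camera misreads a white trailer as
# open sky ("200 m of clear road"), while radar sees a solid return 40 m
# ahead. Fusion leans toward the more confident radar and flags the conflict.
dist, conflict = fuse(camera_m=200.0, camera_sigma=10.0,
                      radar_m=40.0, radar_sigma=2.0)
```

    Dropping the RADAR doesn't resolve the contention; it just deletes the disagreement flag, which is exactly the information you'd want before driving under a trailer.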