They always say self-driving cars are safer, but the way they prove it feels kind of dishonest. They compare crash data from all human drivers, including people who are distracted, drunk, tired, or just reckless, to self-driving cars that have top-tier sensors and operate only in very controlled areas, like parts of Phoenix or San Francisco. These cars do not drive in snow, heavy rain, or complex rural roads. They are pampered.
If you actually compared them to experienced, focused human drivers, the kind who follow traffic rules and pay attention, the safety gap would not look nearly as big. In fact, it might even be the other way around.
And nobody talks about the dumb mistakes these systems make. Like stopping dead in traffic because of a plastic bag, or swerving for no reason, or not understanding basic hand signals from a cop. An alert human would never do those things. These are not rare edge cases. They happen often enough to be concerning.
Calling this tech safer right now feels premature. It is like saying a robot that walks perfectly on flat ground is better at hiking than a trained mountaineer, just because it has not fallen yet.
The fact that many human drivers are “distracted, drunk, tired, or just reckless” is a huge point in favor of self-driving cars. There’s no way to guarantee that a human driver is focused and not reckless, and experience can only be guaranteed for professional drivers.
> operate only in very controlled areas, like parts of Phoenix or San Francisco. These cars do not drive in snow, heavy rain, or complex rural roads. They are pampered.
This I fully agree with.
Plus, the way they fail is different from human drivers, which makes them harder to react to for other drivers.
> Plus, the way they fail is different from human drivers, which makes them harder to react to for other drivers.
This is the point that concerns me the most.
Driving with humans has a sort of “dance” to it, and you can often tell when a human is about to do something reckless. They might already be driving erratically. You might be able to see they are distracted or not looking the right way. You can also prepare for the more common human errors because you know how humans think when driving.
With a driverless car, there is no “body language” to read. You can’t make eye contact before you cross in front of them, and there’s no driver preemptively drifting before a lane change, angrily tailgating, or glancing at a cell phone to tip you off. There is zero warning or indication that an AI is about to make a mistake. It’s just done instantly, casually, and with no predictable logic.
Yes, humans are sloppy, but we are social creatures precisely because sociality has endless utility. In this case, your social ability to connect with other drivers is an important safety feature.
People driving horse-drawn buggies probably thought the same way when they saw an out-of-control roadster speeding towards them at a brisk 25 mph. One of the things humanity generally excels at is adapting to a changing environment.
No, they didn’t. The buggy still had a human driver. Back then they were often open cab, meaning you could easily see and maybe even talk to the driver. And horse or buggy, it’s the driver that’s responsible for the vehicle’s behavior.
This is not like the horse-to-automobile transition. That’s a false narrative perpetuated by those running the companies trying to sell autonomous driving.
I’m not saying we can’t adapt. I’m saying that while self-driving solves some problems, it introduces completely new ones.
Both of these cases are still human-controlled vehicles. This is more like trying to learn to predict the behavior of wild animals, and even more so, as these bots aren’t living beings honed by the same evolutionary forces that shaped us. And people still struggle to predict the behavior of animals, even just other mammals. If we struggle to predict even animal behavior, why would we have such an intuition for the behavior of an utterly alien machine? Look what happened to Siegfried and Roy, two people who spent their entire lives learning the behavior of the wild animals they lived and worked with.
There’s also no way to guarantee that an AI car’s sensors are working and not returning a ‘there is currently no pedestrian on the crossing ahead and the speed limit here is 500kph, full speed ahead!’
> The fact that many human drivers are “distracted, drunk, tired, or just reckless” is a huge point in favor of self-driving cars. There’s no way to guarantee that a human driver is focused and not reckless, and experience can only be guaranteed for professional drivers.
You’re right that many human drivers are distracted, drunk or reckless, and that’s a serious problem. But not everyone is like that. Millions of people drive sober, focused and carefully every day, following the rules and handling tough situations without issue.
When we say self-driving cars are safer, we’re usually comparing them to all human drivers, including the worst ones, while testing the cars only in favorable conditions, such as good weather and well-mapped areas. They often avoid driving in rain, snow or complex environments where judgment and adaptability matter most.
That doesn’t seem fair. If these vehicles are going to replace human drivers entirely, they should be at least as good as a responsible, attentive person, not just better than someone texting or drunk. Right now, they still make strange mistakes, like stopping for plastic bags, misreading signals or freezing in uncertain situations. A calm, experienced driver would usually handle those moments just fine.
So while self-driving tech has promise, calling it “safer” today overlooks both the competence of good drivers and the limitations of the current systems.
> Plus, the way they fail is different from human drivers, which makes them harder to react to for other drivers.
Once again, I believe we’ll get there eventually, but it’s still a bit rough for today.
> If these vehicles are going to replace human drivers entirely, they should be at least as good as a responsible, attentive person, not just better than someone texting or drunk.
If you aren’t going to enforce that people only drive when they’re responsible and attentive (which is generally not done), self-driving cars only have to compare to the average driver IMO.
They need to compete with the average driver. I’m a much worse driver when it is icy than when the pavement is dry; that’s nothing about me, just conditions. However, I live where those icy conditions happen often enough that we as a society do not consider it reasonable to shut everything down, so I’m forced to drive despite how bad I am. If self-driving wants to compare to average, it needs to compare across all the conditions that average is calculated in: when the driver is drunk (a low bar to beat), when the weather is bad (a really hard problem), and when conditions are perfect. If they want to remove outliers, they need to be fair in the removal. Did they remove humans driving in bad weather too?
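To see why unfair removal matters, here’s a toy back-of-the-envelope sketch. All the figures are made up purely for illustration; the point is that comparing a fleet’s good-weather-only crash rate against a human average computed over all conditions can shrink or invert the real gap.

```python
# Hypothetical, made-up figures: humans drive in all conditions,
# while the autonomous fleet logs miles only in clear weather.
human_miles = {"clear": 800_000, "rain_snow": 200_000}
human_crashes = {"clear": 4, "rain_snow": 4}

av_miles = {"clear": 500_000, "rain_snow": 0}  # fleet avoids bad weather
av_crashes = {"clear": 2, "rain_snow": 0}

def rate(crashes, miles, conditions):
    """Crashes per million miles over the given conditions."""
    c = sum(crashes[k] for k in conditions)
    m = sum(miles[k] for k in conditions)
    return c / m * 1_000_000

# Naive comparison: AV clear-weather miles vs. human all-conditions miles.
human_all = rate(human_crashes, human_miles, ["clear", "rain_snow"])
av_clear = rate(av_crashes, av_miles, ["clear"])
print(f"human (all conditions): {human_all:.0f} per M miles")  # 8
print(f"AV (clear only):        {av_clear:.0f} per M miles")   # 4

# Like-for-like: restrict the human baseline to the same conditions.
human_clear = rate(human_crashes, human_miles, ["clear"])
print(f"human (clear only):     {human_clear:.0f} per M miles")  # 5
```

With these invented numbers the naive comparison makes the fleet look twice as safe, but restricted to the same conditions the gap drops to 1.25x. The direction and size of that shift depend entirely on the numbers, which is exactly why the removal has to be symmetric.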
Drunk driving is a case where the majority don’t drive that way, so those drivers have a reasonable expectation that you remove it from the average; it doesn’t apply to them. Then we can debate how much worse self-driving can be with that outlier removed (assuming it is worse, which we don’t know).
Nah. They need to compare to competent human drivers. That average is being dragged down by people literally breaking the law. Driving drunk, on cell phones, high, etc. The standard cannot be, “well, it doesn’t really drive like a skilled driver, but if you average in the criminal delinquents, the average performance is comparable.”
The criminal delinquents are a fact of life, though, and it’s a real feature that self-driving cars don’t do that stuff. They fail in other ways.
If only we could somehow put cars on rails with predetermined routes, speed, and stops. Then they’d be even safer!
Too bad that’s impossible and has never been done.
You could even link multiple cars together if a lot of people are going to the same place…
Nah. That’ll never work.
And you could easily provide them with electric, and therefore possibly clean, power, without charging.
Sounds like comoonizt propaganda to me 🚗🚗🚗
Edit: more accurate (north america exclusive)
🚷🚗🚳🚛
Yeah, let’s take cars…
and make them worse!
Ask me how I know you’re from the US.
Ask me how I know that you’re prejudiced.
I’m talking about a train you fucking burger.
Did you think I didn’t know that or something? Trains are worse than cars. How do you not see so clearly the argument I’m making here?
Holy crap the world is in trouble with people like you around.
“hurr durrr I hate teh US” is not a valid argument against cars; it’s a trite attempt at derailing the subject and attacking the person making the argument instead.
what argument? you just said a thing and didn’t elaborate on why you think that way
*bla bla bla burger noises*
The real test wouldn’t be making them compete only against responsible drivers; it would be making self-driving cars compete in all the same conditions humans drive in. Humans are dangerous and unreasonable, and that is the problem self-driving cars are trying to solve. But that solution is useless if it can’t perform in rain, snow, and on undeveloped roads. The reality is they can’t, and they end up being less safe than human drivers.
I don’t think that’s the real test of self-driving cars. The real test is legal liability. I will trust a self-driving car when the company is willing to accept all legal liability for the operation of the vehicle. Or, perhaps another way, if the software is good enough that the car doesn’t even come with a steering wheel, then that is the moment to trust it. If their software really is many times safer than human drivers, then they could advertise to buyers, “we accept all legal liability for any accidents.” If the cars really are that safe, they should be able to increase the purchase prices by a few hundred dollars and give you a car that you never need to purchase insurance for. You’re not the driver; the liability falls on the manufacturer, not you. You’re just a passenger.
No company would accept such a liability until the software really is far safer than a human. Until they’re willing to take on full liability, I refuse to believe that these vehicles are safer than human drivers. If their software really is that good, they can put their money where their mouth is and prove it.
I agree with this as well
This whole comparing AI to only the best and brightest humans at their absolute pinnacle is so tiring.
Humans are stupid, they’re aggressive, they’re ignorant, they’re inattentive, so comparing only to the educated, civilized bits is just selection bias. Humans are wrong. Humans are bad drivers constantly. They think vaccines cause 5G and that being gay is transmissible.
Fuck Humans.
Why not see both as the stupid thing that they are?
Because there are different “types” of AI, and some of them are genuinely impressive. I don’t see AI as inherently stupid; I see that the way it’s being implemented and abused as stupid.
Kinda the same way I see humans. MOST of them are pieces of trash, but there are some diamonds in the rough among them that are absolute treasures to be around.
I would agree with that, except I don’t want my electricity bill to go up 50%+ for the same service and through no choice of my own. That is the definition of stupid.
The problem is
- you’re trying to compare ai to a wide range of driving abilities
- everyone thinks they’re an above average driver
- AI has different strengths and weaknesses from humans. It’s differently safe
Humans are not only bad drivers, but they will also have a difficult time accepting that AI is better, until it’s overwhelmingly better. Even then, people will blame AI for inevitable accidents and claim “I could have done better.”
Fuck humans? Fuck human nature
Fine. I propose a new law. Self-driving cars will only be allowed on the road if their manufacturers accept total legal liability for any damage caused by them. Humans aren’t supposed to be operating these vehicles, thus all liability should fall on the companies that make them. This would be a very effective way of cutting through AI bullshit. It’s easy to game numbers and make your product look safer than it is. And companies lie to their customers all the time. But if we make manufacturers fully liable for any accidents caused by their self-driving vehicles, then they will only release those vehicles when they actually are far safer than human drivers.
The auto insurance industry should be made a thing of the past. If self-driving cars really are that safe, then the manufacturers can afford to cut a check whenever one of those fluke rare accidents really does happen.
Until a manufacturer is willing to do this, don’t believe a damn word they say about the safety of their cars.
They have to accept the liability; it’s unreasonable for liability to continue being on the driver if they’re not driving.
That is why I’m waiting on independent analysis. The data exists. I’ve never seen someone who isn’t biased look at the data.
Self-driving cars aren’t good; it’s just that the average driver sucks at not speeding up to 80 mph when I signal I wish to change lanes.