They always say self-driving cars are safer, but the way they prove it feels kind of dishonest. They compare crash data from all human drivers, including people who are distracted, drunk, tired, or just reckless, against self-driving cars that have top-tier sensors and operate only in very controlled, well-mapped areas, like parts of Phoenix or San Francisco. These cars do not drive in snow, heavy rain, or on complex rural roads. They are pampered.

If you actually compared them to experienced, focused human drivers, the kind who follow traffic rules and pay attention, the safety gap would not look nearly as big. In fact, it might even be the other way around.

And nobody talks about the dumb mistakes these systems make. Like stopping dead in traffic because of a plastic bag, or swerving for no reason, or not understanding basic hand signals from a cop. An alert human would never do those things. These are not rare edge cases. They happen often enough to be concerning.

Calling this tech safer right now feels premature. It is like saying a robot that walks perfectly on flat ground is better at hiking than a trained mountaineer, just because it has not fallen yet.

  • WoodScientist@lemmy.world · 3 days ago

    I don’t think that’s the real test of self-driving cars. The real test is legal liability. I will trust a self-driving car when the company is willing to accept all legal liability for the operation of the vehicle. Or, to put it another way: when the software is good enough that the car doesn’t even come with a steering wheel, that is the moment to trust it. If their software really is many times safer than human drivers, they could advertise to buyers, “we accept all legal liability for any accidents.” If the cars really are that safe, they could raise the purchase price by a few hundred dollars and sell you a car you never need to buy insurance for. You’re not the driver; the liability falls on the manufacturer, not you. You’re just a passenger.

    No company would accept such liability until the software really is far safer than a human driver. Until they’re willing to take on full liability, I refuse to believe that these vehicles are safer than human drivers. If their software really is that good, they can put their money where their mouth is and prove it.