
Friday, June 1, 2018

Self Driving Cars: Unfortunately, Safer is Not Safe Enough

The technologist's perspective is that once self-driving cars are better and safer drivers than humans, they should be adopted. Looking purely at the numbers, this is logical.

According to the Association for Safe International Road Travel, nearly 1.3 million people die in traffic crashes each year, an average of 3,287 deaths a day. Additionally, 20 to 50 million people are injured or disabled annually.

If self-driving cars reduced these accidents by 10%, then 130,000 fewer people would die and 2 to 5 million fewer people would be injured annually. That sounds great: well over a hundred thousand deaths and millions of injuries prevented each year. So logically, even a system that is just 1% safer means lives saved, and we should all start letting an AI pilot our cars, right? It's not that simple.
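The arithmetic above can be sketched in a few lines. The 10% reduction is the post's illustrative assumption, not a measured figure:

```python
# Rough, illustrative arithmetic based on the ASIRT figures cited above.
ANNUAL_DEATHS = 1_300_000                    # ~1.3 million traffic deaths per year
ANNUAL_INJURIES = (20_000_000, 50_000_000)   # 20-50 million injured or disabled per year

def lives_saved(reduction):
    """Deaths and injuries avoided if crashes drop by `reduction` (a 0-1 fraction)."""
    deaths_avoided = ANNUAL_DEATHS * reduction
    injuries_avoided = tuple(n * reduction for n in ANNUAL_INJURIES)
    return deaths_avoided, injuries_avoided

deaths, (inj_lo, inj_hi) = lives_saved(0.10)
print(f"{deaths:,.0f} fewer deaths, {inj_lo:,.0f} to {inj_hi:,.0f} fewer injuries")
# 130,000 fewer deaths, 2,000,000 to 5,000,000 fewer injuries
```

Plugging in 0.01 instead of 0.10 shows why "even 1% safer" still means roughly 13,000 lives per year.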

It's About Emotions, Not Math

We are not purely logical beings. Even with a great autonomous drive system, crashes will occur. No one making a self-driving system claims that it will eliminate crashes. When crashes occur, people will be hurt and they will look for someone/something to blame. Parents and spouses of victims will demand justice.

If two human-driven vehicles are in a crash, blame will be assigned and justice will usually be meted out. When there is loss, there is someone to blame and a target for the anger.

When an AI-driven vehicle is in a crash that causes injury or loss of life, the same emotions of anger and blame arise, but now the target is different.

Fewer Victims, but Not Necessarily From The Same Population

Say that during a given period there would have been 3,000 crashes if humans were the sole drivers. Now place self-driving cars on the roads instead, and say only 1,500 crashes occur as a result. But those 1,500 crashes are not necessarily a subset of the 3,000 that would have happened. Some would overlap, such as a falling tree that neither a human driver nor an AI could have avoided. But other crashes would be new ones that a human might have avoided.
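The point that fewer crashes does not mean a subset of the same crashes can be illustrated with toy sets. The counts match the example above; the individual crash IDs are purely hypothetical:

```python
# Toy illustration: crash sets under the two counterfactuals.
human_crashes = set(range(3000))             # crashes that would occur with human drivers
unavoidable = set(range(1000))               # e.g. a falling tree: happens either way
ai_only = {f"ai-{i}" for i in range(500)}    # new crashes a human might have avoided
ai_crashes = unavoidable | ai_only           # 1,500 total crashes with AI drivers

print(len(ai_crashes))                       # 1500: fewer crashes overall...
print(ai_crashes <= human_crashes)           # False: ...but not a subset of the human 3,000
```

The 500 crashes in `ai_only` are the emotionally charged ones: each has a victim who can say a human would have done better.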

If you are a passenger in an AI-driven car and you see an upcoming hazard but are powerless to prevent it, you will not be consoled by knowing that, at that same moment, self-driving cars elsewhere are avoiding accidents that you might not have been able to avoid. Said another way, if you are in the non-overlapping portion of the blue circle in the Venn diagram above, you are going to blame the AI for any injury or loss that occurs.

If there is an accident in the green overlapping section, you might still assume that you would have been able to avoid it, since we all overestimate our own driving skills.

For these reasons and more, the public and the media will hold AI driving systems to a near-impossible safety standard. We already see this today: whenever there is a crash involving the likes of Waymo, Tesla, or Uber, it makes national headlines. These vehicles could have millions of crash-free miles, but they will be judged only by their failures. They could be performing ten times better than a human driver, but that is not the standard by which they will be judged.