Asking the Wrong Questions About Self-Driving Cars

I recently watched Iyad Rahwan's TED talk "What moral decisions should driverless cars make?" and I enjoyed his exploration of the moral dilemmas surrounding self-driving cars.

I was particularly interested when he brought up the dilemma of purchasing a self-driving car that might choose to kill its own passengers:

So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.

What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not."

While polling people for their opinions might seem like a reasonable way to find out if they're interested in purchasing products (such as new shoes or a large appliance), the question of whether a person would buy a self-driving car that could, in certain circumstances, choose to kill the driver is misleading.

It's misleading in that it assumes people actually know what they will do when confronted with the choice. It's misleading in that it assumes that people will even know that they're being offered this choice.

As it is, people actively make this choice on a daily basis when they hand the keys to a designated driver or carpool with a friend. Buying a car and allowing another person to chauffeur you is no different from buying a self-driving car as far as the trolley problem is concerned. Just imagine that your driver has to choose between running over pedestrians or smashing into a barrier that will kill you, their passenger.

Apart from the fact that people already make these implicit choices regularly, there are two other reasons the question is misleading.

Marketing

The question of whether people would buy a car that might choose to kill them fails to account for marketing. People everywhere already unwittingly buy various products that lead to their untimely demise.

People already buy drugs, alcohol, firearms, and vehicles. If you asked people whether they would buy a gun that could, in special circumstances, discharge and kill them instantly, most people would say "no", and yet people still purchase guns that discharge and kill them.

Marketers are already very good at convincing people to make choices that are detrimental to their well-being. Even if self-driving cars are worse than human drivers on average, I'm certain marketing teams will be able to sell them to people. I'm also pretty certain that this has already happened.

Statistics

The question also fails to account for statistics. It fails to account for the fact that the probability of a self-driving car entering a situation where the driver might be sacrificed to avoid killing others is lower than the probability of a human driver entering a similar situation.

And once you've introduced statistics into the issue, people lose the ability to reason accurately about what the statistics mean. When you talk about the probability of an event occurring, most people picture risk accruing linearly and uniformly, mile after identical mile. In reality, risk compounds over exposure and varies wildly from one mile to the next.
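
To make the compounding half of that concrete, here's a toy sketch (my own illustration, not from the talk) contrasting the linear intuition with the compounded probability of at least one bad outcome:

```python
# Toy illustration: linear intuition vs compounded probability.
# Assumed example: a 1% chance of a bad outcome per event, repeated 100 times.

p_per_event = 0.01
events = 100

linear_guess = p_per_event * events            # "1% a hundred times = 100%"
compounded = 1 - (1 - p_per_event) ** events   # chance of at least one bad outcome

print(f"linear intuition: {linear_guess:.1%}")   # 100.0%
print(f"compounded risk:  {compounded:.1%}")     # about 63.4%

# Real-world risk isn't homogeneous either: the per-mile risk of a drunk
# drive at 2 a.m. is nothing like that of a sober highway cruise, even
# though both get averaged into the same headline rate.
```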

If we rephrase the question to account for statistics, it might look something like this:

In 2015, 35,092 traffic fatalities were recorded in the US, a rate of 1.12 deaths per 100,000,000 vehicle miles traveled. This means the chance of dying during any given mile of driving was approximately 0.00000112% (the arithmetic behind these figures is sketched just after the list).

  1. Would you be willing to travel in a car that drove itself if the chances of dying while travelling in such a car were 0.00000112% per mile (same as 2015 average)?

  2. Would you be willing to travel in a car that drove itself if the chances of dying while travelling in such a car were 0.00000056% per mile (half the odds of dying as 2015 average)?

  3. Would you be willing to travel in a car that drove itself if the chances of dying while travelling in such a car were 0.000000112% per mile (one tenth the odds of dying as 2015 average)?
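
Here's that sketch: a quick back-of-envelope check of the per-mile percentages above, plus what each rate would imply over a year of driving. The 13,000 miles per year figure is an assumption I'm using purely for illustration.

```python
# Back-of-envelope arithmetic for the three scenarios above.
# 1.12 deaths per 100,000,000 miles is the 2015 US average;
# 13,000 miles per year is an assumed annual mileage, for illustration only.

P_BASELINE = 1.12 / 100_000_000      # probability of death per mile, 2015 average
MILES_PER_YEAR = 13_000

scenarios = {
    "same as the 2015 average": P_BASELINE,
    "half the 2015 average": P_BASELINE / 2,
    "one tenth the 2015 average": P_BASELINE / 10,
}

for label, p_mile in scenarios.items():
    # Chance of at least one fatal crash in a year of driving at this rate.
    p_year = 1 - (1 - p_mile) ** MILES_PER_YEAR
    print(f"{label}: {p_mile:.10%} per mile, about {p_year:.4%} per year")
```

Under these assumptions, even the baseline scenario works out to roughly one chance in seven thousand per year.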

For me, the answer to all of these questions is a resounding yes.

Once a car is able to match average human performance, I'm willing to take the risk because with automation comes consistency. An average self-driving car during a morning commute is better than me drowsy. An average self-driving car during an evening commute is better than me distracted.

When it comes to the trolley problem, once cars are ten times better than humans (scenario 3 above), the car can choose to plow through all five pedestrians and kill the driver, and it will still be an improvement over the human driver, who would have had to decide how to resolve their own trolley problems ten times over.
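
As a rough back-of-envelope reading of that claim (my framing, with the assumptions stated in the comments):

```python
# Back-of-envelope check on the "ten times better" claim.
# Assumptions (mine, for illustration): trolley-style dilemmas occur in
# proportion to overall risk, and a human-resolved dilemma kills at least
# one person; that is what makes it a dilemma.

human_dilemmas = 10   # dilemmas a human driver faces over some fixed mileage
car_dilemmas = 1      # a ten-times-better car faces one tenth as many

worst_case_car_deaths = car_dilemmas * 6     # five pedestrians plus the driver, every time
best_case_human_deaths = human_dilemmas * 1  # at least one death per dilemma

# 6 vs 10: even the worst-case car comes out ahead of the best-case human.
print(f"car worst case: {worst_case_car_deaths}, human best case: {best_case_human_deaths}")
```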

Statistically speaking, I would be perfectly fine with a self-driving car that could explode and destroy a five-block radius so long as the probability of it occurring were low enough. But if you asked anyone

Would you buy a self-driving car that could, in special circumstances, explode and destroy a five-block radius?

I'm certain their answer would be "no".