Ethics of driverless cars: Do pedestrian lives matter more than passengers'?

The rise of autonomous vehicles revives an old ethical dilemma, known as the trolley problem.

Raj Rajkumar, professor of engineering at Carnegie Mellon University, drives an autonomous vehicle down Schenley Drive in Schenley Park in Pittsburgh, on June 1. The public is conflicted about how driverless cars should be programmed to prioritize the lives of passengers versus the lives of pedestrians, according to a study published Thursday in the journal Science.

Nate Guidry/Pittsburgh Post-Gazette/AP

June 24, 2016

Research suggests driverless cars could reduce road accidents by up to 90 percent, but avoiding an accident entirely isn't always an option, especially when pedestrians are involved.

No matter how well programmed driverless cars are, they are likely to be placed in a situation at some point in which they are forced to make a choice. If, for example, a driverless car carrying a small family faces the choice of veering off the road, putting its own passengers at risk, to avoid a group of pedestrians, or remaining on the road at the pedestrians' expense, which should it choose?

A new study, published Thursday in the journal Science, suggests people may be conflicted about how the car should respond.


In a series of six online surveys, researchers at MIT, the University of Oregon, and France’s Toulouse School of Economics asked 2,000 people in the US whether they would prefer a car with a utilitarian approach of injuring the smallest number of people possible (even if that small number included the car's passengers), or a car that would protect its own passengers at all costs.

The majority of those surveyed said that they approved of autonomous vehicles that would minimize total casualties when put in situations of unavoidable harm. But survey respondents also indicated that they would prefer not to ride in those vehicles themselves.
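To make the contrast concrete, here is a toy sketch of how the two "moral algorithm" policies respondents were asked about might be expressed as decision rules. It is purely illustrative: none of the names, numbers, or logic below come from the study or from any real vehicle software, and the scenario simply echoes the family-versus-pedestrians example above.

```python
# Hypothetical illustration only: a toy comparison of the two policies
# described in the survey, not code from the study or any vehicle.
from dataclasses import dataclass

@dataclass
class Outcome:
    passengers_harmed: int   # occupants of the car harmed by this maneuver
    pedestrians_harmed: int  # people outside the car harmed by this maneuver

def utilitarian_choice(options):
    # Minimize total harm, counting passengers and pedestrians equally.
    return min(options, key=lambda o: o.passengers_harmed + o.pedestrians_harmed)

def passenger_protective_choice(options):
    # Protect the car's occupants first; only then minimize harm to others.
    return min(options, key=lambda o: (o.passengers_harmed, o.pedestrians_harmed))

# The dilemma from the article: swerve (harming the family of three on board)
# or stay the course (harming five pedestrians).
swerve = Outcome(passengers_harmed=3, pedestrians_harmed=0)
stay = Outcome(passengers_harmed=0, pedestrians_harmed=5)

print(utilitarian_choice([swerve, stay]))           # picks swerve: 3 harmed < 5
print(passenger_protective_choice([swerve, stay]))  # picks stay: occupants spared
```

The survey finding, in these terms, is that most people endorse the first rule for everyone else's car and the second rule for their own.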

This dilemma might sound familiar if you’ve ever taken a philosophy or ethics class, as The New York Times points out. Called the “trolley problem,” it was created by a British philosopher named Philippa Foot in 1967, and instead of questioning the ethical programming of vehicles, it concerns the choice a trolley operator might be forced to make.

The scene is thus: You are on a runaway trolley, and as you look down the line, you see a group of five people gathered on the tracks. There is no way you can stop the trolley before you reach them, but if you pull a lever, you can switch to another track. Unfortunately, a single workman stands on that track, and he would undoubtedly be killed if you choose to switch. Which do you choose?

It is a difficult decision, and for good reason. The question has been debated for years. But study author Dr. Iyad Rahwan of MIT says that prior to this study, it had not been quantified.


“One missing component has been the empirical component,” Dr. Rahwan told the Times. “What do people actually want?”

Perhaps unsurprisingly, it turns out that what they want is to survive. Survey respondents indicated that they wanted others to buy cars that would choose to save the greatest number in any situation, including those with children at risk. However, they also said that they themselves would not buy those cars.

Critics say that the study might be missing the point. “AI does not have the same cognitive capabilities that we as humans have,” Ragunathan Rajkumar of Carnegie Mellon told Scientific American. Instead, he argues, designers of autonomous vehicles focus on preventing such no-win situations from arising in the first place.

Chris Urmson, the head of Google's self-driving car project, has also downplayed the significance of the trolley problem.

"It’s a fun problem for philosophers to think about, but in real time, humans don’t do that,” Mr. Urmson told government workers at the Volpe, National Transportation Systems Center in Cambridge, Mass., in December.

“There’s some kind of reaction that happens. It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment," he added.

The study’s authors say that while this research is interesting from an ethical perspective, it will also have a very real impact on the way that automakers, lawmakers, and consumers approach driverless cars and their regulation.

One legal question posited by researchers, for example, is: “If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

This question is just one of many that automakers and others will have to answer before autonomous vehicles can really hit the road.