2016-07-03

When cars sit on death panels

Many have warmed up to Tesla, Google, Apple and others in the U.S., EU and Japan promoting the idea, and the feasibility, of self-driving, even fully autonomous cars. Meanwhile, little attention has been paid to the fact that the decisions drivers leave to their cars must still be made by someone, somehow. Life in the digital age implies that some algorithm will apply artificial intelligence and ethics in lieu of human ones.

One of the greatest anticipated benefits of fully autonomous cars is a dramatic reduction of accidents, by up to 90%, assuming that such cars come to represent a majority of vehicles in circulation. That should be achievable reasonably soon, since 10 million of them are expected to be on the road by 2020. The elimination of “pilot error” alone, for example the influence of alcohol and drugs, distraction by smartphones or other devices, and speeding, plausibly accounts for such a forecast. But even fully autonomous cars will encounter unforeseeable events, and some of those will result in accidents, including fatal ones. NHTSA has begun examining the first fatality involving a Tesla Model S, a case suggesting that full autonomy is the only acceptable solution for self-driving cars under most circumstances, not least because reaction speed, decisional quality and predictability can be developed to surpass human capabilities by a considerable margin.

But having a machine make and carry out decisions does not absolve manufacturers, drivers, and society at large from the burden of analysis and of sometimes tragic choices. How is an “autonomous” car to react when a child suddenly runs into the traffic lane but any evasive action would hit another pedestrian? Should the car be programmed to accept the death of its passenger(s) if, say, ten other lives will likely (or merely might) be saved? AI is a matter of coding, but also of machine learning.
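
To make the dilemma concrete, the following is a minimal, purely illustrative sketch of a naive utilitarian decision rule of the kind such programming might embody. Every name in it (Maneuver, the expected-casualty fields, choose_maneuver) is hypothetical, and the numbers are placeholders; a real system would have to work with uncertain perception, probabilistic predictions and far richer harm models.

# Illustrative only: a naive utilitarian rule that picks the maneuver
# with the fewest expected casualties, occupants and pedestrians counted alike.
from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    name: str
    expected_pedestrian_casualties: float  # hypothetical output of prediction models
    expected_occupant_casualties: float    # hypothetical output of crash models

def choose_maneuver(options: List[Maneuver]) -> Maneuver:
    # Minimize total expected casualties, with no special weight for the occupants.
    return min(options, key=lambda m: m.expected_pedestrian_casualties
               + m.expected_occupant_casualties)

options = [
    Maneuver("stay in lane", expected_pedestrian_casualties=1.0,
             expected_occupant_casualties=0.0),
    Maneuver("swerve into barrier", expected_pedestrian_casualties=0.0,
             expected_occupant_casualties=0.9),
]
print(choose_maneuver(options).name)  # prints "swerve into barrier"

Under such a purely utilitarian rule the car sacrifices its own occupant whenever that lowers the expected total, which is exactly the policy the survey below puts to the test.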

Researchers including Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon’s Culture and Morality Lab, and Iyad Rahwan of MIT’s Scalable Cooperation Group polled some 2,000 individuals on their views and values regarding ethical choices in autonomous vehicles. The participants considered the questions in different roles and situations, either as uninvolved bystanders or as passengers of the car. They were then asked how they would like to see their own autonomous car programmed.

A clear majority of participants preferred to minimize the total number of casualties: 76% chose to sacrifice the driver to save ten other lives. The scenario pitting one driver against one pedestrian, however, produced a virtually complementary outcome, with only 23% approving of sacrificing the driver. But passengers who were not drivers took fundamentally different positions.

While utilitarian ethics may leave questions to be probed, there are also commercial considerations for the automotive industry: if autonomous cars are programmed to reflect generally accepted moral principles, or legally ordained ones, fewer individuals may be prepared to buy them. Another option may be “scalable morality”: letting the buyer or driver choose, within limits, how selfish or altruistic to set the car’s “ethics thermostat.” Welcome to machine ethics. Tort law reaches a new frontier by naming software (or its programmer, i.e., the manufacturer) as a defendant, with the prospect of mass recalls following from the stroke of a judicial pen.
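
In code, such an “ethics thermostat” could be as simple as a single owner-set weight layered on top of the utilitarian rule sketched above (reusing the hypothetical Maneuver type and List import from that sketch). This is again only a hedged illustration: the selfishness parameter, its 0-to-1 range, and any regulatory cap on it are assumptions, not features of any actual system.

# Illustrative "ethics thermostat": an owner-set weight that biases the
# utilitarian rule toward protecting the car's own occupants.
def weighted_cost(maneuver: Maneuver, selfishness: float) -> float:
    # selfishness = 0.0 -> occupants and pedestrians count equally (utilitarian)
    # selfishness = 1.0 -> pedestrian harm is ignored entirely, a setting a
    #                      regulator would presumably have to forbid or cap
    assert 0.0 <= selfishness <= 1.0
    return ((1.0 - selfishness) * maneuver.expected_pedestrian_casualties
            + maneuver.expected_occupant_casualties)

def choose_maneuver_with_thermostat(options: List[Maneuver],
                                    selfishness: float) -> Maneuver:
    return min(options, key=lambda m: weighted_cost(m, selfishness))

The commercial tension described above falls out directly: the further the buyer may turn the dial toward self-protection, the more marketable the car; the closer regulation pins it to the utilitarian end, the more faithfully it reflects generally accepted or legally ordained principles.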