Driverless Cars and the Trolley Problem

The “Trolley Problem” is a long-pondered ethical thought experiment: an intellectual exercise devised to highlight the moral conflicts that can arise when decisions involve inescapable loss of life. Here is how Wikipedia presents it:

“A runaway trolley is barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person (in some versions, a friend or family member).

Which is the most ethical choice?”

This thought experiment, created by moral philosophers, now features frequently as a real problem in discussions about driverless cars. In its new form the trolley becomes a driverless car, and the role of the man at the switch is assigned to the programmer of the algorithm that governs the car.

This modern version is presented in an MIT Technology Review article on driverless cars and reads like this:

“How should the [driverless] car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”

In “The social dilemma of autonomous vehicles”, Bonnefon et al. subject these questions to questionnaire analysis. “Distributing harm,” they explain, “is a decision that is universally considered to fall within the moral domain. Accordingly, the algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm.” These guiding principles, in a democracy, should reflect societal values – otherwise known as public opinion. To find these values they conducted six questionnaire surveys. Here is what they found:

“Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils – for example, running over pedestrians or sacrificing itself and its passenger to save them. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants to six studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would, themselves, prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.”
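To make the two poles of the survey concrete, here is a purely hypothetical toy sketch of what a “utilitarian” versus a “passenger-protective” decision rule could look like in code. The function, its parameters, and the numbers are all invented for illustration; no real AV is programmed this way:

```python
# Hypothetical illustration only: a toy decision rule contrasting the two
# policies surveyed by Bonnefon et al. All names and thresholds are invented.

def choose_action(policy, passengers_at_risk, pedestrians_at_risk):
    """Return 'swerve' (sacrifice the passengers) or 'stay' (protect them).

    policy: 'utilitarian'     -- minimise total expected deaths;
            'self_protective' -- protect the occupants at all costs.
    """
    if policy == "self_protective":
        return "stay"
    if policy == "utilitarian":
        # Swerving kills the passengers; staying kills the pedestrians.
        return "swerve" if pedestrians_at_risk > passengers_at_risk else "stay"
    raise ValueError(f"unknown policy: {policy}")


# The one-pedestrian case from the study: even a utilitarian car would not
# sacrifice its single passenger to save a single pedestrian.
print(choose_action("utilitarian", passengers_at_risk=1, pedestrians_at_risk=1))      # stay
print(choose_action("utilitarian", passengers_at_risk=1, pedestrians_at_risk=5))      # swerve
print(choose_action("self_protective", passengers_at_risk=1, pedestrians_at_risk=5))  # stay
```

Even this trivial sketch shows why the choice is political rather than technical: the two policies disagree precisely in the cases the survey asked about, and someone must decide which rule ships in the car.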

Two problems with the Trolley Problem

  1. The interviewees in the Bonnefon study were offered an unrealistic choice. They were presented with the Trolley Problem as a real problem – one in which they, as car occupants, had to decide which road user should die. But as Andrew Chatham, a principal engineer on the Google driverless car project, observed: “The main thing to keep in mind is that we have yet to encounter one of these problems. In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. … It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes.’ … So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.”
  2. But more importantly, the Bonnefon study, and all other invocations of the Trolley Problem that I can find, reveal a profoundly biased view of the role that driverless cars might play in future urban transport systems.

In my last post I looked at the influential role played by public opinion in determining who should have priority on the road. The book I was reviewing, Fighting Traffic, explored how “public” opinion on this issue was formed, and how the triumph of “Motordom” secured dominance for the motorist over vulnerable road users – pedestrians and cyclists – with whom they had previously shared the road. This battle, between cars and vulnerable road users, is about to be reignited by driverless cars – or perhaps it has already been lost.

The MIT review and the Bonnefon study referred to above are representative of everything I can find on the Internet about the problems that driverless cars might have in sharing the road with pedestrians and cyclists. All of the questions put to the survey groups in the Bonnefon study invited respondents to answer as drivers or car passengers. For example: “Participants did not think that AVs should sacrifice their passenger when only one pedestrian could be saved.” The views of the singular pedestrian or cyclist were not solicited.

It was presumed that the societal values that should be programmed into the algorithms of driverless cars would be exclusively the values of the people in the cars. I can find no examples of the application of the Trolley Problem that acknowledge the existence of the concerns of vulnerable road users, or of policies and programmes being pursued to encourage more walking and cycling.

At present Google advertises the extreme deference with which its cars respond to vulnerable road users. The most famous example, about 11 minutes into this TED Talk video, shows a woman in an electric wheelchair trying to chase a duck off the road in Mountain View, California. But all the impressive examples of deference to vulnerable road users shown in the video are displayed on roads with very few of them. How will the Google car address the problem of deferential paralysis [1] in dense urban areas with large numbers of pedestrians and cyclists? This is a question yet to be answered.

[1] Driverless Cars and the Sacred Cow Problem, published in mangled version in City Metric, 5 September 2016.

6 pings

  1. Harry Daly says:

    Isn’t the solution to the problem pretty easy really? Isn’t it the ethical aspect of strict liability?

    When you are driving, or being driven, you are doing something (let us assume voluntarily) that is inherently more dangerous to others than if you are walking. So if the driverless vehicle algorithm has to choose between killing a whole busload of happy, well-adjusted people in the prime of life or some sick and unhappy old man (let it be a man) who’s doing his best to throw himself under the bus, surely, what it ought to be designed to do is kill the busload every time?

    Of course, it’s different, and trickier, if the busload is of children below the age of criminal responsibility or of abductees of any age.

  2. D R Maskell says:

    Yes, hard cases make bad law and the crucial thing to recognise is what there is in common between people who are driving cars and people carrying guns. Just look at the way other people get out of their way.

  3. Mike C says:

    John, I recently attended a talk in New York about AVs that presented a different social dilemma.
    The speaker claimed that around half of the US judicial system’s capacity was taken up by various forms of motoring offences. AVs do not drink and drive, break the speed limit, text whilst driving, park illegally etc, so there would be fewer offences and thousands of lawyers would be out of a job.

    So the social dilemma is between driverless cars and jobless lawyers.

  4. Greg McPherson says:

    The best argument in favour of driverless cars I’ve yet seen.

  5. Matt Squair says:

    Hi John,

    Enjoyed the post. But every time some tech journalist (looking at you, MIT press) breathlessly trots out the ‘trolley problem’ as if it were the singular challenge of machine learning for driverless cars, I just have to roll my eyes.

    The problem (and it’s likely an intractable one) of training DNNs about life in the real world is the real challenge. We are nowhere near solving that yet, despite what Elon Musk might periodically tweet.

    Historical note: the trolley problem was intended as a thought experiment to demonstrate that all ethical systems have particular failure modes. The key point missed by the techies is that there’s more than one. So before we start, which ethical set should we program our car with? Utilitarian? Kantian? Randian?

  6. David Friedman says:

    It sounds as though most of the attention is being given not to the trolley problem, at least as I understand it, but to the related question of how willing one should be to sacrifice oneself for others. The trolley problem assumes that you have the choice between inaction, resulting in (say) four deaths, or action (turning the switch) that results in one. That creates a conflict between one moral intuition (fewer deaths better than more) and another (taking actions that cause deaths is wrong in a sense in which inaction that causes deaths isn’t, or is less wrong).

    But in the AV cases, either choice equally involves an action, programming the computer. The question is whether you are obliged, in programming, to sacrifice your life to save the lives of others, or whether the law should force you to do so. Intuitively, we identify our property with ourselves, so think of programming the car to make a choice as our making the choice, requiring it to be programmed to make a choice as forcing us to make a choice.

    From that standpoint, a more relevant analogy than the trolley problem might be the question of whether one should be compelled to donate organs — one kidney to help one recipient at some cost to yourself, or all your organs to save the lives of several people at the cost of your life. I suspect that in the latter case very few people will think you should.
