A Smart Car that Kills? A Modern Trolley Problem

1/19/2017

With driverless cars on the horizon, there is renewed interest in a classic ethical dilemma: the trolley problem. What decision would you make?
You spot a trolley barreling down the track toward five workers. There is a lever you can pull to switch the trolley to another track, but there is a worker on that track as well. You quickly assess that, other than the lever, there are no feasible options. It is either one life or five. Do you pull the lever?

If you said yes, you are with the 90% of people who would also pull the lever. It is a utilitarian approach. Only one person in ten would allow the trolley to continue on its original path, killing the five. With 90% pulling the lever, this seems to resolve our ethical dilemma... or does it?

Now make a small adjustment to the dilemma. Instead of a lever to divert the trolley to another track, you are standing on a footbridge that the trolley will pass under. A rather large man is standing close to you. As in the original scenario, there are five people on the track. You quickly assess that the only way to stop the trolley from killing the five is to push the large man off the bridge. As before, you face a one-or-five choice. What do you do?

When presented with this version of the trolley problem, only 10% would push the man off the bridge. This disparity has puzzled philosophers and researchers for decades: why do people react so differently to the two scenarios? Maybe ethical dilemmas are not so utilitarian after all. While both scenarios trade one life to save five, people do not see them as ethically equivalent. It is okay to pull the lever; it is not okay to push the man.

Criticism
There is some criticism of using the Trolley Problem as a method to discuss ethics. One of the main criticisms is that the "fat man" variant is not very realistic. Regardless, the many variations of the Trolley Problem, repeated across hundreds of experiments, leave little doubt that ethics is not simply a utilitarian calculation. Studies have examined gender, race, disabilities, and other differences, each confirming time and again that how we make ethical choices is highly subjective.

Smart Cars
This brings us to the modern version of the Trolley Problem, along with some real-world implications. Driverless cars, drones, and other forms of automated transportation are quickly becoming a reality. While driverless cars would save lives and reduce injuries, how might your car be programmed to resolve an ethical dilemma?

This was the subject of a series of five experiments published in a 2016 study (Bonnefon, Shariff, & Rahwan, 2016). Are you okay with a car being programmed to veer into a wall or off a bridge to save a group of pedestrians?
Results from these experiments confirmed that, much like in the original Trolley Problem, a large percentage of people are okay with a utilitarian program that sacrifices one to save the many, but things quickly change when it is a family member in the car. Of equal interest, while people approve of a utilitarian program in principle, they would be reluctant to actually buy such a car.

Social Engineering
While the trolley problem is just a thought experiment, how smart cars are programmed is a reality. Design ethics and social engineering mean facing difficult trade-offs in an effort to preserve a moral principle. And at the heart of the dilemma: who makes these decisions for the majority? The company that programs the car, a governing body, or some other alternative?
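To make the trade-off concrete, here is a minimal, purely hypothetical sketch in Python of what a pluggable "ethics policy" might look like inside a collision-avoidance module. Every name in it (Maneuver, EthicsPolicy, choose_maneuver, the casualty estimates) is invented for illustration; it does not describe any real autonomous-vehicle software.

    # Hypothetical sketch only -- not how any real vehicle is programmed.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Maneuver:
        """One option the vehicle could take in an unavoidable crash."""
        name: str
        expected_pedestrian_casualties: int
        expected_passenger_casualties: int

    # An "ethics policy" is just a scoring function over maneuvers: lower is preferred.
    EthicsPolicy = Callable[[Maneuver], float]

    def utilitarian(m: Maneuver) -> float:
        # Minimize total expected casualties, no matter whose they are.
        return m.expected_pedestrian_casualties + m.expected_passenger_casualties

    def passenger_protective(m: Maneuver) -> float:
        # Weight the occupants' lives far more heavily than pedestrians'.
        return 1000 * m.expected_passenger_casualties + m.expected_pedestrian_casualties

    def choose_maneuver(options: List[Maneuver], policy: EthicsPolicy) -> Maneuver:
        # Whoever supplies `policy` -- manufacturer, regulator, or owner --
        # is the party actually resolving the ethical dilemma.
        return min(options, key=policy)

    if __name__ == "__main__":
        options = [
            Maneuver("stay on course", expected_pedestrian_casualties=5,
                     expected_passenger_casualties=0),
            Maneuver("swerve into barrier", expected_pedestrian_casualties=0,
                     expected_passenger_casualties=1),
        ]
        print(choose_maneuver(options, utilitarian).name)           # swerve into barrier
        print(choose_maneuver(options, passenger_protective).name)  # stay on course

The arithmetic is trivial; the point is the policy argument. Someone has to decide which function gets passed in, and that is exactly the social engineering question.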

To end, consider the following. Would it be ethical to program a car differently depending on the passenger? Would, say, the airplane or car transporting a world leader run the same program as yours or mine? What about a world-renowned scientist working on a cure for cancer? How will it be decided which passengers or vehicles are provided which program? What do you think?

References
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science.

Hauser, M., Cushman, F., Young, L., Kang Xing Jin, R., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22(1), 1-21.

Workman, L., & Reader, W. (2014). Evolutionary Psychology: An Introduction (3rd ed.). Cambridge University Press.

https://en.wikipedia.org/wiki/Organ_donation#Opt-in_versus_opt-out

http://healthland.time.com/2011/12/05/would-you-kill-one-person-to-save-five-new-research-on-a-classic-debate/
5 Comments
Ros
1/19/2017 11:12:14 am

Clearly <grin> I would save the people I liked best. Other than that, where would you start? Bad dictator vs. new mum? Drug dealer vs. drunk? Fairness vs. wealth? Ugly vs. pretty? Anything you come up with would be socially subjective.
Perhaps a random generator would be as useful as anything that we could dream up based on current and changeable social values.

Richard Feenstra
1/19/2017 12:30:59 pm

A random generator. I like it! Lol. I will add you to my persons I like best list.

Peg
1/19/2017 06:48:59 pm

Thinking out loud, at first I thought there was a third choice - not to do anything at all (choose not to choose) - like I wasn't there. And could I live with that choice? Because I was there.

In the first scenario - it would mean 5 would die. Same for the second scenario.

I could choose not to choose because I couldn't choose -- I am not strong enough to switch the lever or push someone. (Oh, he could probably push me).
I think my brain is hurting.

But back to the smart car scenario -- I am not there...

The random generator is an interesting alternative. Very creative thinking.

Richard Feenstra
1/19/2017 07:17:23 pm

Hi Peg, that sounds like a philosophy to not interfere with the normal order of the universe, a sort of karma rationale. Interesting. Thanks for the comment.

Peg
1/20/2017 09:54:55 am

In the moment we will respond, somehow. I hope I am not tested. I would like to believe it would be the right thing. We can look back to understand and ask "what if". We can look forward and ask "what could / should / would we do". But until that moment, a decision is made that has a "butterfly" effect. One thing for sure is there will be questions and there will be judgment.

Back to the smart car: This discussion reminded me of a 60 Minutes segment about the use of airbags in cars (maybe late '70s - early '80s). At that time, it became common knowledge that car companies had put airbags into "some" cars to test - randomly. One deployed and the family survived. So my thoughts are the smart car technology will probably work the same way - randomly, by request, expected, then required.




