Driving Without Your Mind

An adaptation of the famous ‘trolley problem’, known as ‘the fat man’, is a thought experiment in ethics conceived by Judith Jarvis Thomson:

“A trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?”

This kind of ethical uncertainty has resurfaced with the emergence of autonomous vehicles (AVs), better known as driverless cars.

The Moral Dilemma

Imagine you are in your future driverless car approaching a busy intersection with a red light. A mechanical fault causes the brakes to fail, and your car is faced with two options. Option 1: swerve left, killing a bystander walking along the footpath. Option 2: continue through the intersection, hitting a family of five crossing the road. What is the best option? What should the AV do? Is killing one better than killing five? What if one of the options means that you, the AV’s passenger, die?

There are numerous variations of the trolley problem related to driverless cars, including the one explored in this TED video.

Driverless car companies and their programmers now face the seemingly impossible task of answering this sort of problem.

Do you want a taste of the moral difficulties? Moral Machine simulates situations faced by AVs and asks the user to decide the best outcome. The Massachusetts Institute of Technology (MIT), which runs the platform, is using the results to examine the moral decisions humans would make.

But…

More than 90% of traffic accidents worldwide are the result of human error. These errors range from driving under the influence of alcohol or illicit drugs all the way to being distracted by an insect inside the car. Since AVs remove the human element from driving, they should drastically reduce the overall road toll, and situations like the one described above will be incredibly rare.

The Whole Cake

A study published in Science examined the moral trade-offs that AV makers, car buyers and regulators will have to settle. The majority of participants believed that AVs should be programmed not to crash into pedestrians, even if that resulted in the passengers being killed. But when the same participants were asked if they would buy an AV that would not necessarily protect its passengers, most said they would only buy AVs that protected the passengers at all costs.

No one wants to be the bad guy or girl here, but no one wants to put themselves at risk of death either. The human race wants the whole cake, not just a piece.

The people and companies behind the emergence of AVs need to decide whether to prioritise self-interest or the public good. If they prioritise the public good, AVs probably won’t be adopted by the general public. Conversely, if self-interest is prioritised, they may be breaking the law and will certainly be crossing moral boundaries.

Who Will Be Responsible?

So what if our imagined scenario happened? Who would be responsible for the deaths? If a human were driving, they would REACT; they would move instinctively, with no time for forethought. The result is an unfortunate accident. However, if the car is driving itself, it makes a calculated DECISION based on its programmed code. That sounds a lot like deliberate murder.
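To make that contrast concrete, here is a deliberately simplistic sketch of what such a pre-programmed rule could look like. This is purely illustrative: the function, names and numbers are invented for this post, no manufacturer has published a rule like this, and real AV software is vastly more complex.

```python
# A hypothetical, deliberately simplistic "crash ethics" rule.
# All names and numbers here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    pedestrian_deaths: int
    passenger_deaths: int

def choose_action(options, protect_passengers_at_all_costs=False):
    """Pick the option with the fewest expected deaths.

    If protect_passengers_at_all_costs is True, any option that kills a
    passenger is treated as infinitely bad -- the 'self-interest' setting
    the Science study's participants said they would actually buy.
    """
    def cost(option):
        if protect_passengers_at_all_costs and option.passenger_deaths > 0:
            return float("inf")
        return option.pedestrian_deaths + option.passenger_deaths

    return min(options, key=cost)

# The scenario from the start of this post: brakes fail at an intersection.
options = [
    Option("swerve left", pedestrian_deaths=1, passenger_deaths=0),
    Option("continue straight", pedestrian_deaths=5, passenger_deaths=0),
]

print(choose_action(options).name)  # -> "swerve left" (kill one, not five)
```

Notice how flipping a single flag would turn the ‘public good’ car into the ‘self-interest’ car from the Science study: the whole moral dilemma reduced to a default setting, decided in advance by a programmer.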

In the end, no one will be solely responsible. You can think of it as tragic fate.

Frengineer 2017