The Ethics of Self-Driving Cars

Almost all automobile manufacturers today incorporate some level of AI into their vehicles, ranging from blind-spot information systems and rear cross-traffic alerts to adaptive cruise control and fully autonomous behavior (what NHTSA refers to as a Level 5 driverless vehicle). In the case of the latter, what happens when the vehicle is faced with an ethical dilemma? Imagine a child chasing a ball into oncoming traffic where catastrophe is unavoidable. In this variation of the classic trolley problem, does the vehicle sacrifice its occupants for the child, or vice versa? For instance, let’s say there are four adults in the car. Are those four lives worth more than that of a single child? From a human perspective, running over a child is unthinkable, regardless of the math. But according to utilitarianism, the good of the many outweighs the good of the one. While this is currently an academic argument, it will soon be a real-world conundrum.

In the not-so-distant future, self-driving cars will be able to evaluate the ages of all those involved in an accident, their chances of surviving it, and their projected longevity and value to society if they do. Based on this information, they will decide who lives and who dies. The real crux of the matter is that such decisions will be dispassionate and devoid of emotion. While an AI’s choice might be deemed the best course of action from an intellectual point of view, it is not necessarily the visceral choice a human being would make. Further, we must ask ourselves how a machine can grasp the emotional pain associated with the loss of a loved one, or, for that matter, experience the pleasure of love to begin with. Machines can of course reason that two fatalities are worse than one, they can predict the likelihood of a crash based on physics and probability, and they can estimate how many people would be affected by the outcome. But if a machine cannot relate to the amount of suffering linked to an action, it cannot be held morally accountable for the consequences.
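
To make that calculus concrete, here is a minimal, purely illustrative sketch of how a naive utilitarian scoring function might weigh the dilemma described above. Everything in it is an assumption made for illustration: the class, the survival probabilities, and the life-expectancy figures are invented, and no production vehicle is known to work this way.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    survival_prob: float   # estimated chance of surviving this outcome
    life_expectancy: int   # assumed life horizon for this age group

def expected_life_years_lost(people: list[Person]) -> float:
    """Naive utilitarian cost: expected years of life lost, summed over
    everyone harmed by a given maneuver. Purely illustrative."""
    return sum((1.0 - p.survival_prob) * max(p.life_expectancy - p.age, 0)
               for p in people)

# Hypothetical numbers for the dilemma in the text: four adult occupants vs. one child.
occupants = [Person(age=40, survival_prob=0.30, life_expectancy=80) for _ in range(4)]
child = [Person(age=8, survival_prob=0.10, life_expectancy=80)]

swerve_cost = expected_life_years_lost(occupants)  # swerve and sacrifice the occupants
stay_cost = expected_life_years_lost(child)        # stay the course and strike the child

print(f"Swerve (harm occupants): {swerve_cost:.1f} expected life-years lost")
print(f"Stay (harm child): {stay_cost:.1f} expected life-years lost")
```

With these made-up numbers, the dispassionate arithmetic favors staying the course and striking the child, which is exactly the gap between intellectual optimization and the visceral human choice that the rest of this essay is concerned with.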

To put it another way, emotion is an essential component of our decision-making process, while the absence of it is crucial for computers to make decisions effectively. The point is that artificial intelligence is inherently amoral. The innate sense of right and wrong that human beings possess does not exist in the digital space. Whether it is something we are born with or something nurtured during our formative years, it is an intangible element, a soul if you will, that is missing in machines. Computers do not experience physical suffering, they do not experience fear or disappointment, and they do not seek the approval or love of others. Yet it is from these shared feelings that human beings learn the lessons of fortitude, fellowship, generosity and humility that help shape our moral landscape.

By deferring decisions to the promise of new technology that supposedly makes our lives better, we run the risk of flattening that landscape. Moreover, no matter how convincing the illusion of empathy, the cold, calculating character of AI exhibits traits that mental health professionals might deem consistent with sociopathy if we were talking about a person. Therefore we must be vigilant about what choices we let AI make for us. We must ask ourselves whether an application fits within a moral framework, given its intent, implementation, and usage. Further, does that intent serve humanity as an end? Does the implementation guard against corruption of that intent? And will its usage be overseen to ensure the implementation is applied universally? Why should we be any less circumspect just because we have resigned ourselves to the fact that such changes are inevitable? Thus, when it comes to AI, the ethical responsibility must fall to those who create, validate, legislate and regulate the technology.

This of course raises the question: what makes us ethical beings? In short, free will. Free will to choose the road less traveled, double back, and start over again if we don’t like where it takes us. Free will to strike a balance between the greater good and our own self-interest. Since computers have no self, they have no real choice. At best they obey their programming, which is often deceptively human. In the end, they are just binary engines distilling decisions down to a single logical outcome without regard for the individual. Self-driving cars are a real-world example of such behavior, and a bellwether application for AI, since they are among the first to let machines make life-or-death decisions for the general public.

So far, all evidence suggests that autonomous vehicles will make the choice that results in the fewest fatalities in the overwhelming majority of cases, making automobile travel safer on the whole. Just look at the number of alcohol-related deaths each year that could be avoided. It is easy to justify self-driving cars on these statistics alone. But several things are lost in translation here. First is the moral agency that comes with getting behind the wheel after earning your driver’s license, a rite of passage that future generations may never know. Second is the sheer joy of driving. No matter how safe or cutting-edge such cars may become, they cannot compete with the feeling of speeding down the highway in a vintage convertible with the top down and the wind blowing in your hair. It is part of what makes us human.


Written by David J Herman

Business Intelligence Architect
