Name
Instructor
Course
Date
Self-Driving Cars
Introduction
Sooner than we expect, roads will be dominated by autonomous vehicles, and that future is now closer than ever. The promise of cheaper, safer, and more accessible transport has made this innovation popular, with most people welcoming the change. While behind the wheel of a vehicle, human drivers regularly make moral decisions, and self-driving cars cannot do the same unless they are programmed to. Self-driving vehicles are human-made and not immune to accidents, but when accidents occur, it is hard to decide whether the responsibility lies with the program or with its maker. Utilitarian and Kantian ethics will be applied throughout this essay to see what action someone who follows either of them would take. These ethics will provide an answer as to whether the cars should be programmed to kill the least number of people or not to choose at all.
Utilitarian Ethics
Many moral decisions that seem to maximize overall welfare are consistent with utilitarianism. Utilitarians believe that the purpose of morality is to make life better by increasing the number of good things and decreasing the number of bad things; morality is justified by a positive contribution to human beings. One classic example is the runaway trolley that can be switched from a track on which it would kill five people to a track on which it would kill one person. Many people find this morally acceptable because it maximizes overall welfare by diverting harm from a large group to just one person. The theory has been criticized for a long time, but many still abide by it in the 21st century. Utilitarianism can take either an act or a rule form, where the former evaluates individual actions while the latter evaluates general rules of conduct (Nathanson, p. 1).
In an inevitable crash where the car must choose between hitting two different groups, a utilitarian would consider striking the smaller group appropriate, as it benefits the wider society. Robots in self-driving cars are expected to make decisions using algorithms and without any emotion, unlike many human beings. If these intelligent systems have to make ethical decisions, those decisions will be founded on consistent utilitarian principles, and the occupants of the vehicle will not contribute to the process. Being able to make decisions irrespective of the occupants' state of mind is the distinctive autonomous feature through which the technology takes a rational approach. Passengers may be unfit to drive or intoxicated, but whatever their state while occupying the car, the autonomous vehicle continues to demonstrate safety in a significant way. That shows a considerable reduction of the risk associated with the vehicle once it is programmed. Utilitarian ethics can therefore easily be embodied in a program that commands precise actions to protect the lives of many people when such situations arise.
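The decision rule described above can be sketched as a small function. This is purely an illustrative assumption for discussion: the manoeuvre names and casualty estimates are hypothetical, not any manufacturer's actual algorithm.

```python
# Hypothetical sketch of a utilitarian crash-decision rule.
# All names and numbers are illustrative, not a real system.

def utilitarian_choice(options):
    """Pick the manoeuvre with the lowest expected loss of life.

    `options` maps each available manoeuvre to the number of
    lives expected to be lost if the car takes it.
    """
    return min(options, key=options.get)

# The trolley-style case from above: stay on course and harm
# five people, or switch and harm one.
print(utilitarian_choice({"stay": 5, "switch": 1}))  # switch
```

The rule's simplicity is exactly the essay's point: a single consistent criterion (minimize lives lost) is trivial to implement, whatever one thinks of its morality.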
If this practical approach were adopted, all programs for self-driving cars would be produced in the same manner. Having consistent, less complex, and welcome outcomes across applications from different manufacturers would reduce adverse legal consequences as well. When manufacturers write their own programs, they prioritize their drivers rather than the common good at a time of harm. If the decision is simply to minimize the loss of life in all situations, the program becomes far easier and simpler to implement. Other ethical frameworks based on many different principles would be too complicated to implement as a program for an inevitable crash situation. The general outcome is a lower cost of technology, making utilitarian cars cheaper to manufacture. Insurance companies also offer lower premiums for policies that are easier to comprehend, and utilitarian ethics cater for the best interests of humankind. The utilitarian approach may be simple and useful in many ways, but it has its downsides too.
The utilitarian approach holds the car responsible for any harm, and the occupants are immune from any implications. In a practical and sensible world of driverless cars, it would indeed be unfair to punish the occupants of a vehicle that caused a crash. The vehicles would, however, interact with imperfect humans on the road, and when accidents occur, this approach would be immoral and unfair to those humans. If a human driver crashes into a driverless car, the human driver is deemed blameless simply because human drivers act in their own best interests; this is an apparent flaw in utilitarianism. Also, in the binary choice between harming a pedestrian and harming an occupant of the driverless car, the theory's preference for injuring the pedestrian is outright wrong. The pedestrian has made no choice to endanger their life by walking along the street, whereas the person in the driverless car has taken a conscious risk by entering it. Roads shared with human-driven cars therefore pose complex moral issues for self-driving vehicles in general.
Kantian Ethics
Kantian ethics rest on a single supreme moral principle known as the Categorical Imperative. The principle is an absolute and universal moral code, meaning it applies to everyone at all times; circumstances do not matter. Kant believed that right moral action is guided not by the results but by the action itself. Actions are testable against absolute principles, almost as if they could be entered into an equation: if you want to do something, consider what would happen if everyone did it, whether you are using someone, and whether you are taking away their autonomy (Misselbrook, p. 1). Kant not only respected but also empowered people's sovereignty. In an inevitable car crash, Kant's ethics disagree with the idea of telling a learning machine to decide whom to kill. Factors other than the amount of harm should be built into those vehicles' programming. Choosing to kill someone does not exist in Kant's moral framework.
In Kant's ethics, any choice you make must be one you believe anyone could rightly make. The runaway trolley decision to kill one man and save five others is heavily supported by utilitarianism, but Kant strongly refutes it. Pulling that switch is a decision to kill someone, and the consequences do not matter, because you are endorsing a choice you believe anyone may execute. While the utilitarian wants to save as many lives as possible, Kant suggests saving all lives instead. Kant's choice in the driverless car scenario is for the car to continue its course no matter the consequences. The car should not choose whom to kill but, when it comes to life, should not choose at all. Human reactions are somewhat predictable, whereas driverless cars would simply assess the scene and take the fewest lives possible. An example situation is a driverless car with three passengers facing an emergency crash: there are two pedestrians on the road, one pedestrian on the side, and a barrier ahead.
The utilitarian suggests swerving to kill the single pedestrian on the side, the least loss of lives, instead of killing the two on the road or hitting the barrier and killing the three passengers. Kant's choice would be for the car to continue on its course toward the two pedestrians on the road. Driverless cars can follow a binary instruction and execute it simply because it has been given to them. Human beings do not function like these cars; many factors complicate their decisions in any situation. Humans are emotional, they can see what is going on around them and, if possible, react appropriately. If the car continues its course, the pedestrians might notice it and run from the impending danger. The driverless vehicle designed by Kant's ethics to keep its course is highly predictable and would not chase after them; it is not deciding to kill. Kant's theory that something is either good or bad with no in-between has, however, been criticized. A car might have the chance to swerve toward a spot where no one is standing, but since it is trained to carry on, it misses an opportunity to kill no one.
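The contrast between the two theories in this scenario can be made concrete with a second sketch. Again, the manoeuvre names and casualty counts are hypothetical assumptions chosen to mirror the example above, not a real vehicle's logic.

```python
# Hypothetical contrast between the two decision rules discussed
# in the essay. Names and counts are illustrative assumptions.

def utilitarian_choice(options):
    """Minimise expected loss of life across the manoeuvres."""
    return min(options, key=options.get)

def kantian_choice(options, default="stay on course"):
    """Never weigh lives against each other: keep the one
    universal, predictable default, ignoring the harm counts."""
    return default if default in options else None

# Three passengers; two pedestrians ahead, one on the side,
# and a barrier, as in the scenario above.
scenario = {"stay on course": 2, "swerve aside": 1, "hit barrier": 3}
print(utilitarian_choice(scenario))  # swerve aside
print(kantian_choice(scenario))      # stay on course
```

Note that the Kantian rule never even reads the casualty numbers; its predictability, as argued above, is the point, since pedestrians can anticipate a car that always holds its course.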
Utilitarianism is generally appealing for choosing to save many lives in dangerous situations. A driverless car does not think like a human being on the road and should therefore not be allowed to function entirely as one. The theory weighs actions by their consequences in order to avoid any possible danger. Self-driving cars will encounter human drivers, and they should not disadvantage human drivers when accidents occur. Human drivers are not automatically safe because of their decisions, as utilitarianism implies; both kinds of driver are making decisions on the road. The theory champions the common good, yet it becomes contradictory when comparing human drivers with self-driving cars. The Kantian theory, by contrast, guides actions irrespective of consequences, which is a good intention and a bad one in equal measure. It is a valuable approach because it gives people a reflective view of their actions and the many possibilities that follow. Before you steal, think what would happen if everyone decided to steal; whatever you steal would be stolen from you. A self-driving car cannot determine the value of human beings and decide whom to kill. The theory is, however, too rigid, as an act cannot simply be good or bad regardless of the consequences. Humans are too complex to be binary, either right or wrong. When a self-driving car has to choose between killing one pedestrian, two pedestrians, or three passengers, none of those choices is wholly wrong or right. Even comforting someone who is dying might require telling a lie, which Kant's ethics would condemn regardless of the kindness intended.
Conclusion
Utilitarianism and Kantianism have distinct ways of determining whether an act is right or wrong. Kantianism is action-oriented, while utilitarianism focuses on the result, primarily to protect the common interest. Kantianism attaches great value to human life, while utilitarianism is about what causes the most happiness. Self-driving cars are supported by both ethics, but each takes a different approach in the case of a sudden crash. Utilitarianism supports taking the fewest number of lives in such situations, while Kantianism holds that all lives are invaluable and that factors other than the amount of harm should be featured in the programming.
Works Cited
Misselbrook, David. "Duty, Kant, and Deontology." The British Journal of General Practice (2019).
Nathanson, Stephen. "Act and Rule Utilitarianism." Internet Encyclopedia of Philosophy.