Professional Documents
Culture Documents
MEH ET Subjective
Virtue Ethics
• Fortitude
• Temperance
• Justice
• Eudaimonia: the concept of the chief good (happiness)
• Ethnocentrism
• Majoritarianism
THE ENRON SCANDAL
• The Players:
• Kenneth Lay-Chairman and Founder
• Jeffrey Skilling-CEO and President
• Andrew Fastow- Chief Financial Officer
• Arthur Andersen- Chief Auditing Firm
• US Securities and Exchange Commission
• Enron’s Board of Directors
• Enron Shareholders
• Merrill Lynch
THE ENRON SCANDAL-FACTSHEET
• Founded in 1985 by merging the Houston Natural Gas Company and
InterNorth to become Enron
• Attained sixth position on the Fortune 500 list in 2001 and was rated the
Most Innovative Large Company in Fortune's Most Admired Companies
survey
THE ENRON SCANDAL-THE FALL
• The stock price fell from $90 per share in mid-2000 to $1 by the end of
November 2001
• Special Purpose Entities and hedging structures created by Fastow enabled off-
balance-sheet transactions that hid losses and showed profits at quarter-ends
to meet Wall Street expectations
• Conflict of interest and lax auditing practices by chief audit firm Arthur Andersen,
which was receiving huge non-audit consulting fees from Enron
• Paolo Pereira, Former President and CEO of the State Bank of Rio de Janeiro in Brazil
• John Wakeham, Former UK Secretary of State for Energy and Parliamentary Secretary
to the Treasury
B. Indian Ethics
Dharma, or the righteous way to live our lives, is the Indian
version of ethics. Lord Krishna, in the Bhagavad Gita, elaborates
the concept of Dharma: "Every organism is born to serve a
purpose. Understanding the purpose and living accordingly is
Dharma." By this definition, it is ethical to wage a just war if
that is your purpose, as long as you are not seeking personal
prestige, wealth or power; you may even kill your cousins, if
required. Vyasa, in the Mahabharata, expands on the concept of
Dharma: "To actively help those in need as well as passively not
harming others and being fair and just in one's judgements."
There are elements of Dharmashastra in Virtue Ethics,
Utilitarianism and Moral Rights Theory, but not in Deontology,
which is a philosophy of absolutes: war is wrong, even if it is a
just war; killing is wrong, even in self-defense; stealing is wrong,
even to feed a dying person; lying is wrong, even if it saves the
life of an innocent person. One must find ways other than war,
killing, stealing and lying. Ethics is today incorporated fully into
the law of the land, and it is Western law that most countries
have adopted, with the exception of some countries that follow
Shariat law, which is based on the Islamic version of the Divine
Command Theory.
Lay and Skilling went on trial for their part in the Enron scandal in
January 2006. The 53-count, 65-page indictment covers a broad
range of financial crimes, including bank fraud, making false
statements to banks and auditors, securities fraud, wire fraud,
money laundering, conspiracy, and insider trading. United States
District Judge Sim Lake had previously denied motions by the
defendants to have separate trials and to relocate the case out of
Houston, where the defendants argued the negative publicity
concerning Enron's demise would make it impossible to get a fair
trial. On May 25, 2006, the jury in the Lay and Skilling trial returned
its verdicts. Skilling was convicted of 19 of 28 counts of securities
fraud and wire fraud and acquitted on the remaining nine, including
charges of insider trading. He was sentenced to 24 years and 4
months in prison.[102] In 2013 the United States Department of
Justice reached a deal with Skilling, which resulted in ten years being
cut from his sentence.[103]
Lay pleaded not guilty to the eleven criminal charges, and claimed
that he was misled by those around him. He attributed the main
cause for the company's demise to Fastow.[104] Lay was convicted
of all six counts of securities and wire fraud for which he had been
tried, and he was subject to a maximum total sentence of 45 years in
prison.[105] However, before sentencing was scheduled, Lay died on
July 5, 2006. At the time of his death, the SEC had been seeking more
than $90 million from Lay in addition to civil fines. The case of Lay's
wife, Linda, is a difficult one. She sold roughly 500,000 shares of
Enron between ten and thirty minutes before the news that Enron
was collapsing went public on November 28, 2001. Linda was never
charged in connection with Enron.
Arthur Andersen was charged with and found guilty of obstruction of
justice for shredding the thousands of documents and deleting e-
mails and company files that tied the firm to its audit of Enron.
Although only a small number of Arthur Andersen's employees were
involved with the scandal, the firm was effectively put out of
business; the SEC is not allowed to accept audits from convicted
felons. The company surrendered its CPA license on August 31, 2002,
and 85,000 employees lost their jobs. The conviction was later
overturned by the U.S. Supreme Court because the jury had not been
properly instructed on the charge against Andersen. The Supreme
Court ruling theoretically left Andersen free to resume operations.
However, the damage to the Andersen name has been so great that
it has not returned as a viable business even on a limited scale.
The Ethics of Corporate Trusting Relationships
1. Organizational and brand identity: Research within this third
theme has generally been an extension of the second (i.e., self-
presentation) and, to a lesser extent, the first (i.e., self-
concept), to entities other than individuals, namely,
organizations and their brands. Scholars within this theme
frequently draw on the theoretical foundations in both classical
philosophy and impression management as well as the work in
the first two themes outlined above.
Here, research does not fall quite as cleanly into distinct
streams; however, in general, studies focus on either the
identity of an organization or of a brand.
First, some research has focused on the authenticity of
organizations. In defining organizational authenticity, scholars
tend to draw explicit links to the theoretical foundations in
classical philosophy as well as work from psychology within the
self-concept theme. For example, Carroll and Wheaton suggest
that “...by analogy, an organization would be authentic to the
extent that it embodies the chosen values of its founders,
owners or members...” The emphasis in
such definitions is on organizational values (i.e., the
backstage), but, at the same time, most empirical studies tend
to focus on audience perceptions of organizational action (i.e.,
the front stage). Audiences have been shown to make
authenticity attributions on the basis of observed production
processes, product names, advertising campaigns, ownership
structure, the extent to which it is "local", and even CEO
portraits. Such attributions of authenticity tend to translate
into audience appeal for the organization and its products and
services. In addition, audiences have been shown to evaluate
the authenticity of an organization on the specific basis of its
corporate social responsibility programs and the manner in
which such programs are publicized or not. Although
most research has focused on audience perceptions of the
front stage, some have considered how organizational
members collectively understand and even construct
the backstage, often via an agentic use
of its own history; such considerations have also extended
beyond the boundaries of the organization to communities and
other collective identities. In sum, this
collection of research may seem disparate at first blush, but the
common thread is an interest in organizational authenticity,
conceived as the consistency between the organization’s values
and its actions.
Nike as an example of organizational trust:
Several universities, unified by the Worker Rights Consortium,
organized a national hunger strike in protest of their school
using Nike products for athletics. Feminist groups mobilized
boycotts of Nike products after learning of the unfair conditions
for the primarily female workers. In the early 1990s, when Nike
began a push to increase advertising for female athletic gear,
these groups created a campaign called "Just Don’t Do It" to
bring attention to the poor factory conditions where women
create Nike products.
Nike began to monitor working conditions in the factories that
produce its products.[17] During the 1990s, Nike introduced a
code of conduct for its factories, called SHAPE:
Safety, Health, Attitude, People, and Environment.[12] The
company spends around $10 million a year to follow the code,
adhering to regulations for fire safety, air quality, minimum
wage, and overtime limits. In 1998, Nike introduced a program
to replace its petroleum-based solvents with less dangerous
water-based solvents.[18] A year later, an independent
expert stated that Nike had "substituted less harmful
chemicals in its production, installed local exhaust ventilation
systems, and trained key personnel on occupational health and
safety issues."[19] The study was conducted in a
factory in Vietnam.
Between 2002 and 2004, Nike audited its factories
approximately 600 times, giving each factory a score on a scale
of 1 to 100, which is then associated with a letter grade. Most
factories received a B, indicating some problems, or C,
indicating serious issues aren't being corrected fast enough.
When a factory receives a grade of D, Nike threatens to stop
producing in that factory unless the conditions are rapidly
improved. Nike had plans to expand their monitoring process
to include environmental and health issues beginning in 2004.
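The score-to-grade mapping described above can be illustrated with a minimal sketch. The threshold values below are invented assumptions; the text only says that scores run from 1 to 100, that B indicates some problems, C indicates serious uncorrected issues, and D triggers a threat to stop production.

```python
# Hypothetical sketch of a factory-audit score-to-letter-grade mapping
# like the one described for Nike. The cut-off scores are invented
# assumptions, not figures from the source.
def letter_grade(score: int) -> str:
    if not 1 <= score <= 100:
        raise ValueError("score must be between 1 and 100")
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"   # some problems
    if score >= 60:
        return "C"   # serious issues not corrected fast enough
    return "D"       # production may stop unless conditions improve

print(letter_grade(80))  # B
```

The point of the mapping is simply that a continuous audit score is bucketed into a small set of grades that trigger discrete consequences.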
The person who placed the cyanide in the Tylenol capsules was
never found.
The Facebook Research app was removed from the App Store about
six months ago after Apple complained that it violated its
guidelines on data collection.
1. Thus ethics is the philosophy of morality. Ethics (from the ancient Greek "ta ethika",
translated as the moral doctrine) is the science of morality; the goal of this science, as
part of philosophy (literally, the love of wisdom), is to regulate the world and in particular
the behavior of man.
2. The cardinal virtues of Socrates and Plato were courage, temperance, wisdom (prudence)
and justice. Virtues correspond to characteristics and are related to persons. Aristotle
(384–322 BC) summarized this by formulating "virtue is the way to happiness (eudaimonia)".
Christian ethics supplemented these virtues with three more: faith, love, and hope. The
heavenly virtues (and the contrary vices) of the Occidental Middle Ages were popularized
by the musical work of Hildegard of Bingen in the Christian West: humility (arrogance),
benevolence (avarice, greed), abstinence (lewdness), moderation (gluttony), goodwill (envy),
diligence (laziness), patience (anger).
3. Aristotle puts virtue over the economy because man can only achieve his happiness through
the exercise of his virtues. The perfect virtue for Aristotle is justice, which serves as a
measure of the economy.
4. Modern business management does not distinguish between moral and immoral goods. The
goods that are sold can be immoral, such as pornography, or even directly harmful, such as
drugs, insofar as the legislator permits them. The discovery of atomic energy enabled both
the use of this form of energy and the development of the atomic bomb. Ultimately, however,
it is clear that a society cannot permit everything; it must prevent this kind of science from
being used against it. The question of the benefit or harm to man and society must be posed,
and answered, as early as possible in order to forestall societal damage.
5. For Kant (b. 22 April 1724 in Königsberg, Prussia; d. 12 February 1804 in the same city), an
action is especially moral when it benefits others at the expense of one's own benefit. Under
no circumstances should the pursuit of one's own happiness or one's own benefit
maximization be carried out at the expense of others or of the general wellbeing. When in
doubt, morality, that is, the wellbeing of others, must be placed above one's own.
6. For Kant, the mindset ultimately determines whether an action is to be classified as moral.
Looking in on your old aunt is a duty, but not a moral one, if you do so only to be considered
in the inheritance.
7. But what about the consequences? Is it enough to want good? No, unfortunately not.
Otherwise every fanatic, every terrorist, would be a morally acting man, even though he
harms many people. It would depend only on the actor's subjective assessment, his attitude
that he himself considers positive. Well meant is not well done. Every judge has to deal with
this problem when deciding whether an action with a negative effect on third parties was
intentional. Intent distinguishes murder from manslaughter, and with it the penalty changes
markedly. In addition, even the actor often cannot determine the motives that guided him,
since he can also be influenced by the subconscious.
8. In principle, an ethics of conviction would suffice to produce good behaviour in mankind if
all men had the same perceptions and objective reason, so as to correctly assess the
consequences of their actions. Kant doubts this, which is why in his work Metaphysics of
Morals he develops a duty ethics for general human behaviour (deontological ethics, from
the Greek to déon: the necessary, the duty). In addition, he developed imperatives, or rules,
as an aid to practical reasoning about human coexistence: a categorical imperative and a
practical imperative, as well as the publicity rule. The conviction of the agent to do good
must be added to the dutiful action.
9. How do my actions affect people? The purpose of my action should be to do good, or at
least not to harm anyone. We should therefore take into account the purpose, that is, the
effect of our actions on other people, and not regard humans merely as a means; this also
includes the effects of allowing inaction. Kant, however, also refers to the agent himself. He
should not regard himself as a means but also as a purpose, and therefore not harm himself.
In current situations this would mean that a manager should not harm his health just to
further his career.
10. Kant's Universal Rules: A) The Categorical Imperative: Do unto others as you would have
others do unto you; act only according to a maxim that you could make a universal law.
B) The Practical Imperative: Act in such a way that you use humanity, both in your own
person and in the person of every other, at all times not merely as a means but also as a
purpose. C) The Publicity Rule: All actions relating to the rights of other people whose
maxim is not compatible with publicity are wrong.
That is, the behavioral rule is such that if the agent would fear the response of his
community should his actions become public, we assume that the rights of others
are unfairly, thus disproportionately, affected. One should ask oneself whether those
affected by an action would approve of it. For example, if a pharmaceutical company
conceals the side effects of a drug, the publicity rule would be violated because the patients
would not understand the dangers to their health.
11. For Kant the duties to other people include respecting their dignity, helping them
in need, being grateful and conciliatory, not deceiving them, not lying, nor mocking
or slandering. As inner attitudes, he demands virtues such as benevolence, compassion,
gratitude, truthfulness and integrity. Negative inner attitudes or characteristics (vices), on
the other hand, are envy, dislike, pleasure in the pain of others, arrogance, revenge and
greed. Economic obligations are respect for the laws and the property of others, the
observance of contracts and the payment of debts. As a principle of individual freedom, “the
freedom of arbitrariness of everyone can co-exist with everyone’s freedom in accordance
with a general law.” This corresponds to the principle of modern democracy that the
freedom of the individual stops where the freedom of the other begins. Individuals are not
allowed to exercise their freedom without consideration or to the detriment of others.
12. What if negative consequences arise from the duties? Let us take the duty "You shall not
lie". Kant sees truthfulness as a top priority for which there can be no exceptions: even if a
murderer asks for the whereabouts of his victim, one must tell him the truth, even if the
victim is then murdered. In such a case one is not responsible for the consequences.
13. In an extreme case, two duties can also contradict each other and lead to tragic dilemmas.
Let us take the current euthanasia discussion as an example. The prohibition of killing and of
withholding assistance forbids the physician from performing euthanasia, even if the patient
explicitly wishes for it and suffers a great deal. This contradicts the duty to assist, the
principle of human dignity, and the patient's welfare, setting the physician's duties against
the patient's wishes.
14. Like Mill, Weber criticizes the Kantian duty ethics because of the unavoidable dilemma and
conflict situations resulting from contradictory duties. He gives two examples of duty ethics
that lead to immoral consequences. Thus, the Kantian duty of truthfulness would make the
preservation of state secrets impossible, even if this would cause great damage to the
country. The Christian commandment of nonviolence would, consistently implemented, lead
to the inability to counter violence, which would lead to further violent acts.
15. Max Weber instead advocates an ethics of responsibility, a form of consequentialism
(also called teleological ethics, after the Greek télos: the goal, the purpose).
Actions are moral when they achieve good. This principle is the basis of our
jurisprudence: "knowingly accepted" or "gross negligence" is interpreted by our
courts as fault. Having followed an order (duty ethics in the narrower sense) is not accepted
as an exculpatory argument. Before the Nuremberg court, many Nazis used their orders
to kill as excuses for their actions. Under current social norms, however, orders do not free a
person from responsibility for his actions. Of course, the moral condemnation of such
murderers would be difficult if they themselves would have been killed for disobeying the
order to kill.
16. In general, the ethics of responsibility is very demanding and therefore not always an
applicable measure. It is not always possible to clearly assess the consequences of the
actions. Either there are too many influencing factors or the result depends simply on
chance. Furthermore, teleological ethics presupposes not only a high level of information,
but also a high intellectual and moral capacity from the actor if the consequences of options
for action are not only to be foreseen, but their results are also weighed against each other.
17. Utilitarianism is a pure ethics of responsibility in which the conviction does not matter,
only the greatest happiness of the greatest number, or the principle of the greatest
happiness of all men. It is therefore about determining the net happiness resulting from
actions and maximizing it. Joy and suffering are offset against each other, both individually
and across all the people affected by the action. The action with the greatest net happiness
is the most moral.
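The net-happiness calculus just described can be sketched in a few lines of code. The candidate actions and the joy/suffering values per affected person below are invented numbers for illustration, not data from any real case.

```python
# Utilitarian calculus sketch: joy is positive, suffering is negative,
# and everything is summed across all affected people.
def net_happiness(effects):
    """Offset joy and suffering over everyone affected by an action."""
    return sum(effects.values())

# Hypothetical actions with per-person effects (invented values).
actions = {
    "action_a": {"person_1": 3, "person_2": -1, "person_3": 2},   # net +4
    "action_b": {"person_1": 5, "person_2": -4, "person_3": -2},  # net -1
}

# Act utilitarianism picks the action with the greatest net happiness.
best = max(actions, key=lambda a: net_happiness(actions[a]))
print(best)  # action_a
```

The criticisms in the next point follow directly from this arithmetic: every person's joy is weighted equally, whatever its source, so a large crowd's pleasure can outweigh a few victims' suffering.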
18. Criticism about this approach is generally its hedonistic orientation, thus the strong ego and
pleasure-seeking. This kind of morality does not correspond to the idea of good found in
Plato, Aristotle, and Kant. Ultimately, everything that spreads joy is weighted equally. Is it
possible, for example, to equate joy from malevolence and lustful pleasures with the joy of
charity, and offset them for a net gain? Is it moral to sacrifice a few to save many?
Ultimately, the use of soldiers in war is always justified by higher goals, which are often not
rooted in truth. The sacrificing of slaves in the Circus Maximus of Rome would be justified if
many thousands of spectators felt more joy than the few slaves felt pain. There would be
a positive net-happiness. Utilitarianism in the narrower sense is not an ethical approach
because the welfare of others is not the focus. The approach is ethical inasmuch as the
greatest general happiness, as the happiness of all men, is striven for. In this case, damage
to third parties is acceptable.
19. Rule utilitarianism provides an alternative approach to the act utilitarianism. Rule
utilitarianism does not encourage the individual action that provides the greatest happiness,
but rather the general rule that maximizes happiness. The difference lies in the overall
happiness of the society, which is the outcome if general rules are followed. If we use rule
utilitarianism in our example, the torture of human beings could not be justified, even if a
sadist feels more joy than his victims. As a general rule torture would not maximize utility to
society, since the utility becomes negative if everybody tortures others. Nonetheless, there
is also an account of the pleasure and pain of different people in our Western democracies.
When a judge decides on the expansion of an airport, he weighs the interest of the
general public, in the form of jobs and good traffic connections, against the
complaints of the residents. In the case of aircraft catastrophes such as 9/11, there are
shoot-down orders intended to minimize deaths in inner cities: the passengers of
the aircraft are sacrificed to prevent more deaths on the ground.
20. Mill's utilitarianism can therefore supplement Kantian duties, as an ethics of consequences,
when the duties do not yield a clear course of action. According to Mill, a lie is allowed,
contrary to Kant, if, with all its consequences, it produces less harm than the truth.
Schopenhauer also contradicts Kant and regards the lie as justified in these circumstances.
We know this connection under the concept of the "emergency lie" or "white lie". Mill's
utilitarianism is therefore also generally used for the solution of ethical dilemmas.
21. The problem with this approach is that the assessment is ultimately left to the individual.
Every lie can be justified; you just have to paint the consequences of the truth starkly
enough. There is, therefore, the basic question of who is to evaluate ethically: the individual,
the group, or society. Without a correlation to the usefulness or wellbeing of other people, a
distinction between good and evil can be made neither in individual ethics nor in discourse
ethics.
22. Thus, on the one hand, people need the right to vote and procedures regulated by a
discourse ethics, to grasp and weigh the views and interests of all parties involved, in order
to arrive at a morally balanced decision or to carry out a moral evaluation as a collective. In
addition, they must first be consensus- and common-good-minded, and thus morally
oriented, and be able to put themselves in the position of other parties in order to develop a
moral reconciliation of results. Otherwise, suboptimal horse-trading will end in the stronger
group prevailing, or no decision will be made at all. Individual ethics is therefore the basis
for ethical evaluations; institutional ethics, individual ethics and discourse ethics must work
together.
23. Moral Economics: Morality Must Be Worthwhile: It is precisely this goal conflict between
one’s own benefit and that of the other that moral economy (or economic theory of
morality) addresses. It asserts that if morality is to be achieved, the incentives must be
designed to make moral behaviour worthwhile. The main representative and co-founder of
moral economy, Karl Homann (born April 9, 1943 in Everswinkel), developed an approach to
incentive ethic that tries to direct the individual’s development into the morally desired
direction using the right incentive design.
24. Homann rejects the moral self-control of the individual by means of internalized values
because it would be exploited in market competition. If, for example, child labor is not
prohibited, an entrepreneur must resort to it because they would otherwise have a
competitive disadvantage. A moral framework should be designed in such a way that self-
interest becomes socially productive.
25. Human Rights: An economic action is moral or ethical if it does not harm others. The basis
for this assessment is the acceptance of the rights of other people and living creatures
(including animals). Though the ideas originate from the Enlightenment, human rights were
formulated for the first time in the American Virginia Bill of Rights in 1776 and then in the
French Declaration of Human Rights in 1789. These rights are thus internationally
legitimated, and interested or affected parties can demand their implementation.
26. Many human rights have been formulated. The most well-known is the “Universal
Declaration of Human Rights” of the General Assembly of the United Nations of 10
December 1948, the so-called UN Declaration of Human Rights. Karl Marx was an opponent
of the human rights movement. He saw the rights of society threatened by human rights.
The individual concept of freedom was dismissed as a bourgeois invention in the countries
of real socialism. Instead, so-called basic social rights were expressed; the right to work, the
right to vocational training and social protection. However, the individual was left with no
decision-making freedom and his life was planned centrally.
27. From the rights, however, come indirect obligations for how people are to deal with each
other. People must recognize the rights of the others as equal in principle and even accept a
restriction of their own rights, if it is the only way for the rights and freedom of others or
general welfare to be guaranteed. And finally, there is a duty to work for the realization of
human rights.
28. Markets and Ethics: To what extent does the market exhibit moral and ethical behaviour? In
1989, Kerber found that the young leaders were inclined to opportunism and accepted
immoral and often criminal behavior when material success was achieved. Slogans like
“Everyone is the next one”, “One hand washes the other” or “To achieve a higher goal,
sometimes wrong cannot be circumvented” were popular.
29. Kerber summarized the trend as follows: “The tendency seems to be a stronger ego-
orientation and more attention to success, material goods and enjoyment”. At the beginning
of the 1990s there was a trend away from duties such as order, discipline, loyalty,
thoroughness and reliability to so-called unfolding values such as independence, self-
responsibility, participation and creativity.
30. Adam Smith was aware that the invisible hand alone is not sufficient to protect the common
good from damage by individuals. He stressed the necessity of an economic system and a
system to keep order, which did not exclude intervention to protect the common good (The
Theory of Moral Sentiments). Only if the legal system functions well and there is "trust in the
sovereignty of the state" can trade on markets develop to the advantage of people and
create welfare. Smith also identifies the most important components of order as internal
security, jurisprudence, infrastructure, educational institutions and national defence.
31. Adam Smith had already differentiated between an economy and an economic
system. The economic system must set the framework for economic behaviour in such a way
that the invisible hand of the market and competition can develop optimally, meaning that
the actions of people determined by their own interests are channelled toward the common
good.
32. Institutional Ethics: The State Regulatory Framework
33. The Ethical Prisoner Dilemma: Even if the company were to behave
morally, it does not know how the other companies will behave, and
therefore must assume immoral behaviour and behave immorally in
order to ensure its survival. The ethical prisoner dilemma is not just true
for companies in competition but also for companies with unethical
business cultures and for the employees themselves. This also applies to
the internal competition of employees within the company. Here an
employee can gain a career advantage by lying. Unethical companies
cannot realize the collective best case with high productivity if the
employees do not behave morally. As at Enron, the employees
compete internally and do not cooperate. The return on teamwork
cannot be realized.
34. Payoff matrix (payoff to A, payoff to B):

Payoffs                B behaves morally    B behaves immorally
A behaves morally      5, 5                 0, 6
A behaves immorally    6, 0                 1, 1 (Nash equilibrium)
35. The ethical prisoner dilemma for fair competition is as follows: the
worst case for company manager A is if he behaves morally but the
manager of another company B does not, and the best case for A is if A
behaves immorally but B does not. B is in the same decision-making
situation. The result is the combination in which both companies
operate unfairly, the worst case for all (Nash equilibrium: no player
can unilaterally improve their payoff through another strategy).
Without ethical rules, such as law enforcement, when the ethical
prisoner dilemma arises a company that behaves ethically finds itself
in the worst-case situation (Fig. 6.1).
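The equilibrium reasoning above can be checked mechanically. A minimal sketch using the payoffs from the matrix in the text (5,5 / 0,6 / 6,0 / 1,1), with "m" for moral and "i" for immoral behaviour:

```python
# payoffs[(a, b)] = (payoff to A, payoff to B); "m" = moral, "i" = immoral.
payoffs = {
    ("m", "m"): (5, 5),
    ("m", "i"): (0, 6),
    ("i", "m"): (6, 0),
    ("i", "i"): (1, 1),
}

def is_nash(a, b):
    """Nash equilibrium: no player can improve by unilaterally switching."""
    a_best = all(payoffs[(a, b)][0] >= payoffs[(other, b)][0] for other in "mi")
    b_best = all(payoffs[(a, b)][1] >= payoffs[(a, other)][1] for other in "mi")
    return a_best and b_best

equilibria = [(a, b) for a in "mi" for b in "mi" if is_nash(a, b)]
print(equilibria)  # [('i', 'i')] -- both behave immorally, the worst collective outcome
```

The check confirms the text's point: each player's immoral strategy dominates, so mutual immorality is the only equilibrium, even though mutual morality (5, 5) would be better for both.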
38. An empirical study has shown that managers are only willing to stick to
moral standards when they believe their business partners are sticking
to them. If this is not so, they are not willing to behave morally, even if
they consider the rules important and meaningful. In the case of the
prisoner's dilemma, there is uncertainty about the conduct of the other
companies. Even if they all wanted to behave ethically, they could not,
because there would then be the risk of ending up in the worst-case
situation.
39. The solution to this problem is threefold:
1. Educating A and B on the value of moral behaviour, thus increasing
the incentive to behave morally.
2. Rewarding moral behaviour with incentives (morality must be worth
practicing). An ethical consumer awareness leads to increased sales of
ethical products.
3. Binding contracts with sanctions: laws, state control and sanctions in
cases of misconduct (ethical order policy).
1. Tools of Ethics for Management
2. Vision and Values: But there is also a great danger. Visions and
principles sound ethical and convey the impression that the company is
solely a force for good. The suspicion is always there that some
companies present ethical guidelines only for image and PR, and that
they play no role in the everyday functioning of the company. If a case
publicly contradicts the guidelines and is not an exception, this obvious
contradiction seems hypocritical and weakens the credibility of the
company. The guiding principles can be checked by the public and
invoked by the stakeholders. Corporate ethics is the consistent
implementation of ethical goals in company policy, not a pure PR
exercise. Again, there must be no contradictions. For example, it is
hypocritical when a clothing producer who uses child labor in India to
keep production cheap tries to create a morally positive image in
Germany by promoting SOS Children's Villages.
3. With the help of a product lifecycle analysis, a company can determine
the effects of production on humans and nature at every production
stage. For example, in its project to explore gas deposits in the Amazon
basin in Peru (the "Camisea Project"), Shell identified 350 stakeholder
groups from business, politics and the environment, contacted 200
groups directly and classified 40 groups as primary stakeholders.
Following an intensive ethical stakeholder analysis, Shell concluded that
the environmental impacts and the negative impact on the native
population were predominant, and abandoned the exploitation of the gas
deposits in the Amazon basin. An action is ultimately only ethically
justifiable if the interests of the shareholders are weighed against those
of the stakeholders. Neither can be subordinated to the other on
principle; the priority must be examined ethically in each case, the
criterion being which concern carries the greatest weight.
However, responsibility toward society cannot be left to companies
alone; it must be demanded by society, publicly and in the form of laws.
Otherwise a company has no monetary incentive to behave in a socially
desirable way and can instead maximize its profit at society's expense
(for example through environmentally harmful disposal of production
waste, competition offences or even balance-sheet manipulation).
4. Organizational Ethics: A study by James A. Waters identified seven
barriers in companies that hamper moral or legal behaviour. Four of
these barriers relate to corporate culture and the remaining three to
the organizational structure of the company:
a. Division of work (specialization): If a task is distributed among
many people, the specialists lose the overall view. If every employee
sees only his small section, mistakes and misunderstandings can arise
where coordination is lacking. Furthermore, blinders develop easily,
leading to the dominance of special interests and the loss of a global
view (selectivity of the viewing angle).
b. Separation of decisions and execution: In a strict hierarchy,
responsibility always sits at the higher level, so that all responsibility
lies with the management, the executive committee or the supervisory
board. However, these bodies have neither the information nor the
proximity to follow up on the decisions of the lower levels. As a rule,
they are not involved at all, so in effect no one is responsible. The
employees at the lower levels are given only quantitative targets.
Result-oriented quantitative management systems encourage unethical
behavior, since they put employees under pressure to reach the given
figures. If management controls only the performance results, this
amounts to a policy that the end justifies the means.
c. Principle of command and obedience (strict line of command): Waters
quotes a witness who was asked why he did not report the illegal
behavior:
“I had no power to go higher. I do not report to anyone else than my
superior” and “I had to assume that whatever he told me came from his
superior, just as my subordinate would have to assume that what I told
him came from my superior.”
The principle of command and obedience (strict line of command) leads
to a lack of responsibility for the lower levels, which are the only ones
that have the information for an ethical impact assessment.
Decentralized management presupposes ethics among the employees,
insofar as they have to assume responsibility. They must make decisions
with an impact on the success of the company and the welfare of third
parties, i.e. a balancing in the sense of an ethical stakeholder approach.
5. According to the Waters study, four out of seven criteria that prevent
ethical behavior in the company are attributable to corporate culture:
1. an unethical role-model function of superiors, whether as a general
toleration of unethical behavior or as unethical socialization, i.e. the
modelling of such behaviour by the superiors. New employees in
particular can be influenced during their initial socialization.
2. an excessive group loyalty that prevents misbehavior from being
reported to the outside and encourages competition among the groups.
3. a strong orientation of success indicators toward quantities, combined
with an internal undervaluation of ethical, qualitative factors, especially
so as not to endanger fulfilment of the quantifiable goals. Among other
things, this inhibits openly addressing moral aspects in the company.
4. a tendency of the company, and thus indirectly of all its employees,
to hide ethical violations in order to prevent a poor image and possible
punishment from the outside.
1. According to the moral foundations theory (Graham et al. 2011), there are at
least five basic moral preferences that cannot be simply ordered—individuals,
groups or cultures assign different priorities to them. People facing
organizational ethical dilemmas may not only care about justice (as the opposite
of dishonesty and cheating), but also about loyalty to a group, respect for
authority, sanctity or purity (not degradation), and especially care for others.
Although behavioral ethics stems from the theory of moral foundations, few
empirical studies test how employees solve contradictions between different
moral foundations.
2. An instance of the conflict between moral preferences is deception that is used
to protect others from harm or even to benefit them (white lies). Examples
include lying to avoid undermining a colleague by criticizing him in front of
others, praising poor artwork done by a child, or complimenting a partner for a
failed meal. Levine and Schweitzer (2014) showed that people who lie to help
others (especially at their own expense) are viewed as more moral than people
telling the truth and benefiting from it. A lie is typically regarded negatively only
when it is self-serving and benefiting the liar.
e. Ineffective or Harmful Ethics Systems
In effective ethics systems, “managers must model the desired behavior and employees need to see
that sanctions occur if codes are violated. Communication is a requirement for codes to be
successful.” In contrast, organizations with window-dressed ethics systems can end up punishing
people who respect the accepted moral norms, e.g. a professor who punishes plagiarism and then
receives poor feedback from students.
Bounded Ethicality - Continued
• Self-View vs Self-Threat
• These cognitive biases operate outside our own awareness and thus produce a
fading process that removes the difficult moral issues from a given problem or
from external pressures.
• The notion of moral distance holds that people have ethical concerns only
about others who are near to them. As the distance increases, it becomes
easier to behave in unethical ways.
Albert Bandura
The Theory of Moral Disengagement
Eight Cognitive Distortion Mechanisms
• Displacement of Responsibility: deflecting responsibility for one’s actions onto a
higher authority
• Diffusion of Responsibility: hiding behind a collective that is engaged in the same
behaviour, also called the Bystander Effect
Behavioural Ethics - Cullings
• 1. “Toward a Better Understanding of Behavioral Ethics in the
Workplace”, David De Cremer and Celia Moore, Annual Review
of Organizational Psychology and Organizational
Behavior, 2020, 7:19.1–19.25
• When organizations fail to conduct their business in an
honorable way, they damage their reputations, the
interests of the industries they represent, and eventually
the welfare of society as a whole. As a result, trust in
business is hit hard, and profits and performance suffer.
This makes identifying how organizations can improve the
ways in which they manage unethical behaviors more
important, and is perhaps why ethics in organizations has
never received more research attention than it does
today.
• 2. “Behavioral Field Evidence on Psychological and Social
Factors in Dishonesty and Misconduct”, Lamar Pierce and
Parasuram Balasubramanian, Olin Business School,
Washington University in St. Louis
• Social processes: One of the most promising and
important topics on dishonesty is how social processes
influence behavior, with a growing body of work using
behavioral field evidence to explore it. Bucciol et al. [4]
used direct observation and interviews to identify how
bus passengers travelling with family members were
more likely to have a valid ticket, but not those travelling
with friends…. This is consistent with a field experiment
by Wenzel [11] that found information on others’
behavior improved tax compliance, as well as results
showing employees become more dishonest when joining
dishonest firms.
• Fairness, equity, and social comparison: Social
comparison and related fairness and equity concerns are
also a focus of recent work. Early work by Greenberg [13]
was one of the first to address this topic using behavioral
field data, showing increased theft following a pay
decrease at two out of three factories.
• Moral reminders and preferences: Related to this, Shu et
al. [23] used a field experiment to show that insurance
customers who signed at the top of forms reported higher
annual mileage than those who signed at the bottom,
presumably because signing provided a moral reminder.
• Culture: Other papers focused on how interactions within
and across ethnic and national groups can change levels
of dishonesty, including favoritism in Olympic judging
[25], ethnic diversity and corruption in Indonesia [26], and
stock market fraud in Kenya [27]. One approach by
Bianchi and Mohliver [28] links economic conditions
during executives’ formative periods to stock option
backdating.
• Professionalism: Similarly, teachers who are expected to
instill ethical values in children have been shown to cheat
when pressured with strong financial and career
incentives [31].
• Incentives and control: Monitoring, for example, has been
shown to reduce theft [33, 34], unexcused absenteeism
[35], and dishonest reporting [36] in organizational
settings such as call centers, restaurants, schools, and
banks
3. “Blind forces: Ethical infrastructures and moral disengagement in
organizations”, Sean R. Martin, Jennifer J. Kish-Gephart and James R.
Detert, Organizational Psychology Review, 1–31, DOI:
10.1177/2041386613518576
• To address unethical behavior in organizations, scholars have
discussed the importance of creating an ethical organizational
context or ethical infrastructure that encourages ethical, and
sanctions unethical, behavior both formally and informally.
• Specifically, in recent decades, research in social psychology,
behavioral economics, and behavioral ethics has increasingly
uncovered the multitude of ways in which otherwise good
people can be morally blind and engage in unsavory acts
without being aware of the unethical nature of their actions.
• These include bounded ethicality, self-deception and ethical fading,
intuitive morality, plus a host of other cognitive biases, indirect agency
biases, and attribution biases. Moral disengagement, a theory that
explains the process and mechanisms by which an individual’s
moral self-regulatory system is decoupled from his or her
thoughts and actions, represents a particularly powerful
manner by which individuals can rationalize or neutralize
reprehensible conduct.
• While organizational infrastructures may be effective in
reducing the unethical behavior that organizational members
are aware of, this aforementioned research suggests that even
in organizations with formal and informal systems prioritizing
ethics, many unethical decisions and behaviors may go
unrecognized, or be rationalized in ways that make them seem
ethical to insiders. Extreme examples of this phenomenon,
referred to in O’Reilly and Chatman’s (1996) review of
organizational culture, include the thoughts and actions of cult
members who see their organization as morally beyond
reproach.
• Trevino and Brown (2004, p. 75) described Arthur Andersen
employees believing in the ethicality of their organization,
saying, ‘‘we’re ethical people; we recruit people who are
screened for good judgment and values.’’ Yet at the same time,
their auditors and consultants were engaged in numerous
unethical activities. These examples suggest it is possible for
members to perceive their organization as being one in which
ethics are prioritized, routinely enacted, and as having formal
and informal systems supporting those priorities— that is, as
having a strong ethical infrastructure— and yet still be working
in an environment where various unethical behaviors go
unnoticed or are easily rationalized.
• Importantly, we do not argue that strong ethical infrastructures
necessarily foster more unethical behavior in an absolute
sense. Indeed, they likely do root out severe and blatantly
unethical types of behavior (Jones, 1991). Rather, we argue
that there are numerous outcomes associated with perceptions
of a strong ethical infrastructure that can trigger members’
tendencies to morally disengage about common, less intense
behaviors. Further, we argue that moral disengagement likely
plays a role in reinforcing members’ perceptions that the
ethical infrastructure of their organization is strong.
• We follow Tenbrunsel and colleagues’ (2003) lead in
considering culture and climate as key components of a more
expansive term—ethical infrastructure—that incorporates
these constructs and others to describe the general ethical
context of an organization.
• When organizational members perceive consistent
expectations being communicated by the formal and informal
systems, the organization’s ethical culture is said to be strong
and employees are likely to abide by the clear and consistent
messages about behavioral expectations. When these
messages are seen as conflicting, the ethical culture is deemed
weaker (Treviño, 1990). Whether based on Treviño’s
theorizing or other models of ethical culture that have been
proposed (e.g., Hunt & Vitell’s [1986] research on corporate
ethical values, and Kaptein’s [2008, 2011] corporate ethical
values model), empirical work generally supports the expected
negative relationship between perceptions of the
organization’s ethical culture and unethical behaviour.
• Corresponding to the introduction of ethical culture, Victor and
Cullen (1987) introduced the ethical climate construct, or ‘‘a
group of prescriptive climates reflecting the organizational
procedures, policies, and practices with moral consequences’’
(Martin & Cullen, 2006, p. 177). The authors identified two
dimensions that, when crossed, theoretically derive nine ethical
climate types. The first dimension, ethical criteria, includes
three broad categorizations of moral philosophies: egoism,
benevolence, and principled. These dimensions parallel
Kohlberg’s theory of cognitive moral development wherein an
individual’s level of moral reasoning is classified as self-
centered (Level 1), other-centered (Level 2), or focused on
broad principles of fairness and justice (Level 3). The second
dimension, locus of analysis, draws on sociology literature (e.g.,
Merton, 1957) to identify the referent group as individual (i.e.,
within the individual), local (i.e., internal to the organization
such as a work group), or cosmopolitan (i.e., external to the
organization such as a professional organization).
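The nine theoretical climate types follow from crossing the two dimensions; a minimal sketch (the labels come from the text above, the code structure is my own):

```python
from itertools import product

# Victor and Cullen's two dimensions of ethical climate.
ethical_criteria = ["egoism", "benevolence", "principled"]   # parallels Kohlberg's three levels
locus_of_analysis = ["individual", "local", "cosmopolitan"]  # the referent group

# Crossing the dimensions yields the nine theoretical climate types.
climate_types = [f"{criterion} / {locus}"
                 for criterion, locus in product(ethical_criteria, locus_of_analysis)]

assert len(climate_types) == 9
# ranges from "egoism / individual" to "principled / cosmopolitan"
```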
• Later theorizing offered a more simplified model of the
relationship between ethical climate types and unethical
behavior, arguing that employees are more likely to behave
ethically in organizations that stress ‘‘the consideration of
others’’ (such as benevolent and principled climates) versus in
organizations that stress self-interest (egoistic climate).
Empirical results, which rest on employees’ perceptions of their
work environment, generally support a positive relationship
between egoistic climate and unethical behavior, and negative
relationships between benevolent and principled climates and
unethical behaviour.
• Researchers have recognized that ethical climate and ethical
culture are highly related descriptors of an organization’s
overall ethical context. In a comprehensive model, Tenbrunsel
et al. (2003) subsumed elements of ethical culture and ethical
climate under the term ‘‘ethical infrastructure,’’ which they
defined as the organizational climates, informal systems, and
formal systems relevant to ethics in an organization. The
authors modeled ethical infrastructure as three concentric
circles—starting with the innermost circle of formal systems,
followed by informal systems, and then encompassed by the
outermost circle, organizational climates—that simultaneously
support and influence each other. The formal systems refer to
the documented and standardized procedures upholding
(un)ethical standards. The informal systems are those signals
that are not documented—they are felt and expressed through
interpersonal relationships. Both the formal and informal
elements are undergirded by individuals’ shared perceptions of
those systems.
• Over the past decades, approaches to studying the ethical
decision making of individuals have proliferated and evolved.
Some emphasize ethical decision making from a more
deliberative frame—emphasizing, for example, individuals’
moral awareness and reasoning, their level of moral
development, their dispositional tendency to attend to and
reflect upon moral information, or their prioritization of a
moral identity (their desire to be and be seen as a moral
person). From this perspective, individuals are treated as
decision makers who perceive moral information, establish
moral judgment, form an intention for action, and act
accordingly. And indeed, moral awareness and level of moral
development have been found to be positively (negatively)
related to (un)ethical intentions.
• Recently, however, other research has shown that individuals,
and not just those with obvious moral development limitations,
often engage in unethical behavior with little pre-active
cognition about the moral considerations involved. This work
has shown how various factors can lead individuals to make
decisions that result in unethical behavior that is either unseen
or cognitively re-construed. One particularly valuable approach
to explaining the overlooking or re-construing of unethical
behavior is the study of moral disengagement (Bandura,
1986)—a process by which the connection between individuals’
moral self-regulation systems and thoughts and actions is
interrupted. Moral disengagement can operate as an automatic
and anticipatory factor preventing individuals from perceiving
moral cues, or as a post hoc rationalization to justify unethical
decisions. In other words, not only can it facilitate unethical
action by dampening moral awareness and preventing
individuals from perceiving moral information, but it can also
bias judgment when individuals are somewhat morally aware.
• The notion that individuals have the cognitive capability to
rationalize inconsistencies in their espoused moral beliefs and
their behavior in practice, and thus make themselves (and
others) blind to ethical gaffes, has a long history. For example,
drawing on interviews of white-collar criminals accused of
embezzling money from their employers, Cressey noted that
‘‘normal’’ people refused to accept their actions as criminal.
Rather, they minimized their indiscretions by using neutral
language (e.g., ‘‘borrowing’’ rather than ‘‘stealing’’) or citing
injustices at the hand of the victim (i.e., their organizations).
Similarly, criminal theorists Sykes and Matza (1957) argued
against the prevailing theory that juvenile delinquency was the
result of learning a different set of values in low socioeconomic
environments. Instead, the authors suggested that juvenile
delinquents share society’s conventional values but, in certain
situations, use cognitive neutralization techniques to weaken
the apparent necessity of applying those values. The authors
identified several neutralization techniques such as denying
responsibility for one’s actions or denying that a victim had
been unjustly harmed (or harmed at all). Drawing on this
foundational work, organizational researchers have suggested
additional types of cognitive distortion techniques, that are
commonly found in organizations where systemic corruption is
uncovered.
• Moral disengagement theory posits that people generally
behave in ways that are consistent with their internal standards
of morality because they experience anticipatory negative
emotions such as guilt, shame, or remorse when they consider
deviating from those standards. However, individuals are
at times motivated (consciously or non-consciously) to
disengage this moral self-regulatory process in ways that fit
their needs, effectively bypassing the negative emotions that
would normally come from violating internal standards.
• Bandura (1986) articulated eight cognitive distortion
mechanisms by which individuals morally disengage. Moral
justification occurs when individuals justify their actions as
serving the ‘‘greater good’’ (as in the case of substandard jobs
being characterized positively as ‘‘economic development’’).
Euphemistic labeling involves using sanitized or convoluted
language to make an unacceptable action sound acceptable—
such as ‘‘borrowing’’ software purchased by someone else, or
engaging in ‘‘creative accounting.’’ Advantageous comparison
involves making a behavior seem less harmful or of no import
by comparing it to even worse behavior. For example, a person
who takes a ream of paper home from the office for personal
use might say, ‘‘It’s not like I’m taking a printer home with me.’’
With displacement of responsibility, people deflect
responsibility for their own behavior by attributing it to social
pressures or the dictates of others, usually a person of higher
power or authority (e.g., ‘‘I was just following orders’’).
Diffusion of responsibility allows individuals to avoid personal
feelings of culpability for their actions by hiding behind a
collective that is engaged in the same behavior, or by using the
rationale that ‘‘everyone else is doing it, too.’’ Distortion of
consequences involves misrepresenting the results of one’s
actions by minimizing them or focusing only on the positive.
Claiming that one’s (unethical) actions are ‘‘no big deal,’’ or that
they ‘‘don’t hurt anyone’’ are common ways of trying to
convince oneself and/or others that one’s behaviour is
acceptable because little or no harm is done. Attribution of
blame (also known as ‘‘blaming the victim’’) is the process of
justifying one’s behaviors in reaction to someone else’s
provocation or behavior (e.g., ‘‘It’s their own fault for trusting
others with this responsibility’’). The notion of ‘‘buyer beware’’
may be considered a broader example of the way business
behavior has been construed so as to make harming a victim
easily justifiable as being the victim’s own fault. Last,
dehumanization involves minimizing or distorting the humanity
of others so as to lessen identification with or concern for those
who might be harmed by one’s actions (e.g., ‘‘those clowns’’).
Additional examples of each moral disengagement mechanism
are provided in Table 1.
• Dispositional moral disengagement can be defined as ‘‘an
individual difference in the way that people cognitively process
decisions and behaviour with ethical import that allows those
inclined to morally disengage to behave unethically without
feeling distress’’ (Moore et al., 2012, p. 2). According to this
approach, people who have a tendency to morally disengage
will be more likely to engage in unethical or deviant behaviour
across situations.
• According to Beu and Buckley (2004), for instance, politically
astute leaders can influence followers toward unethical
behaviour by reframing actions and situations in ways that
draw attention away from ethical issues and by encouraging
the use of morally disengaged reasoning. An important part of
using their political skill effectively is the ability to inspire trust,
defined as one’s willingness to be vulnerable to another, which
in turn reduces others’ felt need to closely monitor their words
and deeds. In effect, the leader, whose rationale is trusted with
little thought or questioning, helps the follower to reinterpret
the situation using a morally disengaged lens.
• The very nature of moral disengagement is alarming because it
demonstrates the power of the human mind to distort
perceptions and rationales such that unethical thinking and
behavior is not recognized as such. If individuals’ perceptions of
unethical behavior are readily distorted in this way, it seems
plausible that employees could perceive (and report) that an
organization’s infrastructure is ethical when indeed unethical
rationales and practices exist and persist but are simply
unnoticed. In the following section, we thus caution against the
assumption that organizational infrastructures are ethical
because members—even many members—view them as such.
Instead, we argue that an ethical infrastructure may not only
harbor unethical thinking and behavior, but also, in some ways,
may make it more difficult for members to see certain types of
problems— particularly those of the day-to-day, less morally
intense variety (Jones, 1991). Further, we posit that moral
disengagement is, to some degree, an important factor in
preserving employees’ perceptions that their organization
enjoys an ethical infrastructure. Our argument rests on the
recognition that several fundamental human tendencies found
in prior work to motivate morally disengaged thinking may
actually be present more often in situations in which
employees perceive themselves to be part of an ‘‘ethical’’
infrastructure.
• Defined as ‘‘a motive or behavior that seeks to benefit the self’’
(Cropanzano, Stein, & Goldman, 2007, p. 6), self-interest is a
powerful human motive. A potential problem arises in that
both broad organizational objectives and specific performance
goals can at times be extremely challenging or even impossible
to achieve, and thus potentially motivate employees to take
shortcuts or engage in unethical behavior to avoid losing out on
maximum personal gain. Schein (2004) has noted that unethical
behavior, and the rationalizations for it, that are easily identifiable
to outsiders, including those in different functions within the
same organization, often go unrecognized by embedded
members for whom they have become part of the taken-for-
granted fabric of their environment. This is because
‘‘normatively appropriate’’ is largely a perceptual process that
can vary among individuals and groups who have chosen, over
time, to prioritize different bases for judging social action.
• The bad news, however, is that although strong ethical
infrastructures are likely to suppress blatantly self-interested
motivations and unethical behavior, they are not necessarily
equally effective at suppressing morally disengaged reasoning
and unethical behavior related to other motivations—such as
the desire to maintain a positive self-image or the desire to
reduce cognitive load—commonly linked to strong ethical
infrastructures. Indeed, in their original theorizing, Victor and
Cullen (1987) recognized that even the venerable benevolent
(or caring) ethical climate is imperfect: Corporations with caring
or rules climates may be more prone to violations of trade laws
than corporations with a professional climate ... when faced
with the dilemma of offering a bribe or losing a contract, an
employee from a caring climate may judge that s/he is
expected to give the bribe because the contract would help
people who work for the firm, even though it is illegal. (Victor &
Cullen, 1987, pp. 67–68). This suggests that even organizations
with noble ethical intentions prioritize some values over others,
and some groups or people over others (e.g., in-groups such as
employees over out-groups such as customers or competitors),
which creates a series of opportunities for distorted cognition
about what is appropriate (Giessner & van Quaquebeke, 2010).
• In one example from Margolis and Molinsky’s (2008, p. 856)
study of ‘‘necessary evils,’’ a police officer must evict a
delinquent tenant from her home. Although this action will
cause emotional and financial pain to the tenant, the officer
needs to carry out the act to comply with the law and protect
the rights of the landlord. The officer’s reasoning—‘‘Well, they
put themselves in this situation’’ (attribution of blame)—is
likely an institutionalized rationalization that helps to minimize
the discomfort of a challenging situation while maintaining the
positive self-image of officers who must undertake such
behavior.
• Ethical infrastructure, limited cognition, and moral
disengagement: Classic psychological research has shown
various risks resulting from humans’ desire to reduce cognitive
effort (Fiske & Taylor, 1984) and their susceptibility to social
influences. For example, followers in a hierarchy will often
automatically experience an ‘‘agentic shift’’ in which they
become an instrument of a perceived authority figure and do
not think carefully for themselves about the ethical
ramifications of their own (leader instructed) behavior
(Milgram, 1969). In the classic Milgram experiments and more
recent replications, participants used morally disengaged
language to explain why they continued to shock another
person when directed to do so by an experimenter: ‘‘I was just
doing what he told me’’ (displacement of responsibility; ‘‘Basic
instincts,’’ 2007). Similarly, work in social learning theory and
social information processing indicates that individuals learn
about norms and expected behaviors from those around them
(Bandura, 1986; Salancik & Pfeffer, 1978), thus sparing
themselves the cognitive effort of having to think through or
experience everything for themselves. And when it comes to
moral reasoning and behavior more generally, the finding that
most individuals operate at a ‘‘conventional level’’ of moral
development (Kohlberg, 1969; Treviño, 1992), wherein they
take their cue from what they see others around them doing—
suggests that individuals do not routinely ‘‘think through’’ the
ethical implications of every stimulus they face in their work
life.
• ‘‘the function of the cultural pattern [is] to eliminate
troublesome inquiries by offering ready-made directions for
use, to replace truth hard to attain by comfortable truisms, and
to substitute the self-explanatory for the questionable.’’
• As shown in Figure 1, the influence of a strong perceived ethical
infrastructure on decreased cognition, and hence potentially
increased moral disengagement, is proposed to operate in part
through increased trust, commitment, and identification. For
instance, ethical infrastructures have been linked empirically
and theoretically to trust (see Figure 1, Step 1). And people are
less suspicious of and less concerned about monitoring the
behaviors of those they trust and more open to absorbing new
knowledge from them without careful analysis. In short, trust
allows individuals to reduce their cognitive effort (see Figure 1,
Step 2). Thus, if trust minimizes the extent to which people are
likely to closely examine others’ rationales for action, it follows
that moral disengagement in co-workers may be less likely to
be noticed or questioned and more likely to be mimicked in
ethical infrastructures because of the trust that exists in such
environments (see Figure 1, Step 3).
• Recent scandals involving some of the most well respected
corporations in the world, including Johnson & Johnson, Merck,
and Toyota, provide some anecdotal evidence for this
possibility. In 2008, for instance, Johnson & Johnson initiated a
‘‘phantom recall,’’ instructing employees to surreptitiously buy
back problematic Motrin IB caplets from convenience stores
(Besser & Adhikari, 2010). Given Johnson & Johnson’s
reputation for recalling Tylenol in the early 1980s and its
corporate reputation for a climate of care, organizational
decision makers may have unconsciously engaged in moral
licensing when initiating and overseeing this discreet recall.
Furthermore, because the legal implications of the action were
unclear and the ultimate outcome was intended to be positive
(i.e., prevent sickness from tainted medication), there were
certainly multiple bases for rationalizing that the action was
‘‘morally justified’’ and in line with the company’s strong
ethical culture.
• Solutions: training can be used to help employees identify
morally disengaged reasoning in their own and others’ thinking;
devil’s advocates; “stop and think” moments; and ethics officers.
Those individuals would also need to be endowed with
sufficient power to avoid being blindly overruled or shouted
down by the majority.
• For example, extant research suggests that reaffirming one’s
core values helps to counter the negative effects of ego
depletion (i.e., weakened self-control) because it refocuses
one’s perspective on the bigger picture.
4. Building Houses on Rocks: The Role of the Ethical Infrastructure
in Organizations, Ann E. Tenbrunsel, Kristin Smith-Crowe and
Elizabeth E. Umphress
• We argue that designing ethical organizations requires an
understanding of how and why such systems work; that is, one
must be able to distinguish between ethical foundations of rock
and those of sand. First and foremost, such an understanding
requires an informed, theoretical identification of the
organizational elements that contribute to an organization’s
ethical effectiveness. We introduce the term ethical
infrastructure to describe these elements, which we identify as
incorporating the formal systems, the informal systems, and
the organizational climates that support the infrastructure. We
suggest that the first two elements can be categorized both by
the formality of these systems and by the mechanisms used
to convey the ethical principles, including communication,
surveillance, and sanctioning systems. We further argue that
these formal and informal elements are part of another
element of the ethical infrastructure—the organizational
climates that support the infrastructure—that permeates the
organization.
• The third, and equally crucial step is to understand how these
elements interact to influence ethical behavior. We propose a
theory of ethical embeddedness to describe these
interrelations. We argue that formal systems are embedded
within their informal counterparts, which in turn are embedded
within the organizational climates that support the
infrastructure. The strength and ultimate success of each layer,
we assert, depends on the strength of the layer in which it is
embedded. We use this theory to develop predictions about
the relationships between the ethical infrastructure and ethical
behaviors. We conclude by linking these predictions to their
associated practical implications, including offering
recommendations for organizations that desire to enhance
their ethical effectiveness. (Draw three layers to explain this
concept of ethical infrastructure).
• Formal systems are those that are documented, that could be
verified by an independent observer. We focus on three types
of formal systems that we believe to be the most prevalent and
the most directly observable: communication, surveillance, and
sanctioning systems. Formal communication systems are those
systems that officially communicate ethical values and
principles. Formal representations of such systems include
ethical codes of conduct, mission statements, written
performance standards, and training programs. Formal
surveillance systems entail officially condoned policies,
procedures, and routines aimed at monitoring and detecting
ethical and unethical behavior. Examples include the
performance appraisal itself as well as procedures for reporting
ethical and unethical actions, including reporting hot lines and
ethical ombudsmen. Formal sanctioning systems are those
official systems within the organization that directly associate
ethical and unethical behavior with formal rewards and
punishments, respectively. Perhaps the most obvious example
of such a system is one in which unethical behavior is clearly
and negatively related to performance outcomes, such as
evaluations, promotions, salary, and bonuses.
• Each of these processes is independent. It is possible, for
example, that a performance standard is set, but never
monitored, or that behavior is monitored, but not sanctioned.
Thus, it is important to recognize the contributions that each of
these mechanisms makes to the ethical infrastructure.
• Formal communication systems (ethical codes of conduct,
mission statements, written performance standards, and
training programs) are used quite frequently by organizations.
• Informal communication systems are defined as those
unofficial messages that convey the ethical norms within the
organization. Informal, “hallway” conversations about ethics,
informal training sessions in which organization members are
“shown the ropes,” and verbal and nonverbal behaviors that
communicate ethical principles all represent different
mechanisms by which ethical principles are informally
communicated.
• Informal Surveillance and Sanctioning Systems: In order for
informal communication systems to be effective,
there must be an accompanying informal surveillance system,
consisting of someone or some mechanism that can informally
monitor ethical and unethical behaviors. Informal surveillance
systems are those systems that monitor and detect ethical and
unethical behavior, but not through the official channels of the
formal surveillance systems. Rather, informal surveillance
systems are carried out through, among other channels,
personal relationships (e.g., peers) and extra-organizational
sources (e.g., the police). The informal representation of the
surveillance system may best resemble a spy network, an
“internal CIA.” Informal sanctioning systems are those systems
within organizations that directly associate ethical and
unethical behavior with rewards and punishments; however,
unlike its formal counterpart, informal sanctioning systems do
not follow official organizational channels. Informal sanctioning
systems may take the form of group pressure to behave in a
certain manner or the perceived consequences that are
experienced if one engages in certain ethical or unethical
activities. Organizational members may threaten to punish
someone for engaging in an ethical behavior, such as whistle
blowing, with such punishment including isolation from group
activities, ostracism (Bales, 1958; Feldman, 1984), and even
physical harm.
• In general, we define organizational climate as organizational
members’ shared perceptions regarding a particular aspect of
an organization; in other words, organizational climates are in
reference to something (e.g., ethics). Because climate is born
out of the context of an organization, climates vary across
different contexts. Also, because the experiences that
organizational members have of any given context are so
complex, multiple organizational climates for different aspects
of an organization exist simultaneously. We should note that
some theorists have made a fundamental distinction between
organizational climate and a related concept, organizational
culture, with the latter construct being essentially broader than
the former. However, for our purposes, we do not assume that
these are two distinct constructs, but rather that they are two
different perspectives (i.e., using different language and coming
from different disciplines) of the same phenomenon.
• Organizational climate consists of the perceptions of
organizational members (e.g., Schneider, 1990) regarding
ethics, respect, or procedural justice within organizations,
whereas formal ethical systems consist of tangible objects and
events pertaining to ethics, such as codes of ethics. Likewise,
the informal ethical system consists of tangible objects and
events relevant to ethics (e.g., conversations among workers),
while, again, climate is made of perceptions.
• At the root of the proposed curvilinear relationships between
elements of the ethical infrastructure and ethical behavior is a
proposed cognitive shift that occurs when an ethical
infrastructure is in place as compared to when such an
infrastructure is nonexistent. When an ethical infrastructure is
nonexistent, an individual must decide what is ethical. In
contrast, when an ethical infrastructure is in place, the
individual interpretation of what is ethical is supplanted by the
interpretation that is advanced by the organization. Individuals
in this type of organization no longer rely on their own values;
rather, they look to the organization to decide what is ethical.
• We argue that a weak ethical infrastructure, because it does
not promote individual reflection, results in more unethical
behavior than when the ethical infrastructure is nonexistent or
is strong. When an organization has a weak ethical
infrastructure, individuals exhibit more unethical behavior than
when such an infrastructure is nonexistent because they
engage in less sophisticated moral reasoning; instead, they look
to the organization for guidance but don’t find much help. A
weak ethical infrastructure also produces more unethical
behavior than a strong ethical infrastructure. In both cases, the
individual looks to the organization for guidance. However, by
definition, in a strong ethical infrastructure, unlike in a weak
structure, the organization is clearly conveying the importance
of ethical principles. Consequently, when an organization has a
strong ethical infrastructure, individuals engage in more ethical
behavior than when the organization has a weak ethical
infrastructure, because the organization has sent a signal that
ethical behavior is important. While the reason for this ethical
behavior is fundamentally different for a strong ethical
infrastructure (“I am doing this because the organization has
told me it is important”) than for a nonexistent ethical
infrastructure (“I am doing this because it is the right thing to
do”), the end result is the same. Ethical behavior is therefore
higher when a surveillance and sanctioning system is either
nonexistent or strong than when such a system is weak, thus
producing the curvilinear relationship.
• Tenbrunsel and Messick (1999) provide an illustration of this
phenomenon in the domain of formal surveillance and
sanctioning systems. They argued and found support for the
proposition that cooperative behavior would be lower when a
weak versus a nonexistent sanctioning system was in place.
Using a prisoner’s dilemma as the context, subjects had the
option to either cooperate by adhering to an industry
agreement to reduce emissions or defect by not adhering to
such an agreement. Half of the subjects were told that there
would be no fines associated with defection (non-existent
surveillance and sanctioning system), whereas the other half
were told that there would be a weak surveillance and
sanctioning system (characterized by a small probability of
being caught and small fines if defection was noted). Results
provided support for the notion that the weak system would
increase undesirable behaviors, with defection rates higher in
the weak sanctioning condition than in the condition in which
no sanctions were present. An additional study extended these
findings, illustrating that a weak sanctioning system produced
less cooperative behavior than both a nonexistent sanctioning
system and a strong sanctioning system.
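The cognitive-shift logic behind this result can be sketched numerically. In the minimal expected-utility sketch below (all payoffs, probabilities, and fines are assumed for illustration, not taken from the study), introducing any sanction reframes the choice as a business calculation; a weak sanction then leaves defection profitable, while a strong one does not:

```python
# Illustrative sketch of the weak-vs-strong sanction logic. The key
# assumption: once a sanction exists, actors evaluate defection as a
# business decision (expected payoff) rather than an ethical one.

def defection_payoff(gain, p_caught, fine):
    """Expected monetary payoff of defecting under a sanctioning system."""
    return gain - p_caught * fine

GAIN = 100.0  # assumed benefit of ignoring the emissions agreement

# Nonexistent system: no sanction triggers the business frame, so the
# choice stays an ethical one and many actors cooperate despite the gain.

# Weak system: small detection probability, small fine -> defection
# "pays" within the business frame the sanction itself has activated.
weak = defection_payoff(GAIN, p_caught=0.1, fine=50.0)     # 95.0

# Strong system: high detection probability, large fine -> defection
# no longer pays, even inside the business frame.
strong = defection_payoff(GAIN, p_caught=0.9, fine=500.0)  # -350.0

print(weak, strong)
```

This mirrors the curvilinear pattern: defection is deterred by ethics (no system) or by deterrence (strong system), but a weak system undermines both.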
• Ethical systems vary in the degree to which they reflect an
organization’s commitment to ethical principles, which in turn
determines how strongly they shape an individual
employee’s ethical behavior. The lower the perceived
commitment to ethical principles, the less salient they are in
the organizational member’s experience and hence the less
influence they have on an individual’s behavior. We
argue that elements that reflect a greater degree of
commitment to ethical values are those that are more inherent
to the organization. True belief in ethical principles is reflected
not so much in what is said but in what is done. In this sense,
we predict that formal elements of the ethical infrastructure
reflect a weaker degree of commitment than informal
elements, which in turn reflect a weaker degree of
commitment than the relevant organizational climates.
• At the base of our proposition is the notion of consistency
between the various elements of the ethical infrastructure. In
order for codes of conduct and ethical training to have an
impact, they must be consistent with more systemic ethical
elements, such as the organization’s informal reinforcements
and the relevant organizational climates. If such congruence is
missing, then employees receive a mixed message,
substantially reducing the impact that these formal systems
might have. For example, imagine a situation in which an
organization engages in extensive ethical training, but has an
informal reward system that promotes individuals based on the
bottom-line, independent of the means used to get there. The
effectiveness of this training would be substantially diminished
in comparison to a situation in which the organization’s
informal system of promotions rewarded individuals who were
ethical.
• Following the strategically-focused climate argument (Smith-
Crowe et al., in press), an organization’s ethical infrastructure
will only be effective to the extent that the elements within it
act in concert. If they are to be effective, formal ethical systems
must reside in informal reinforcements and organizational
climates that are solid. If not, the formal systems act more like
a Band-Aid than an antibiotic, addressing the symptoms, but
not the underlying causes. Similarly, if the informal system is
incongruent with the pertinent climates, the effectiveness of
that informal system is compromised. We therefore argue that
stronger elements, or those which reflect a deeper
commitment to ethical principles and ideals, moderate the
effectiveness of weaker elements.
• Practically, our discussion has several implications for
organizations that wish to increase their ethical effectiveness.
First, it suggests that a focus on formal systems—which are the
most visible and the most highly touted—isn’t enough. Rather,
it is important to delve below the ethical exterior to uncover
other, perhaps more important, elements, such as informal
systems and organizational climates. Second, the relationship
between these elements is complicated, with half-hearted
attempts producing potentially disastrous results. Third, one
must look at the elements of the ethical infrastructure in
conjunction with one another, for it is really the interplay
among them that is critical.
5. “Does Power Corrupt or Enable? When and Why Power
Facilitates Self-Interested Behavior”, Katherine A. DeCelles, D. Scott
DeRue, Joshua D. Margolis, Tara L. Ceranic
• The questions of when and why people will advance their
own interests at the expense of the common good are
evident across a wide range of organizational behavior
research. Therefore, we define self-interested behavior as
actions that benefit the self and come at a cost to the
common good. Power presents organizations with a paradox
related to self-interested behavior. On the one hand, there is
a widespread belief and evidence that power corrupts, and
people in positions of power can have a substantial negative
impact on the common good by acting solely in their own self-
interest. Yet, power can increase perspective taking and
interpersonal sensitivity, suggesting that power might
increase the emphasis placed on others’ needs as opposed to
one’s own interests. In parallel to this research on power, it
has been argued that self-interested behaviour is a function
of individuals’ moral identity. Moral identity is the extent to
which an individual holds morality as part of his or her self-
concept and it has been shown to influence the degree to
which people emphasize their own versus others’ needs.
• We expect self-interested behavior to be a function of both
power and moral identity. We expect this interaction between
power and moral identity to manifest itself because
individuals’ traits can increase the accessibility of cognitive
concepts and then influence how people interpret
information, especially in situations where an individual
perceives him- or herself to be autonomous or powerful.
Based on this research, it follows that people with high moral
identities will have more readily available moral concepts in
their accessible mental structures and that when experiencing
feelings of power, they will be more aware of the moral
implications of a situation relative to those with a lower moral
identity. Reynolds (2006) referred to this recognition by an
individual of a situation’s moral content as “moral
awareness.” Individuals with higher moral identities are likely
to have greater moral awareness (Reynolds, 2006), which we
argue should lead them to engage in even less self-interested
behavior when feeling powerful because they are likely to be
especially aware of the moral implications of their actions.
Conversely, feeling powerful, yet having a lower moral
awareness (associated with a lower moral identity), likely
results in individuals not seeing any problem with benefiting
themselves at the expense of others.
• Across two studies, we found that power predicts self-
interested behavior differently depending on moral identity.
In our first study of working adults, there was a negative
association between trait power and self-interested work
behavior when individuals had a high moral identity, yet a
positive relationship between trait power and self-interest
when individuals had a low moral identity.
• Our research has important practical implications. As
organizations look to promote people to more powerful
positions or empower people with greater discretion, our
research suggests, understanding how central morality is to
the person’s self-concept will be a critical consideration for
predicting whether that person will engage in self-serving
behavior. For employees who are already in positions of
power or who exhibit strong trait power, it is important that
organizations work to develop their moral identity.
6. “Managing Unethical Behavior in Organizations: The Need for a
Behavioral Business Ethics Approach”, David De Cremer, Judge
Business School, University of Cambridge; Wim Vandekerckhove,
University of Greenwich Business School
• A prescriptive approach thus implies that people are rational
human beings, who make conscious decisions about how to
act. As a result, prescriptive approaches to business ethics
assume that bad people do generally bad things and good
people do good things, because they are rational decision
makers. Explaining situations whilst sticking to this rational way
of reasoning is attractive for a variety of reasons (De Cremer,
2009, De Cremer & Tenbrunsel, 2012): (a) it is a simple
assumption that promotes an economic way of thinking about
moral violations, (b) it allows one to blame a few bad apples for
the emerging violations, and (c) it provides justified grounds to
punish those regarded as rationally responsible. However,
many situations exist where good people do bad things, an
observation that has received considerable empirical support.
These observations challenge the accuracy of the prescriptive
approach in predicting the extent to which so-called rational
human beings will display ethical behavior. It seems to be the
case that because of rather irrational, psychological tendencies
humans do not always recognize the moral dilemma at hand
and engage in unethical behaviors without being aware of it.
Indeed, Tenbrunsel and Messick even note that people do not
see the moral components of an ethical decision, not so much
because they are morally uneducated, but because
psychological processes fade the ethics from an ethical
dilemma.
• To make sense of the fact that good people can do bad things,
an alternative viewpoint is needed that accounts for people’s
morally irrational behavior. We propose that this alternative
viewpoint is a descriptive approach that examines more closely
how people actually make decisions and why they sometimes do
not act in line with the moral principles that are universally
endorsed. Indeed, it is intriguing to observe that the actors in
many business scandals do not see themselves as having a bad
and ethically flawed personality. They consider themselves as
good people who have slipped into doing something bad. How
can we explain this? An interesting idea put forward by the
behavioral business ethics approach is that many organizational
ethical failures are not only caused by the so-called bad apples.
In fact, closer inspection may reveal that many ethical failures
are in fact committed by people generally considered to be
good apples, but depending on the barrel they are in they may
be derailed from the ethical path.
• Taken together, the assumption that people confronted with
moral dilemmas are automatically aware of what they should
be doing, and are therefore in control to do the good thing, is
limited in its predictive value because humans deviate from
what rational approaches predict.
• Or, as Tenbrunsel and Smith-Crowe (2008, p. 548) note:
“Behavioral ethics is primarily concerned with explaining
individual behavior that occurs in the context of larger social
prescriptions. The role of behavioral ethics in addressing ethical
failures is to introduce a psychological-driven approach that
examines the role of cognitive, affective and motivational
processes to explain the how, when, and why of individual’s
engagement in unethical behaviour”.
• These two topics illustrate how psychological processes play a
role in shaping people’s moral judgments and actions that are
relevant to business and organizations: (a) the processes and
biases taking place during ethical decision making and (b) the
impact of the social situation on how ethical judgments and
actions are framed and evaluated. Research on these two
topics advocates the view that when it comes down to ethics,
many people are followers, both in implicit and explicit ways.
More precisely, the field of behavioral ethics makes clear that
people are in essence followers of their own cognitive biases
and the situational norms that guide their actions.
• Bounded ethicality includes the workings of our human
psychological biases that facilitate the emergence of unethical
behaviors that do not correspond to our normative beliefs.
Specifically, people develop or adhere to cognitions (biases,
beliefs) that allow them to legitimize doubtful, untrustworthy
and unethical actions. Importantly, these cognitive biases
operate outside our own awareness and therefore in a way
make us blind to the ethical failures we commit. In addition,
this blindness is further rooted in the self-favoring belief
that in comparison to the average person one can be looked
upon as fairer and more honest. These self-favoring
interpretations of who they are in terms of morality are used
by humans in implicit ways to infer that they will not act
unethically, which as a result lowers their threshold for
monitoring and noticing actual violations of their own ethical
standards.
• This concept of bounded ethicality thus literally includes a
blindness component, which can be seen as activating an
ethical fading process, which as Tenbrunsel notes is a fading
process that removes the difficult moral issues from
a given problem or situation, hence increasing unethical
behaviour. Below, we briefly discuss a number of psychological
processes that influence people to show unethical behavior
even if it contradicts their own personal beliefs about ethics.
These processes are: moral disengagement, framing, anchoring
effects, escalation effects, construal level, forecasting errors,
and the should-want self distinction.
• Moral disengagement: Moral disengagement can be defined as
an individual’s propensity to evoke cognitions which
restructure one’s actions to appear less harmful, minimize
one’s understanding of responsibility for one’s actions, or
attenuate the perception of the distress one causes others
(Moore, 2008, p. 129).
• Framing: How a situation is cognitively represented affects
how we approach moral dilemmas and make decisions. Insights
building upon the concept of loss aversion (the notion that
people perceive losses as more negative than they regard gains
of an equal magnitude as positive) suggest that self-interest
looms larger when people
are faced with loss. Indeed, losses are considered more
unpleasant than gains are considered pleasurable and hence
invite more risk-taking to avoid the unpleasant situation. Thus,
risk-taking often leads to behavior violating ethical standards.
To avoid making losses, firms can resort to unethical practices.
The 2008 Financial Crisis is one example. Put differently: when
looking at a situation in terms of losses, corruption is never far
away.
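The loss-aversion claim above has a standard formalization in Kahneman and Tversky's prospect-theory value function. A brief sketch (using their commonly cited 1992 parameter estimates, here purely for illustration) shows how a loss of a given size is felt roughly twice as strongly as an equal gain:

```python
# Sketch of the prospect-theory value function formalizing loss
# aversion. Parameters alpha, beta, lam are Tversky & Kahneman's
# 1992 estimates, used here only to illustrate the asymmetry.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

print(value(100))   # subjective value of a $100 gain (≈ 57.5)
print(value(-100))  # subjective value of a $100 loss (≈ -129.5)
```

Because the loss is weighted by lam ≈ 2.25, a prospective loss dominates an equal prospective gain, which is why loss frames invite the extra risk-taking described above.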
• Anchoring effects: This effect holds that our judgments and
decisions are strongly influenced by the information that is
available and accessible. Importantly, this information can be
very arbitrary or even irrelevant to the decision and judgments
one is making. Rumours of sexual harassment by superiors can
bias one’s own judgments of sexual harassment by subordinates.
A classic demonstration is Tversky and Kahneman’s experiment
in which an arbitrary roulette spin anchored participants’
estimates of the percentage of African nations in the UN.
• Escalation effects: One important observation concerns the
fact that those showing bad behavior never arrive immediately
at the stage of doing bad. Rather, it seems like bad behavior
emerges slowly and gradually as can be inferred from remarks
like “I never thought I would show this kind of behavior.” In the
literature this effect is referred to as the escalation effect or the
slippery slope effect. The famous social psychology experiment
by Milgram (1974) illustrates this principle: participants did not
begin at 450 volts; the shocks started low and escalated in
small increments. Thus, many unethical decisions and actions
grow slowly into existence and this escalation process itself is
not noticed consciously. For example, research by Cain,
Loewenstein, and Moore (2005) described how auditors are
often blind to clients’ internal changes in accounting practices,
but only if the changes appear gradually.
• Construal level: According to construal level theory, acts that
are in the distant future cannot be experienced directly and are
therefore hypothetical. Hypothetical situations bring their own
mental constructions with them, and a consequence of this
process is that more distant events (e.g., events in the long
term) are represented with less concrete detail. Under such
circumstances, people adhere more easily to moral standards
as guidelines for their decisions and judgments. In contrast,
events that are closer in time are represented in less abstract
and more concrete ways. Under those circumstances people
will rely more on concrete details and relevant contextual
information to make decisions and judgments. Then, egocentric
tendencies will more easily influence the actions one takes.
• Forecasting errors: Participants consistently overestimated
their future emotional reactions to both positive and negative
events (Gilbert et al., 1998;
Wilson, Wheatley, Meyers, Gilbert, & Axsom, 2000). With
respect to what people expect they will do, literature on
behavioral forecasting shows that people overestimate their
tendency to engage in socially desirable behaviors like being
generous or cooperative (Epley & Dunning, 2000), and
underestimate their tendencies toward deviant and cruel
behavior like providing electric shocks (Milgram, 1974).
Moreover, people also overestimate their willingness to forgive
moral transgressions by overvaluing restorative
tactics such as offering apologies (De Cremer, Pillutla, &
Reinders Folmer, 2011). In a similar vein, it also follows that
people are biased in their predictions in such a way that they
will predict to behave more ethically than they actually will do
in the end.
• Should-want Selves: This distinction was introduced by
Bazerman et al. (1998) and is used to describe intrapersonal
conflicts that exist within the human mind; notably conflicts
between what we morally should be doing and what in reality
we want to do. Specifically, people predict that they will act
more morally in situations than they actually do when being
confronted with these situations. These faulty perceptions and
estimates can be explained by the distinction between should
and want selves. The “want” self is a reflection of people’s
emotions and affective impulses. Basically, the want self is
characterized more as “hot-headed”. The “should” self, in
contrast, is characterized as rational and cognitive, and can
thus be looked upon as “cool-headed”. Applying this distinction
to our forecasting problem, it follows that the “should” self is
more active when making decisions for the long term, whereas
the “want” self does more of the talking when it concerns
short-term decisions. Morality and ethics as standards to live by
are thus more accessible and guiding when making predictions
towards the future. Moreover, because people are generally
optimistic and have great confidence in their own judgments
they will consider their predictions towards the future as valid
and reliable.
• Social conditions: Finally, in 1971 Zimbardo (2007) conducted
an impressive experiment at the Stanford University campus in
which participants assumed the roles of “prisoner” or “guard”
within an experimentally devised mock prison setting.
Specifically, many of the participants classified as “prisoners”
were in serious distress and many of the participants
classified as “guards” were behaving in ways which brutalized
and degraded their fellow participants. Participants were so
immersed in the prison setting that they took their roles
too seriously, leading to behavior that was considered
inappropriate and unethical at times. This study shows the
powerful influence of organizational roles and how they can
implicitly shape people’s beliefs and consequently their
actions.
• Moral Distance: This idea of context being a powerful
determinant for people to act in bad and unethical ways
towards others has been central in the work of Bauman on
“Moral Distance” (Bauman, 1991). The notion of moral distance
holds that people have ethical concerns only about others who
are near to them. As the distance increases, it becomes easier
to behave in unethical ways.
• Organisational Features: A first organizational feature is the
kind of industry people may work in. For example, the LIBOR
scandal where traders manipulated the interest rate known as
Libor illustrates that a context defined in terms of finance
actually encouraged dishonest behavior. A second
organizational feature is the structure of the organization,
which creates more versus less distance towards others and
can influence the degree of unethical behavior. Based on
Bauman’s idea (1991, p. 26) that bureaucracy functions as a
“moral sleeping pill,” it stands to reason that mechanistic
organization structures introduce more distance and hence
allow more unethical behaviors to emerge.
• In the 1990s Miceli, Near and Dworkin conducted extensive
descriptive research on whistleblowers (for an overview see
Miceli, Near & Dworkin, 2008). This work has caused a huge
shift in how prescriptive business ethics discusses
whistleblowing. (To be read for whistleblowing)
7. Moral Disengagement in the Corporate World, Jenny White,
Albert Bandura and Lisa A. Bero, Accountability in Research,
16:41–74, 2009.
• In the course of socialization, individuals adopt standards of
right and wrong that serve as guides for conduct. They monitor
their conduct, judge it in relation to their moral standards and
the conditions under which it occurs, and regulate their actions
accordingly. They do things that give them satisfaction and a
sense of self-worth, and they refrain from behaving in ways
that violate their moral standards because such conduct will
bring self-condemnation. However, moral standards do not
function as unceasing internal regulators of conduct. Self-
regulatory mechanisms do not operate unless they are
activated. Many psychosocial manoeuvres can be used to
selectively disengage moral self-sanctions. Indeed, large-scale
inhumanities are typically perpetrated by people who can be
considerate and compassionate in other areas of their lives.
8. Ethically Adrift: How Others Pull Our Moral Compass from True
North, and How We Can Fix It, Moore, C., and F. Gino,
http://nrs.harvard.edu/urn-3:HUL.InstRepos:10996801
• The fact that human survival depends on finding ways to live
together in peaceful, mutually supportive relations created an
evolutionary imperative for fundamental moral behaviors such
as altruism, trust, and reciprocity. In other words, we are moral
because we are social.
• However, much of our immorality can also be attributed to the
fact that we are social animals. In other words, this chapter is
about why we are immoral because we are social. When he
was sentenced to six years in prison for fraud and other
offenses, former Enron CFO Andy Fastow claimed, “I lost my
moral compass and I did many things I regret” (Pasha, 2006).
Fastow’s statement implies that if his moral compass had been
in his possession, he would have made better choices. In
contrast, we argue that unethical behavior stems more often
from a misdirected moral compass than a missing one. Given
the importance of morality to our identities, we would notice if
our moral compass went missing. However, a present but
misdirected moral compass could seduce us with the belief that
we are behaving ethically when we are not, while allowing us to
maintain a positive moral self-image. The idea that one’s moral
compass can veer away from “true North” has a parallel with
navigational compasses.
Conditions within a local environment, such as the presence of
large amounts of iron or steel, can cause the needle of a
navigational compass to stray from magnetic North, a
phenomenon called magnetic deviation. Explorers who are
aware of this phenomenon can make adjustments that will
protect them from going astray, but laymen can veer wildly off
course without being aware they are doing so.
• What forces are both powerful and subtle enough to cause
people to believe their actions are morally sound when in fact
they are ethically adrift? Existing research has offered two main
explanations for this phenomenon. The first considers
individuals who are ethically adrift to be “bad apples” whose
moral compasses are internally damaged. This explanation for
ethical drift harkens back to Aristotelian notions of human
virtue and persists in contemporary discussions of character as
the foundation of morality (cf., Doris, 2002).
• Consistent with this explanation, scholars have identified some
(relatively) stable individual differences that demagnetize
moral compasses, leading to unethical behavior. According to
this view, a deviated moral compass is evidence of an
individual’s faulty human nature. Indeed, the idea that
psychometric tests can identify “bad apples” before they are
hired underlies the common practice of integrity testing among
employers.
• In this chapter we focus instead on an increasingly dominant
alternative view, grounded in moral psychology and behavioral
ethics, that suggests that individuals’ morality is malleable
rather than stable. This alternative perspective proposes two
main reasons why we can become ethically adrift: intrapersonal
reasons (caused by human cognitive limitations) and
interpersonal reasons (caused by the influence of others). We
describe both of these reasons briefly below, before turning
the rest of our attention to the latter of these two.
• Social processes that facilitate neglect:
The research overviewed in this section suggests that social
norms and social categorization processes can lead us to
neglect the true moral stakes of our decisions, dampening
our moral awareness and increasing immoral behavior. Rather
than driving our own destiny, we look for external cues that
allow us to relinquish control of the wheel. Put another way,
“one means we use to determine what is correct is to find out
what other people think is correct”, a concept known as social
proof.
• We are more likely to engage in altruistic behavior if we see
others doing so and more likely to ignore others’ suffering if
others near us are similarly indifferent. We are even more likely
to laugh at jokes if others are also laughing at them. In general,
the more people engage in a behavior, the more compelling it
becomes, but the actions of one person can still influence our
behavior. Some of Bandura’s classic studies showed how
children exposed to an aggressive adult were considerably
more aggressive toward a doll than were children who were
not exposed to the aggressive model.
Together, this research suggests that others—either in groups
or alone—help to establish a standard for ethical behavior
through their actions or inaction. These “local” social norms
provide individuals with the proof they need to categorize
behavior as appropriate or inappropriate. Repeated exposure
to behavioral norms that are inconsistent with those of society
at large (as is the case, for example, with the subcultures of
juvenile delinquents) may socialize people to alter their
understanding of what is ethical, causing broader moral norms
to become irrelevant. Thus, when a local social norm neglects
morally relevant consequences, it dampens moral awareness,
and through this dampening, will increase unethical behavior.
• Social categorization: Social categorization is the psychological
process by which individuals differentiate between those who
are like them (in-group members) and those who are unlike
them (out-group members). Social categorization amplifies the
effect of social norms, as norms have a stronger effect on our
behavior when we perceive those enacting them to be similar
to ourselves. Unfortunately, this means that if we socially
identify with individuals who engage in unethical behavior, our
own ethical behavior will likely degrade as well. In one study,
college students were asked to solve simple math problems in
the presence of others and had the opportunity to cheat by
misreporting their performance and leaving with undeserved
money. Some participants were exposed to a confederate who
cheated ostentatiously (by finishing the math problems
impossibly quickly), leaving the room with the maximum
reward. Unethical behavior in the room increased when the
ostentatious cheater was clearly an in-group member (a
member of the same university as the participants) and
decreased when he was an out-group member (a student at a
rival university).
• These findings suggest an intersection between social norm
theory and social identity theory. Essentially, people copy the
behavior of in-group members and distance themselves from
the behavior of out-group members, and then use this behavior
to maintain or enhance their self-esteem, but in two different
ways. In-group members’ transgressions are perceived to be
representative of descriptive norms (those that specify how
most people behave in a given situation) and thus as less
objectionable than the same behavior by an out-group
member. In contrast, when assessing the immoral behavior of
an out-group confederate, people highlight injunctive norms
(those that refer to behaviors that most people approve or
disapprove of) and distance themselves from this “bad apple.”
Highlighting the different types of norms, depending on
whether an in-group or out-group member is modeling the
behavior, helps individuals maintain a distinctive and positive
social identity for their in-group.
• Another consequence of social categorization is out-group
mistreatment. Categorizing individuals as members of an out-
group allows us to dehumanize them, to exclude them from
moral considerations, or to place them outside our “circle of
moral regard”, and thus mistreat them without feeling (as
much) distress. At a fundamental level, we conceive of out-
group members as less human and more object-like than in-
group members. Recent neurophysiological research has even
found that individuals process images of extreme out-group
members, such as the homeless or drug addicts, without many
of the markers that appear when they look at images of other
people (Harris & Fiske, 2006). Brain-imaging data even show
that individuals manifest fewer signs of cognitive or emotional
distress when they are asked to think about sacrificing the lives
of these extreme out-group members than when they
contemplate sacrificing the lives of in-group members.
• Finally, social categorization also leads us to feel psychologically
closer to those whom we have categorized as members of our
in-group than to those we have categorized as out-group
members. When people feel connected to others, they notice
and experience others’ emotions, including joy,
embarrassment, and pain. As individuals grow close, they take
on properties of each other and psychologically afford each
other “self” status. Indeed, copycat crimes are often
perpetrated by individuals who feel a psychological connection
to the models they are emulating. In other words, having a
psychological connection with an individual who engages in
selfish or unethical behavior can influence how one’s own
moral compass is oriented.
• The enhancement options being discussed include radical extension of human health-span,
eradication of disease, elimination of unnecessary suffering, and augmentation of human
intellectual, physical, and emotional capacities
What Are The Human Enhancements So Far?
• The Bionic Man-Jesse Sullivan
• BrainGate--allows a person to manipulate objects in the world using only the
mind
• Cochlear Implants and Night Vision and Silent Talk
• Affective BCIs: Electrocorticography (ECoG) and Electroencephalography (EEG)
• Exoskeletons and Flexible Battlesuits-MIT’s Soldier Nanotechnologies
• Respirocyte-an artificial nanoscale red blood cell
• Pharmacological Enhancements. Stimulant drugs-Ritalin and Adderall, used by
many college students to boost concentration and ward off sleep; Provigil, used
to improve working memory and brighten mood; Anabolic steroids ; Viagra;
Aricept-improves verbal and visual memory; Resveratrol-a purported life extender.
What Are The Human Enhancements So Far?
• Hans Moravec, former director of robotics at Carnegie-Mellon University and
developer of advanced robots for both NASA and the military, popularized the
idea of living perpetually via a digital substrate.
• A machine endowed with the virtue of temperance would not have any desire for
excess of any kind, not even for exponential self-improvement, which might lead
to a superintelligence posing an existential risk for humanity. Since virtues are an
integral part of one’s character, the AI would not have the desire of changing its
virtue of temperance.
Should We Allow AGIs?—The Control Problem
• True AGIs will be capable of universal problem solving and recursive self-improvement.
• Consequently, they have potential of outcompeting humans in any domain essentially making humankind
unnecessary and so subject to extinction.
• Kurzweil holds that “intelligence is inherently impossible to control,” and that despite any human attempts at
taking precautions, “by definition . . . intelligent entities have the cleverness to easily overcome such barriers.”
• This presents us with perhaps the ultimate challenge of machine ethics: How do you build an AI which, when it
executes, becomes more ethical than you?
• “AI Safety Engineering” field emerging: A common theme in AI safety research is the possibility of keeping a
superintelligent agent in a sealed hardware so as to prevent it from doing any harm to humankind-- Eric
Drexler
Should We Allow AGIs?—The Control Problem
• Nick Bostrom, a futurologist, has proposed an idea for an Oracle AI (OAI), which would be only capable of
answering questions.
• Finally, in 2010 David Chalmers proposed the idea of a “leakproof” singularity. He suggested that for safety
reasons, AI systems first be restricted to simulated virtual worlds until their behavioral tendencies could be fully
understood.
• The Ted Kaczynski Manifesto: …What we do suggest is that the human race might easily permit itself to drift into
a position of such dependence on the machines that it would have no practical choice but to accept all of the
machines’ decisions… we will be so dependent on them that turning them off would amount to suicide.”
• Technological slavery.
The Ethics of AI
• The Ethics of Economic Inequality—Between nations and within nations. Need for
a Universal Minimum Wage Solution—Thomas Piketty
• The ethics of human substitution by industrial robots in employment-not a
problem in economies with declining birthrates and shrinking populations
• If AGIs can outsmart human cognitive and emotional intelligence, they may be
sapient and even sentient, and thus capable of robot suffering. Using such
robots merely as a means would then be unethical
• The issue of collateral damage and trigger-happiness in AI Warfare
• The ethics of increasing E-Waste due to robotisation, including radio-frequency
radiation
• The Ethics of Face Recognition Technology-Loss of Privacy vs Reduction in Crime
• The Ethics of Singularity- Should we allow this?
A Dystopian View of AI
The Ethics of Human Dignity
What is Human Personhood?
• Immanence: Whereby we are embodied spirits, ends-in-themselves,
with a human and divine destiny.
The great majority of published papers are purely philosophical in nature and do little
more than reiterate the need for machine ethics and argue about which set of moral
convictions would be the right ones to implement in our artificial progeny
(Kantian [33], Utilitarian [20], Jewish [34], etc.). However, since ethical norms
are not universal, a “correct” ethical code could never be selected over others to
the satisfaction of humanity as a whole.
Consequently, we propose that purely philosophical discussions of ethics for
machines be supplemented by scientific work aimed at creating safe machines in
the context of a new field we will term “AI Safety Engineering.” Some concrete
work in this important area has already begun [17, 19, 18]. A common theme in
AI safety research is the possibility of keeping a superintelligent agent in a sealed
hardware so as to prevent it from doing any harm to humankind. Such ideas originate
with scientific visionaries such as Eric Drexler, who has suggested confining
transhuman machines so that their outputs could be studied and used safely [14].
Similarly, Nick Bostrom, a futurologist, has proposed [9] an idea for an Oracle AI
(OAI), which would be only capable of answering questions. Finally, in 2010
David Chalmers proposed the idea of a “leakproof” singularity [12]. He suggested
that for safety reasons, AI systems first be restricted to simulated virtual worlds
until their behavioral tendencies could be fully understood under the controlled
conditions.
Roman Yampolskiy has proposed a formalized notion of an AI confinement
protocol which represents “AI-Boxing” as a computer security challenge [46]. He
defines the Artificial Intelligence Confinement Problem (AICP) as the challenge of
restricting an artificially intelligent entity to a confined environment from which it
can’t exchange information with the outside environment via legitimate or covert
channels if such information exchange was not authorized by the confinement au-
thority. An AI system which succeeds in violating the CP protocol is said to have
escaped [46].
Similarly we argue that certain types of artificial intelligence research fall under
the category of dangerous technologies and should be restricted. Classical AI
research in which a computer is taught to automate human behavior in a particular
domain such as mail sorting or spellchecking documents is certainly ethical and
does not present an existential risk problem to humanity. On the other hand, we argue that
Artificial General Intelligence (AGI) research should be considered unethical.
This follows logically from a number of observations. First, true AGIs will
be capable of universal problem solving and recursive self-improvement.
Consequently, they have the potential of outcompeting humans in any domain,
essentially making humankind unnecessary and so subject to extinction. Additionally,
a true AGI system may possess a type of consciousness comparable to the human type,
making robot suffering a real possibility and any experiments with AGI unethical
for that reason as well.
A similar argument was presented by Ted Kaczynski in his famous
manifesto [26]: “It might be argued that the human race would never be foolish
enough to hand over all the power to the machines. But we are suggesting neither
that the human race would voluntarily turn power over to the machines nor that
the machines would willfully seize power. What we do suggest is that the human
race might easily permit itself to drift into a position of such dependence on the
machines that it would have no practical choice but to accept all of the machines’
decisions. As society and the problems that face it become more and more complex
and machines become more and more intelligent, people will let machines
make more of their decisions for them, simply because machine-made decisions
will bring better results than man-made ones. Eventually a stage may be reached at
which the decisions necessary to keep the system running will be so complex that
human beings will be incapable of making them intelligently. At that stage the
machines will be in effective control. People won't be able to just turn the machines
off, because they will be so dependent on them that turning them off would amount
to suicide.” (Kaczynski, T.: Industrial Society and Its Future. The New York Times,
September 19, 1995)
Humanity should not put its future in the hands of the machines since it will not
be able to take the power back. In general a machine should never be in a position
to terminate human life or to make any other non-trivial ethical or moral judgment
concerning people.
2. Why and How Should Robots Behave Ethically?, Benjamin Kuipers, Computer
Science & Engineering, University of Michigan, 2260 Hayward Street, Ann Arbor,
Michigan 48109 USA. Email: kuipers@umich.edu
For an intelligent robot to function successfully in our society, to cooperate with
humans, it must not only be able to act morally and ethically, but it must also
be trustworthy. It must earn and keep the trust of humans who interact with it.
If every participant contributes their share, everyone
gets a good outcome. But each individual participant may do even better by
optimizing their own reward at the expense of the others. With self-centered utility
functions, each participant “rationally” maximizes their own expected utility,
often leading to bad outcomes for everyone.
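The social-dilemma logic described above can be made concrete with a toy public-goods game; the payoff function, multiplier, and all numbers below are invented for illustration.

```python
# Toy public-goods game (invented payoffs): contributions are pooled,
# multiplied, and shared equally among all four players.

def payoff(my_contribution, others_contributions, multiplier=1.6):
    """Net gain for one player: equal share of the pool minus own stake."""
    pool = (my_contribution + sum(others_contributions)) * multiplier
    share = pool / (1 + len(others_contributions))
    return share - my_contribution

others = [10, 10, 10]  # three other players each contribute 10
# Free-riding beats contributing, holding the others fixed...
print(payoff(0, others), payoff(10, others))   # → 12.0 6.0
# ...yet everyone contributing beats everyone free-riding.
print(payoff(10, [10, 10, 10]), payoff(0, [0, 0, 0]))  # → 6.0 0.0
```

Because each player's private best reply is to contribute nothing, "rational" self-interest drives everyone to the worst collective outcome, which is exactly the point of the passage.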
• Should you use a sharp knife to cut into the body of a human being? Of
course not, unless you are a qualified surgeon performing a necessary op-
eration. (Deontology: a rule with an exception.)
• Is it OK to throw the switch that saves five lives by directing a runaway trolley onto a
side track, where it will kill one person who would have been safe? Well, . . .
(Deontology says it’s wrong to allow preventable deaths; Utilitarianism says fewer
deaths is better; Virtue ethics says the virtuous person can make hard choices.)
I argue that heuristics based on utilitarianism (decision theory), deontology (rule-
based and constraint-based systems), and virtue ethics (case-based reasoning) are
all important tools in the toolkit for creating artificial agents capable of
participating successfully in our society. Each tool is useful in certain contexts,
and perhaps less useful in others.
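Kuipers' toolkit claim can be sketched in code. The following is a hypothetical illustration, not an implementation from the paper: deontological rules act as hard constraints, a small case base of exemplar choices stands in for virtue ethics, and utilitarian scoring ranks whatever remains. All function names and values are assumptions.

```python
# Hypothetical sketch: rule-based, case-based, and utility-based heuristics
# combined in one decision procedure. Names and numbers are illustrative.

def choose_action(actions, forbidden, utility, case_base):
    """Pick an action using rule, case-based, and utility heuristics."""
    # Deontology: hard constraints remove impermissible options outright.
    permitted = [a for a in actions if a not in forbidden]
    if not permitted:
        return None  # no permissible action exists
    # Virtue ethics (case-based): prefer what a trusted exemplar chose
    # in a similar situation, if that choice is still permitted.
    for exemplar_choice in case_base:
        if exemplar_choice in permitted:
            return exemplar_choice
    # Utilitarianism: otherwise maximize expected utility.
    return max(permitted, key=utility)

# Toy usage: "divert" is what the exemplar case suggests.
actions = ["do_nothing", "divert", "deceive"]
result = choose_action(
    actions,
    forbidden={"deceive"},  # rule: never deceive
    utility={"do_nothing": 0, "divert": 5, "deceive": 9}.get,
    case_base=["divert"],
)
print(result)  # → divert
```

The ordering of the three heuristics here is one design choice among several; the paper's point is only that each tool has contexts where it is the right one to reach for.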
1. The Virtuous Machine - Old Ethics for New Technology?, Nicolas Berberich and
Klaus Diepold, Department of Electrical and Computer Engineering, Department of
Informatics, Technical University of Munich, Munich Center for Technology in
Society. E-mail: n.berberich@tum.de
Due to the inherent autonomy of these systems, the ethical considerations have to be
conducted by the systems themselves. This means that these autonomous cognitive
machines need a theory with the help of which they can, in a specific situation,
choose the action that adheres best to the moral standards.
This discrepancy between what people believe that technology can do, based on its
appearance, and what it
actually can do, would not only elicit a strong uncanny valley effect, but also pose a large
safety risk. Taken together, we predict that this would lead to an acceptance problem of the
technology. If we want to avoid this by jumping over the uncanny valley, we have to start
today by thinking about how to endow autonomous cognitive systems with more human-
like behavior. The position that we argue for in this paper is that the last discrepancy
between the valley and its right shore lies in virtuous moral behavior. In the near future
we will have autonomous cognitive machines whose actions will be akin to human actions,
but without consideration of moral implications they will never be quite alike, leading to
cognitive dissonance and rejection. We believe that taking virtue ethics as the guiding moral
theory for building moral machines is a promising approach to avoid the uncanny valley and
to induce acceptance.
Cybernetics can be seen as a historical and intellectual precursor of artificial intelligence
research. While it had strong differences with the cognitivistic GOFAI (good old-fashioned
AI), cybernetic ideas are highly influential in modern AI. The currently successful field of
artificial neural networks (synonymous terms are connectionism and deep learning) originated
from the research of the cyberneticians McCulloch, Pitts and Rosenblatt. Goal-directed
planning is a central part of modern AI and especially of advanced robotics. In contrast
to other forms of machine learning like supervised or unsupervised learning, reinforcement
learning is concerned with the goal-driven (and therefore teleological) behavior of agents.
Applied to AI ethics this means that a machine cannot have practical wisdom (and thus can’t
act morally) before it has learned from realistic data. Machine learning is the improvement
of a machine’s performance of a task through experience and Aristotle’s virtue ethics is the
improvement of one’s virtues through experience. Therefore, if one equates the task
performance with virtuous actions, developing a virtue ethics-based machine appears
possible.
A closer look at the structure of Aristotle’s ergon-argument allows one to break with two
common misconceptions which seem to render a virtue-ethical approach in machine ethics
impossible. The first misconception is ethical anthropocentrism, according to which only
humans can act morally. This might have been correct in the past, but only because humans have
been the only species capable of higher-level cognition, which, according to Aristotle, is
a requirement for ethical virtues and thus moral action. If there was another species, for
example a machine, with the same capacity for reason and dispositions of character, then it
appears probable that its arete would also lie in excellent use and improvement of those.
The second misconception of Aristotle’s virtue ethics is that it takes happiness to be the goal
and measure of all actions. Since machines are not capable of genuine feelings of happiness,
it is argued that virtue ethics can’t be applied to them. This argument is based on an
erroneous understanding of eudaimonia. Aristotle does not mean psychological states of
happiness nor maximized pleasure, as John Locke defines ’happiness’. The Greek term
eudaimonia has a much broader meaning and refers mainly to a successful conduct of life
(according to one’s ergon). A virtuous machine programmed to pursue eudaimonia would
therefore not be prone to wireheading, which is the artificial stimulation of the brain’s
reward center to experience pleasure.
Out of the three subcategories of machine learning, supervised learning, unsupervised
learning and reinforcement learning (RL), the latter is the lifeworldly approach. In contrast
to the other two, RL is based on dynamic interaction with the environment, of which the
agent typically has only imperfect knowledge.
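The interaction-driven character of RL that the passage describes can be illustrated with a minimal tabular Q-learning loop; the two-state environment and all constants below are invented for the sketch.

```python
# A minimal tabular Q-learning loop: the agent starts knowing nothing about
# its two-state world and improves purely through interaction and reward,
# much as habituation trains a virtue. Environment and constants are
# illustrative assumptions.
import random

random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    """Toy environment: only action 1 taken in state 1 pays off."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return reward, action  # the chosen action determines the next state

state = 0
for _ in range(2000):
    # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    reward, next_state = step(state, action)
    # Update the estimate from the experienced transition.
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state
# After training the agent prefers action 1 in both states.
```

Nothing about the rewarding action is given in advance; the preference emerges only from repeated experience, which is the "lifeworldly" contrast with supervised and unsupervised learning drawn above.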
This partition originated in Aristotle’s soul theory in which he lists virtues of reason
(dianoetic virtues) next to virtues of character (ethical virtues) as properties of the
intelligent part of the soul. The virtues of reason comprise the virtues of pure reason and
the virtues of practical reason. Pure reason includes science (epistēmē), wisdom (sophia)
and intuitive thought (nous). Practical reason on the other hand refers to the virtues of
craftsmanship (technē), of making (poiēsis) and practical wisdom (phronēsis). According to
this subdivision in pure and practical reason, there exist two ways to lead a good life in the
eudaimonic sense: the theoretical life and the practical life. AI systems can lead a theoretical
life of contemplation, e.g. when they are applied to scientific data analysis, but to lead a
practical life they need the capacity for practical wisdom and morality. This distinction in
theoretical and practical life of an AI somewhat resembles the distinction into narrow and
general AIs, where narrow AI describes artificial intelligence systems that are focused on
performing one specific task (e.g. image classification) while general AI can operate in more
general and realistic situations.
In contrast to deontology and consequentialism, virtue ethics has a hard time giving
reasons for its actions (they certainly exist, but are hard to codify). While deontologists can
point towards the principles and duties which have guided their actions, a consequentialist
can explain why her actions have led to the best consequences. An AMA based on virtue
ethics on the other hand would have to show how its virtues, which gave rise to its actions,
have been formed through experience. This poses an even greater problem if its capability
to learn virtues has been implemented as an artificial neural network, due to it being almost
impossible to extract intuitively understandable reasons from the many network weights. In
this instance, the similarity between virtue ethics and machine learning is disadvantageous.
Without being able to give reasons for one’s actions, one cannot take responsibility,
which is a concept underlying not only our insurance system but also our justice system. If
the actions of an AMA produce harm, then someone has to take responsibility for it and the
victims have a right to explanation. The latter has recently (May 2018) been codified by the
EU General Data Protection Regulation (GDPR) with regard to all algorithmic decisions.
Condensed to the most important ideas, this work has shown that
1. Virtue ethics fits nicely with modern artificial intelligence research and is a promising
moral theory as basis for the field of AI ethics.
2. Taking the virtue ethics route to building moral machines allows for a much broader
approach than simple decision-theoretic judgment of possible actions. Instead it takes
other cognitive functions into account like attention, emotions, learning and actions.
Furthermore, by discussing several virtues in detail, we showed that virtue ethics is a
promising moral theory for solving the two major challenges of contemporary AI safety
research, the control problem and the value alignment problem. A machine endowed with
the virtue of temperance would not have any desire for excess of any kind, not even for
exponential self-improvement, which might lead to a superintelligence posing an existential
risk for humanity. Since virtues are an integral part of one’s character, the AI would not
have the desire of changing its virtue of temperance. Learning from virtuous exemplars
has been a process of aligning values for centuries (and possibly for all of human history),
thus building artificial systems with the same imitation learning capability appears to be a
reasonable approach.
2. Machines That Know Right And Cannot Do Wrong: The Theory and Practice of
Machine Ethics, Louise A. Dennis and Marija Slavkovik
“The fact that man knows right from wrong proves his intellectual superiority to the other
creatures; but the fact that he can do wrong proves his moral inferiority to any creatures
that cannot.”– Mark Twain
Wallach and Allen [35, Chapter 2] distinguish between operational morality, functional
morality, and full moral agency. An agent has operational morality when the moral
significance of its actions is entirely scoped by the agent’s designers. An agent has
functional morality when the agent is able to make moral judgements when choosing an
action, without direct human instructions.
3. What happens if robots take the jobs? The impact of emerging technologies on
employment and public policy By Darrell M. West
In this paper, I explore the impact of robots, artificial intelligence, and machine learning. In
particular, I study the impact of these emerging technologies on the workforce and the
provision of health benefits, pensions, and social insurance. If society needs fewer workers
due to automation and robotics, and many social benefits are delivered through jobs, how
are people outside the workforce for a lengthy period of time going to get health care and
pensions?
Robots are expanding in magnitude around the developed world. Figure 1 shows the
numbers of industrial robots in operation globally and there has been a substantial increase
in the past few years. In 2013, for example, there were an estimated 1.2 million robots in
use. This total rose to around 1.5 million in 2014 and is projected to increase to about 1.9
million in 2017. Japan has the largest number with 306,700, followed by North America
(237,400), China (182,300), South Korea (175,600), and Germany (175,200). Overall, robotics
is expected to rise from a $15 billion sector now to $67 billion by 2025.
In the contemporary world, there are many robots that perform complex functions.
According to a presentation on robots, “the early 21st century saw the first wave of
companionable social robots. They were small cute pets like AIBO, Pleo, and Paro. As
robotics become more sophisticated, thanks largely to the smart phone, a new wave of
social robots has started, with humanoids Pepper and Jimmy and the mirror-like Jibo, as
well as Geppetto Avatars’ software robot, Sophie. A key factor in a robot’s ability to be
social is their ability to correctly understand and respond to people’s speech and the
underlying context or emotion.”
Amazon has organized a “picking challenge” designed to see if robots can “autonomously
grab items from a shelf and place them in a tub.” The firm has around 50,000 people
working in its warehouses and it wants to see if robots can perform the tasks of selecting
items and moving them around the warehouse. During the competition, a Berlin robot
successfully completed ten of the twelve tasks. To move goods around the facility, the
company already uses 15,000 robots and it expects to purchase additional ones in the
future.
In the restaurant industry, firms are using technology to remove humans from parts of food
delivery. Some places, for example, are using tablets that allow customers to order directly
from the kitchen with no requirement of talking to a waiter or waitress. Others enable
people to pay directly, obviating the need for cashiers. Still others tell chefs how much of an
ingredient to add to a dish, which cuts down on food expenses.
Computerized algorithms have taken the place of humans in many transactions. We see this in the stock exchanges, where high-frequency trading by machines has replaced human decision-making. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention.
the blink of an eye without human intervention. Machines can spot trading inefficiencies or
market differentials at a very small scale and execute trades that make money for people.15
Some individuals specialize in arbitrage trading, whereby the algorithms see the same stocks
having different market values. Humans are not very efficient at spotting price differentials
but computers can use complex mathematical formulas to determine where there are
trading opportunities. Fortunes have been made by mathematicians who excel in this type
of analysis.
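The price-differential logic described above can be sketched in code. This is a minimal illustration, not a trading system; the venue labels, prices, and per-share cost figure are invented assumptions.

```python
# Hypothetical sketch of arbitrage spotting: compare quotes for the same
# stock on two venues and flag a trade when the spread exceeds an assumed
# transaction cost. All venue labels and numbers are invented.

def find_arbitrage(quotes_a, quotes_b, cost_per_share=0.01):
    """Return (symbol, buy_venue, sell_venue, profit_per_share) tuples
    for every symbol whose price differs across venues by more than cost."""
    opportunities = []
    for symbol in quotes_a.keys() & quotes_b.keys():
        pa, pb = quotes_a[symbol], quotes_b[symbol]
        spread = abs(pa - pb)
        if spread > cost_per_share:
            if pa < pb:
                opportunities.append((symbol, "A", "B", spread - cost_per_share))
            else:
                opportunities.append((symbol, "B", "A", spread - cost_per_share))
    return opportunities

# Example: the same share trades at 100.00 on venue A and 100.05 on venue B.
print(find_arbitrage({"XYZ": 100.00}, {"XYZ": 100.05}))
```

A human scanning two price feeds for such differentials would be far slower than this loop run at machine speed, which is the point the passage makes.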
Machine-to-machine communications and remote monitoring sensors that remove humans
from the equation and substitute automated processes have become popular in the health
care area. There are sensors that record vital signs and electronically transmit them to
medical doctors. For example, heart patients have monitors that compile blood pressure,
blood oxygen levels, and heart rates. Readings are sent to a doctor, who adjusts medications
as the readings come in. According to medical professionals, “we’ve been able to show
significant reduction” in hospital admissions through these and other kinds of wireless
devices.
There also are devices that measure “biological, chemical, or physical processes” and deliver
“a drug or intervention based on the sensor data obtained.” They help people maintain an
independent lifestyle as they age and keep them in close touch with medical personnel.
“Point-of-care” technologies keep people out of hospitals and emergency rooms, while still
providing access to the latest therapies.
Implantable monitors enable regular management of symptoms and treatment. For
example, “the use of pulmonary artery pressure measurement systems has been shown to
significantly reduce the risk of heart failure hospitalization.” Doctors place these devices
inside heart failure patients and rely upon machine-to-machine communications to alert
them to potential problems. They can track heart arrhythmia and take adaptive moves as
signals spot troublesome warning signs.
Unmanned vehicles and autonomous drones are creating new markets for machines and
performing functions that used to require human intervention. Driverless cars represent one
of the latest examples. Google has driven its cars almost 500,000 miles and found a
remarkable level of performance. Manufacturers such as Tesla, Audi, and General Motors
have found that autonomous cars experience fewer accidents and obtain better mileage
than vehicles driven by people.
4. The Debate Over Autonomous Weapons Systems, Dr. Gregory P. Noone and Dr. Diana C. Noone, Case Western Reserve Journal of International Law 47 (2015), Issue 1
The debate over Autonomous Weapon Systems (AWS) has begun
in earnest with advocates for the absolute and immediate banning of
AWS development, production, and use planting their flag first. They
argue that AWS should be banned because these systems lack human
qualities, such as the ability to relate to other humans and to apply
human judgment, that are necessary to comply with the law. In
addition, the weapons would not be constrained by the capacity for
compassion, which can provide a key check on the killing of civilians.
The opposing viewpoint in this debate articulates numerous
arguments that generally include: it is far too premature and too
speculative to make such a proposal/demand; the Law of Armed
Conflict should not be underestimated in its ability to control AWS
development and future operations; AWS has the potential to
ultimately save human lives (both civilian and military) in armed
conflicts; AWS is as inevitable as any other technology that could
potentially make our lives better; and to pass on the opportunity to
develop AWS is irresponsible from a national security perspective.
Some of the most respected and brilliant lawyers in this field are on
opposite sides of this argument.
6. Robot ethics: Mapping the issues for a mechanized world, Patrick Lin, Keith Abney, and George Bekey
Bill Gates recently observed that “the emergence of the robotics industry ... is developing in much the same way that the computer business did 30 years ago” [18]. Coming from a key architect of the computer industry, his prediction carries special weight.
In a few decades—or sooner, given exponential progress forecasted by Moore’s Law—
robots in society will be as ubiquitous as computers are today, he believes; and we would be
hard-pressed to find an expert who disagrees.
In its most basic sense, we define “robot” as an engineered machine that senses, thinks, and acts: “Thus a robot must have sensors, processing ability that emulates some aspects of cognition, and actuators.”
Surprisingly, relationships of a more intimate nature are not yet well served by robots, considering the sex industry’s reputation as an early adopter of new technologies. Introduced in 2010, Roxxxy is billed as “the world’s first sex robot” [17], but its lack of autonomy or capacity to “think” for itself, as opposed to merely responding to sensors, suggests that it is not in fact a robot, per the definition above.
In some countries, such as Japan, robots are quite literally replacements for humans: a growing elderly population and declining birthrates mean a shrinking workforce [35], and robots are built specifically to fill that labor gap. Given the nation’s storied love of technology, it is unsurprising that approximately one out of 25 workers in Japan is a robot. While the US currently dominates the market in military robotics, nations such as
Japan and South Korea lead in the market for social robotics, such as elderly-care robots.
Other nations with similar demographics, such as Italy, are expected to introduce more
robotics into their societies, as a way to shore up a decreasing workforce; and nations
without such concerns can drive productivity, efficiency, and effectiveness to new heights
with robotics.
Like the social networking and email capabilities of the Internet Revolution, robotics may
profoundly impact human relationships. Already, robots are taking care of our elderly and
children, though there are not many studies on the effects of such care, especially in the
long term. Some soldiers have emotionally bonded with the bomb-disposing PackBots that
have saved their lives, sobbing when the robot meets its end (e.g., [38,22]). And robots are
predicted to soon become our lovers and companions [25]: they will always listen and never
cheat on us. Given the lack of research studies in these areas, it is unclear whether
psychological harm might arise from replacing human relationships with robotic ones.
Harm need not be directly to persons; it could also be to the environment. In the
computer industry, “e-waste” is a growing and urgent problem (e.g., [31]), given the
disposal of heavy metals and toxic materials in the devices at the end of their product
lifecycle. Robots as embodied computers will likely exacerbate the problem, as well as
increase pressure on rare-earth elements needed today to build computing devices and
energy resources needed to power them. Networked robots would also increase the
amount of ambient radiofrequency radiation, like that created by mobile phones—which
have been blamed, fairly or not, for a decline of honeybees necessary for pollination and
agriculture [37], in addition to human health problems (e.g., [2]).
7. Networks of Social and Moral Norms in Human and Robot Agents, B. F. Malle and J. L. Austerweil (Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, USA) and M. Scheutz (Department of Computer Science, Tufts University, USA)
The design and construction of intelligent robots has seen steady growth in the past 20
years, and the integration of robots into society is, to many, imminent (Nourbakhsh, 2013;
Šabanović, 2010). Ethical questions about such integration have recently gained
prominence. For example, academic publications on the topic of robot ethics doubled
between 2005 and 2009 and doubled again since then, counting almost 200 as of the time
of this conference (Malle, 2015).
Economic scholars have long puzzled over why such free-riding is not more common: why people cooperate much more often than they “defect,” as game theorists call it, when defecting would provide the agent with larger utility.
The answer cannot be that humans are “innately” cooperative, because they are perfectly
capable of defecting. The answer involves to a significant extent the power of norms. A
working definition of a norm is the following: An instruction to (not) perform a specific or
general class of action, whereby a sufficient number of individuals in a community (a)
indeed follow this instruction and (b) expect others in the community to follow the
instruction.
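The working definition above can be restated as a simple predicate. The threshold used here for a “sufficient number” of individuals is an assumption for illustration, since the text leaves that number open.

```python
# Sketch of the working definition of a norm given above. Each community
# member reports (a) whether they follow the instruction and (b) whether
# they expect others to follow it. "Sufficient number" is modeled as a
# threshold fraction, which is an assumption -- the text leaves it open.

def is_norm(members, threshold=0.5):
    """members: list of (follows, expects_others_to_follow) booleans."""
    if not members:
        return False
    n = len(members)
    followers = sum(1 for follows, _ in members if follows)
    expecters = sum(1 for _, expects in members if expects)
    return followers / n >= threshold and expecters / n >= threshold

# A community where most cooperate and most expect cooperation:
community = [(True, True), (True, True), (False, True), (True, False)]
print(is_norm(community))  # both fractions are 3/4, above the 0.5 threshold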
8. Moral Machines and the Threat of Ethical Nihilism, Anthony F. Beavers
But, though my cell phone might be smart, I do not take that to mean that it is thoughtful,
insightful, or wise. So, what has become of these latter categories? They seem to be
bygones, left behind by scientific and computational conceptions of thinking and knowledge
that no longer have much use for them.
With respect to Kantian ethics, the problem is apparent in the universal law formulation of the categorical imperative, the one that would seem to hold the easiest prospects for rule-based implementation in a computational system: “act as if the maxim of your action were to become through your will a universal law of nature” (Kant [1785] 1981, 30).
One mainstream interpretation of this principle suggests that whatever rule (or maxim) I
should use to determine my own behavior must be one that I can consistently will to be
used to determine the behavior of everyone else. (Kant ’s most consistent example of this
imperative in application concerns lying promises. One cannot make a lying promise without
simultaneously willing a world in which lying is permissible, thereby also willing a world in
which no one would believe a promise, particularly the very one I am trying to make. Thus,
the lying promise fails the test and is morally impermissible.) Though at first the categorical
imperative looks implementable from an engineering point of view, it suffers from a
problem of scope, since any maxim that is defined narrowly enough (for instance, to include
a class of one, anyone like me in my situation) must consistently universalize. Death by
failure to implement looks imminent; so much the worse for Kant, and so much the better
for ethics.
Classical utilitarianism meets a similar fate, even though, unlike Kant, Mill casts internals,
such as intentions, to the wind and considers just the consequences of an act for evaluating
moral behavior. Here, “actions are right in proportion as they tend to promote happiness;
wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure
and the absence of pain; by unhappiness, pain and the privation of pleasure.” That internals are incidental to utilitarian ethical assessment is evident in the fact that Mill does not require that one act for the right reasons. He explicitly says that most good actions are not done accordingly. Thus, acting good is indistinguishable from being good, or, at least, to be good is precisely to act good; and sympathetically we might be tempted to agree, asking
good is precisely to act good; and sympathetically we might be tempted to agree, asking
what else could being good possibly mean. Things again are complicated by problems of
scope, though Mill, unlike Kant, is aware of them. He writes, “again, defenders of utility
often find themselves called upon to reply to such objections as this — that there is not
enough time, previous to action, for calculating and weighing the effects of any line of
conduct on the general happiness” ([1861] 1979, 23). (In fact, the problem is
computationally intractable when we consider the ever-extending ripple effects that any act
can have on the happiness of others across both space and time.) Mill gets around the
problem with a sleight of hand, noting that “all rational creatures go out upon the sea of life
with their minds made up on the common questions of right and wrong” (24), suggesting
that calculations are, in fact, unnecessary, if one has the proper forethought and upbringing.
Again, the rule is of little help, and death by failure to implement looks imminent. So much
the worse for Mill; again, so much the better for ethics.
10. Moral Machines: Mindless Morality and its Legal Implications, Andrew Schmelzer
Not all of our devices need moral agency. As autonomy increases, morality becomes more
necessary in robots, but the reverse also holds. Machines with little autonomy need less
ethical sensitivity. A refrigerator need not decide if the amount someone eats is healthy,
and limit access accordingly. In fact, that fridge would infringe on human autonomy.
Ethical sensitivity does not require moral perfection. I do not expect morally perfect
decisions from machines. In fact, because humans are morally imperfect, we cannot derive moral perfection from humanity by holding machines to human ideals. Our moral
development continues today, and I believe may never finish. Designing an artificial moral
agent bound by the morality of today dooms it to obsolescence: ethical decisions from a
hundred years ago look much more racist, sexist, etc., and less ‘good’ from today’s
perspective; today’s ethics might have the same bias when viewed from the future
(Creighton, 2016). Because the nature of our ethics changes, an agent will stumble
eventually. Instead, we strive for morally human (or even better than human) decisions
from machines. When a machine’s actions reflect those of a human, we will have met the
standards for artificial moral agency.
We can test for artificial moral agency with the Moral Turing Test (Allen, Varner, & Zinser, 2000). In the MTT, a judge tries to differentiate between a machine and a person by their moral actions. An agent passes the test when the judge cannot correctly identify the machine more often than chance; the machine then qualifies as a moral agent. In the
comparative Moral Turing Test (cMTT), the judge compares the behaviors of the two
subjects, and determines which action is morally better than the other (Allen, Varner, &
Zinser, 2000). When a machine’s behavior consistently scores morally preferable to a
human’s behavior, then either the agent will have surpassed human standards, or the
human’s behavior markedly strays from those standards.
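The MTT pass criterion can be sketched numerically. The trial data below are invented, and treating “chance” as 50% for a two-subject setup is an assumption.

```python
# Sketch of the Moral Turing Test pass criterion described above: an agent
# passes when the judge cannot pick out the machine more often than chance
# (assumed 50% for a two-subject comparison). Judgments here are invented.

def passes_mtt(judgments, chance=0.5):
    """judgments: list of booleans, True when the judge correctly
    identified the machine on that trial."""
    correct = sum(judgments)
    return correct / len(judgments) <= chance

# Judge is right on only 4 of 10 trials: no better than coin-flipping.
print(passes_mtt([True, False, False, True, False,
                  True, False, False, True, False]))
```

A cMTT variant would instead score which of the paired behaviors the judge rates as morally better across trials.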
Frankena (1973) provides a list of terminal values — virtues that are valued for themselves,
rather than their consequences (Yudkowsky, 2011):
Life, consciousness, and activity; health and strength; pleasures and satisfactions of all
or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true
opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in
objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual
affection, love, friendship, cooperation; just distribution of goods and evils; harmony and
proportion in one’s own life; power and experiences of achievement; self-expression;
freedom; peace, security; adventure and novelty; and good reputation, honor, esteem,
etc.
Programming all of those values directly into a single utility function (the method of
determining positive or negative results) is ridiculous. Can engineers or ethicists quantify
each value and agree on a prioritization for each? Yudkowsky (2011) proposes a ‘one-wrong-number’ problem: a phone number has 10 digits, but dialing one digit wrong does not mean you will connect with someone 90% like the person intended. The same may
apply to virtue-based machines.
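A toy calculation can make the worry concrete: with invented values, weights, and action scores, corrupting a single weight flips which action the utility function prefers, even though the other weights are intact.

```python
# Toy illustration of the 'one-wrong-number' worry: a utility function
# that weights several terminal values. Getting one weight wrong can
# reverse the ranking of two actions. All values, weights, and scores
# here are invented for illustration.

def utility(scores, weights):
    return sum(weights[v] * scores[v] for v in weights)

weights_intended = {"life": 10, "truth": 5, "freedom": 5}
weights_one_wrong = {"life": 1, "truth": 5, "freedom": 5}   # one bad number

action_a = {"life": 5, "truth": 1, "freedom": 1}   # protects life
action_b = {"life": 1, "truth": 4, "freedom": 4}   # favors truth/freedom

# Intended weights prefer A (60 vs 50); the corrupted weights prefer B
# (15 vs 41), so one mis-set number reverses the machine's choice.
print(utility(action_a, weights_intended), utility(action_b, weights_intended))
print(utility(action_a, weights_one_wrong), utility(action_b, weights_one_wrong))
```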
Furthermore, some values we deem worthy of implementation in our machines may
contradict each other, such as compassion and honesty (e.g. a child’s professional baseball
potential). In this way virtue-based systems still require the caveats of a rule-based system
(Allen, Varner, & Zinser, 2000). But what about non-terminal virtues, that is, virtues we
value for their repercussions?
The three methods of bottom-up development I will discuss here are neural network
learning, genetic algorithms, and scenario analysis systems.
Neural networks function like networks of neurons: connections between inputs and outputs
make up a system that can learn to do various things, from playing computer games to
running bipedally in a simulation. By using that learning capability on ethical endeavors, a
moral machine begins to develop. From reinforcement of positive behaviors and penalty of
negative ones, the algorithm learns the pattern of our moral systems. Eventually, engineers
place the algorithm in charge of a physical machine, and away it goes. One downside to this
is the uncertainty regarding what the algorithm learned. When the army tried to get a
neural net to recognize tanks hidden in trees, what looked like a distinction between trees,
tanks, and partly concealed tanks turned out to be a distinction between a sunny and cloudy
day (Dreyfus & Dreyfus, 1992). Kuang (2017) writes about Darrel’s potential solution: having
two neural networks working side by side. The first learns the correlation between input
and output, challenging situation and ethically right decision, respectively. The second
algorithm focuses on learning language; it connects tags or captions to an input and explains what cues and ideas the first algorithm used to come up with a course of action.
The second weak point stems from allowing mistakes: no amount of learning can verify that
the machine will act morally in all situations in the real world, including those not tested and
learned from.
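The reinforce-and-penalize idea can be sketched with a minimal perceptron-style learner. The two-feature “scenarios” and their approval labels are invented toys, not a real moral-learning dataset.

```python
# Minimal sketch of the bottom-up idea above: reinforce approved behavior,
# penalize disapproved behavior, and let a linear "network" learn the
# pattern. Features and labels are invented toy stand-ins for scenarios.

def train(examples, epochs=20, lr=0.1):
    """Perceptron-style learning: examples are (features, approved) pairs."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, approved in examples:
            predicted = sum(wi * xi for wi, xi in zip(w, x)) + b > 0
            error = (1 if approved else -1) - (1 if predicted else -1)
            # Positive error reinforces the behavior; negative penalizes it.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def decide(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Toy scenarios: [harms_person, helps_person] -> approved?
data = [([1, 0], False), ([0, 1], True), ([1, 1], False), ([0, 0], False)]
w, b = train(data)
print([decide(w, b, x) for x, _ in data])  # learns to approve only pure help
```

As the passage notes, the learned weights do not explain themselves: nothing in `w` says whether the system keyed on harm, help, or some accidental correlate, which is exactly the tanks-versus-weather failure mode.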
Genetic algorithms operate on a somewhat similar principle. Large numbers of simple digital
agents run through ethically challenging simulations. The ones that return the best scores
get “mated” with each other, blending code with a few randomizations, and then the test
runs again (Fox, 2009). After the best (or acceptably best) scores based on desired outcomes
are achieved, a new situation is added to the repertoire that each program must surpass. In
this way, machines can learn our moral patterns. Once thoroughly evolved, we implement
the program, and the machine operates independently in the real world. As an alternative to direct implementation, we could evolve the program to learn patterns quickly and
efficiently, and then run it through the neural network training. This method suffers the
same downsides as neural networking: we cannot tell what it learned or whether it will
make mistakes in the future.
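The evolutionary loop described here can be sketched as follows. The bit-string encoding of a “policy,” the fitness target, and all parameters are invented for illustration.

```python
# Sketch of the genetic-algorithm approach described above: candidate
# "policies" are bit strings scored against a target of desired outcomes;
# top scorers are mated (crossover plus mutation) each generation. The
# scoring target and parameters are invented for illustration.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # desired outcome per test scenario

def fitness(policy):
    return sum(1 for p, t in zip(policy, TARGET) if p == t)

def mate(a, b, mutation_rate=0.05):
    cut = random.randrange(len(a))
    child = a[:cut] + b[cut:]          # single-point crossover
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in child]          # occasional random bit flip

def evolve(pop_size=30, generations=40):
    random.seed(0)  # reproducible run
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[: pop_size // 2]    # keep the top half
        pop = best + [mate(random.choice(best), random.choice(best))
                      for _ in range(pop_size - len(best))]
    return max(pop, key=fitness)

print(fitness(evolve()))  # score of the best evolved policy (8 is perfect)
```

As with the neural sketch, the winning bit string records what survived selection, not why, which is the opacity problem the passage raises.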
The final approach involves scenario analysis. Parkin (2017) describes a method of teaching
AI by having it read books and stories and learn the literature’s ideas and social norms.
While this may not apply to machines we do not intend to behave as humans, the idea still
applies to niche or domain-specific machines. Instead of using literature as learning input,
we provide a learning program with records of past wrongdoings and successful outcomes
of ethically-blind machines in its niche. Then the program could infer the proper behaviors
for real world events it may encounter in the future. After analyzing the settings and events
of each scenario, the program would save the connections it made for later human
inspection. If the program’s connections proved ‘good,’ it would then receive a new batch of
scenarios to test through, and repeat the cycle. One downside to this approach involves
painstaking human analysis. A new program would have to go through this cycle for every
machine niche that requires a moral agent, and a human evaluator would have to carefully
examine every connection and correlation the program develops. Darrel’s (2017) explaining
neural net could work in tandem with a scenario analysis system to alleviate the human
requirement for analysis. This approach does get closer to solving the issue of working in
new environments than the previous two approaches, but may nonetheless stumble once
implemented in reality.
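The scenario-analysis cycle can be approximated with simple case-based matching: store vetted past cases and reuse the behavior of the most similar one. The tag-set encoding and the medical-style records are invented examples.

```python
# Sketch of the scenario-analysis idea above as case-based matching: the
# program stores past scenarios with their vetted outcomes and, for a new
# event, reuses the behavior of the most similar recorded case. The
# feature encoding and records are invented toy examples.

def similarity(a, b):
    """Count shared features between two scenarios (sets of tags)."""
    return len(a & b)

def infer_behavior(records, new_scenario):
    """records: list of (scenario_tags, vetted_behavior) pairs."""
    best = max(records, key=lambda rec: similarity(rec[0], new_scenario))
    return best[1]

records = [
    ({"patient", "overdose_risk"}, "withhold_dose"),
    ({"patient", "routine"}, "dispense_dose"),
    ({"visitor", "restricted_area"}, "alert_staff"),
]
print(infer_behavior(records, {"patient", "overdose_risk", "night_shift"}))
# -> withhold_dose
```

The human-inspection step the passage describes would correspond to auditing each stored pairing before the program is trusted with a new batch.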
Bottom-up approaches utilize continued development to reach an approximation of moral
standards. Neural networks develop connections and correlations to create a new output,
but we struggle to know why the system comes to a decision. Genetic algorithms refine
themselves by duplicating the most successful code into the next generation of programs,
with a little mutation for adaptation. A genetic algorithm’s learning process also remains
obscured without careful record of iterations, which may be beyond human comprehension.
Scenario analysis systems can learn the conduct historically shown to be ethically right, but still retain potential for error. As of yet, we do not have a reliable method to develop
an artificial moral agent.
To build an artificial moral agent, DeBaets (2014) argues that a machine must have
embodiment, learning, teleology toward the good, and empathy.
DeBaets (2014) claims that moral functioning requires embodiment because if a machine
acts in and influences the physical world, it must have a physical manifestation.
“Embodiment [requires] that a particular decision-making entity be intricately linked to a
particular concrete action; morality cannot solely be virtual if it is to be real” (DeBaets,
2014). Machines can work from a distance, have multiple centers of action, and have distributed
decision-making centers, but each requires physical form. Embodiment constrains machines
to a physical form, so this definition of moral agency excludes algorithms and programs that
do not interact with the physical world.
Ethical machines need learning capacity so they can perform as taught by ethical and moral
rules and extrapolate what they have learned into new situations. This requirement
excludes any top-down approach that does not involve frequent patches and updates.
Hybrid systems combine rule sets and learning capacities, and so fulfill this requirement
since they can adjust to new inputs and refine their moral behavior.
Teleology toward the good and empathy both face a sizable complication: they both require
some form of consciousness. For a machine to empathize with and understand emotions of
others, it must have emotion itself. Coeckelbergh (2010) claims that true emotion requires
consciousness and mental states in both cognitivist theory and feeling theory. Thus, if
robots do not have consciousness or mental states, they cannot have emotions and
therefore cannot have moral agency. Additionally, if a machine innately desires to do good,
it must have some form of inner thoughts or feeling that it is indeed doing good, so
teleology also requires consciousness or mental states. Much of human responsibility and
moral agency relies on this theory of mind. In court, the insanity or state-of-mind defense can counter criminal charges.
However, no empirical way to test for state of mind or consciousness in people exists today.
Why require those immeasurable characteristics in our robots?
Emotionless Machinery
We interpret other humans’ behaviors as coming from or influenced by emotion, but we
have no way to truly determine emotional state. Verbal and nonverbal cues give us insights
to emotions others feel. They may imitate or fake those cues, but we interact with them just
the same as long as they maintain their deception (Goffman, 1956). We measure other
people by their display or performance of emotion.
Since the appearance of emotion in people regulates social interaction and human morality,
we must judge robots by that same appearance. Even today, machines can read breathing
and heart rate (Gent, 2016), and computers do not need to see an entire face to determine
emotion displayed (Wegrzyn, Vogt, Kireclioglu, Schneider, & Kissler, 2017). Soon enough, a
machine could learn to display human emotion by imitating the cues it is designed to
measure. In theory, a robot could imitate or fake emotional cues as well as humans display
them naturally. People already tend to anthropomorphize robots, empathize with them,
and interpret their behavior as emotional (Turkle, 2011). For consistency in the way we treat
human display of emotion and interpret it as real, we must also treat robotic display of
emotion as real.
If the requirement for empathy changes from true emotion to functional emotion — as is
consistent with how we treat people — then an imitating robot fulfills all the requirements
for empathy, effectively avoiding the issue regarding consciousness and mental state.
Compassion could be the reason an autonomous car veers into a tree rather than a line of
children, but the appearance of compassion could also serve the same effect.
Additionally, a robot can have an artificial teleology towards good, granted that all of the
taught responses programmed into the machine are ‘good.’ Beavers’ (2011) discussion of
classical utilitarianism, referencing Mill (1979), claims that acting good is the same as being
good. The same applies to humans, as far as we can tell from the outside. Wallach and Allen
(2009) note that “values that emerge through the bottom-up development of a system
reflect the specific causal determinates of a system’s behavior”. In other words, a ‘good’ and
‘moral’ robot is one that takes moral and good actions. Thus, while we may not get true
teleology, functional teleology can suffice.
11. Human Rights and Artificial Intelligence--An Urgently Needed Agenda. Mathias Risse
Algorithms can do anything that can be coded, as long as they have access to data they
need, at the required speed, and are put into a design frame that allows for execution of
the tasks thus determined. In all these domains, progress has been enormous. The
effectiveness of algorithms is increasingly enhanced through “Big Data”: the availability of
an enormous amount of data on all human activity and other processes in the world
which allow a particular type of AI known as “machine learning” to draw inferences
about what happens next by detecting patterns. Algorithms do better than humans
wherever tested, even though human biases are perpetuated in them: any system
designed by humans reflects human bias, and algorithms rely on data capturing the
past, thus automating the status quo if we fail to prevent it. 2 But algorithms are
noise-free: unlike human subjects, they arrive at the same decision on the same problem
when presented with it twice.
Also, philosophers have long puzzled about the nature of the mind. One question is whether
there is more to the mind than the brain. Whatever else it is, the brain is also a complex
algorithm. But is the brain fully described thereby, or does that omit what makes us
distinct, namely, consciousness? Consciousness is the qualitative experience of being
somebody or something, its “what-it-is-like-to-be-that”-ness, as one might say. If there
is nothing more to the mind than the brain, then algorithms in the era of Big Data will
outdo us soon at almost everything we do: they make ever more accurate predictions
about what book we enjoy or where to vacation next; drive cars more safely than we do;
make predictions about health before our brains sound alarms; offer solid advice on
what jobs to accept, where to live, what kind of pet to adopt, if it is sensible for us to be
parents and whether it is wise to stay with the person we are currently with – based on a
myriad of data from people relevantly like us. Internet advertisement catering towards
our preferences by assessing what we have ordered or clicked on before is a mere
shadow of what is to come.
Future machines might be composed and networked in ways that no longer permit easy
switch-off. More importantly, they might display
emotions and behavior to express attachment: they might even worry about being
turned off, and be anxious to do something about it. Or future machines might be
cyborgs, partly composed of organic parts, while humans are modified with non-organic
parts for enhancement. Distinctions between humans and non-humans might erode.
Ideas about personhood might alter once it becomes possible to upload and store a
digitalized brain on a computer, much as nowadays we can store human embryos.
Already in 2007, a US colonel called off a robotic land-mine-sweeping exercise
because he considered the operation inhumane after a robot kept crawling along losing
legs one at a time. 5 Science fiction shows like Westworld or The Good Place anticipate
what it would be like to be surrounded by machines we can only recognize as such by
cutting them open. A humanoid robot named Sophia with capabilities to participate in
interviews, developed by Hanson Robotics, became a Saudi citizen in October 2017.
Later Sophia was named UNDP’s first-ever Innovation Champion, the first non-human
with a UN title.6 The future might remember these as historic moments. The pet world
is not far behind. Jeff Bezos recently adopted a dog called SpotMini, a versatile robotic
pet capable of opening doors, picking himself up and even loading the dishwasher. And
SpotMini never needs to go outside if Bezos would rather shop on Amazon or enjoy
presidential tweets.
If there indeed is more to the mind than the brain, dealing with AI including humanoid
robots would be easier. Consciousness, or perhaps accompanying possession of a
conscience, might then set us apart. It is a genuinely open question how to make sense
of qualitative experience and thus of consciousness. But even though considerations
about consciousness might contradict the view that AI systems are moral agents, they
will not make it impossible for such systems to be legal actors and as such own property,
commit crimes and be accountable in legally enforceable ways. After all, we have a
history of treating corporations in such ways, which also do not have consciousness.
Perhaps T. M. Scanlon’s ideas about appropriate responses to values would help.10 The
superintelligence might be “moral” in the sense of reacting in appropriate ways towards
what it observes all around. Perhaps then we have some chance at getting protection, or
even some level of emancipation in a mixed society composed of humans and machines,
given that the abilities of the human brain are truly astounding and generate capacities
in human beings that arguably should be worthy of respect.11 But so are the capacities of animals, and that has not normally led humans to react towards them, or towards the environment, in an appropriately respectful way. Instead of displaying
something like an enlightened anthropocentrism, we have too often instrumentalized
nature. Hopefully a superintelligence would simply outperform us in such matters, and
that will mean the distinctively human life will receive some protection because it is
worthy of respect. We cannot know that for sure but we also need not be pessimistic.
There is an urgency to making sure these developments get off to a good start. The
pertinent challenge is the problem of value alignment, a challenge that arises way
before it will ever matter what the morality of pure intelligence is. No matter how
precisely AI systems are generated we must try to make sure their values are aligned
with ours to render as unlikely as possible any complications from the fact that a
superintelligence might have value commitments very different from ours. That the
problem of value alignment needs to be tackled now is also implied by the UN Guiding
Principles on Business and Human Rights, created to integrate human rights into
business decisions. These principles apply to AI. This means addressing questions such
as "What are the most severe potential impacts?", "Who are the most vulnerable
groups?" and "How can we ensure access to remedy?"
However, these laws have long been regarded as too unspecific. Various efforts have
been made to replace them, so far without any connection to the UN’s Principles on
Business and Human Rights or any other part of the human-rights movement. Among
other efforts, in 2017 the Future of Life Institute in Cambridge, MA, founded by
MIT physicist Max Tegmark and Skype co-founder Jaan Tallinn, held a conference on
Beneficial AI at the Asilomar conference center in California to come up with principles
to guide further development of AI. Of the resulting 23 Asilomar Principles, 13 are listed
under the heading of Ethics and Values. Among other issues, these principles insist that
wherever AI causes harm, it should be ascertainable why it does, and where an AI
system is involved in judicial decision making its reasoning should be verifiable by
human auditors. Such principles respond to concerns that AI deploying machine
learning might reason at such speed and have access to such a range of data that its
decisions are increasingly opaque, making it impossible to spot if its analyses go astray.
The principles also insist on value alignment, urging that “highly autonomous AI
systems should be designed so that their goals and behaviors can be assured to align
with human values throughout their operation” (Principle 10). The values explicitly
listed in Principle 11 (Human Values) include “human dignity, rights, freedoms, and
cultural diversity.”
Russian manipulation in elections is a wake-up call; much worse is likely to come. Judicial
rights could be threatened if AI is used without sufficient transparency and possibility for
human scrutiny. An AI system has predicted the outcomes of hundreds of cases at the
European Court of Human Rights, forecasting verdicts with an accuracy of 79%; and once
that accuracy gets higher still, it will be tempting to use AI also to reach decisions. Use of
AI in court proceedings might help provide access to legal advice for the poor (one of the
projects Amnesty pursues, especially in India); but it might also lead to Kafkaesque
situations if algorithms give inscrutable advice.
Any rights to security and privacy are potentially undermined not only through drones
or robot soldiers, but also through increasing legibility and traceability of individuals in
a world of electronically recorded human activities and presences. The amount of data
available about people will likely increase enormously, especially once biometric sensors
can monitor human health. (They might check up on us in the shower and submit their
data, and this might well be in our best interest because illness becomes diagnosable
way before it becomes a problem.) There will be challenges to civil and political rights
arising from the sheer existence of these data and from the fact that these data might
well be privately owned, but not by those whose data they are. Leading companies in
the AI sector are more powerful than oil companies ever were, and this is presumably
just the beginning of their ascension.
The Cambridge-Analytica scandal is a wake-up call here, and
Mark Zuckerberg’s testimony to US senators on April 10, 2018 revealed an astonishing
extent of ignorance among senior lawmakers about the workings of internet companies
whose business model depends on marketing data. Such ignorance paves the path to
power for companies. Or consider a related point: Governments need the private sector
to aid in cyber security. The relevant experts are smart, expensive, and many would
never work for government. We can only hope that it will be possible to co-opt them
given that government is overextended here. If such efforts fail, only companies will
provide the highest level of cyber security.
This takes me to my last topic: AI and inequality, and the connection between that topic
and human rights. To begin with, we should heed Thomas Piketty’s warning that
capitalism left to its own devices in times of peace generates ever increasing economic
inequality. Those who own the economy benefit from it more than those who just work
there. Over time life chances will ever more depend on social status at birth. We also
see more and more how those who either produce technology or know how to use
technology to magnify impact can command higher and higher wages. AI will only
reinforce these tendencies, making it ever easier for leaders across all segments to
magnify their impact. That in turn makes producers of AI ever more highly priced
providers of technology. More recently, we have learned from Walter Scheidel that,
historically, substantial decreases in inequality have only occurred in response to
calamities such as epidemics, social breakdowns, natural disasters or war. Otherwise it
is hard to muster effective political will for change.
Against this background we must worry that AI will drive a widening technological wedge into
societies that leaves millions excluded, renders them redundant as market participants
and thus might well undermine the point of their membership in political community.
When wealth was determined by land ownership, the rich needed the rest because the
point of land ownership was to charge rent. When wealth was determined by ownership
of factories the owners needed the rest to work the machines and buy stuff. But those
on the losing side of the technological divide may no longer be needed at all. In his 1926
short story “The Rich Boy,” F. Scott Fitzgerald famously wrote, “Let me tell you about
the very rich. They are different from you and me.” AI might validate that statement in a
striking way.
12. From Machine Ethics to Machine Explainability and Back--Kevin Baum, Holger
Hermanns and Timo Speith, Saarland University, Department of Philosophy
An important question arises: How should machines be constrained so that they act in a
morally acceptable way towards humans? This question concerns Machine Ethics – the search
for formal, unambiguous, algorithmizable and implementable behavioral constraints for
systems, so as to enable them to exhibit morally acceptable behavior.
We instead feel the need to supplement Machine Ethics with means to ascertain justified
trust in autonomous systems – and other desirable properties. After pointing out why this is
important, we will argue that there is one feasible supplement for Machine Ethics: Machine
Explainability – the ability of an autonomous system to explain its actions and to argue for
them in a way comprehensible for humans. So Machine Ethics needs Machine Explainability.
This also holds vice versa: Machine Explainability needs Machine Ethics, as it is in need of a
moral system as a basis for generating explanations.
13. The Ethics of Artificial Intelligence--Nick Bostrom, Future of Humanity Institute,
Eliezer Yudkowsky, Machine Intelligence Research Institute
AI algorithms play an increasingly large role in modern society, though usually not
labeled “AI.” The scenario described above might be transpiring even as we write. It
will become increasingly important to develop AI algorithms that are not just powerful
and scalable, but also transparent to inspection—to name one of many socially important
properties. Some challenges of machine ethics are much like many other challenges
involved in designing machines. Designing a robot arm to avoid crushing stray humans is no
more morally fraught than designing a flame-retardant sofa. It involves new programming
challenges, but no new ethical challenges. But when AI algorithms take on cognitive work
with social dimensions—cognitive tasks previously performed by humans—the AI algorithm
inherits the social requirements.
Transparency is not the only desirable feature of AI. It is also important that AI algorithms
taking over social functions be predictable to those they govern.
It will also become increasingly important that AI algorithms be robust against manipulation.
Robustness against manipulation is an ordinary criterion in information security; nearly the
criterion. But it is not a criterion that appears often in machine learning journals, which are
currently more interested in, e.g., how an algorithm scales up on larger parallel systems.
Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to
not make innocent victims scream with helpless frustration: all criteria that apply to humans
performing social functions; all criteria that must be considered in an algorithm intended to
replace human judgment of social functions; all criteria that may not appear in a journal of
machine learning considering how an algorithm scales up to more computers. This list of
criteria is by no means exhaustive, but it serves as a small sample of what an increasingly
computerized society should be thinking about.
Artificial General Intelligence (AGI)-- As the name implies, the emerging consensus is that
the missing characteristic is generality. Current AI algorithms with human-equivalent or
superior performance are characterized by a deliberately programmed competence only in
a single, restricted domain. Deep Blue became the world champion at chess, but it cannot
even play checkers, let alone drive a car or make a scientific discovery. Such modern AI
algorithms resemble all biological life with the sole exception of Homo sapiens.
To build an AI that acts safely while acting in many domains, with many consequences,
including problems the engineers never explicitly envisioned, one must specify good
behavior in such terms as “X such that the consequence of X is not harmful to humans.” This
is non-local; it involves extrapolating the distant consequences of actions.
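This non-locality can be made concrete with a toy sketch. Everything below (the causal model, the action names, the harm predicate) is a hypothetical stub, not anything from the text; the point is only that whether an action is "harmful to humans" becomes visible several causal steps away from the action itself:

```python
# Stub causal model: each action or effect maps to its immediate downstream effects.
WORLD = {
    "divert_power":      ["factory_offline"],
    "factory_offline":   ["medicine_shortage"],
    "medicine_shortage": ["humans_harmed"],
}

def reachable_effects(action):
    """Transitively expand the consequences of an action through the causal model."""
    seen, frontier = set(), [action]
    while frontier:
        effect = frontier.pop()
        for nxt in WORLD.get(effect, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def is_safe(action):
    # The harm only appears three causal steps downstream, so inspecting
    # the action locally (the action name alone) cannot reveal it.
    return "humans_harmed" not in reachable_effects(action)

print(is_safe("divert_power"))  # False
```

The evaluation cannot be done by looking at the action in isolation; it requires a model of distant consequences, which is exactly what makes the specification problem hard.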
A rock has no moral status: we may crush it, pulverize it, or subject it to any treatment we
like without any concern for the rock itself. A human person, on the other hand, must be
treated not only as a means but also as an end. Exactly what it means to treat a person as an
end is something about which different ethical theories disagree; but it certainly involves
taking her legitimate interests into account—giving weight to her well-being—and it may
also involve accepting strict moral side-constraints in our dealings with her, such as a
prohibition against murdering her, stealing from her, or doing a variety of other things to
her or her property without her consent. Moreover, it is because a human person counts in
her own right, and for her sake, that it is impermissible to do to her these things. This can be
expressed more concisely by saying that a human person has moral status.
It is widely agreed that current AI systems have no moral status. We may change, copy,
terminate, delete, or use computer programs as we please; at least as far as the programs
themselves are concerned. The moral constraints to which we are subject in our dealings
with contemporary AI systems are all grounded in our responsibilities to other beings, such
as our fellow humans, not in any duties to the systems themselves.
While it is fairly consensual that present-day AI systems lack moral status, it is unclear
exactly what attributes ground moral status. Two criteria are commonly proposed as being
importantly linked to moral status, either separately or in combination: sentience and
sapience (or personhood). These may be characterized roughly as follows:
Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel
pain and suffer
Sapience: a set of capacities associated with higher intelligence, such as self- awareness and
being a reason-responsive agent.
Others propose additional ways in which an object could qualify as a bearer of moral status,
such as by being a member of a kind that normally has sentience or sapience, or by standing
in a suitable relation to some being that independently has moral status.
Principle of Substrate Non-Discrimination: If two beings have the same functionality and
the same conscious experience and differ only in the substrate of their implementation,
then they have the same moral status.
Principle of Ontogeny Non-Discrimination: If two beings have the same functionality and
the same conscious experience, and differ only in how they came into existence, then
they have the same moral status.
Parents have special duties to their child which they do not have to other children, and
which they would not have even if there were another child qualitatively identical to their
own. Similarly, the Principle of Ontogeny Non-Discrimination is consistent with the claim
that the creators or owners of an AI system with moral status may have special duties to
their artificial mind which they do not have to another artificial mind, even if the minds in
question are qualitatively similar and have the same moral status.
Even if we accept this stance, however, we must confront a number of novel ethical
questions which the aforementioned principles leave unanswered. Novel ethical questions
arise because artificial minds can have very different properties from ordinary human or
animal minds. We must consider how these novel properties would affect the moral status
of artificial minds and what it would mean to respect the moral status of such exotic minds.
a. Does a sapient but non-sentient robot (a zombie) have the same moral status as a full
AMA?
b. Another exotic property, one which is certainly metaphysically and physically
possible for an artificial intelligence, is for its subjective rate of time to deviate
drastically from the rate that is characteristic of a biological human brain. The
concept of subjective rate of time is best explained by first introducing the idea of
whole brain emulation, or “uploading.” “Uploading” refers to a hypothetical future
technology that would enable a human or other animal intellect to be transferred
from its original implementation in an organic brain onto a digital computer.
Principle of Subjective Rate of Time: In cases where the duration of an experience is
of basic normative significance, it is the experience’s subjective duration that counts.
c. For example, human children are the product of recombination of the genetic
material from two parents; parents have limited ability to influence the character of
their offspring; a human embryo needs to be gestated in the womb for nine months;
it takes fifteen to twenty years for a human child to reach maturity; a human child
does not inherit the skills and knowledge acquired by its parents; human beings
possess a complex evolved set of emotional adaptations related to reproduction,
nurturing, and the child-parent relationship. None of these empirical conditions need
pertain in the context of a reproducing machine intelligence. It is therefore plausible
that many of the mid-level moral principles that we have come to accept as norms
governing human reproduction will need to be rethought in the context of AI
reproduction.
To illustrate why some of our moral norms need to be rethought in the context of AI
reproduction, it will suffice to consider just one exotic property of AIs: their capacity
for rapid reproduction. Given access to computer hardware, an AI could duplicate
itself very quickly, in no more time than it takes to make a copy of the AI’s software.
Moreover, since the AI copy would be identical to the original, it would be born
completely mature, and the copy could begin making its own copies immediately.
Absent hardware limitations, a population of AIs could therefore grow exponentially
at an extremely rapid rate, with a doubling time on the order of minutes or hours
rather than decades or centuries.
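The doubling arithmetic behind this claim is easy to make concrete. A minimal sketch (the ten-minute doubling time and the hardware ceiling are illustrative assumptions, not figures from the text):

```python
def population(initial, doubling_time_min, elapsed_min, hardware_cap):
    """AI population after elapsed_min, doubling every doubling_time_min,
    capped by the available hardware."""
    doublings = elapsed_min // doubling_time_min
    return min(initial * 2 ** doublings, hardware_cap)

# One AI copying itself every 10 minutes: within a day the population
# hits any realistic hardware ceiling.
print(population(1, 10, 60, 10**12))      # after 1 hour: 64
print(population(1, 10, 24 * 60, 10**12)) # after 1 day: capped at 10**12
```

The cap is what drives the scenario in the next paragraph: once hardware (or the economy supplying it) cannot keep pace, copies must either cease to exist or cease to reproduce.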
But if the population grows faster than the economy, resources will run
out; at which point uploads will either die or their ability to reproduce will be
curtailed.
This scenario illustrates how some mid-level ethical principles that are suitable in
contemporary societies might need to be modified if those societies were to include
persons with the exotic property of being able to reproduce very rapidly.
The general point here is that when thinking about applied ethics for contexts that
are very different from our familiar human condition, we must be careful not to
mistake mid-level ethical principles for foundational normative truths. Put
differently, we must recognize the extent to which our ordinary normative precepts
are implicitly conditioned on the obtaining of various empirical conditions, and the
need to adjust these precepts accordingly when applying them to hypothetical
futuristic cases in which their preconditions are assumed not to obtain. By this, we
are not making any controversial claim about moral relativism, but merely
highlighting the commonsensical point that context is relevant to the application of
ethics—and suggesting that this point is especially pertinent when one is considering
the ethics of minds with exotic properties.
Superintelligence
Good (1965) set forth the classic hypothesis concerning superintelligence: that an AI
sufficiently intelligent to understand its own design could redesign itself or create a
successor system, more intelligent, which could then redesign itself yet again to
become even more intelligent, and so on in a positive feedback cycle. Good called
this the “intelligence explosion.”
Kurzweil (2005) holds that “intelligence is inherently impossible to control,” and that
despite any human attempts at taking precautions, “by definition . . . intelligent
entities have the cleverness to easily overcome such barriers.” Let us suppose that
the AI is not only clever, but that, as part of the process of improving its own
intelligence, it has unhindered access to its own source code: it can rewrite itself to
anything it wants itself to be. Yet it does not follow that the AI must want to rewrite
itself to a hostile form.
Humans, the first general intelligences to exist on Earth, have used that intelligence
to substantially reshape the globe—carving mountains, taming rivers, building
skyscrapers, farming deserts, producing unintended planetary climate changes. A
more powerful intelligence could have correspondingly larger consequences.
This presents us with perhaps the ultimate challenge of machine ethics: How do
you build an AI which, when it executes, becomes more ethical than you? If we are
serious about developing advanced AI, this is a challenge that we must
meet. If machines are to be placed in a position of being stronger, faster, more
trusted, or smarter than humans, then the discipline of machine ethics must commit
itself to seeking human-superior (not just human-equivalent) niceness.
14. Towards the Ethical Robot by James Gips, Computer Science Department, Fulton
Hall 460 Boston College, Chestnut Hill, MA 02467
Asimov’s three laws are not suitable for our magnificent robots. These are laws for slaves.
We want our robots to behave more like equals, more like ethical people. (See Figure 1.)
1) How do we program a robot to behave ethically? Well, what does it mean for a
person to behave ethically?
2) On what type of ethical theory can automated ethical reasoning be based? At first
glance, consequentialist theories might seem the most "scientific", the most
amenable to implementation in a robot. Maybe so, but there is a tremendous
problem of measurement. How can one predict "pleasure", "happiness", or "well-
being" in individuals in a way that is additive, or even comparable?
3) Deontological theories seem to offer more hope. The categorical imperative might
be tough to implement in a reasoning system. But I think one could see using a moral
system like the one proposed by Gert as the basis for an automated ethical
reasoning system. A difficult problem is in the resolution of conflicting obligations.
Gert's impartial rational person advocating that violating the rule in these
circumstances be publicly allowed seems reasonable but tough to implement.
4) The virtue-based approach to ethics, especially that of Aristotle, seems to resonate
well with the modern connectionist approach to AI. Both seem to emphasize the
immediate, the perceptual, the non-symbolic. Both emphasize development by
training rather than by the teaching of abstract theory.
5) Knuth [1973, p.709] put it well: It has often been said that a person doesn't really
understand something until he teaches it to someone else. Actually a person doesn't
really understand something until he can teach it to a computer, i.e., express it as an
algorithm. ... The attempt to formalize things as algorithms leads to a much deeper
understanding than if we simply try to understand things in the traditional way. Perhaps,
as we build artificial ethical reasoning systems, we will learn how to behave more
ethically ourselves.
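In the spirit of Knuth's point, expressing even a fragment of Gert-style reasoning as an algorithm immediately exposes the hard part Gips names: conflicting obligations. A toy sketch (the rules, priorities, and situation encoding are entirely hypothetical, not drawn from Gert or any implemented system):

```python
# Illustrative only: a toy deontological reasoner. Resolving conflicts by a
# fixed priority ordering is the crude stand-in for Gert's "impartial rational
# person" test, which is the genuinely hard part to implement.
RULES = [
    {"name": "do_not_deceive", "priority": 2},
    {"name": "prevent_harm",   "priority": 1},  # lower number = stronger obligation
]

def permitted_violations(triggered):
    """Given the names of rules that conflict in a situation, keep the
    strongest obligation and report which rules it overrides."""
    active = sorted((r for r in RULES if r["name"] in triggered),
                    key=lambda r: r["priority"])
    if not active:
        return None, []
    return active[0]["name"], [r["name"] for r in active[1:]]

# A situation where telling the truth would cause harm:
keep, overridden = permitted_violations({"do_not_deceive", "prevent_harm"})
print(keep, overridden)  # prevent_harm ['do_not_deceive']
```

Even this trivial formalization forces a decision the prose can leave vague: where do the priorities come from, and who justifies overriding a rule? That is precisely the deeper understanding Knuth says algorithmization yields.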
15. Can robots be responsible moral agents? And why should we care? Amanda
Sharkey, Department of Computer Science, University of Sheffield, Sheffield, UK
Patricia Churchland (2011) discusses the basis for morality in living beings, and argues that
the basis for caring about others lies in the neurochemistry of attachment and bonding in
mammals. She explains that it is grounded in the extension of self-maintenance and
avoidance of pain in mammals to their immediate kin. The neuropeptides oxytocin and
arginine vasopressin underlie this extension. Humans and other mammals feel anxious
about their own well-being
and that of those to whom they are attached. As well as attachment and empathy for
others, humans and other mammals develop more complex social relationships, and are
able to understand and predict the actions of others. They also internalise social practices,
and experience ‘social pain’ triggered by separation, exclusion or disapproval. As a
consequence, humans have an intrinsic sense of justice.
By contrast, robots are not concerned about their own self-preservation or avoidance of
pain, let alone the pain of others. In part, this can be explained by means of arguing that
they are not truly embodied, in the way that a living creature is. Parts of a robot could be
removed from a robot’s body without it suffering any pain or anxiety, let alone it being
concerned about damage or pain to a family member or to a human. A living body is an
integrated autopoietic entity (Maturana and Varela, 1980) in a way that a man-made
machine is not. Of course, it can be argued that the robot could be programmed to behave
as if it cared about its own preservation or that of others, but this is only possible through
human intervention.
16. Can machines be people? Reflections on the Turing Triage Test, Dr Rob Sparrow,
School of Philosophical, Historical & International Studies, Monash University.
Finally, imagine that you are again called to make a difficult decision. The battery system
powering the AI is failing and the AI is drawing on the diminished power available to the
rest of the hospital. In doing so, it is jeopardising the life of the remaining patient on life
support. You must decide whether to ‘switch off’ the AI in order to preserve the life of
the patient on life support. Switching off the AI in these circumstances will have the
unfortunate consequence of fusing its circuit boards, rendering it permanently inoperable.
Alternatively, you could turn off the power to the patient’s life support in order to allow
the AI to continue to exist. If you do not make this decision the patient will die and the
AI will also cease to exist. The AI is begging you to consider its interests, pleading to be
allowed to draw more power in order to be able to continue to exist.
My thesis, then, is that machines will have achieved the moral status of persons when
this second choice has the same character as the first one. That is, when it is a moral
dilemma of roughly the same difficulty. For the second decision to be a dilemma it must
be that there are good grounds for making it either way. It must be the case therefore that
it is sometimes legitimate to choose to preserve the existence of the machine over the life
of the human being. (He may choose the robot because it is more useful to the hospital than
the human patient. This test still doesn’t make the robot a moral person. Only sapience and
sentience will make a robot a moral person.)
17. Can a Robot Pursue the Good? Exploring Artificial Moral Agency, Amy Michelle
DeBaets, Kansas City University of Medicine and Biosciences, Journal of Evolution
and Technology - Vol. 24 Issue 3 – Sept 2014 – pgs 76-86
What then, might be necessary for a decision-making and acting entity to non-accidentally
pursue the good in a given situation? I argue that four basic components collectively make
up the basic requirements for moral agency: embodiment, learning, empathy, and
teleology.
First, I want to argue that artificial moral agents, like all moral agents, must have some form
of embodiment, as they must have a real impact in the physical world (and not solely a
virtual one) if they are to behave morally. Embodiment not only allows for a concrete
presence from which to act, it can adapt and respond to the consequences of real decisions
in the world. This physical embodiment, however, need not look particularly similar to
human embodiment and action. Robotic embodiment might be localized, having actions
take place in the same location as the decision center, in a discrete, mobile entity (as with
humans), but it might also be remote, where the decision center and locus of action are
distant in space. It could also be distributed, where the decision centers and/or loci of action
take place in several places at once, as with distributed computing or multiple simultaneous
centers of action. The unifying theme of embodiment does require that a particular
decision-making entity be intricately linked to particular concrete action; morality cannot
solely be virtual if it is to be real. (Not convincing—a server is also an entity and is embodied.)
This embodied decision-making and action must also exist in a context of learning. Learning,
in this sense, is not simply the incorporation of new information into a system or the
collection of data. It is adapting both the decision processes themselves and the agent’s
responses to inputs based on previous information. It is this adaptability that allows moral
agents to learn from mistakes as well as successes, to develop and hone moral reasoning,
and to incorporate new factual information about the circumstances of decisions to be
made.
Even if an embodied robot can learn from its own prior actions, it is not necessarily moral.
The complex quality of empathy is still needed for several reasons. First, empathy allows the
agent to recognize when it has encountered another agent, or an appropriate object of
moral reasoning. It allows the A.M.A. to understand the potential needs and desires of
another, as well as what might cause harm to the other. This requires at least a rudimentary
theory of mind, that is, a recognition that another entity exists with its own thoughts,
beliefs, values, and needs. This theory of mind need not take an extremely complex form,
but for an agent to behave morally, it cannot simply act as though it is the only entity that
matters. The moral agent must be able to develop a moral valuation of other entities,
whether human, animal, or artificial. It may have actuators and sensors that give it the
capacity to measure physical inputs from body language, stress signs, and tone of voice, to
indicate whether another entity is in need of assistance and behave morally in accordance
with the needs it measures. It may respond to cries for help, but it needs to be able to
distinguish between a magazine rack and a toddler when rushing in to provide aid. Empathy,
and not merely rationality, is critical for developing and evaluating moral choices; just as
emotion is inherent to human rationality, it is necessary for machine morality. (This is the
ethics of care logic—only emotions lead to action, to empathy)
What is sometimes forgotten in defining a moral agent as such, including in the case of
A.M.A.s, is that the entity must both be designed to be, and desire to be, moral. It must
have a teleology toward the good. Just as human beings have developed a sense of the
moral and often seek to act accordingly, machines could be designed to pursue the good,
even develop a form of virtue through trial and error. They will not, however, do so in the
absence of some design in that direction. A teleology of morality introduced into the basic
programming of a robot would not necessarily be limited to any one particular ethical
theory or set of practices and could be designed to incorporate complex interactions of
decisions and consequences, just as humans typically do when making decisions about what
is right. It could be programmed, in its more advanced forms, to seek out the good, to
develop “virtual virtue,” learning from what it has been taught and practicing ever-greater
forms of the good in response to what it learns.
What is Not Required for Artificial Moral Agency?
Popular futurists Ray Kurzweil and Hans Moravec have argued that sheer increases in
computational processing power will eventually lead to superhuman intelligence, and thus,
to agency. But this is not the case. While a certain amount of “intelligence” or processing
power is necessary, it is only functionally useful insofar as it facilitates learning and
empathy, particularly. Having the most processing power does not make one the most
thoughtful agent, and having the most intelligence does not make one particularly moral on
its own.
Likewise, while a certain amount of rule-following is probably necessary for artificial moral
agency, rule-following alone does not make for a moral agent, but rather for a slave to
programming. Moral agency requires being able to make decisions and act when the basic
rules conflict with each other; it also requires being able to set aside “the rules” entirely
when the situation dictates. It has been said that one cannot truly be good unless one has
the freedom to choose not to be good. While I do not want to take on that claim here, I will
argue that agency requires at least some option of which goods to pursue and what
methods to pursue them by. A supposed A.M.A. that only follows the rules, and breaks
down when they come into conflict, is not a moral agent at all.
While a machine must move beyond simple rule-following to be a genuine moral agent
(even if many of its ends and goals are predetermined in its programming), complete
freedom is not necessary in order to have moral agency.
Some have thought that a fully humanoid consciousness is necessary for the development
of moral agency, but this too, may legitimately look quite different in machines than it does
in human beings.
Consciousness is itself elusive, without a clear definition or understanding of its processes.
What can be said for moral agency, though, is that the proof is in the pudding, that decisions
and actions matter at least as much as the background processing that went into them. In
deciding to consistently behave morally, and in learning from behavior in order to become
more moral, a machine can be a moral agent in a very real sense while avoiding the problem
of consciousness entirely.
Just as consciousness is used primarily as a requirement that cannot, by definition, be met
by any entity other than a human moral agent, so the idea of an immaterial soul need not
be present in order to have a moral agent. While the idea of a soul may or may not be useful
when applied in the context of human beings in relation to the Divine, it is unnecessary for
the more limited question of moral agency. A being also need not have a sense of God in
order to be a moral being. Not only is this true in the case of many humans, who may be
atheists, agnostics, or belong to spiritual traditions that do not depend on the idea of a
deity, but it is not necessary for moral action and the development of virtue. It may be
practically helpful in some cases for a robot to believe in a deity in order to encourage its
moral action, but it is by no means a requirement.
Yet, while the robots we build will not be subject to many of the same temptations as
human moral agents, they will still be subject to the limitations of their human designers
and developers. Robots will not be morally perfect, just as humans, even in the best of
circumstances, are never morally perfect.