
PART I: Artificial Intelligence

An Intensive Essay Presenting Artificial Intelligence and Discrimination

Is discrimination properly addressed in AI implementations?

Kaylee Lynn Sweet

Barr: Dual Enrollment English Composition II – Period 3

Thursday, April 1, 2021


“A ‘black-sounding name’ was 25% more likely to get an ad suggestive of an arrest record.” (Wellner and Rothman, 2020)
Technology is advancing at an exponential rate, but significant advances in artificial intelligence (AI) have given cause for concern. While AI offers substantial benefits and solutions across a wide range of fields, those benefits are counteracted by negative effects and dangers such as discrimination. The question is not whether discrimination is present in AI; there are numerous examples, elaborated on below, and the root causes of discrimination are already known to lie in the developer, the training data sets, and/or the algorithm. That there are several causes of discrimination in AI is just one reason the issue is so daunting; bias in AI is often a result of implicit bias, an automatic stereotype or prejudice that affects our opinions (Lin, 2020). Because bias in AI reflects real-world implicit bias, whether its main cause lies in the training data or the developer, it is unlikely ever to see an absolute solution. However, this does not mean that developers and companies that implement AI should stop striving to create ethical machines. The question at hand is whether discrimination is being properly addressed in AI implementations. Solutions to discrimination in AI exist, and several companies have already proposed guidelines to ensure that AI is implemented ethically within the company. However, the effectiveness of these guidelines is questionable, and technology may be expanding faster than our solutions can keep up.

With AI's ever-increasing applications, the stakes become much higher. For example, AI is being used in the judicial system to predict future criminal activity, and court decisions are made with those predictions in mind. If the AI system wrongly predicts future crime based on a person's race or gender, it can ruin the lives of those subject to its decisions and of their families. Examples of these biases are present not only in the most extreme cases but even in small day-to-day uses of AI such as search engines: “A ‘black-sounding name’ was 25% more likely to get an ad suggestive of an arrest record” (Wellner and Rothman, 2020). Gender discrimination has also been found: an algorithm translating from Turkish to English associated the phrase “hard working” with “he” and “lazy” with “she,” Amazon's AI hiring algorithm preferred men, and an artificial heart turned out to fit 86% of men but only 20% of women (Wellner and Rothman, 2020). As for natural language processing, while it is true that biases can be manifested in the language itself, translation AIs should work to neutralize those biases, offering gender-neutral terms or both gender possibilities to the user (a minimal sketch of this idea follows this paragraph). Howard and Borenstein (2018) list several other AI implementations and describe how bias makes its way into each one: face recognition, voice recognition, search engines, the justice system, robot peacekeepers, the self-driving car, and the medical robot. Face recognition applications were reported to have difficulty identifying non-Caucasian faces, and voice recognition systems could not recognize a woman's voice as accurately as a man's. Search engines are among the more well-known perpetrators of biases across the board, including gender and racial bias. AI is also expected to be applied to robotics in multiple areas, including police robots, self-driving cars, and robot caretakers. Should discrimination remain an issue in these systems, the dangers will increase dramatically.
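To make the gender-neutral translation idea concrete, here is a minimal Python sketch. It is purely illustrative, not any real translator's API; the tiny pronoun table and the `neutral_renderings` helper are invented for the example. Turkish “o” is a third-person pronoun that carries no gender, so a system can offer every English reading instead of letting learned stereotypes choose one:

```python
# Turkish "o" is gender-neutral; an English rendering must pick a pronoun.
# Rather than letting stereotypes map "hardworking" to "he" and "lazy" to
# "she", present every possibility to the user.
PRONOUN_FORMS = {"he": "is", "she": "is", "they": "are"}  # pronoun -> copula

def neutral_renderings(adjective: str) -> list[str]:
    """Render Turkish 'o <adjective>' with each English gender reading."""
    return [f"{pron} {be} {adjective}" for pron, be in PRONOUN_FORMS.items()]

print(neutral_renderings("hardworking"))
# ['he is hardworking', 'she is hardworking', 'they are hardworking']
```

A production system would detect the ambiguity inside a full translation model rather than with a lookup table, but the principle is the same: when the source language gives no gender cue, surface the alternatives rather than silently choosing one.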

Just a decade ago, discrimination in AI was far less discussed. Since then, numerous companies have proposed ethical guidelines to ensure that AI is implemented fairly and without bias. These guidelines address a range of topics, often including the implementation of bias mitigation software, explainability, and transparency policies. Bias mitigation techniques find and correct biased decisions made by an AI program. Explainability and transparency are two key features for establishing trust in an AI program (Rossi, 2019). AI often operates as a black box, meaning that users see only the input and output; they have no way of knowing how the AI came to its decision, or whether that decision was made ethically. Guidelines promoting explainability and transparency reduce the amount of opaque processing done by the program and create more opportunities to point out flaws that challenge its ethicality.
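As a minimal sketch of the “find” half of bias mitigation (the groups, counts, and decisions below are invented for illustration, not any vendor's actual tooling), the following Python snippet audits demographic parity, the gap in positive-decision rates between groups:

```python
# Toy demographic parity audit: compare the rate of positive decisions
# (e.g., "hire", "approve") across groups. Groups "A"/"B" and all counts
# are hypothetical.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs: group A approved 80/100, group B 50/100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(positive_rates(decisions))  # {'A': 0.8, 'B': 0.5}
print(parity_gap(decisions))      # ~0.3 -> investigate before deployment
```

The “correct” half goes further, for example by reweighting training data or adjusting decision thresholds per group, but an audit like this is the common starting point.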

AI is already being used to combat attempts at disinformation, including 194 fact-checking projects active in more than 60 countries (Kertysova, 2019). Disinformation, purposefully incorrect information, is a prominent issue, especially on social media, and it has been shown to polarize the public's opinions, including on matters of discrimination. AI solutions have been effective against disinformation because algorithms can sort through millions more pieces of information than a human can, and in a fraction of the time.
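That scale advantage shows up even in a toy version of claim matching. The sketch below, with invented claims and an arbitrary 0.5 similarity threshold, compares every incoming post against a library of already-debunked claims in one vectorized pass; real fact-checking systems use far richer models than TF-IDF similarity:

```python
# Toy claim-matching pass: vectorize once, then score every incoming post
# against all known debunked claims in bulk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = [
    "miracle cure X eliminates the virus in one day",
    "voting machines in region Y flipped millions of votes",
]
incoming = [
    "doctors confirm cure X removes the virus in just one day",
    "our new park opens to the public this weekend",
]

vec = TfidfVectorizer().fit(debunked + incoming)
scores = cosine_similarity(vec.transform(incoming), vec.transform(debunked))

for post, row in zip(incoming, scores):
    verdict = "flag for human fact-checker" if row.max() > 0.5 else "pass"
    print(f"{row.max():.2f}  {verdict}: {post}")
```

Because the comparison is a single matrix operation, the same code scales from two posts to millions, which is exactly where human fact-checkers fall behind.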

Despite the suggested solutions, bias and discrimination in AI are still very present. In 2020, a grading algorithm used in the UK was reported by students as grading inconsistently and unfairly compared with their grades before its use (Smith, 2020). One reason bias persists may be that technology often advances faster than our solutions can be implemented (Wallach, 2011). The benefits of AI are appealing, and even severe potential negative effects will not stop its advancement. It is nonetheless important to ask: how many lives lost or ruined will it take for AI communities to focus on solving ethical problems rather than on expanding AI? With widely applied AI technologies, especially in high-stakes cases such as the judicial system, allowing harmful bias to continue will amplify those biases and have detrimental effects. Furthermore, although ethical guidelines should in theory help reduce bias and discrimination, research suggests that they are largely ineffective. Many guidelines are not specific enough to actually change the approach to important concerns such as explainability and transparency, and workers under those guidelines report that they do not adhere to them strictly (Hagendorff, 2020). These proposed ethics can also serve as an excuse to avoid binding laws, allowing companies to work around their own guidelines as they please. Not only are the guidelines ineffective, but many of the solutions they suggest are difficult to implement and enforce. AI algorithms require several teams of people working on different parts of the whole, so if an algorithm is found to be biased, it cannot easily be determined which developer neglected to abide by the ethical guidelines. In cases where discrimination in AI reaches the court of law, liability is difficult to pinpoint. Even if it could be narrowed down to who is at fault, increasing liability would discourage companies from using AI, ultimately thwarting the advancement of the technology.

While the question remains whether AI communities are doing enough to keep discrimination in check, there are possible solutions that would encourage further attention to ethics in AI. These solutions target the developer end, the user end, and/or the technology itself. A developer-side solution is to require ethics courses in technology degrees, or to include ethics in training for technology-related jobs. This would educate developers on the how and why of bias in AI, making them more likely to factor a broader ethical context into the technologies they build. For user-end solutions, companies can increase explainability and transparency so that if a problem arises after a technology's release, users can identify discrimination issues more easily. This also addresses the challenge of predicting all possible negative outcomes of an AI, which is difficult because of the sheer number of possibilities. Kertysova (2019) suggests establishing “techplomacy,” which would entail appointing “tech ambassadors” to collaborate with industry and governments on disinformation, bias, cybersecurity, and several other areas. Another solution is using AI for the common good; AI can play a major role in fixing pressing issues like today's cybersecurity threats (Timmers, 2019). One illustrative case is deepfake technology, AI that can fabricate realistic artificial images and videos. Deepfake, of course, has both benefits and dangers: complex deepfake algorithms are already being used to spread disinformation and to attempt to defame public figures. However, as with many applications of AI, correcting the problems of discrimination and bias would allow for positive applications of deepfakes in areas such as healthcare, entertainment, and tourism (Kwok and Koh, 2020). A possible advancement to consider is the development of AI algorithms able to make decisions based on moral factors. This solution is especially appealing in robotics, as we could teach the AI itself about ethics by including moral information and situations in the training data. A limitation of moral AI algorithms is the question of whose morals the algorithm should be tailored to (Wallach, 2011).
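To illustrate what “including moral information and situations in the training data” could mean in the simplest possible terms, here is a toy Python sketch; the five situations and their labels are invented, and deciding who writes those labels is precisely the open question Wallach raises:

```python
# Toy "ethics from the training set": label situations with a moral
# judgment and fit a text classifier on them. Whose judgments supply
# the labels is the unresolved question.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

situations = [
    "share a patient's records without consent",
    "return a lost wallet to its owner",
    "deny a loan because of the applicant's race",
    "warn pedestrians before moving through a crowd",
    "record private conversations secretly",
]
acceptable = [0, 1, 0, 1, 0]  # 1 = judged acceptable by the annotators

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(situations, acceptable)
print(model.predict(["share records without the patient's consent"]))  # likely [0]
```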

Finding and implementing effective solutions to discrimination in AI now will enable us to advance future AI projects and innovations with far greater ease and benefit. It will create a more informed public, a safer technological community for minorities, and more accurate AI programs. We will not be able to harness the full potential of AI until we have consistently found solutions to its bias and discrimination problem.

In conclusion, discrimination and bias are a pressing issue in AI, especially as its applications expand. Several algorithms have been found to make biased decisions based on race, gender, and other factors without this being the developers' intention. Bias in AI is a difficult problem to solve because it reflects the implicit biases of society, and each algorithm has multiple possible sources of bias (the data, the algorithm, and the developer). As AI is implemented in situations with increasingly higher stakes, it is important to analyze whether the solutions to discrimination in AI are sufficient. On one hand, several developers and companies that use AI are proposing ethical guidelines to be incorporated into the development of their algorithms. These guidelines cover important topics that work toward eliminating bias, such as explainability, transparency, and bias mitigation. On the other hand, these ethical guidelines have been shown to be ineffective, and even after their implementation, biased outputs from these algorithms persist. Other solutions include educating developers on ethics and technology (whether at the college level or on the job), increasing explainability and transparency so that the public can call out biased decisions, and increasing human involvement at various stages of the AI process. AI technologies can and will be applied to almost every field and will increase the efficiency and accuracy of many processes. However, discrimination and bias remain an issue that threatens the accuracy of AI and presents a danger to those at the hands of such biases.
Works Cited

Cover Page Picture

“Facial Recognition Is Now Rampant. The Implications for Our Freedom Are Chilling | Stephanie Hare.” The Guardian, Guardian News and Media, 18 Aug. 2019, www.theguardian.com/commentisfree/2019/aug/18/facial-recognition-is-now-rampant-implications-for-our-freedom-are-chilling.

References

Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds & Machines, vol. 30, no. 1, Mar. 2020, pp. 99–120. EBSCOhost, doi:10.1007/s11023-020-09517-8.

Howard, Ayanna, and Jason Borenstein. “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity.” Science & Engineering Ethics, vol. 24, no. 5, Oct. 2018, pp. 1521–1536. EBSCOhost, doi:10.1007/s11948-017-9975-2.

Kertysova, Katarina. “Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation Is Produced, Disseminated, and Can Be Countered.” Security & Human Rights, vol. 29, no. 1–4, 2019, pp. 55–81. EBSCOhost, doi:10.1163/18750230-02901005.

Kwok, Andrei O. J., and Sharon G. M. Koh. “Deepfake: A Social Construction of Technology Perspective.” Current Issues in Tourism, 2020, doi:10.1080/13683500.2020.1738357.

Lin, Ying-Tung, et al. “Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias.” Philosophy & Technology, July 2020, pp. 1–26. EBSCOhost, doi:10.1007/s13347-020-00406-7.

Rossi, Francesca. “Building Trust in Artificial Intelligence.” Journal of International Affairs, vol. 72, no. 1, Fall/Winter 2019, pp. 127–133. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=134748798&site=ehost-live&scope=site.

Smith, Helen. “Algorithmic Bias: Should Students Pay the Price?” AI & Society, vol. 35, no. 4, Dec. 2020, pp. 1077–1078. EBSCOhost, doi:10.1007/s00146-020-01054-3.

Timmers, Paul. “Ethics of AI and Cybersecurity When Sovereignty Is at Stake.” Minds & Machines, vol. 29, no. 4, Dec. 2019, pp. 635–645. EBSCOhost, doi:10.1007/s11023-019-09508-4.

Wallach, Wendell. “From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies.” Law, Innovation & Technology, vol. 3, no. 2, Dec. 2011, pp. 185–207. EBSCOhost, doi:10.5235/175799611798204888.

Wellner, Galit, and Tiran Rothman. “Feminist AI: Can We Expect Our AI Systems to Become Feminist?” Philosophy & Technology, vol. 33, no. 2, June 2020, pp. 191–205. EBSCOhost, doi:10.1007/s13347-019-00352-z.

