2021 Spring Research Proj Essay Part 1
“A ‘black-sounding name’ was 25% more likely to get an ad suggestive of an arrest record.”
(Wellner, 2020)
Technology is advancing at an exponential rate, but significant advances in artificial intelligence (AI) have given cause for concern. While AI offers substantial benefits and solutions across a wide range of fields, those benefits are shadowed by discrimination. The question is not whether discrimination is present in AI, as there are numerous documented examples; the question is where it originates. Researchers have traced the discrimination to the developer, the data training sets, and/or the algorithm. The fact that there are several causes of discrimination in AI is one reason this issue is so daunting; bias in AI is often a result of implicit bias, an automatic stereotype or prejudice that affects our opinions (Lin, 2020). Because bias in AI reflects real-world implicit bias, whether its main cause lies in the training data or in the developer, it is unlikely ever to see an absolute solution. However, this does not mean that developers and companies that implement AI should stop striving to reduce bias in AI, and several companies have already proposed guidelines to ensure ethical implementation.
With AI finding ever more applications, the stakes become much higher. For example, AI is used in the judicial system as a technique that predicts future criminal activity, and court decisions are made with those predictions in mind. If the AI system wrongly predicts future crime based on a person’s race or gender, it can ruin the lives of those subject to its decisions and of their families. Such biases are present not only in the most extreme cases but even in small day-to-day uses of AI such as search engines: “A ‘black-sounding name’ was 25% more likely to get an ad suggestive of an arrest record” (Wellner, 2020). Gender discrimination has also been found: an algorithm translating from Turkish to English associated the phrase “hard working” with “he” and “lazy” with “she”, Amazon’s AI algorithm for the hiring process preferred men, and an artificial heart turned out to fit 86% of men but only 20% of women (Wellner and Rothman, 2020). As for natural language processing AI, while it is true that biases can be manifested in the language itself, translation AIs should work to neutralize those biases, offering gender-neutral terms or both gender possibilities to the user. Borenstein (2018) lists several other examples of AI implementations and how bias makes its way into each one: face recognition, voice recognition, search engines, the justice system, robot peacekeepers, the self-driving car, and the medical robot. Face recognition applications were reported to have difficulty identifying non-Caucasian faces, and voice recognition systems could not recognize a woman’s voice as accurately as a man’s. Search engines are among the better-known perpetrators of biases across the board, including gender and race, and similar concerns extend to police robots, self-driving cars, and robot caretakers. Should discrimination remain unaddressed, every one of these applications stands to inherit it.
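The neutralization suggested above for translation AIs can be sketched in a few lines: when the source pronoun carries no gender information (as the Turkish pronoun “o” does not), a translator could offer every English rendering instead of silently guessing one. The tiny phrase table and function below are purely hypothetical illustrations, not the interface of any real translation system.

```python
# Toy sketch: offer all gender possibilities when the source pronoun
# (Turkish "o") is gender-neutral, instead of guessing a single gender.

TEMPLATES = {
    "o caliskan": "hard-working",  # hypothetical phrase table (ASCII spelling)
    "o tembel": "lazy",
}

def translate_neutral(turkish):
    """Return every gendered/neutral English rendering of the sentence."""
    adjective = TEMPLATES[turkish.lower()]
    return [f"{subject} {adjective}" for subject in ("he is", "she is", "they are")]

print(translate_neutral("o caliskan"))
# → ['he is hard-working', 'she is hard-working', 'they are hard-working']
```

Presenting all renderings shifts the gender decision from the algorithm's statistical prior back to the user, which is exactly the behavior the essay argues for.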
Just a decade ago, discrimination in AI was far less discussed. Since then, numerous companies have proposed ethical guidelines to ensure that AI is implemented fairly and without bias within their organizations. These guidelines cover a range of topics, often including the implementation of bias mitigation software, explainability, and transparency policies. Bias mitigation techniques are ones that find and correct biased decisions made by the AI program. Explainability and transparency are two key features for establishing trust in the AI program (Rossi, 2019). AI often operates as a black box, meaning that users see only the input and output; they have no way of knowing how the AI came to its decision, or whether that decision was made ethically. Guidelines promoting explainability and transparency will reduce the number of unreadable processes performed by the program and allow more opportunities to point out flaws that challenge its ethicality. AI is already being used to fight disinformation, a prominent issue especially on social media that has been shown to polarize the public; AI is effective against disinformation because its algorithms can sort through millions more posts than human reviewers could.
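As a concrete illustration of what bias mitigation software might check, the sketch below audits a model's decisions against the four-fifths (80%) rule, a common disparate-impact test: a group's selection rate should be at least 80% of the most-favored group's rate. The decision data, group labels, and function names here are hypothetical illustrations, not taken from any source cited in this essay.

```python
# Minimal sketch of one bias-mitigation check: the "four-fifths rule".
# A group is flagged when its selection rate falls below 80% of the
# rate enjoyed by the most-favored group.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_violations(decisions, threshold=0.8):
    """Return the groups (and their rates) that fail the four-fifths test."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical audit of ten hiring decisions.
audit = [("men", True), ("men", True), ("men", True), ("men", False),
         ("women", True), ("women", False), ("women", False),
         ("women", False), ("men", True), ("women", False)]

print(four_fifths_violations(audit))
# → {'women': 0.2}  (women selected at 20% vs. 80% for men)
```

A real mitigation step would then go further, for example by adjusting decision thresholds per group, though any such correction raises fairness questions of its own.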
Despite the suggested solutions, bias and discrimination in AI are still very present. In one recent case, an algorithm used to award student grades scored students inconsistently and unfairly when compared to their grades before the use of the AI (Smith, 2020). The reason bias persists may be that technology often advances faster than our solutions can be implemented (Wallach, 2011). The benefits of AI are appealing, and even severe potential negative effects will not stop its advancement. However, it is important to consider the ethical question: how many lives lost or ruined will it take for AI communities to focus on solving ethical problems rather than expanding AI? With widely applied AI technologies, especially in high-stakes cases such as AI in the judicial branch, allowing harmful bias in AI to continue will amplify those biases and have detrimental effects. Furthermore, although ethical guidelines should reduce bias and discrimination in theory, research suggests that those guidelines are largely ineffective. Many guidelines do not go into enough detail to actually change the approach to important concerns such as explainability and transparency, and workers under those guidelines have reported that they do not adhere to them strictly (Hagendorff, 2020). Such self-imposed ethics can also serve as a substitute for binding laws, allowing companies to work around their guidelines as they please. Not only are the guidelines ineffective, but many of the solutions they suggest are difficult to implement and enforce. AI algorithms require several teams of people all contributing to the final product, so in cases where discrimination in AI leads to the court of law, it is difficult to pinpoint liability. Even if developers of AI were able to narrow down who is at fault, increasing liability would discourage companies from using AI, ultimately thwarting the advancement of AI technology.
Researchers have nonetheless proposed many solutions for improving ethics in AI. These solutions target the developer end, the user end, and/or the technology itself. A solution focused on the developer's side is to require ethics courses in technology degrees, or to include ethics in training for technology-related jobs. This would educate developers on the how and why of bias in AI, making them more likely to factor a broader ethical context into the AI technologies they build. For user-end solutions, increasing explainability and transparency means that if a problem were to arise after the release of the technology, users could identify discrimination issues more easily. This also addresses the challenge of predicting all possible negative outcomes of an AI, which is difficult because of the complexity of these systems. Another proposal is institutional, which would entail setting up “tech ambassadors” to collaborate with industry and governments. A further solution is using AI for the common good; AI can play a major role in fixing pressing issues like today’s cybersecurity threats (Timmers, 2019). One promising social application of AI is deepfake technology, AI that can create artificial images, video, and even audio. Deepfake, of course, has both benefits and dangers. Complex deepfake algorithms are already being used to spread disinformation and to defame public figures. However, as with many applications of AI, correcting the problem of discrimination and bias would open the door to positive applications of deepfakes in healthcare, entertainment, and tourism (Kwok and Koh, 2020). A final advancement to consider is the development of AI algorithms that can make decisions based on moral factors. This solution is especially appealing in robotics, as we could teach the AI itself about ethics, either by including moral information and situations in the training data or by choosing an ethical framework to tailor the algorithm to (Wallach, 2011). Finding and implementing effective solutions to bias will let AI advance with far more ease and benefit. It will create a more informed public, a safer technological community for minorities, and more accurate AI programs. We will not be able to harness the full potential of AI until we have consistently found solutions to its ethical problems.
Discrimination has become a growing concern with the expanding applications of AI. Several algorithms have been found to make biased decisions based on race, gender, and other factors without this being the intention of their developers. Bias in AI reflects the biases of society, and there are multiple possible causes of the bias in each case. It remains debatable whether current solutions to discrimination in AI are sufficient. On one hand, several developers and companies that implement AI have proposed ethical guidelines for their algorithms. These guidelines cover important topics that work toward eliminating bias, such as explainability, transparency, and bias mitigation. On the other hand, it has been shown that these ethical guidelines are not effective, and even after their implementation, biased outputs from these algorithms persist. Other proposed solutions include ethics education for developers (in technology degrees or on the job), increasing explainability and transparency to allow the public to call out biased decisions, and directing AI toward the common good. AI technologies can and will be applied to almost every field and will increase the efficiency and accuracy of many processes. However, discrimination and bias remain an issue that threatens the accuracy of AI and presents a danger to those at the hands of such biases.
Works Cited
Hare, Stephanie. “Facial Recognition Is Now Rampant. The Implications for Our Freedom Are Chilling.” The Guardian, Guardian News and Media, 18 Aug. 2019, www.theguardian.com/commentisfree/2019/aug/18/facial-recognition-is-now-rampant-implications-for-our-freedom-are-chilling.
Kwok, Andrei O. J., and Sharon G. M. Koh. “Deepfake: A Social Construction of Technology Perspective.” Current Issues in Tourism, 2020. doi:10.1080/13683500.2020.1738357.
Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds & Machines, vol. 30, no. 1, Mar. 2020, pp. 99–120. EBSCOhost, doi:10.1007/s11023-020-09517-8.
Howard, Ayanna, and Jason Borenstein. “The Ugly Truth About Ourselves and Our
Robot Creations: The Problem of Bias and Social Inequity.” Science &
Engineering Ethics, vol. 24, no. 5, Oct. 2018, pp. 1521–1536. EBSCOhost,
doi:10.1007/s11948-017-9975-2.
EBSCOhost, doi:10.1163/18750230-02901005.
Lin, Ying-Tung, et al. “Engineering Equity: How AI Can Help Reduce the Harm of
Implicit Bias.” Philosophy & Technology, July 2020, pp. 1–26. EBSCOhost,
doi:10.1007/s13347-020-00406-7.
search.ebscohost.com/login.aspx?
direct=true&db=a9h&AN=134748798&site=ehost-live&scope=site.
Timmers, Paul. “Ethics of AI and Cybersecurity When Sovereignty Is at Stake.” Minds & Machines, vol. 29, no. 4, Dec. 2019, pp. 635–645. EBSCOhost, doi:10.1007/s11023-019-09508-4.
Smith, Helen. “Algorithmic Bias: Should Students Pay the Price?” AI & Society, vol. 35, 2020. EBSCOhost, doi:10.1007/s00146-020-01054-3.
Wallach, Wendell. “From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies.” Law, Innovation and Technology, vol. 3, no. 2, 2011. doi:10.5235/175799611798204888.
Wellner, Galit, and Tiran Rothman. “Feminist AI: Can We Expect Our AI Systems to Become Feminist?” Philosophy & Technology, vol. 33, no. 2, June 2020.