
Machine Learning

Reading Assignment 1
Fall 2021

Group Members
Omer Khalid - 19030026

Iqra Latif - 20030019

Mustafa Alam - 19030004

Mahrukh Zubair - 19030012

Areebah Mushtaq - 19030023

Ismail Shabbir - 22100321


Paper 1: The Racist Algorithm? Anupam Chander - University of California, Davis School of Law

In today’s world of automation and data-centred algorithms, the problem of discrimination and racism
discussed in this paper, The Racist Algorithm?, is significant because people need to be able to trust the
decisions of algorithms without doubting that they are biased or discriminatory. The problem is also important
in the face of increasing scrutiny of social media giants for promoting, or turning a blind eye to, discrimination
and biased news on their platforms. The author motivates the problem well by discussing its importance and
impacts.

The author first explains Frank Pasquale’s narrative about decision-making by algorithms. He then
discusses how racism can become embedded in these algorithms and ways to eliminate it. The author cites
multiple studies on this topic and also gives real-world examples.

Summary
The paper reviews the book “The Black Box Society: The Secret Algorithms That Control Money and
Information” by Frank Pasquale and discusses his views on decision-making algorithms. The paper
also offers an alternative solution which the author deems more adequate. Pasquale argues that black-box
algorithms are taking control of the information we consume without any scrutiny or policing, and he
wants transparency in the way algorithms work in order to eliminate their discriminatory (racial) decisions. The
author, in contrast, claims that the black box itself is neutral: racism comes from the real-world data it learns from,
which is generated by humans. So instead of the algorithms, it is the data that should be transparent and open to
scrutiny. The author divides his review into three parts.

In Part I, Pasquale’s argument that replacing human decisions with algorithms increases discrimination is
reviewed, and it is argued that algorithms can instead be used to minimize discrimination in decisions made by
humans. Both Pasquale and Anupam agree that programmers do not intentionally embed racism into
algorithms, although Pasquale’s concerns can be read otherwise. Even if a programmer wanted to do so,
other members of the team would likely catch it during code reviews and debugging. At
the same time, the author does not deny the existence of racist and sexist programmers. Pasquale believes
that algorithms, being logical entities, should not be infected by human biases. Racism and discriminatory
behaviour, however, often come from the subconscious and can therefore hide in the inputs of the black box,
affecting its decisions.

In Part II (Viral Discrimination), the presence of discrimination in algorithms is acknowledged using the
example of Google’s autocomplete function producing sexist results because it ingests real-world data in
which discrimination exists. Real-world data containing discriminatory information can cause real-world
inequalities to be replicated in algorithms. Classification algorithms learn from past data and are used to
predict unknown labels in the future, so historical data biased against a certain group replicates its
biases in the algorithm. Anupam illustrates these routes of infection (the embedding of racism) in an algorithm
with a figure.
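This mechanism can be illustrated with a minimal, hypothetical sketch (the synthetic hiring data, feature names,
and bias rates below are our own assumptions, not taken from the paper): a classifier trained on historically
biased decisions reproduces the bias on new cases even when the protected attribute is removed from its
inputs, because proxy features carry the bias into the model.

```python
# Minimal sketch (assumptions ours, not from the paper): a classifier trained on
# historically biased hiring decisions reproduces that bias, even though the
# protected attribute is never given to the model directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)              # protected attribute (0 = majority, 1 = minority)
skill = rng.normal(0, 1, n)                # true qualification, identical across groups
zipcode = group + rng.normal(0, 0.3, n)    # proxy feature correlated with the group

# Historical labels: past (human) decisions rewarded skill but penalised the minority group.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, zipcode])      # note: 'group' itself is excluded from the features
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The minority group's predicted hire rate is lower despite identical skill,
# because the proxy feature lets historical discrimination leak into the model.
```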

In Part III the author proposes solutions for the above-mentioned problem and disagrees with Pasquale’s
remedy of making the “black box” transparent and open to scrutiny. The author proposes that, to understand
what is causing algorithms to make biased decisions, we should instead make the inputs of the algorithms
transparent. Anupam argues that transparency of algorithms can compromise trade secrets and invite
manipulation, and that the algorithms might be too complex to understand anyway. The author emphasizes
rectifying the problems instead of pointing fingers at their sources: affirmative action should not focus on
identifying the “how” of discrimination but only on eliminating it.

The author concludes that algorithms do not make completely neutral decisions merely because they are logical
entities. This does not make them unreliable, however; affirmative action can help eliminate the biases.

Discussion
The author’s proposed approach, in contrast to Pasquale’s transparency of algorithms, is to instead look
closely at the data these algorithms use; more precisely, at the inputs and outputs of these algorithms.

Specifically, the author proposes two solutions: first, interventions through affirmative action, e.g. not
showing pictures of drivers in the Uber app; and second, annotating the data being used with known
discriminatory patterns so that affirmative action can be taken. The first approach is appropriate and
implementable, as we already see social media corporations using such techniques to stop the propagation of
fake news (Twitter blocking content). However, the second solution might be tricky because the data taken as
input by these systems is usually unlabeled and unstructured, and typically does not have racial or other
discriminatory bias explicitly marked in it. Combined with how large today’s datasets are, this might become an
exercise as complex as reviewing the algorithms themselves. Overall, the solutions proposed by the author are
relevant and are one of many ways to address the problem of racism and discrimination. However, no
experiments were conducted and no scientific, replicable solution was presented.

Although the proposed approach is relevant, it cannot be completely discounted that the algorithms themselves
might also be biased. Pasquale’s approach focused entirely on transparency of algorithms, while the author
makes the case for transparency of data. Although it is mentioned that programmers might be biased, the point
that these biases might make their way into algorithm design is missed. Hence, a balanced approach involving
scrutiny of both data and algorithms is required.

The paper also does not discuss the privacy issues that may arise from the transparency of data. The
data gathered on every single internet user today is massive and should not be traceable back to a person.
The author also does not address what kind of data should be open to transparency: should all data be
available for review and scrutiny, or should this be restricted?

It is also pertinent to understand that what is considered bias in one culture might not be considered
discriminatory to the same degree, or at all, in another. So any transparency of data should also cater to
religious and cultural contexts.

Governments need to make efforts to bring equality to society in general, to educate their people, and to
control sources that inculcate discrimination through the media. Corporations also need to facilitate minorities
and women by providing the necessary support and removing the hindrances in their way to the mainstream.

Comments:
Although the author raises good points about Pasquale’s arguments regarding transparency of algorithms,
there are a few points that need to be discussed more:
● We think that there needs to be a more balanced approach between transparency of algorithms and
data. The point that algorithms can be biased cannot be completely disregarded.
● Affirmative action is one of the solutions the author discusses, but there are more questions about it
that need to be answered. Who will be responsible for deciding and exercising affirmative action? How
will it be ensured that the authority which decides on affirmative action is not itself biased? How will the
scope and jurisdiction of these actions be decided? How will they cater to specific cultural needs?
● These questions also open a discussion about whether there need to be policies and laws governing the data
these algorithms and systems ingest, and whether a government or other entities should be involved in this
process.
● We also believe that once a system has been corrected and biases have been removed, corrective measures
need to be taken for cases where a biased decision by the algorithm has already affected an individual or
community.
