Ethics Argument Paper
Jordan Guzman
CST 300 Writing Lab
October 8, 2020
Racial Bias in AI-Driven Policing Technology
Artificial intelligence has been introduced into industries such as healthcare. The promise of efficiency and accuracy that surpass human levels has driven its appeal: AI could diagnose patients more accurately, find the correct formula for vaccines at a faster rate, and discover more effective treatments. However, some applications of artificial intelligence, such as certain tools used in policing, have been controversial. Even though AI has beneficial applications, it is not yet advanced enough to be fair and unbiased, and it therefore should not be used in policing.
Artificial intelligence becomes capable of making decisions on its own through machine learning. Machine learning consists of exposing a computer to vast amounts of data so that it learns to make judgments based on patterns it recognizes in the data it processes (Heilweil, 2020). The computer is initially provided with a set of metrics that describe the object(s) it will learn to recognize. New data are then checked against these metrics, allowing the AI to gain a better understanding of what it is studying. The metrics serve as the foundational knowledge on which the AI builds for further learning.
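The train-then-judge loop described above can be sketched with a toy nearest-centroid classifier. This is purely illustrative: the function names and the data points are invented for this example and do not represent any system actually used in policing.

```python
# Toy illustration of machine learning: the program is given labeled
# examples (its foundational "metrics") and learns to classify new
# points by comparing them to patterns in the data it has processed.

def centroid(points):
    """Average position of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(examples):
    """examples: dict mapping label -> list of (x, y) points.
    Returns one centroid per label -- the 'pattern' learned."""
    return {label: centroid(pts) for label, pts in examples.items()}

def predict(model, point):
    """Assign the label whose learned centroid is closest to the point."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Labeled training data: the starting knowledge the learner builds on.
training_data = {
    "small": [(1.0, 1.2), (0.8, 0.9), (1.1, 1.0)],
    "large": [(5.0, 5.5), (4.8, 5.1), (5.2, 4.9)],
}

model = train(training_data)
print(predict(model, (1.0, 1.1)))  # prints "small"
print(predict(model, (5.0, 5.0)))  # prints "large"
```

The key point for the argument that follows: the model's judgments are entirely determined by the examples it is trained on, so any skew in those examples is reproduced in its predictions.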
The issue of artificial intelligence used in policing lies within the metrics that are
provided to the AI. For instance, in predictive policing algorithms, such as those used in crime-
prediction technology, the metrics do not account for the implicit biases that have historically
affected black Americans. Research has been conducted in using facial recognition software to
find patterns that identify potential criminals through facial features. The metrics used were data
consisting of arrest records and convictions, which ultimately reflect the choices of whom police choose to arrest and how judges choose to rule (Medium, 2020). These metrics have been scrutinized as flawed because of research on how people of color and white people are treated differently by police. According to these studies, people of color are policed more heavily and treated more harshly than white people.
Criminal justice reform has become an especially prominent issue this year. Several videos of violent, and sometimes fatal, interactions between the police and people of color have circulated throughout social media and the news at a high frequency. The events depicted in the videos are said to be motivated by racial bias on the part of the police, which has led to increasing concern that a failure to hold police accountable for their actions will reinforce racially motivated encounters with people of color. Demands to defund the police have been made. Nationwide protests against police brutality have persisted for months, and tensions between protestors and police remain high. Artificial intelligence is presented as a potential solution, but it can be the defining factor that wrongfully incarcerates an innocent person. On the other hand, it
can potentially increase public safety by deterring people from committing crimes, especially
those of a violent nature. It can also potentially keep police safe while on duty. The police can
either benefit from using AI or it can make things worse for them, such as leading to lawsuits or
cultivating a higher level of distrust from the public. Both the police and people of color stand to gain or lose if AI is further implemented in policing, and both are therefore stakeholders in this issue.
The police have a job to enforce the law and to protect the people. Police officers value
protecting the people and bringing criminals to justice. Pulling over motorists for a traffic
violation, arresting violent people, or responding to a call of domestic violence are just a few
examples of ways in which the police – directly and indirectly – protect others. The police work
to dismantle organized crime syndicates and to bring those involved to justice. An example
would be drug trafficking. Drug trafficking has been, and continues to be, an issue that has claimed the lives of many through shootouts, violent executions, and drug overdoses.
With the many duties of police officers, the utilization of artificial intelligence to help
them fulfill these duties seems promising. The capacity for AI to assist in various areas and
enhance efficiency has led police to see it as an essential factor in law enforcement (Walch,
2019). They embrace this technology and the many areas in which they can apply it, such as
surveillance and crime forecasting. For example, the use of facial recognition software helped
Chinese officers identify – and eventually arrest – a person of interest who was in a large crowd at a stadium (Walch, 2019). Cases such as the one in China are important in garnering support from legislators to legally implement AI to help the police work more effectively.
The benefits the police see in using AI to assist with their work are met with scrutiny through claims that racially biased algorithms guide the AI they use to justify their actions.
However, the police refer to current implementations to advocate for using AI in policing. There
are important applications – such as facial and character recognition and data extraction from
digital evidence – that currently support police in safeguarding the public from online human sex
trafficking and child exploitation (Norris, 2019). The use of predictive policing technology will help police respond more effectively to incidents, prevent threats, stage interventions, and divert resources where they are most needed.
There has been a history of racial discrimination and unfair policing of people of color in the U.S. The values of justice and equal treatment held by people of color have been especially molded by their
historical and ongoing fight for equality. Countless organizations, such as Data for Black Lives
and Civil Rights organizations, have made it their goal to advocate for black Americans and fight
against the injustices and unequal treatment motivated by underlying racial biases, implicit or
otherwise. Protestors also take to the streets to demand police accountability and reform within
the criminal justice system. It is important to the advocates of justice and equality that progress
is not undone.
The applications of AI in policing have raised concern for people of color. For this
reason, people of color are opposed to allowing AI in policing. Organizations, such as Data for
Black Lives, have addressed the possibility of AI being used to perpetuate discriminatory
practices especially within the field of law enforcement. ProPublica, a newsroom that performs
crime forecasting tool was in accurately determining who – of 7,000 people arrested in Broward
County, Florida – would commit violent crimes in the future (Angwin et al., 2016). Examples,
such as this one in Broward County, stress the importance for people of color to resist the
implementation of AI in policing.
Discrepancies that imply racial bias within policing tools have made it important to people of color that these tools not be permitted. Using race as a variable has been outlawed in the U.S., but other metrics, such as socioeconomic background, education, and zip code, are used (Heaven, 2020). In an MIT Technology Review article, author Will Douglas Heaven states that although it is illegal to use race as a variable, it is still racist to use metrics such as socioeconomic background, education, and zip code. However, he falls short of explaining why. Acknowledging that people of color have been systemically oppressed – through being subjected to a lower socioeconomic status and a lower-quality education that is correlated with zip code – would add merit to his claim.
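The mechanism by which a legally permitted metric can stand in for race is easy to demonstrate. The sketch below uses entirely invented numbers – the zip codes, arrest counts, and rates are hypothetical and drawn from no real dataset or policing product – to show how a model that never sees race can still score people differently by neighborhood when its historical training records are skewed.

```python
# Synthetic illustration of proxy bias. Race is never given to the
# model, but zip code correlates with race because of residential
# segregation, and the historical arrest records are skewed: zip 11111
# stands in for an over-policed neighborhood, zip 22222 for one that
# was not, at identical underlying behavior.
history = ([("11111", True)] * 70 + [("11111", False)] * 30
         + [("22222", True)] * 30 + [("22222", False)] * 70)

def train_rate_model(records):
    """Learn an arrest rate per zip code from historical records."""
    totals, arrests = {}, {}
    for zip_code, arrested in records:
        totals[zip_code] = totals.get(zip_code, 0) + 1
        arrests[zip_code] = arrests.get(zip_code, 0) + int(arrested)
    return {z: arrests[z] / totals[z] for z in totals}

model = train_rate_model(history)

# Two people with identical behavior receive different "risk" scores
# purely because of where they live -- zip code acts as a proxy.
print(model["11111"])  # 0.7
print(model["22222"])  # 0.3
```

Nothing in this model references race, yet its outputs reproduce the historical skew exactly, which is the concern raised above about socioeconomic background, education, and zip code as metrics.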
In Support of AI
Both sides have arguments for their position on the issue, and those positions can be understood through the lens of ethical frameworks. The police would argue through a Utilitarian framework, which stresses that the right course of action is the one that produces the "greatest happiness of the greatest number." This framework was originally developed by Jeremy Bentham and later refined by John Stuart Mill. Utilitarianism sets out to achieve an ethical standard that can guide people to take actions that do good for the greatest number of people.
Police would argue that implementing artificial intelligence tools in law enforcement will
provide a greater good by more effectively protecting the public and bringing criminals to justice
or deterring them from committing crimes. Through its capacity to endlessly search through the
vast amounts of data generated from cameras, video, and social media, AI can increase public
safety by detecting crimes that would likely go unnoticed by humans and by investigating
potential criminal activity (Rigano, 2018). Those in opposition to AI being used in policing argue that the algorithms themselves are biased and disproportionately rate people of color as more likely to commit crimes. In defense of predictive policing software, the police would likely refer to a study conducted by the developers of PredPol – the crime-forecasting software that some police departments use – which demonstrated that predictive policing neither reduces nor increases existing discriminatory patrol practices (Benbouzid, 2019). By this argument, the AI is not the core problem that needs to be addressed, and abolishing it would compromise the potential for increased public safety, which is the greater good.
If this software is implemented nationwide, then the police can perform better and
potentially help more people. This helps them meet their goal of protecting and serving.
However, there is great risk to using these tools. It can potentially lead to more lawsuits. Police
might lose their jobs if held accountable for acting on the suggestions of a possibly fallible
system. It can also create a greater level of distrust between the police and people of color.
In Opposition to AI
The people of color would argue through an ethical framework of equality as advocated
by John Locke. He believed that equality was a natural thing that transitioned from man in the
state of nature to their state in society, and his philosophy advocated equal rights for all. Locke
famously wrote that man has the right to life, liberty, and property. These were the words that
inspired the Declaration of Independence – a document which was foundational in the fight for
the abolishment of slavery. A related principle – that people should not trade their liberty for more security – fundamentally underlies the issue of whether to allow artificial intelligence in policing.
It has been argued that using AI-driven tools in policing will reinforce, and probably amplify, existing racial biases. ProPublica's analysis of the Broward County data found that the risk algorithm made mistakes at different rates for black and white defendants when it came to predicting who would most likely re-offend (Angwin et al., 2016). Tools such as this one will not grant
people of color equal treatment under the law. Abolishing the use of AI-driven tools in policing would prevent this problem from happening. Those who support the abolishment of these tools argue that it is the right course of action because it would prevent the introduction of another layer of systemic discrimination.
People of color stand to potentially lose the progress that has been made in achieving
equal treatment if these tools are allowed in law enforcement. Problems of racial discrimination
and unequal treatment persist despite the use of bodycams. It is more difficult to prove that algorithms perpetuate racial biases for many reasons; for instance, the companies creating the AI are unwilling to give access to the entire codebase and contend that forcing them to do so would violate their rights. However, people of color do have something to gain: more safety from crime and a possible end to discriminatory practices from police if racial bias within the algorithms is eradicated. If this happens, then the AI can be a catalyst that corrects for the implicit – or explicit – biases of officers conducting police work, thus fostering more trust between the police and the public.
My Position
The application of artificial intelligence to aid police in their work to protect and serve the public is enticing. However, I do not think it is the right course of action. There are
too many risks that have implications for the lives of falsely convicted people. The algorithms
are based on metrics that have historically discriminated against people of color. For instance, a quality education and desirable zip codes are mainly associated with white neighborhoods, in stark contrast to the associations made with black neighborhoods. This can be attributed to white flight, the term used to describe the migration of white people away from urban areas to the suburbs to avoid living near black Americans. Socioeconomic standing, in turn, relies heavily on a quality education. The metrics will not allow the AI to account for this metainformation. Artificial intelligence is currently unable to correct for racial biases, and until it can, it should not be entrusted with police work.
I align most with the people of color because I value equality for all. I do not believe it is justifiable to sacrifice the freedoms and rights of potentially innocent people for the sake
of the promise of more safety. Liberty should not be compromised for more security. A
Utilitarian approach to this problem leaves room for some to suffer the consequences so that the
majority can benefit. Even though it seems unlikely, there is a potential solution.
One possible solution to allow for the use of AI in policing while lessening the presence
of discriminatory algorithms is to make it illegal for law enforcement to use the tools created by
private companies. The private sector is primarily concerned with earning profit, so this can
likely dissuade them from taking more rigorous and time-consuming approaches to ensure that
the algorithms are free of any biases. The creation of these tools should be assigned to the public
sector. The public sector must create diverse teams to work on these algorithms rather than teams consisting mostly of people who do not face discrimination the way people of color do. A
board – consisting of people trained in artificial intelligence and machine learning – should
review the codebase and discuss the potential ethical implications of their findings. From there,
they can decide to either release the tools to the police or continue to perfect them.
References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Benbouzid, B. (2019). To predict and to manage: Predictive policing in the United States. Big Data & Society. https://doi.org/10.1177/2053951719861703
Heaven, W.D. (2020, July 17). Predictive policing algorithms are racist. MIT Technology
Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-
algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. Vox.
https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-
recognition-transparency
Coalition for Critical Technology. (2020). Abolish the #TechToPrisonPipeline. Medium. https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16
Norris, D. (2019, June 12). Artificial Intelligence and Community-Police Relations. Police Chief
Magazine. https://www.policechiefmagazine.org/ai-community-police-relations/
Rigano, C. (2018, September 30). Using Artificial Intelligence to Address Criminal Justice Needs. National Institute of Justice. https://nij.ojp.gov/topics/articles/using-artificial-intelligence-address-criminal-justice-needs
Walch, K. (2019, July 26). The Growth Of AI Adoption In Law Enforcement. Forbes. https://www.forbes.com/sites/cognitiveworld/2019/07/26/the-growth-of-ai-adoption-in-law-enforcement/