Spring 2022
Gender and Racial Bias in Artificial Intelligence
Introduction
Technology has changed the world as we know it. As with everything, there have been positive and negative effects from the development of new technologies. One example of the negative impact of technology can be seen within the field of artificial intelligence. Artificial intelligence can be controversial in some situations, and one concerning aspect is its potential for bias with regard to gender and race. Numerous reports have indicated that bias appears when these systems process images of people of certain races or genders. The social impact of this phenomenon is concerning because large corporations are introducing artificial intelligence into their products and processes. A bias found within one of these processes, for example the portion that handles hiring, could further marginalize people within already marginalized groups and reduce their chances of getting hired. This, in addition to other concerns in the field of artificial intelligence, is what we will explore in this paper.
Literature Review
There are many recorded cases that display this issue in full effect, as detailed in the review written by Daneshjou, Smith, and Sun. Through their research, they examined the level of transparency in the data sets used to train various artificial intelligence systems. They found that of more than seventy unique studies, only fourteen described a person's ethnicity or race within the data set, and in only seven was the data labeled with a person's skin tone (Daneshjou et al., 2021). This sparsity of data-set characterization amounts to a lack of transparency. Another study focused on the use of artificial intelligence in the healthcare industry.
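Those proportions can be restated numerically. The short sketch below uses the counts reported above; the field names are our own labels for illustration, not Daneshjou et al.'s terminology.

```python
# Counts as summarized from Daneshjou et al. (2021): of 70 studies,
# 14 described race or ethnicity and 7 labeled skin tone.
studies_total = 70
reported = {"race_or_ethnicity": 14, "skin_tone": 7}

# Fraction of studies reporting each kind of demographic metadata.
coverage = {field: count / studies_total for field, count in reported.items()}

for field, fraction in coverage.items():
    print(f"{field}: {fraction:.0%} of studies")  # 20% and 10%
```

Even this tiny calculation shows why the review's authors describe the field as opaque: four out of five training-data descriptions say nothing about race or ethnicity at all.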
Nelson (2019) wrote about the potential hazards that artificial intelligence could bring in the future. What if an artificial intelligence used to hire potential nurses only chose women? This could happen under current model-training methods. Another relevant scenario Nelson covered: what if a revolutionary artificial intelligence test that detects skin cancer did not work for African Americans because of the training the system received? The argument has certainly been raised that artificial intelligence and facial recognition are newer technologies and perhaps simply inaccurate in general. While this sounds compelling at face value, research shows that if the user is a white male, the software is accurate ninety percent of the time, with accuracy dropping sharply for women and for people with darker skin (Lohr, 2018). This creates cause to wonder: what level of bias is being programmed into facial recognition technology?
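The disparity Lohr reports only becomes visible when accuracy is broken out by demographic group instead of averaged over the whole test set, an approach often called disaggregated evaluation. The sketch below is a hypothetical illustration, not code from any real facial recognition system; the predictions and group labels are invented to show how an aggregate number can hide a large gap.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute overall accuracy plus accuracy broken out per group.

    predictions, labels, and groups are parallel lists: one model
    prediction, one ground-truth label, and one demographic group
    tag per test example.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

# Invented gender predictions for two demographic groups, "a" and "b".
preds  = ["m", "m", "m", "f", "f", "f", "m", "m"]
truth  = ["m", "m", "m", "f", "m", "f", "f", "f"]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

overall, per_group = accuracy_by_group(preds, truth, groups)
print(f"overall: {overall:.1%}")   # a single aggregate number
print(f"per group: {per_group}")   # reveals the gap between groups
```

In this invented example the overall accuracy of 62.5% conceals the fact that group "a" is classified perfectly while group "b" is classified correctly only a quarter of the time.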
These are important questions that demonstrate the influence artificial intelligence currently has. Walters and Novak write about artificial intelligence's rapid development and how, eventually, every office, home, and business will contain some form of artificial intelligence. More specifically, they discuss the impact that artificial intelligence could have in the area of law and technology. Currently, marginalized communities are negatively affected by the way laws are structured. As artificial intelligence evolves, there is the potential either to correct these disparities or to deepen them.
Research Question
How do training data and algorithms influence the level of racial and gender bias in artificial intelligence, how do the resulting outcomes affect marginalized groups, and lastly, how can this problem be solved?
Research Design
● Have you had any experiences with artificial intelligence technology? If so, how well did it work?
● Would you trust artificial intelligence technology to correctly train the dogs in your shelter?
● Do you plan on using AI in the future, and what are you worried about when it comes to the AI?
● Do you think AI should be used in the future with nonprofit organizations and businesses?
● Do you believe AI could be used to make distribution decisions for who receives the computers?
● In your opinion, is there any inherent bias with AI, and do you think it could have a negative impact?
Our target research audience is anyone who is aware of or involved with the field of technology, preferably members of underrepresented groups such as people of color and women. We plan to recruit these participants through our service learning sites.
Our interviews will be held through email. Each team member will be responsible for
interviewing someone from our sites. If we can't find someone in our target audience from our site who is willing to volunteer, then we will branch out and find other people to interview who better fit our research target audience. Our deadline for interviews will be before the end of
week 5 in order to efficiently analyze the data in time for the final paper.
Service Organizations
One of the service learning sites our group is volunteering for is Baja Dog Rescue, in the Baja California and San Diego area. Their mission is to give homeless dogs a second chance at life by saving them from the streets and finding them homes. They have helped over 11,000 dogs to date and have a vet on duty 24 hours a day, 7 days a week. This site isn't as closely linked to our research topic as the other sites our group is volunteering for, but they do use technology often to conduct their volunteer work. Because their mission centers on helping others, they may still have opinions or experience that can bring insight to our research paper even though they're not closely linked with our topic. If not, we will branch out and ask peers on social media whether they've had issues with AI technology working for them in the past.
These people will be well suited to interview because they'll only reach out to answer the questions if they're genuinely interested.
Another one of the organizations that our group is volunteering for is Computers 2 Kids, a nonprofit organization in San Diego. Their mission is to provide computer access to families in need and to promote computer literacy. The organization hopes to increase equal opportunity in technology by collecting donated computers, laptops, monitors, and more for the purpose of refurbishing and then distributing them to disadvantaged families with school-aged children. While the organization is not directly related to artificial intelligence, it does address the inequalities present in technology and works to correct them. The volunteers and employees who work at Computers 2 Kids have experience with technology and would be able to provide their perspective on how AI would work within the organization. I believe their input would be valuable in getting accurate data for this project. Another possible option would be to interview fellow students or anyone in the technology field. Their experience with technology would be useful in getting an idea of how AI will work in the future.
The last organization on the list that our service learning group is volunteering for is the San Diego Sheriff's Department. In recent years, law enforcement activities have been viewed very negatively by some, and artificial intelligence could make things worse if it isn't handled properly. More specifically, people of color have historically had conflicts with the police, so they are more at risk of suffering from artificial intelligence bias in the hands of law enforcement. Volunteering with this organization will give us good insight into how artificial intelligence is currently being used in a law enforcement setting. It will also be a great opportunity to ask questions about the issues that artificial intelligence is expected to bring in the future.
Conduct Research
One of the people interviewed for the research portion of this project was an IT
Supervisor for the San Diego Sheriff’s Department. He was a good candidate for an interview
because he’s in a position where he can utilize AI in a way that affects over a million people.
(See Appendix A)
Another one of the people interviewed was a supervisor from an animal shelter called
Baja Dog Rescue. His name is Adam and he was an ideal interviewee to speak on our topic
because he uses technology often to help run the shelter and raise awareness. He's also a white male, like the majority of people in technology, so it was interesting to see where he stood on the issue. (See Appendix B)
The last person interviewed has a background in political science and is currently working in customer service. He was a promising interviewee because he provided a different perspective on AI: he could look at both the business side of the issue and the political side. (See Appendix C)
Findings
In the interviews, we chose research questions that we hoped would provide us with an
overall idea of how each person felt about artificial intelligence. Although the sample size is
small, we were optimistic that this could give us a better understanding of the attitudes society
has towards technology as a whole. From the interviews, there were a few recurring themes that
presented themselves.
While each person interviewed came from a different field, there was a consensus that artificial intelligence can be beneficial to both nonprofit organizations and businesses. An important caveat, however, is that the AI would need to be carefully selected and monitored to ensure there is no bias. Please see Appendix A for more information on artificial intelligence and potential bias, and Appendix C for potential solutions for neutrality.
The main theme overall is that this technology would have a positive impact on any organization that implements it, but AI should be constrained to a certain degree: there need to be limits on AI in addition to human supervision. The clients we interviewed were also optimistic about the potential benefits of AI. Please see Appendix B for a client who is inexperienced with AI but trusting enough to implement and rely on it. This interviewee differed from our other two in that regard.
A major selling point for AI seemed to be its efficiency and its ability to cut down on costs for these organizations and businesses. The main concern did not appear to be with the actual abilities of AI but more with the beliefs of the person behind it: an AI program is shaped by its programmer. Two of the interviewees felt that this is an issue that needs to be recognized prior
to implementing this technology. Overall, based on the data collected we have found that all
interviewees see the benefits of AI and would like to implement it into their businesses.
However, some are skeptical of relying on the technology too heavily due to the potential biases
the AI technology could have picked up during the development and training process.
Conclusions
From the findings in this report, one could make the case that a portion of the population does not have significant experience with artificial intelligence. That being said, there seems to be a cautious but positive attitude toward it. While many movies and novels outline the potentially disastrous consequences of using artificial intelligence in society, people seem more hopeful about the future benefits and the ways this technology could improve lives. However, the time constraint and the people available for us to interview did not allow for an opinion from a woman or a member of a minority group. We did answer the research question and acknowledged a bias in artificial intelligence based on our interviews, but we were unable to answer the question to its fullest extent without first-hand opinions and experiences from those groups.
Despite this, the interviews we conducted did provide a lot of insight. We noticed that the main concern seems to lie more with the person responsible for creating the artificial intelligence's software than with the artificial intelligence itself. It would be irresponsible to ignore the biases, even unintentional ones, that could come into play. Artificial intelligence should be implemented at a slow pace in order to ensure that it is capable of making the right decisions for these organizations and businesses. Additionally, at least for a certain period of time, there should be close supervision to make sure that the artificial intelligence is performing effectively and as expected. Without this much-needed supervision, we will continue to find cases of bias.

Artificial intelligence is a technology that is being used more every day. As discussed above, without the right supervision it can turn into something that goes against every view of social justice due to its inherent bias. This is something that should be kept in mind as the technology continues to develop.
Recommendations
Through our research we discovered that artificial intelligence will, in some way, end up reflecting the person who created it. This could introduce an unplanned bias into the artificial intelligence. That is why we recommend that, in the future, every artificial intelligence be supervised by someone other than its creator. Ideally, the people supervising would be a diverse group that can perform checks and balances on one another.
One thing that our government could do to prevent problems such as artificial intelligence bias would be to set up a framework for overseeing AI. This framework could be administered by a board made up of an unbiased, diverse group. The board would work together to observe the results generated by artificial intelligence. They would then be able to decide together whether the results produced by the AI are within the bounds of what is acceptable or whether they are biased.
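One concrete check such a board could run is the "four-fifths rule" from U.S. employment selection guidelines: if any group's selection rate falls below 80% of the most-favored group's rate, the outcome is flagged for human review. The sketch below is a hypothetical illustration; the function name and the decision counts are invented for demonstration, not part of any existing oversight framework.

```python
def disparate_impact_flags(selected, total, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's selection rate (the four-fifths rule).

    selected and total map each group name to the number of people
    selected and the number of applicants in that group.
    """
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical hiring outcomes produced by an AI screening tool.
selected = {"group_a": 50, "group_b": 20}
total    = {"group_a": 100, "group_b": 80}

flags = disparate_impact_flags(selected, total)
print(flags)  # group_b is flagged for review, group_a is not
```

In this invented example the screening tool selects group_b applicants at only half the rate of group_a, well below the four-fifths threshold, so the board would flag the result for closer inspection.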
Currently, the state of California is doing something similar with the RIPA (Racial and Identity Profiling Advisory) Board. The board was created for the purpose of eliminating racial and identity profiling and improving diversity and racial and identity sensitivity in law enforcement. The RIPA Board collaborates with agencies, community stakeholders, and academic researchers to prevent bias within law enforcement. Something like this being set up for artificial intelligence could help catch biased results before they cause harm.
References
Daneshjou, R., Smith, M. P., Sun, M. D., Rotemberg, V., & Zou, J. (2021). Lack of transparency and potential bias in artificial intelligence data sets and algorithms. JAMA Dermatology, 157(11), 1362. https://doi.org/10.1001/jamadermatol.2021.3129

Lohr, S. (2018, February 9). Facial recognition is accurate, if you're a white guy. The New York Times. https://courses.cs.duke.edu/spring20/compsci342/netid/readings/facialrecnytimes.pdf

Nelson, G. S. (2019, July 1). Bias in artificial intelligence. North Carolina Medical Journal.
Appendix A
Interview Transcripts
because he’s in a position where he can utilize AI in a way that affects millions of people.
F: Do you plan on using AI in the future, and what are you worried about when it comes to the AI?
S: The biggest concern should be transparency about how AI is used, and communicating that publicly. Further, AI models may have inherent biases based on the data that is used to train the
models. This is especially a potential issue if any AI systems are employed to identify race and
gender. Understanding the training data of specific models would help mitigate issues.
F: Do you think AI should be used in the future with nonprofit organizations and businesses?
S: Yes. AI is a very broad term, but if we define it as a set of tools that refer to systems that
mimic human intelligence then there are many ways a non-profit organization can benefit.
These types of systems can augment tasks that people do every day to accomplish more, better
and faster. Ultimately this can lead to the more efficient spending of public funds and a safer
public. An example of this may be a computer vision system that can work with other event-based data from sensors to inform decisions a 9-1-1 dispatcher needs to make. This could lead to faster and better-informed emergency responses.
Appendix B
I decided to interview my site supervisor. His name is Adam and he was an ideal interviewee to speak on our topic because he uses technology often to help run the shelter and raise awareness. He's also a white male, like the majority of people in technology, so it was interesting to see where he stood on the issue.
M: Have you had any experiences with artificial intelligence technology? If so, how well did it work?
A: No, none.
M: Would you trust artificial intelligence technology to correctly train the dogs in your shelter?
A: Yes.
Appendix C
Interview Transcript
The interviewee currently works in customer service and has a Master's degree in political science. I was unable to interview someone from my site due to the increased traffic their organization has been receiving because of the pandemic; both my site supervisor and the volunteers had too little time to participate.
H: Do you plan on using AI in the future, and what are you worried about when it comes to the AI?
K: I believe in the future we will implement AI as a means for increasing customer satisfaction
and efficiency. I am slightly concerned about the potential for bias within AI which is why I think
that regardless of how AI is used in business, there needs to be a group or individual in charge
of overseeing the AI and reviewing the results for accuracy. I don’t think AI should be completely
relied upon and instead should just be seen as another tool to be used to improve customer
service.
H: Do you think AI should be used in the future with nonprofit organizations and businesses?
K: I think both businesses and nonprofits will eventually be using artificial intelligence to some degree. It seems inevitable at this point, and I think it will help cut down on inefficiency. I believe there may be some job loss but, hopefully, more people will be hired to make the more important decisions.
H: Do you believe AI could be used to make distribution decisions for who receives the computers?
K: While I do think AI could be used to make distribution decisions, I don’t see it as the best use
of AI. There are too many variables in that kind of decision making to leave it up to artificial
intelligence which, at the end of the day, is software that was written by someone who has their
own beliefs and biases. I believe artificial intelligence could be better used to examine
computers as they come in. It could then make decisions on the computer's components and whether they are worth refurbishing.
H: In your opinion, is there any inherent bias with AI, and do you think it could have a negative impact?
K: I think there is a certain amount of bias with any kind of software; someone had to write the program to create the AI. That being said, in my opinion, there is more likely to be a positive impact than a negative one.