
Megan Escurra 

Professor McAdams 

ENC 1102 

8 December 2019 

Why Is There a Demand for Diversity in Artificial Intelligence?

Artificial intelligence (AI) was developed to create new, advanced technology that performs operations like a human but much faster. The idea behind AI was to make tasks that used to be more complicated, such as looking up information on the internet, communicating with other people, or watching a movie, simpler and more accessible. For example, when someone wanted to know about an event that happened in the past, they would have to go to a library or have access to books that contained the specific information being searched for. Now, when people need information for a paper or to settle a debate, they simply pick up their electronic devices and search on Google or any other major search engine. The advancements of AI within recent years, however, have caused major controversy for the technology companies built around it: the lack of diversity among their employees. The concern is that these programmers are the ones who develop the software and the input data that create AI systems. Since the people behind the data and make-up of most AI systems lack the necessary diversity, these systems have developed racial and gender bias.

Looking at large technology companies such as Microsoft, Google, and Amazon, multiple studies have shown that most of their employees are white men. A survey "published by the AI Now Institute, [found] more than 150 studies and reports" (1) showing AI systems causing gender and racial bias. A lack of diversity causes these biases because, when programmers create AI systems, they often do not take into consideration how the computer will respond to people of various races, ethnicities, and genders. The largest offenders in the creation of biased AI are the large technology companies whose products people rely on every day. Some examples of the ways these companies have contributed to gender and racial bias are "image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors" (1). All of these instances show that the data being put into the programs is inadequate.

AI programs work through two types of algorithms: a generative algorithm and a discriminator algorithm. The generative algorithm is "responsible for generative new thinking," while the discriminator algorithm is "responsible for deciding whether new data belongs to the training dataset" (2). The discriminator algorithm is the part that can create gender and racial bias, because it relies on the programmers to input data. Whatever data is input becomes the basis for deciding what does not fit the criteria. For example, for the facial recognition software behind a Snapchat filter, the programmers include example data of the faces the AI system should be able to recognize. Since the filters are for humans, the people creating the dataset exclude animals and only include people. In order to create an accurate and inclusive facial recognition filter, though, the dataset also has to include faces of people of all ages, races, and genders. If the programmers did not include all races and ethnicities, then the filter would fail to recognize some users' faces as a person at all. The effect on those users is that they cannot perform the same tasks or be included along with the majority of people.
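
To make this concrete, the short Python sketch below imitates a discriminator that only accepts new faces resembling its training examples. The function name, feature numbers, and threshold are invented for illustration and do not come from Snapchat or any real facial recognition system; the point is only that widening the dataset, not changing the algorithm, fixes the exclusion.

```python
# A toy sketch of a "discriminator" that only decides whether new data
# resembles its training dataset. The skin-tone-like feature values and
# the threshold are made-up illustrative numbers.

def is_recognized_as_face(new_sample, training_samples, threshold=0.15):
    """Accept the new sample only if it is close to something seen in training."""
    closest = min(abs(new_sample - seen) for seen in training_samples)
    return closest <= threshold

# Training data drawn almost entirely from one narrow group
# (a single number stands in for a face feature here).
non_diverse_training_set = [0.80, 0.82, 0.85, 0.88, 0.90]

print(is_recognized_as_face(0.84, non_diverse_training_set))  # True  -> recognized
print(is_recognized_as_face(0.30, non_diverse_training_set))  # False -> not recognized

# Adding a wider range of example faces fixes the failure
# without changing the algorithm at all.
diverse_training_set = non_diverse_training_set + [0.20, 0.30, 0.45, 0.60]
print(is_recognized_as_face(0.30, diverse_training_set))      # True  -> recognized
```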

The only major change that would actually help limit the amount of gender and racial bias within AI is diversifying the people responsible for developing the algorithms. Even though the change concerns the employees, it is an issue that demands the attention and support of people not directly related to the problem. In a way, everyone is connected to the problem, because artificial intelligence plays a primary role in the everyday life of the majority of the world. With recent advancements in AI, there has been more consideration and inclusion of gender, age, race, and ethnicity. Still, there are parts of inclusion that people tend to overlook, namely "diversity's most vital elements: culture, tradition, and religion" (3). Diversity does not refer only to the color of someone's skin or their gender but also to people's differing morals, practices, and beliefs. The types of AI technology that usually include these other kinds of diversity are household devices. In order to accommodate households that function in different ways, these devices need to be programmed with the capability to adjust, for example by supporting numerous languages and accents, as in the sketch below.
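
As a rough illustration, the sketch below shows one way a household assistant could store per-user language and accent settings. The device behavior, locale codes, and field names are assumptions made for this example, not any real product's configuration.

```python
# A hypothetical sketch of per-user language and accent preferences
# for a household voice assistant.

from dataclasses import dataclass

@dataclass
class VoiceProfile:
    user: str
    language: str      # e.g. "es-MX" for Mexican Spanish
    accent_model: str  # which speech-recognition model variant to load

# One household, several members, several languages and accents.
household_profiles = [
    VoiceProfile(user="abuela", language="es-MX", accent_model="es-latam"),
    VoiceProfile(user="dad", language="en-US", accent_model="en-southern-us"),
    VoiceProfile(user="roommate", language="hi-IN", accent_model="en-indian"),
]

def profile_for(user: str) -> VoiceProfile:
    """Pick the stored profile so the assistant listens in the right language."""
    for profile in household_profiles:
        if profile.user == user:
            return profile
    # Fall back to a default rather than failing for unknown speakers.
    return VoiceProfile(user=user, language="en-US", accent_model="en-generic")

print(profile_for("abuela").accent_model)  # es-latam
```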

While most of the highly influential technology companies have admitted that their products contribute to gender and racial bias, some, such as Google, have tried to sugarcoat and excuse their participation. In a video created by Google (4), the company tries to explain machine learning to people who do not have any detailed or prior knowledge of artificial intelligence. While the video is informative, it ignores the major problems and the reasons bias is so visible in current technology. Companies like Google develop and post these videos to justify themselves and comfort people who lack experience with the ways artificial intelligence and machine learning operate within society. Instead of allowing these multi-billion-dollar companies to treat people as if their concerns are meaningless, gaining knowledge, even basic knowledge, about the ways gender and racial bias happen can allow more people to advocate for change. Since there is no specific group or organization that advocates against biased artificial intelligence, the main non-engaged stakeholders are ordinary people who use AI but ignore the exclusion built into the datasets and the programmers who create the systems. One way these non-engaged stakeholders can help improve and increase diversity in AI is by sending frequent feedback to major tech companies about instances where they were excluded. As an example of how programmers and users of AI can work together to create diversity, with facial recognition software programmers can ask for pictures of people's faces to add to the datasets so that as many races and ethnicities as possible are included; a simple sketch of that idea follows this paragraph. There will always be flaws in new advancements and constant changes in culture and social norms, but that leaves no excuse for artificial intelligence technology not to keep up and continuously develop new ways to include everyone.
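
The sketch below is a hypothetical picture of that collaboration: volunteered photos are folded into a training dataset and its coverage is re-checked. The file names, group labels, and helper functions are made up for illustration and do not describe any company's actual pipeline.

```python
# A hypothetical sketch of widening a training dataset with user submissions.

from collections import Counter

# Existing dataset: each entry is (image_file, self-reported group label).
training_dataset = [
    ("face_001.jpg", "group_a"),
    ("face_002.jpg", "group_a"),
    ("face_003.jpg", "group_a"),
    ("face_004.jpg", "group_b"),
]

def coverage_report(dataset):
    """Count how many example faces each group currently has."""
    return Counter(label for _, label in dataset)

def add_user_submission(dataset, image_file, group_label):
    """Fold a volunteered photo into the dataset to fill coverage gaps."""
    dataset.append((image_file, group_label))

print(coverage_report(training_dataset))   # group_a is heavily over-represented

# Users who were previously excluded volunteer example photos.
add_user_submission(training_dataset, "submitted_101.jpg", "group_c")
add_user_submission(training_dataset, "submitted_102.jpg", "group_b")

print(coverage_report(training_dataset))   # coverage becomes more balanced
```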
Works Cited 

1. https://www.theguardian.com/technology/2019/apr/16/artificial-intelligence-lack-diversity-new-york-university-study#:~:targetText=Lack%20of%20diversity%20in%20the,New%20York%20University%20research%20center.&targetText=The%20biases%20of%20systems%20built,field%20itself%2C%20the%20report%20said.
2. https://www.vice.com/en_us/article/8xzwgx/racial-bias-in-ai-isnt-getting-better-and-neither-are-researchers-excuses
3. https://www.autodesk.com/redshift/diversity-in-artificial-intelligence/
4. https://youtu.be/59bMh59JQDo
