
Guzman III 1

Francisco Guzman III

ENGL 1302 – 207

Ana Mendoza

3 February 2024

De, S. J., and A. Imine. "Consent for Targeted Advertising: The Case of Facebook." AI & Society, vol. 35, 2020, pp. 1055–1064, https://doi.org/10.1007/s00146-020-00981-5.

The authors of this article argue that, under normal circumstances, the amount of data collected about a user of any given website is kept to a minimum because the website must follow GDPR requirements that give the user full control over what data the site can use. Popular websites like Facebook, however, cross this line through ambiguous consent mechanisms that trick users into providing more personal data than they normally would. Facebook does this because it is an ad-based platform, and the data it collects is far more personalized than on other platforms: Facebook can gather facial ID, user interests, behaviors, recently searched terms, and many other personal details. All of this information is fed into an algorithm that uses it to show tailor-made advertisements to users. Although Facebook does present consent mechanisms that ask users for permission to gather this data, it is not clear to users how much data they are allowing the company to have. This can be dangerous, as it may cause users to be sorted into echo chambers of political ideas or shown inappropriate advertisements, and data leaks become a greater risk when the algorithm holds so much of a user's data. Because this article relates to how AI algorithms shape the opinions and beliefs of people who use social media and the internet, my paper will benefit from its account of how Facebook uses a questionable consent mechanism to feed user data into an algorithm.

Du, Ying Roselyn. “Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy:

A Qualitative Study of AI-Powered News App Users.” Journal of Broadcasting & Electronic

Media, vol. 67, no. 3, 2023, pp. 246–273, https://doi.org/10.1080/08838151.2023.2182787.

Ying Roselyn Du argues that with the increasing use of algorithms in news apps and recommendations, the personalized information the AI is fed tends to produce biased news and can form echo-chamber communities in which users are only shown the news and opinions they already like. This can leave users who are unaware of the algorithm missing important information and receiving a one-sided view of the news. The article then presents responses from a survey that asked participants many questions about algorithmic news sites: whether they feel more informed, whether they feel more in control, their experience with the app, whether they feel less proactive in information-seeking, whether they trust the recommended content, and whether they know about the algorithm and how it affects the information they are given. For all of these reasons, I believe this article will further support my point that the algorithms used on many popular websites can shape people's opinions and beliefs, for better or for worse.

Hajli, Nick, et al. “Social Bots and the Spread of Disinformation in Social Media: The Challenges

of Artificial Intelligence.” British Journal of Management, vol. 33, no. 3, 2022, pp. 1238–1253,

https://doi.org/10.1111/1467-8551.12554.

This article examines the role of AI and social bots and their harmful effects on internet users by analyzing the amount of disinformation present on social media sites. It shows how the rise of social bots has accelerated the spread of both true and false news on social media platforms, and it finds that although social bots tend to be less credible than actual human sources, they have still had a great impact on society and public opinion. The authors reach this conclusion about the negative effects of social AI bots through Actor-Network Theory, which serves as "a toolbox to study meaning production going from abstract structures – actants, to concrete ones – actors" (1239). This article serves as another example of how AI influences our behaviors and beliefs through a variety of means, such as bots on social media sites.

Farahat, Ayman, et al. "How Effective Is Targeted Advertising?" Proceedings of the 21st International Conference on World Wide Web, ACM, Apr. 2012, https://doi.org/10.1145/2187836.2187852.

This article provides numerical evidence of the effect of targeted advertisements in several fields, such as credit card, reality TV, adventure movie, and college advertisements. The quantitative data provided by this study shows that certain biases must be accounted for when trying to determine the effectiveness of targeted advertisements. Using click-through rates (CTR), the paper finds that, contrary to advertisers' beliefs, a more complex targeting algorithm may actually harm the advertiser, which hints at the possibility that people are becoming more aware of the many ways AI, and more specifically online algorithms, are used to try to mold people's beliefs through the collection of user data and tailor-made advertisements. This suggests that although AI algorithms have become good at manipulating users' beliefs and behaviors through precise targeted advertising, people have grown aware of this fact, which lessens the harmful consequences of such AI usage.



Ledwich, Mark, and Anna Zaitsev. "Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization." arXiv.org, 24 Dec. 2019, arxiv.org/abs/1912.11211.

This study examines the claim that YouTube's algorithm leads to echo chambers of radicalized viewpoints, using quantitative data gathered by the authors, such as traffic on the site, channel views, and channel clusters. The study concludes that YouTube's algorithm has the opposite effect: it tries to steer users away from radicalized content, and YouTube bans users who push those viewpoints on the site. However, while YouTube takes protective measures to ensure that radicalized bubbles do not form on its platform, measures such as banning radical users actually push those users onto other platforms whose algorithms may be less moderating, inadvertently causing radicalized beliefs to form into bubbles on those new sites. While this study shows that YouTube's AI algorithm does not force these views and beliefs onto its users, other sites may take its place, which would ultimately lead to the spread of misinformation and radical beliefs on the internet.

Cavallo, D., et al. "Effectiveness of Social Media Approaches to Recruiting Young Adult Cigarillo Smokers: Cross-Sectional Study." Journal of Medical Internet Research, vol. 22, no. 7, 2020, https://doi.org/10.2196/12619.

This article shows the effect of social media and AI algorithms on the young adult population and how they can encourage young adults to smoke. The study uses surveys to quantify the number of young adults affected by social media recruitment and AI recommendation algorithms in shaping their beliefs and viewpoints about smoking. It shows that social media platforms are effective channels for companies, such as tobacco companies, to reach their targeted audience of young adults, as the platforms help target this demographic through AI algorithms that use data collected from each individual user to place personalized ads in their feeds and recommendation tabs. This article contributes to my paper by showing the negative effects that AI algorithms have on the population by influencing their beliefs and viewpoints through ads and echo chambers.

Galaz, Victor, et al. "AI Could Create a Perfect Storm of Climate Misinformation." arXiv.org, 2023, https://doi.org/10.48550/arxiv.2306.12807.

This article argues that the release of recent AI software such as ChatGPT, with its ability to produce human-like text and realistic images, has brought a rise of misinformation on an unprecedented scale on social media and the internet in general. The study explores the neuroscience of false beliefs and why people adopt them, which ties in with how algorithms diffuse and perpetuate misinformation on social media. All of this ultimately leads to misinformation about climate change spreading online through AI's ability to create believable falsehoods for the public. With the rise of new popular platforms, it becomes increasingly difficult for misinformation research to keep pace with technological and social developments. This article demonstrates how the rise of AI has made misinformation easier than ever to spread, which shows the negative effects AI algorithms have on the general population.



Gabarron, E., et al. "COVID-19-Related Misinformation on Social Media: A Systematic Review." Bulletin of the World Health Organization, vol. 99, no. 6, 2021, pp. 455–463A, https://doi.org/10.2471/BLT.20.276782.

This article goes into detail about the misinformation found on popular social media sites about the COVID-19 pandemic and the disease itself. For the studies used in the article, the authors reviewed publications on COVID-19-related misinformation found on popular social media sites during the beginning of the pandemic. They searched for popular key terms and concluded that a small proportion of the studies reported that social media sites contained misinformation about COVID-19. The study also proposed solutions to the spread of misinformation on social media, such as government intervention on the sites and the implementation of fact-checking. The authors also propose that social media users should check the validity of the information they find before they spread misinformation unwittingly. This article shows that AI algorithms on social media sites can sometimes show users misinformation based on the data collected about them, which can lead to negative consequences for the population.



Bojić, Ljubiša M, et al. “The Scary Black Box: AI Driven Recommender Algorithms as The Most

Powerful Social Force.” Issues in Ethnology and Anthropology, vol. 17, no. 2, 2022, pp. 719–744,

https://doi.org/10.21301/eap.v17i2.11.

This journal article argues that AI algorithms can shape the population by personalizing the information individual users are exposed to online, driven by the goals of selling products and increasing the time users spend on a website to boost traffic. This, the article argues, can lead to the formation of echo chambers online through the perpetuation of misinformation and radical ideas within a group, as the algorithm continues to recommend the same viewpoint to the user to increase engagement. This produces confirmation bias in these communities, since users are only shown what they want to see, further entrenching any false beliefs they may hold about a topic. As a result, AI algorithms play a vital role in shaping our current social climate through the internet, which can have negative effects on the population.

Bojić, Ljubiša M, et al. “Worrying Impact of Artificial Intelligence and Big Data Through the Prism of

Recommender Systems.” Issues in Ethnology and Anthropology, vol. 16, no. 3, 2021, pp. 935–957,

https://doi.org/10.21301/eap.v16i3.13.

This article discusses internet addiction and echo chambers as negative consequences that recommender algorithms have on the population. The authors also show that, due to the recent COVID-19 pandemic, the number of people who use the internet has increased drastically, in turn increasing the risk of people being exposed to the negative aspects of AI recommender algorithms because of a lack of knowledge about the internet and AI algorithms in general. They emphasize that more effort should be put into researching the phenomenon of echo chambers and the misinformation that is ever more present in our current online ecosystem. They also point out that, due to the recent controversy involving Facebook and US regulatory authorities over data and privacy breaches, more people have become aware of the dangers that AI recommendation algorithms pose to an uninformed public. This article helps further demonstrate the dangers of AI algorithms and their effect on the general population.
