
Machine Learning Essay


Online Echo Chambers, Extremist viewpoints, and the Curation of The Digital-Self

Student Name: Kevin Singpurwala


Student Number: 16359146

Module: IS30350 The Digital Self


Module Coordinator: Dr Stefanie Havelka

University College Dublin, College of Social Sciences & Law

December 8, 2023

Contents

Introduction
    Social Construction of Identity
    The Role of Machine Learning Algorithms
Research Design and Theoretical Framework
    Filter Bubbles and Echo Chambers
    Social Cohesion and Groupthink
Unveiling the Perils: Risks in the Misuse of Machine Learning Algorithms
    Minimal human intervention
    Manipulation of the Populace
Conclusion

Introduction
Question 9: “Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention”. Critically explore how machine learning can regulate the curation of the digital self.

Focus Point: Online echo chambers, extremist viewpoints, and the curation of the digital self using machine learning algorithms.

Social Construction of Identity

In Stets and Burke (2000), the authors define identity through identity theory and social identity theory: “social identity theory and identity theory, the self is reflexive in that it can take itself as an object and can categorise, classify, or name itself in particular ways in relation to other social categories or classifications. This process is called self-categorisation in social identity theory”.
We will define the digital self as one’s online social media account profile. This
persona can often be very different from one’s real-life persona. Belk (2013) stated
that the digital self is an extended self - “As with other aspects of the digital extended
self, the battle is to both adapt to and control all of the new possibilities for
self-presentation. And in a visible shared digital world, such control becomes
increasingly difficult”.
One can also have multiple social media accounts, each of which can have
vastly different personas. The ability to remain anonymous creates an environment
for one to act without judgement and backlash. This allows one to explore oneself
and create a new persona unbounded by judgement, expectation, and appearance
(Belk, 2013). I may be very nice and professional on LinkedIn whilst being rude and
cynical on Reddit. We will classify both of these as different personas residing within the same person. Such multiple personalities are discussed by Zhong et al. (2017), whose research documents this phenomenon of multiple online personas. One's online and real-life personas collectively combine
to create one's full self. In this essay, we focus solely on the online persona expressed through one's Instagram profile. This is likely only a fraction of one's digital self, as other online personas are set aside; narrowing the scope in this way allows more specific insights into the digital self as it relates to an Instagram profile. Instagram is a suitable platform for this study because it has a large user base and readily available data (Agung & Darma, 2019).

The Role of Machine Learning Algorithms

As one creates an online social media persona, data is gathered about one's likes and preferences; we will use Instagram as the example. Machine learning pattern recognition identifies patterns in one's likes and ideas, and the system then makes decisions by curating material: more content in line with one's preferences is shown. Over time, an increasingly accurate representation of one's online persona is established. This leads to a heightened risk of extremely curated material and a polarised population (Agung & Darma, 2019).
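The feedback loop just described can be sketched in a few lines. This is a hypothetical illustration under the assumption that posts carry topic tags; the data structures and scoring function are invented for the example and do not reflect Instagram's actual system.

```python
from collections import Counter

def build_profile(liked_posts):
    # Aggregate topic tags from posts the user has liked
    # ("identify patterns" in the definition quoted above).
    profile = Counter()
    for post in liked_posts:
        profile.update(post["tags"])
    return profile

def rank_feed(profile, candidates):
    # Rank candidate posts by overlap with the profile
    # ("make decisions", i.e. curate material): a Counter returns 0
    # for unseen tags, so unrelated posts sink to the bottom.
    return sorted(candidates,
                  key=lambda post: sum(profile[tag] for tag in post["tags"]),
                  reverse=True)

likes = [{"tags": ["fitness", "nutrition"]}, {"tags": ["fitness"]}]
feed = [{"id": 1, "tags": ["politics"]}, {"id": 2, "tags": ["fitness"]}]
ranked = rank_feed(build_profile(likes), feed)
# the post matching prior likes is ranked ahead of the unrelated one
```

Each pass through such a loop narrows the feed further toward established preferences, which is precisely the curation risk described above.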

Research Design and Theoretical Framework

Filter Bubbles and Echo Chambers

Social media platforms use machine learning algorithms extensively. In the


work of Balaji et al. (2021), the authors discuss how machine learning algorithms are
increasingly being used to curate user experience on social media platforms.
There is a common misconception that as machine learning models
increasingly curate data, they contribute to societal division and extremism. This is
thought to be a catalyst for societal polarization and the dissemination of extreme
content. This can alter and shape one’s persona over time; both their digital self and
real-life self. The media one consumes changes one’s thoughts and thus one’s
ideologies and one’s digital self (Shcherbakova & Nikiforchuk, 2022). Bruns (2019)
claims that machine learning algorithms only play a minor role in promoting extremist
viewpoints. Regarding online bubbles, Bruns (2019, p. 8) states that we should
not attempt to “absolve ourselves from the mess we are in by simply blaming
technology. Because they tap into apparently common sense...”.
While machine learning algorithms and content curation might seem like major
contributors to the spread of extremist views, in reality, several other factors play a
more substantial role in this phenomenon. This is demonstrated in the work of
Tollefson (2023), where a user’s standard Instagram feed was compared against a
feed curated by a machine learning algorithm. Tweaking the machine learning
algorithm only resulted in a marginal reduction of online echo chambers.

Social Cohesion and Groupthink

So, if it is not social media algorithms that are the main cause of the formation
of online echo chambers, then what is?

Figure 1
Inter-group Bias

In the research of Batalha (2008), inter-group bias is found to be the main cause of echo chambers. This is depicted in Figure 1. Here we observe that echo chambers have more to do with our innate in-group favouritism than with the biased curation of content by algorithms. To understand this properly, we ought first to consider the goal of social media companies: they want to keep users on the platform for as long as possible, a dynamic coined “the attention economy” (Batalha, 2008). Humans are more responsive to negative content, which keeps them engaged on the platform for longer. In real life, one tends to move within a relatively narrow social bubble; online, one is confronted with a wide range of opposing viewpoints, and the resulting negative interactions keep us on the platform longer still. As demonstrated by Bruns (2019), the problem of echo chambers long predates machine learning algorithms. This type of in-group bias is depicted in Figure 2. Such social-identity-based psychological phenomena can give extremist ideologies a platform to flourish (McLeod, 2023).

Figure 2
In-group Favouritism

Unveiling the Perils: Risks in the Misuse of Machine Learning Algorithms

Although it is clear that social identity plays a massive role in the creation of echo chambers and the generation of extremist viewpoints, as seen in the previous section, we ought also to consider the potential risks these machine learning models pose. These algorithms could instead be used to help mitigate extremist views and create a more inclusive and cooperative society, rather than exacerbating existing in-group favouritism. When bias is human-created, we can change policy. With AI, however, the system can often be a black box in which no one is exactly sure what causes the bias, or how to change the curation methods (Reviglio & Agosti, 2020).

Minimal human intervention

In the research of Alvarado and Waern (2018), artificial intelligence systems are characterised as semi-autonomous: these algorithms do not require human input to function and can independently curate content for social media. There is a false assumption that this means the curated content is free of bias. As discussed in the previous section, however, this biased content curation is only a minor contributor to the rise of online echo chambers and extremist viewpoints.

Manipulation of the Populace

In the work of Woolley and Howard (2016), the authors demonstrate how social media algorithms can be leveraged to disseminate propaganda: curated content can be used to manipulate the minds of the people, while adherence to the status quo of one's in-group and to authority breeds stubborn loyalty.
Friend suggestions are made based on who is most similar to you, which leads to more in-group favouritism. These models recommend whom you should connect with based on your current groups, preferences, and connections. This can further dictate whom you talk to and create more thought-isolation (Eslami et al., 2014).
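This similarity-driven recommendation can be sketched as follows. The Jaccard measure over group memberships and the example users are illustrative assumptions, not the documented method of any platform.

```python
def jaccard(a, b):
    # Overlap between two users' group memberships (1.0 = identical).
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_friends(user_groups, others):
    # Rank other users by similarity to the current user, most similar
    # first, so suggestions reinforce existing group ties.
    return sorted(others, key=lambda name: jaccard(user_groups, others[name]),
                  reverse=True)

me = {"hiking", "politics_a"}
others = {"alice": {"hiking", "politics_a"}, "bob": {"politics_b"}}
order = suggest_friends(me, others)
# the user sharing both groups is suggested before the dissimilar one
```

Because the ranking rewards overlap with one's existing groups, repeated application concentrates the network around similar users, producing the thought-isolation noted above.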
Social media algorithms can also perform mood manipulation, which can be used as a form of psychological warfare. Meta (parent company of Instagram and Facebook) implemented experimental algorithms to manipulate the mood of users (Hill, 2014).

These algorithms can also influence what you buy. A study by Amira and Nurhayati (2019) found that targeted advertisements reinforce what you buy and thus shape your interests, habits, and digital self.

Conclusion

References

Agung, N. F. A., & Darma, G. S. (2019). Opportunities and challenges of Instagram algorithm in improving competitive advantage. International Journal of Innovative Science and Research Technology, 4(1), 743–747.
Alvarado, O., & Waern, A. (2018). Towards algorithmic experience: Initial efforts for
social media contexts. Proceedings of the 2018 chi conference on human
factors in computing systems, 1–12.
Amira, N., & Nurhayati, I. K. (2019). Effectiveness of Instagram sponsored as advertising/promotion media (study of tiket.com advertisement with EPIC model method). JCommsci-Journal of Media and Communication Science, 2(2).
Balaji, T., Annavarapu, C. S. R., & Bablani, A. (2021). Machine learning algorithms for
social media analysis: A survey. Computer Science Review, 40, 100395.
Batalha, L. (2008). Intergroup relations: When is my group more important than yours?
Belk, R. W. (2013). Extended self in a digital world. Journal of consumer research,
40(3), 477–500.
Bruns, A. (2019). Are filter bubbles real? John Wiley & Sons.
Eslami, M., Aleyasen, A., Moghaddam, R. Z., & Karahalios, K. (2014). Friend
grouping algorithms for online social networks: Preference, bias, and
implications. Social Informatics: 6th International Conference, SocInfo 2014,
Barcelona, Spain, November 11-13, 2014. Proceedings 6, 34–49.
Hill, K. (2014). Facebook manipulated 689,003 users’ emotions for science. Forbes.
McLeod, S. (2023). Social identity theory in psychology (Tajfel & Turner, 1979). Retrieved from Simply Psychology: https://www.simplypsychology.org/social-identity-theory.html
Reviglio, U., & Agosti, C. (2020). Thinking outside the black-box: The case for “algorithmic sovereignty” in social media. Social Media + Society, 6(2), 2056305120915613.
Shcherbakova, O., & Nikiforchuk, S. (2022). Social media and filter bubbles. Scientific
Journal of Polonia University, 54(5), 81–88.
Stets, J. E., & Burke, P. J. (2000). Identity theory and social identity theory. Social
Psychology Quarterly, 63(3), 224–237. Retrieved December 8, 2023, from
http://www.jstor.org/stable/2695870
Tollefson, J. (2023). Tweaking Facebook feeds is no easy fix for polarization, studies find. Nature.
Woolley, S. C., & Howard, P. N. (2016). Automation, algorithms, and politics| political
communication, computational propaganda, and autonomous
agents—introduction. International Journal of Communication, 10, 9.
Zhong, C., Chang, H.-w., Karamshuk, D., Lee, D., & Sastry, N. (2017). Wearing many
(social) hats: How different are your different social network personae?
Proceedings of the International AAAI Conference on Web and Social Media,
11(1), 397–406.
