VU Social Sciences APA7 Paper Template-2
December 8, 2023
Contents
Introduction
  Social Construction of Identity
  The Role of Machine Learning Algorithms
Conclusion
Introduction
Question 9: “Machine learning is a method of data analysis that automates analytical
model building. It is a branch of artificial intelligence based on the idea that systems can
learn from data, identify patterns, and make decisions with minimal human intervention”.
Critically explore how machine learning can regulate the curation of the digital self.
Focus Point: Online echo chambers, extremist viewpoints, and the curation of the digital self.
Stets and Burke (2000) define identity through identity theory and social identity
theory: “In social identity theory and identity theory, the self is reflexive in that it
can take itself as an object and can categorise, classify, or name itself in particular
ways in relation to other social categories or classifications. This process is called
self-categorisation in social identity theory”.
We will define the digital self as one’s online social media account profile. This
persona can often be very different from one’s real-life persona. Belk (2013) stated
that the digital self is an extended self - “As with other aspects of the digital extended
self, the battle is to both adapt to and control all of the new possibilities for
self-presentation. And in a visible shared digital world, such control becomes
increasingly difficult”.
One can also have multiple social media accounts, each of which can have
vastly different personas. The ability to remain anonymous creates an environment
for one to act without judgement and backlash. This allows one to explore oneself
and create a new persona unbounded by judgement, expectation, and appearance
(Belk, 2013). One may be polite and professional on LinkedIn whilst being rude and
cynical on Reddit. We will classify these as distinct personas residing within the
same person; Zhong et al. (2017) document this phenomenon of multiple online
personas. Each of these online personas, together with one’s real-life persona, combines
to create your full self. In this essay, we focus solely on one’s online persona as it
relates to their Instagram profile. This captures only a fraction of one’s digital self,
as other online personas are set aside; narrowing the scope in this way allows more
specific insights into one’s digital self as it relates to their Instagram profile.
Instagram is a suitable platform for this study as it has a large user base and
readily available data (Agung & Darma, 2019).
As one creates an online social media persona, data is gathered about one’s
likes and preferences; we will use Instagram as our example. Machine learning
pattern recognition identifies patterns in one’s likes and interests, and more content
in line with these preferences is shown (the curation of material). Over time, an
increasingly accurate representation of one’s online persona is established. This
leads to a heightened risk of extremely curated material and a polarised population
(Agung & Darma, 2019).
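The feedback loop described above can be illustrated with a short sketch. This is a deliberately simplified toy model, not Instagram’s actual algorithm: the tag-counting score, the sample posts, and the user’s like history are all hypothetical.

```python
# Toy model of preference-based curation: posts whose tags match past likes
# are ranked higher, so the feed reinforces existing preferences over time.
from collections import Counter

def score(post_tags, preference_counts):
    """Score a post by how often the user has liked its tags before."""
    return sum(preference_counts[t] for t in post_tags)

def curate(posts, preference_counts, k=2):
    """Return the k posts most aligned with the user's past likes."""
    ranked = sorted(posts, key=lambda p: score(p["tags"], preference_counts),
                    reverse=True)
    return ranked[:k]

# Hypothetical interaction history: this user has mostly liked fitness content.
likes = Counter({"fitness": 3, "diet": 2, "politics": 1})
posts = [
    {"id": 1, "tags": ["fitness", "diet"]},
    {"id": 2, "tags": ["politics", "news"]},
    {"id": 3, "tags": ["fitness"]},
]
feed = curate(posts, likes)
# The curated feed favours posts matching existing preferences, so each new
# like further narrows what is shown — the feedback loop behind curation.
```

Because every like feeds back into the preference counts, the same mechanism that personalises the feed also narrows it, which is the risk discussed above.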
So, if it is not social media algorithms that are the main cause of the formation
of online echo chambers, then what is?
Figure 1
Inter-group Bias
In the research of Batalha (2008), inter-group bias is found to be the main cause of
echo chambers. This is depicted in Figure 1. Here we observe that echo chambers
have more to do with our innate in-group favouritism, rather than the biased curation
of content by algorithms. To understand this properly, we ought to first ponder the
goal of social media companies. They want to keep you on the platform for as long as
possible. This is termed “the attention economy” (Batalha, 2008). Humans are
more responsive to negative content, which keeps them engaged on the platform for
longer. One has more of a social bubble in real life. Online, one is confronted with a
wide range of opposing viewpoints. Negative interactions keep us on the platform for
a longer period. As demonstrated by Bruns (2019), we have long had the problem of
echo chambers, even before machine learning algorithms emerged. This type of
in-group bias is depicted in Figure 2. This kind of social-identity-based
psychological phenomenon can give a platform for extremist ideologies to flourish
(Mcleod, 2023).
Figure 2
In-group Favouritism
Although it is clear that social identity plays a massive role in the creation of
echo chambers and the generation of extremist viewpoints, as discussed above, we ought to
also focus on the potential risks these machine learning models pose. We can use
these algorithms to help mitigate extremist views and create a more inclusive and
cooperative society, rather than exacerbating existing in-group favouritism. When our
bias is human-created, we can change policy. However, with AI, it can often be a
black box where no one is exactly sure what the cause of the bias is, or how to
change the algorithm curation methods (Reviglio & Agosti, 2020).
In the work of Woolley and Howard (2016), the authors demonstrate how
social media algorithms can be leveraged to disseminate propaganda. Curated
content can be used to manipulate public opinion. Adherence to the status
quo of one’s in-group, together with deference to authority, leads to stubborn loyalty.
Friend suggestions are made based on who is most similar to you, which leads
to more in-group favouritism. These models recommend whom you should connect
with based on your current groups, preferences, and connections. This can further
dictate whom you talk to and create more thought-isolation (Eslami et al., 2014).
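A minimal sketch of such a connection-based recommender, assuming hypothetical users and a simple shared-connection (Jaccard) similarity rather than any platform’s real method:

```python
# Toy friend recommender: rank strangers by how many connections they share
# with the user. People from the user's own cluster score highest, so the
# suggestion reinforces the existing in-group.

def jaccard(a, b):
    """Jaccard similarity between two sets of connections."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def suggest_friends(user, connections, top_n=1):
    """Rank users not yet connected to `user` by shared-connection overlap."""
    mine = connections[user]
    candidates = [u for u in connections if u != user and u not in mine]
    ranked = sorted(candidates,
                    key=lambda u: jaccard(mine, connections[u]),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical network: alice/bob/carol form one cluster, dan/erin another.
connections = {
    "alice": {"bob"},
    "bob":   {"alice", "carol"},
    "carol": {"bob", "dan"},
    "dan":   {"carol", "erin"},
    "erin":  {"dan"},
}
# alice is steered toward carol (her own cluster), never toward erin.
```

Because the highest-scoring candidates are, by construction, those already closest to the user’s existing circle, every accepted suggestion makes the in-group denser, which is the thought-isolation effect described above.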
Social media algorithms can perform mood manipulation. This can be used as
a form of psychological warfare. Experimental algorithms were implemented to
manipulate the mood of users by Meta (parent company of Instagram and Facebook)
(Hill, 2014).
These algorithms can also influence what you buy. A study by Amira and
Nurhayati (2019) found that targeted advertisements reinforce what you buy
and thus affect your interests, habits, and digital self.
Conclusion
References
Shcherbakova, O., & Nikiforchuk, S. (2022). Social media and filter bubbles. Scientific
Journal of Polonia University, 54(5), 81–88.
Stets, J. E., & Burke, P. J. (2000). Identity theory and social identity theory. Social
Psychology Quarterly, 63(3), 224–237. Retrieved December 8, 2023, from
http://www.jstor.org/stable/2695870
Tollefson, J. (2023). Tweaking Facebook feeds is no easy fix for polarization, studies
find. Nature.
Woolley, S. C., & Howard, P. N. (2016). Automation, algorithms, and politics| political
communication, computational propaganda, and autonomous
agents—introduction. International Journal of Communication, 10, 9.
Zhong, C., Chang, H.-w., Karamshuk, D., Lee, D., & Sastry, N. (2017). Wearing many
(social) hats: How different are your different social network personae?
Proceedings of the International AAAI Conference on Web and Social Media,
11(1), 397–406.