
Social Network Analysis and Mining (2024) 14:138

https://doi.org/10.1007/s13278-024-01303-z

ORIGINAL ARTICLE

Exploring the prevalence of homophily among classes of hate speech


Seema Nagar1 · Kalyani Naik2 · Ferdous Ahmed Barbhuiya1 · Kuntal Dey1

Received: 28 June 2024 / Accepted: 8 July 2024


© The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2024

Abstract
In this paper, we investigate the phenomenon of homophily in hate speech generation on Twitter, aiming to deepen our understanding of online hate dynamics. Given the vast amount of information available on Twitter, computing familiarity and similarity, both essential for discovering homophily, poses significant challenges. To address this, we introduce novel measures for computing familiarity and similarity on the platform. Hate speech on social media can manifest in various forms, including hate against gender, race, ethnicity, politics, and nationalism. Consequently, we propose methods to detect multiple forms of hate speech. Utilizing an empirical dataset from Twitter, we demonstrate the prevalence of homophily and explore its variations across different categories of hate speech.

Keywords Hate speech detection · Graph convolutional network · Social context

* Seema Nagar
nagar.seema@gmail.com
Kalyani Naik
kalyaninaik007@gmail.com
Ferdous Ahmed Barbhuiya
ferdous@iiitg.ac.in
Kuntal Dey
u2ckuntal@gmail.com
1 Computer Science and Engineering, Indian Institute of Information Technology Guwahati, Bongora, Guwahati, Assam 781015, India
2 Computer Science, SVKM’s NMIMS MPSTME, Mumbai, Maharashtra 400056, India

1 Introduction

The emergence of social media platforms has ushered in unparalleled avenues for communication and the exchange of information. User-generated content on these platforms is imbued with a spectrum of emotions, ranging from irony and puns to expressions of hate. Amidst this diverse array of emotions, these platforms have also transformed into fertile grounds for the propagation of hate speech, carrying profound and adverse implications for both individuals and societies at large. The proliferation of hate speech across social media can inflict psychological distress, fuel instances of harassment, promote discrimination, and even incite acts of violence (Matamoros-Fernández et al. 2019). Consequently, the imperative to investigate hate speech within the realm of social media has intensified, prompting the need for the formulation of effective strategies to detect and combat its prevalence.

Homophily on social networks was first proposed by McPherson et al. (2001), using the assortative mixing hypothesis. Homophily is defined as the tendency of like-minded (similar) people to connect with/befriend (familiar) each other. A key observation about homophily is that it structures the ego networks of individuals and impacts their communication behaviour. Therefore, many works have studied the role of homophily in information diffusion, dissemination, topic or community formation and life cycle (Aral et al. 2009; De Choudhury et al. 2010; Halberstam and Knight 2016; Starbird and Palen 2012; Dey et al. 2018; Paik et al. 2023). Very recently, Xu and Zhou (2020) utilized hashtag-based homophily for re-marketing, a practical application with immense effect.

The importance of homophily in information diffusion motivated us to assess its presence in hate speech generation empirically. By understanding the tendencies towards homophily within hateful discourse, we can gain insights into the nature and extent of these ideologies, potentially enabling the development of more effective interventions and countermeasures. Works such as Ribeiro et al. (2017), Mathew et al. (2019) study the positional aspect of hateful users in the social network. However, the literature has not explored homophily, a crucial aspect. We further investigate

the strength of the homophilic phenomenon for different types of hate, such as hate against gender, race, and nation.

Familiarity and similarity are the two essential factors in homophily. Familiarity captures the phenomenon of users becoming friends with (or following) other users. The feature to become friends or follow other users is a signal of familiarity, while the content produced, profile information, and any other available meta-data are signals for similarity computation.

Similarity computation is not straightforward due to the myriad of information available about users on social media platforms. We believe the similarity between a pair of users should take into account multiple aspects of the content generated, in addition to meta-data from the profiles. The multiple aspects we explore in this paper are semantic, syntactic, stylometric and topical. We empirically investigate homophily along the multiple aspects of hate speech generation. We use word embeddings to compute semantic features for a user. The word embeddings are aggregated in a time-decaying manner to get a complete semantic representation of the user-generated content. We utilize the important features needed in authorship attribution (Bhargava et al. 2013) and some other features designed by us to derive syntactic and stylometric features. Additionally, we also include readability-related features (Kincaid 1975). Lastly, we unearth the hidden thematic structure of a document along topics and categories in two ways: (a) using latent topic modelling to construct a topic affinity vector, and (b) using Empath (Fast et al. 2016) to construct a category score vector.

The familiarity between pairs is another critical aspect of homophily computation and is measured by how well users know each other. Social media platforms use social network graphs consisting of nodes and edges to capture familiarity. Relationships can be directed or undirected depending on whether mutual consent is required. Traditional familiarity computations rely on explicit features such as edge existence or mutual friend count, but figuring out what to use for familiarity computation is time-consuming and experimental (Dey et al. 2018). Graph embedding-based techniques, such as deep learning-based graph neural networks (GNNs), have gained significant attention for encoding nodes in a latent vector space. Unlike previous graph embedding models, GNNs aggregate feature information from the node’s local neighbours via neural networks, enabling better representation of the rich neighbourhood information (Gao et al. 2018; Hamilton et al. 2017; Kipf and Welling 2016a; Liu et al. 2018; Schlichtkrull et al. 2018; Veličković et al. 2017). We propose using graph convolutional network (GCN) based embeddings called variational graph autoencoders (VGAE) (Kipf and Welling 2016b) to capture familiarity. The VGAE-based embedding has shown effectiveness in various downstream tasks, and its application to capturing familiarity between pairs can lead to improved performance in tasks such as link prediction and recommendation.

Hate speech on social media can take on various types, such as hate against gender, race, ethnicity, politics and nationalism. Identifying these hateful classes is crucial in mitigating the spread of hate speech on social media. Traditional methods, such as clustering-based unsupervised approaches and Latent Dirichlet Allocation (LDA), have been developed to detect different forms of hate. However, these methods have been found to have limitations in determining the number of topics, interpreting topics, and capturing context and temporal dynamics. To address these issues, we propose two complementary automated techniques to detect various hateful forms in a tweet corpus. The first approach involves using Task Aware Representation of Sentences (TARS) (Halder et al. 2020) based classification for few-shot learning to detect hateful forms, particularly in cases where corpora lack labels. This approach enables the system to learn from a small set of labelled examples, making it particularly useful for detecting new and emerging forms of hate speech.

In the second approach, we use hashtags to detect hateful forms. Hashtags have emerged as a powerful tool for categorizing tweets and associating them with specific topics or events. Hashtags can provide valuable insights into the underlying topics and events that drive hateful content. In our proposed automated technique, we first identify seed hashtags for each hateful form and then expand these hashtags to include other relevant hashtags. We use the expanded hashtags to identify the hateful form label for each tweet.

These two methods complement each other and can provide more comprehensive coverage in detecting various forms of hate speech. The few-shot learning approach using TARS is useful when labelled data is scarce or unavailable. It can effectively learn from a small set of labelled examples and adapt to new and emerging forms of hate speech. This approach can be particularly helpful in identifying previously unknown or unclassified forms of hate speech. On the other hand, the hashtag-based approach can provide valuable insights into the underlying topics and events that drive hateful content. By identifying the hashtags associated with different forms of hate speech, we can understand the motivations and ideologies behind hate speech. Moreover, combining these two approaches can provide a more comprehensive and accurate identification of hateful forms. The few-shot learning approach can identify new and emerging forms of hate speech, while the hashtag-based approach can provide insights into the underlying motivations and topics driving the hate speech. Overall, proposing two different approaches for detecting hateful forms provides a more flexible and comprehensive approach to detecting hate speech on social

media. We plan to investigate the ensemble of these two methods in future work.

In summary, we make the following contributions:

• We propose a novel way to compute familiarity using a graph embedding technique
• We explore a slew of similarity features to capture multiple aspects of user-generated content
• We empirically demonstrate the effectiveness of the newly proposed metrics in establishing familiarity and similarity against the existing metrics, using homophily as the benchmark of comparison
• We put forward an automated approach to identify hashtags associated with various hateful forms
• We do an in-depth analysis of variations in homophily strength across different hateful forms

Our system is composed of three interrelated modules, namely the similarity computation module, the familiarity computation module, and the hateful forms detection module, as depicted in Fig. 1. These modules work together to achieve the overall goal of the system. The similarity computation module further comprises defining metrics capable of capturing multiple facets of similarity. The familiarity computation module aims to measure the level of familiarity between two users based on their previous interactions and communication history on the platform. This module is crucial in capturing the relationship dynamics between users and understanding their behavioural patterns.

Fig. 1 Overall architecture diagram: an illustration of the interconnected components and their relationships

Finally, the hateful forms detection module uses natural language processing techniques to analyze the text content and detect hateful speech. This module helps to identify and categorize different types of hate speech, providing a comprehensive understanding of the nature and extent of hate speech in social media.

The three modules work together to completely understand homophily in hate speech. The combination of the similarity and familiarity computation modules assists in defining similarity and familiarity between users, while the hateful forms detection module assists in identifying the type of hate speech.

Defining similarities between users on social media platforms is a complex task due to the vast amount of information produced by each individual. The sheer volume of data makes it challenging to determine the similarity between two users accurately. A user can have the following types of information available on social media platforms:

• Profile information, comprising demographics, age, gender, date of birth, time of joining, interests, and communities of interest, among many others. The profile information is usually static and less frequently updated.
• User-generated content: the posts, reactions, and comments a user is involved in. The user-generated content is full of rich information about the user. We can use it to infer the user’s interests, writing style, frequency of posting, and readability-related parameters, to name a few writing-related constructs.

The task of defining similarity between two users is a challenging one, given the wide range of signals available that can be used as indicators. This complexity suggests that similarity is a multifaceted and fascinating phenomenon, with many dimensions. Consequently, to define similarity between users, we must consider the multiple dimensions and signals available and inferred. In the following sections, we elaborate on the various metrics used to infer similarity.

Similarly, the readily available familiarity metrics fail to capture the iterative nature of familiarity between a pair of users (familiarity between two users should factor in the neighbourhood of both users). Graph representations derived using deep neural networks have become popular for learning vectorized node embeddings. These learnt node embeddings can then facilitate a multitude of downstream graph mining tasks. We use variational graph autoencoder (VGAE) based embeddings of users in the social network to compute familiarity. VGAE-based embeddings are capable of capturing the iterative nature of node embedding (a node’s embedding is a function of its neighbours’ embeddings). Using VGAE, we get a vector representation of a node (user) in the social network. We then use various distance-based

metrics such as Cosine, Euclidean, and Manhattan distance to compute familiarity between a pair of users.
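These distance measures take only a few lines each once every user has an embedding vector. A minimal sketch, with toy vectors standing in for learned VGAE embeddings:

```python
import numpy as np

def cosine_similarity(u1, u2):
    # Dot product normalized by the vector magnitudes
    return float(np.dot(u1, u2) / (np.linalg.norm(u1) * np.linalg.norm(u2)))

def manhattan_distance(u1, u2):
    # Sum of absolute component-wise differences
    return float(np.sum(np.abs(u1 - u2)))

def euclidean_distance(u1, u2):
    # Square root of the summed squared differences
    return float(np.sqrt(np.sum((u1 - u2) ** 2)))

# Toy embeddings for two users (in practice these come from the VGAE)
u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, 2.0])

print(cosine_similarity(u1, u2))   # higher value -> more familiar
print(manhattan_distance(u1, u2))  # lower value  -> more familiar
print(euclidean_distance(u1, u2))
```

Since lower distance means higher familiarity, the two distance measures carry the opposite orientation to cosine similarity, and any downstream comparison has to account for that.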
We develop the Hateful Forms Detection Module to categorize tweets into hateful forms. To achieve this, we propose two complementary techniques that make use of Task-aware representation of sentences (TARS) and hashtag encoding. These methods are flexible and can be easily extended to incorporate additional categories for text classification. In the following sections, we provide a comprehensive description of the modules that are utilized in the Hateful Forms Detection Module.

2 A novel familiarity computation approach

We aim to capture a unified view of a user from their position in the social network. The existing familiarity metrics, such as whether an edge exists or not, mutual friends count, or whether users are part of the same community, fail to incorporate a unified familiarity for a pair of users. We propose to use a graph encoder to encode a user’s position, and then use these encoding vectors to compute familiarity between two users.

2.1 Graph encoder

The graph encoder is designed to capture a user’s latent position within a social network. It accepts two inputs: the identity matrix and the adjacency matrix representing social connectivity. The encoder is trained in an unsupervised fashion to learn a vector representation of the input graph. The method used for the graph encoder is a well-known technique referred to as Variational Graph Auto-Encoder (VGAE) (Kipf and Welling 2016b). The VGAE is composed of two components: an encoder to encode the graph and a decoder to decode the graph. The encoder consists of two graph convolutional layers that convert the graph into a lower-dimensional latent representation referred to as Z. Meanwhile, the decoder optimizes the cross-entropy loss between the original and reconstructed adjacency matrix and the KL divergence between the approximate and the true posterior (Fig. 2).

Fig. 2 Variational graph auto encoder: a model for representing graph structures with latent variables

2.1.1 Variational graph auto encoder (VGAE)

The VGAE encoder is represented by q_φ(Z | X, A), which transforms a data point X into a distribution. The distribution is modelled as a multivariate Gaussian and is characterized by the predicted means and standard deviations. The lower-dimensional representation, known as the embedding Z, is obtained by sampling from this Gaussian distribution. The decoder in the VGAE framework is represented as p_θ(A | Z), a variational approximation that transforms the embedding Z into an output estimate Â.

Encoder or inferencing model
The encoder comprises two graph convolutional layers. It transforms the social network graph into a latent representation using a convolutional network. The first layer takes the adjacency matrix A and the identity matrix X as input and produces a lower-dimensional feature matrix X̄, as expressed in Eq. 1. The formula uses Ã = D^(−1/2) A D^(−1/2), the symmetrically normalized adjacency matrix. The second GCN layer generates the mean μ and the log of the square of the standard deviation, σ², as shown in Eqs. 2 and 3.

X̄ = GCN(X, A) = ReLU(Ã X W₀)  (1)

μ = GCN_μ(X, A) = Ã X̄ W₁  (2)

log σ² = GCN_σ(X, A) = Ã X̄ W₁  (3)

The two layers of GCN can be combined as shown in Eq. 4, and Z can be calculated using Eq. 5, where ε ∼ N(0, 1) and N represents a Gaussian (normal) distribution.

GCN(X, A) = Ã ReLU(Ã X W₀) W₁  (4)

Z = μ + σ ∗ ε  (5)

In summary, the encoder can be written as shown in Eq. 6.
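Concretely, Eqs. 1–5 amount to two matrix products with a ReLU in between, followed by the reparameterization trick. The NumPy sketch below uses random, untrained weights purely for illustration; a trained model would learn the weights by optimizing the loss in Eq. 9, and since Eqs. 2–3 as printed share W₁, separate weight matrices per output head are assumed here, as in the original VGAE:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adjacency(A):
    # Ã = D^(-1/2) A D^(-1/2), the symmetrically normalized adjacency matrix
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def encode(A, X, W0, W_mu, W_sigma):
    A_t = normalize_adjacency(A)
    X_bar = np.maximum(A_t @ X @ W0, 0.0)     # Eq. 1: first GCN layer + ReLU
    mu = A_t @ X_bar @ W_mu                   # Eq. 2: posterior means
    log_sigma2 = A_t @ X_bar @ W_sigma        # Eq. 3: posterior log-variances
    eps = rng.standard_normal(mu.shape)       # ε ~ N(0, 1)
    Z = mu + np.exp(0.5 * log_sigma2) * eps   # Eq. 5: Z = μ + σ∗ε
    return Z, mu, log_sigma2

# Toy 4-node path graph; X is the identity matrix, as in the paper
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
W0 = rng.standard_normal((4, 8))      # input dim -> hidden dim
W_mu = rng.standard_normal((8, 2))    # hidden dim -> latent dim
W_sigma = rng.standard_normal((8, 2))

Z, mu, log_sigma2 = encode(A, X, W0, W_mu, W_sigma)
print(Z.shape)  # one 2-dimensional embedding per node
```

Note that this follows the text’s Ã = D^(−1/2) A D^(−1/2) exactly; implementations in the GCN literature commonly add self-loops (A + I) before normalizing so that a node retains its own features.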

q(z_i | X, A) = N(z_i | μ_i, diag(σ_i²))  (6)

Decoder
The decoder, as part of the generative model, is represented by the inner product of the latent variable Z and its transpose Zᵀ. The result of this operation is passed through the logistic sigmoid function σ(·) and reconstructed as the adjacency matrix Â, as specified in Eq. 7. Expressed edge-wise, the decoder can be represented mathematically as given in Eq. 8.

Â = σ(Z Zᵀ)  (7)

p(A_ij = 1 | z_i, z_j) = σ(z_iᵀ z_j)  (8)

Loss in VGAE
The loss function for the Variational Graph Autoencoder (VGAE) is composed of two parts. The first part, referred to as the variational lower bound, evaluates the accuracy of the network’s reconstruction of the data and is modelled as the binary cross-entropy between the input and output. The second part, the KL-divergence, measures the similarity between q(Z | X, A) and p(Z) = N(0, 1); it compares the approximation q(Z | X, A) to the true posterior p(Z). The loss function is described in Eq. 9.

L = E_{q(Z | X, A)}[log p(A | Z)] − KL[q(Z | X, A) ‖ p(Z)]  (9)

2.2 Computing familiarity

The familiarity between a pair of users is determined by calculating the distance between them in the graph embedding space. An inverse relationship between distance and familiarity is assumed: lower distances correspond to higher levels of familiarity. To compute familiarity, existing distance measures are employed, including Cosine distance, Manhattan distance, and Euclidean distance. The metrics between two users, u1 and u2, are formally defined in Eqs. 10 to 12. Let ū₁ and ū₂ represent the graph embeddings of users u1 and u2 respectively, with components x_i and y_i.

Cosine Similarity(u1, u2) = (ū₁ · ū₂) / (‖ū₁‖ ‖ū₂‖)  (10)

Manhattan Distance(u1, u2) = Σ_{i=1}^{n} |x_i − y_i|  (11)

Euclidean Distance(u1, u2) = √( Σ_{i=1}^{n} (x_i − y_i)² )  (12)

3 Defining novel metrics for similarity computation

We propose that the calculation of similarity on social media platforms should consider various aspects of a user, beyond just direct textual similarity. These aspects could include a user’s profile information, writing-related nuances such as stylometry, the content generated, and topics discussed, among others. Using a Twitter dataset, we demonstrate that homophily exists in the generation of hate speech along all aspects considered in the similarity computation. This highlights the importance of considering multiple aspects of a user when evaluating similarity, as it offers a more comprehensive understanding of the dynamics of hate speech and could lead to an improvement in detection accuracy.

3.1 Features for similarity computation

We propose various features to capture the nuances of the content generated by a user on online social media platforms. The features are capable of capturing similarity along semantic, syntactic, stylometric, and topical dimensions.

Algorithm 1 Computing Semantic Features for a User
Input: User u, set of posts P = {p₁, p₂, ..., p_M} with timestamps
Output: Semantic embedding S(u) of the user
1: Compute the time span of P as T
2: Divide T into time windows T = {t_n, t_{n−1}, ..., t₁} of size one week, where n is the total number of weeks and t₁ is the most recent week
3: for each time window t_k in T do
4:   Compute weight(t_k) = 1/k
5: end for
6: for each post p in P do
7:   for each word w in p do
8:     Compute word embedding E(w) using GloVe
9:   end for
10:  Compute tweet embedding E(p) as the mean of the word embeddings E(w)
11:  Find weight W(p) for p using the weight of the time window it falls in
12: end for
13: Compute the user semantic embedding S(u):
14: S(u) = Σ_i E(p_i) ∗ W(p_i) / |P|
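The steps of Algorithm 1 can be sketched as follows. Toy two-dimensional vectors stand in for GloVe embeddings, and each post carries the index of its one-week time bucket (week 1 being the most recent):

```python
import numpy as np

# Toy stand-in for a GloVe lookup table (real GloVe vectors are 50-300 dimensional)
GLOVE = {
    "angry": np.array([0.9, 0.1]),
    "tweet": np.array([0.2, 0.8]),
    "calm":  np.array([0.1, 0.9]),
}

def tweet_embedding(words):
    # Steps 7-10: mean of the word embeddings in the post
    return np.mean([GLOVE[w] for w in words], axis=0)

def user_semantic_embedding(posts):
    """posts: list of (words, week) pairs, where week = k means the post
    falls in time window t_k and receives weight 1/k (steps 3-5)."""
    embs = np.array([tweet_embedding(words) for words, _ in posts])
    weights = np.array([1.0 / week for _, week in posts])
    # Steps 13-14: weighted mean pooling over all posts
    return (embs * weights[:, None]).sum(axis=0) / len(posts)

posts = [(["angry", "tweet"], 1),   # recent post, full weight
         (["calm", "tweet"], 3)]    # older post, weight 1/3
S_u = user_semantic_embedding(posts)
print(S_u)
```

As written in step 14, the sum is divided by the number of posts |P| rather than by the sum of the weights, so the result is a decayed mean rather than a normalized weighted average.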

3.1.1 Semantic features

In our approach, user-generated content is represented through the use of word embeddings, where each post is transformed into a vector representation. The semantic embedding of a user is obtained by aggregating the post embeddings using a weighted mean pooling method. This aggregation methodology is inspired by the work in Rajadesingan et al. (2015), where the authors emphasize the significance of incorporating the temporal aspect in user-generated content. Our implementation of time-decaying aggregation accounts for two crucial factors: the activity level of the user and the relevance of recent posts compared to older ones. The tweets are divided into time buckets of one week and assigned weights inversely proportional to their temporal position. The details of the time-decay-based aggregation are described in Algorithm 1.

3.1.2 Syntactic features

The syntactic features of the user’s tweets play an important role in understanding the user’s writing style and language use. In order to capture this information, we use the important features proposed in the study of Rajadesingan et al. (2015) and some additional features designed by us. These features are used to compute a syntactic feature vector to represent the user. The features include the number of capitalized words, question marks, exclamations, numbers, URLs, user mentions, hashtags, and emojis present in a tweet. To obtain the syntactic feature vector for a user, these features are calculated for each tweet made by the user and then averaged over all the tweets. This provides a comprehensive understanding of the user’s language use and writing style. The resulting feature vector can be used as input for further analysis and modelling tasks.

3.2 Stylometric and readability features

In the field of authorship attribution, there have been several important features proposed to detect the author of a piece of content. In our work, we incorporate these features in capturing the writing style of a user. Specifically, we adopt features proposed in Bhargava et al. (2013) and Rajadesingan et al. (2015). These features include the number of words per tweet, the number of sentences per tweet, the number of elongated words per tweet, the number of repeated words per tweet, and the word length distribution. The latter is a vector of length 19, where each element represents the frequency of words of a particular length. Additionally, we compute the mean, median, and standard deviation of the word length distribution.

Moreover, to further understand the readability of a user’s writing style, we compute the Flesch–Kincaid Reading Ease score for each user. To do this, we first create a document d_i for each user u by combining all the tweets, as described in Eq. 13. We then use the Flesch–Kincaid Reading Ease formula (Kincaid 1975) to compute the readability score for each user. This helps us capture the ease of understanding of a user’s writing style, which can provide useful information in understanding the dynamics of hate speech.

d_i = ∪_{p_j ∈ P} p_j  (13)

3.2.1 Topical features

In order to compute the topic features, we utilize two methods. The first method involves performing topic modelling on the user-generated content. The second method involves using the methodology proposed in Fast et al. (2016) to construct the Empath category scores vector.

For the first method, we employ a latent topic modelling technique, Latent Dirichlet Allocation (LDA). The result of the topic modelling is the latent affinity of each user to various topics, which is then used to construct a topic vector for each user. For the second method, we utilize the Empath library (Fast et al. 2016) to construct the Empath category scores vector. The Empath library allows for the computation of a fixed-length vector of category scores that represents a text document. The categories include emotions, objects, events, and more, and capture the content and style of a user’s writing. The result of this method is a topic vector that captures the diverse topics discussed by a user.

4 Hateful forms detection using TARS

We propose to use Task-aware representation of sentences (TARS) to detect hateful forms. TARS was introduced by Halder et al. (2020) as a simple and effective method for few-shot and even zero-shot transfer learning for text classification, meaning we can classify text without (m)any training examples. The pre-trained TARS model can be used as-is, but providing a handful of training examples improves its performance. The TARS pre-trained model can easily be extended to learn new classes remarkably well using only a handful of training samples. We supply a few training samples to fine-tune TARS for each hateful form.

The core idea behind TARS is that it reformulates the text classification problem as a "query". In this query, the transformer receives both a sentence and a potential class

label, and it predicts whether or not the label holds. The cross-attention mechanism in the transformer learns to combine the representation of the text and the label, and this allows for the transfer of the full model. The advantage of TARS over previous models is that it preserves both the decoder layer and the semantic information present in the natural language task class labels. This results in the same decoder being used across arbitrary tasks, with the information provided by the class label being interpreted by the transformer model. Furthermore, TARS has the capability to return predictions even for classes that have no training data. This is possible because the textual label of the new class is prepended to the text and the result of the "True/False" decoder is evaluated.

4.1 Using TARS for detecting hateful classes

In order to detect hateful forms in tweets, we first define the set of all hateful forms of interest, denoted as H. The corpus of tweets under consideration is denoted as C. For each hateful form h in H, we create a lexicon h_l that represents the language patterns associated with that particular hateful form. A limited number of tweets, denoted as K, are selected from the corpus C and annotated using the lexicon h_l. If a tweet text contains elements from the lexicon h_l, it is labelled with the corresponding hateful form h.

To ensure the accuracy of the annotation process, we conduct a manual verification process on the labelled tweets and discard any tweets that were wrongly labelled. The final annotated tweets for the hateful class h are denoted as K_h. Along with the annotated hateful tweets, we also manually annotate a set of N tweets as normal tweets (not hateful) from the corpus C.

To build a multi-class classifier for detecting hateful forms in tweets, we use a transfer learning approach by leveraging the pre-trained TARS model as a base model. We follow the methodology outlined in the research paper (Halder et al. 2020) to extend the pre-trained model with the additional hateful classes H, along with one more class for normal tweets. The annotated corpus K_h for each hateful class h and the set of N normal tweets serve as the training data for fine-tuning the TARS pre-trained model.

5 Hateful forms detection using hashtags

Our approach for the automated detection of hashtags used for various hateful forms consists of three main steps:

Manual identification of hateful hashtags. We first manually identify the hashtags that are frequently used in tweets containing hateful speech. To achieve this, we analyze a set of tweets that are labelled as hateful and identify the hashtags that are commonly used in these tweets.

Manual grouping of the hashtags into various hateful forms. We then group the identified hashtags into five different categories based on the hateful forms they represent, such as hate against race, sex, and communities.

Automated expansion of hashtags for individual hateful forms. Finally, we employ automated expansion techniques to identify other hashtags related to each hateful form.

5.1 Automated expansion of hashtags for a hateful form

In this section, we present an automated technique for expanding hashtags for individual hateful forms using hashtag encoding. We aim to capture the context and semantics of hashtags based on the tweets they are used in. To achieve this, tweets are treated as documents, and a corpus is created based on the text in the tweets. Each tweet document in the corpus is then encoded using a neural network-based encoder. Polarity is assigned to each hashtag based on the polarity of the tweets in which it is present. A hashtag is assigned negative polarity if at least an F_T fraction of those tweets are negative.

The negative polarity hashtags are then encoded using the tweets in which they are present. Only tweets with negative polarity are used to encode the hashtag. The encoding of each hashtag is the mean encoding of the documents, classified as negative sentiment, in which it appears. By using the mean encoding, the context and meaning of the hashtag can be captured based on the tweets in which it is used. Next, the hashtags are expanded for each manually identified group (cluster) based on their similarity in the hashtag space. The hashtag space captures the unique characteristics of each hashtag and its specific usage patterns in the context of hate speech on social media. To expand each cluster of hashtags, a clustering algorithm is employed, specifically K-means with different centroid update strategies, including no update, mean update and medoid update.

Leveraging this approach enables the identification of other hashtags related to each hateful form, even if they were not part of the original manually identified group. This leads to a more comprehensive understanding of the nuances and variations of hate speech on social media, which is crucial for developing effective detection and mitigation strategies. Furthermore, this approach can provide valuable insights into the underlying topics and motivations driving the hateful content in each form. The approach can be summarized as follows:
138 Page 8 of 16 Social Network Analysis and Mining (2024) 14:138

Algorithm 2 Identification of Hateful Hashtags
1: Step 1: Manual identification of hateful hashtags
2: Analyze a set of tweets labelled as hateful and identify commonly used hashtags
3: Step 2: Manual grouping of hashtags into different categories
4: Group the identified hashtags into five categories based on the hateful forms they represent
5: Step 3: Automated expansion of hashtags
6: Compute tweet embeddings
7: Detect the polarity of each tweet
8: Assign polarity to each hashtag based on the polarity of the tweets it is present in
9: Encode negative polarity hashtags using tweets of negative polarity
10: Expand hashtags for each individual hateful form based on similarity in the hashtag space using a clustering algorithm

6 Dataset preparation

In this section, we describe the dataset used and its modifications for evaluating the proposed research questions.

6.1 Dataset details

We use the hate speech dataset provided by Ribeiro et al. (2017). This dataset contains the 200 most recent tweets of 100,386 users, totalling around 19M tweets. It also contains a retweet-induced graph of the users. The retweet-induced graph is a directed graph G = (V, E) where each node u ∈ V represents a user in Twitter, and each edge (u1, u2) ∈ E represents that the user u1 has retweeted user u2. Furthermore, every tweet is categorized as an original tweet, retweet, or quote (retweet with a comment). Out of the 100,386 users, labels (hateful or normal) are available for 4,972 users, of which 544 users are labelled as hateful and the rest as normal. We perform pre-processing techniques suitable for tweets, including removing links, converting emoticons to text, and removing non-ASCII characters.

6.2 Dataset labelling at binary label (hateful or not)

The dataset used in this study lacks labels for tweet content, necessitating manual annotation of tweets as either hateful or not. However, annotating 19M tweets is a costly and time-consuming process, so only a subset of tweets is annotated. Accurate annotations are vital for uncovering homophily in hate speech on social media platforms. Thus, we designed the annotation process in two phases:

1. Phase 1 Selection of the initial set of users and their tweets for annotation.
2. Phase 2 Manual annotation of the tweets selected in Phase 1.

Phase 1: user and tweet selection
To select the most suitable tweets for annotation, we carefully selected a subset of users, including the 544 hateful users and another approximately 20,000 users based on their degree in the retweet network. All tweets from these users were considered for manual annotation. Users and tweets were further filtered based on the following criteria:

• Only users who posted in English were selected.
• Only tweets with a length greater than 10 characters were selected to ensure proper sentence structure.
• The number of tweets per user was capped at 10, resulting in approximately 10 tweets per user with a length of at least 10 characters.

This phase yielded a total of 30,720 tweets to be annotated. The filtered retweet network had 771,401 edges and 18,642 nodes. This systematic approach to tweet selection helped to ensure the accuracy and reliability of the annotations, which are crucial in uncovering homophily in hate speech on social media platforms.

Phase 2: annotation
In Phase 2, the annotation process was carried out systematically and thoroughly to ensure high-quality results. Three annotators with undergraduate degrees manually annotated each tweet. The annotators were provided with the following definition of hate speech: "Any speech that is intended to degrade, intimidate, or incite violence or prejudicial action against a particular person or group of people based on their race, ethnicity, national origin, religion, sexual orientation, gender identity, or other characteristics." The annotators were instructed to annotate only the text of the tweets, without considering any accompanying media or user information. A binary label was used to indicate whether a tweet was classified as hateful or not. In total, 27.5% of annotated tweets were classified as hateful. To ensure the quality of the annotation, we calculated the inter-annotator agreement using Cohen's kappa coefficient, which measures the level of agreement between two annotators beyond chance. The kappa coefficient for our annotation task was 0.87, indicating a high level of agreement between annotators. Any
discrepancies in annotation were resolved by discussing and reviewing the tweets until a consensus was reached.

Table 1 The number of labelled samples and a few seed hashtags for each type of hate speech

Form          Number of samples   Hashtags
Communalism   29                  #jihadi, #islamic, #terrorist
Homophobia    55                  #gay, #dyke, #fag
Racism        49                  #nigga, #nigger, #paki
Sexism        142                 #cunt, #bitch, #pussy
Xenophobia    79                  #nomorenazis, #wolf2, #illegalaliens

6.3 Dataset labelling at the hateful forms level

We aim to detect the five most common distinct forms of hate speech, namely racism, sexism, communalism, xenophobia, and homophobia (Davidson et al. 2017). To achieve this, we leverage a few-shot learning approach called TARS (Halder et al. 2020) and hashtags.

To enable few-shot learning for TARS, we manually annotate a small number of samples for each hateful form. We select a maximum of ten tweets per user from the 544 hateful users previously identified, resulting in a total of 5,343 tweets to be annotated. To facilitate annotation, we create a comprehensive lexicon of words commonly associated with each of the five forms of hate, sourced from an existing resource (https://www.frontgatemedia.com/a-list-of-723-bad-words-to-blacklist-and-how-to-use-facebooks-moderation-tool/) and augmented with additional terms manually extracted from tweets tagged with hashtags with hateful connotations.

Manual inspection of the annotated tweets reveals that 99% of them correspond to the specified hate categories, demonstrating the effectiveness of our approach. This validates our method's potential to accurately label hateful tweets and provide a reliable dataset for future research. We obtain a set of 354 tweets that have been rigorously verified to contain hate speech, as presented in Table 1.

Based on the labelled tweets, we proceed to identify initial hashtags for each type of hate speech. This involves extracting all the hashtags used in the tweets related to each form of hate speech. Furthermore, we conduct a thorough manual inspection to examine the co-occurring hashtags of these initial hashtags, with the aim of constructing a comprehensive list of initial seeds for each form of hate speech. Table 1 shows a few of the seed hashtags for the hateful forms. In both approaches outlined in this paper, we detect hateful forms on the entire tweet corpus.

7 Experiments

7.1 Experiments overview

The purpose of the experiments is to investigate the following research questions (RQs) related to hateful speech and homophily:

1. RQ1 To what extent do individuals who generate hateful content exhibit homophily?
2. RQ2 How does the newly proposed familiarity metric compare to existing metrics in effectively identifying homophily patterns among generators of hateful speech?
3. RQ3 Do patterns of homophily vary across different types of similarity aspects?
4. RQ4 How effective is our proposed TARS-based approach in detecting different types of hateful forms?
5. RQ5 To what extent are the manually identified seed hashtags for each hateful form effective, and how does the proposed hashtag encoding scheme perform?
6. RQ6 How well does our hashtag expansion approach perform in detecting hateful forms?
7. RQ7 Are certain types of hateful speech more prone to homophily among their generators?

7.2 Experiments settings

7.2.1 Parameter setting for familiarity computation

The implementation of the Variational Graph Auto-Encoder (VGAE) is carried out using the source code provided by the library PyTorch Geometric (https://github.com/rusty1s/pytorch_geometric). The adjacency matrix is constructed from the retweet graph, which is represented as an undirected graph. The training of the VGAE model was performed with a learning rate of 0.01, for 120 epochs, with a batch size equal to the size of the entire graph. The Adam optimizer was used during the training process. The trained encoder portion of the VGAE was then utilized as the graph encoder for further experimentation.

To determine the similarity between users, we utilized the Universal Sentence Encoder (USE; https://tfhub.dev/google/universal-sentence-encoder/4) to encode the tweets of each user. The aim here is to evaluate the effectiveness of our proposed familiarity computation model by comparing it to the standard similarity computation techniques employed in the state-of-the-art. In this study, we retained the use of USE encoding for similarity features to ensure a fair comparison between our proposed familiarity features and the existing ones. To measure the similarity between users, we calculated the cosine similarity between their respective documents. Furthermore, we computed the familiarity between users
using two traditional metrics, namely edge existence and the number of mutual friends, as well as three metrics based on the user vectors obtained from our graph encoder. These metrics include cosine similarity, Euclidean distance, and Manhattan distance. The familiarity computation is performed using the SciPy library (https://www.scipy.org/).

7.2.2 Parameter settings for similarity computation

As described in Sect. 3, we aim to capture the similarity along semantic, syntactic, stylometric, and topical dimensions. The semantic features are constructed by utilizing pre-trained GloVe word embeddings, while the syntactic and stylometric features are extracted based on the methodology outlined in previous studies (Bhargava et al. 2013; Rajadesingan et al. 2015).

To derive topical features, two approaches are used, namely latent topical features and Empath category scores. Latent topical features are constructed by applying Latent Dirichlet Allocation (LDA) to the tweet corpus, which is composed of all tweets from all users. Each user's tweets are concatenated to form a document, and LDA is implemented using the MALLET software. The values of the hyperparameters α and β are set to 5.0 and 0.01, respectively. The Empath category scores are computed using the empath-client library, which calculates the category score vectors for each user based on the tweet document by mapping the words in the tweet document to predefined categories. These scores represent the extent to which a user's tweets belong to different categories.

Computing the similarity metrics is a compute-intensive task, and with the available hardware, computing the similarity metrics for each user would not have been feasible within a reasonable time frame. To address this issue, we resort to picking a subset of users. The methodology used to pick the subset of users does not compromise the generalizability of the approach. We run modularity optimization-based community detection using the networkx library (https://networkx.org/) to pick a subset of users on the retweet network. The two communities picked have approximately equal numbers of edges, around 160,000, with the number of users being 7,679 and 3,277, respectively. These two communities provide a sufficient number of users and demonstrate a significant variation in edge density between the two.

7.2.3 Parameter settings for hateful forms detection using TARS

We train a TARS-based few-shot multi-class classification model for the detection of hateful classes. We update an existing pre-trained TARS model to meet our requirements. The samples for each class are split into 70% for training, 10% for validation, and 20% for testing. The pre-trained model is updated using a learning rate of 0.02 and a mini-batch size of 1, over 8 epochs. The trained classifier is then utilized to infer the labels for the tweets in the corpus.

7.2.4 Parameter settings for hateful forms detection using hashtags

We retain only those hashtags that occur at least 10 times in the corpus of tweets. To detect the polarity of a tweet, we use an existing sentiment classifier called twitter-roberta-base-sentiment, available in the Hugging Face library (https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment). We perform filtering of tweets based on the confidence score of the sentiment classification model. Specifically, we only retain those tweets where the model is confident enough. We experiment with different confidence scores ranging from 0.7 to 0.9 with an increment step of 0.1. To classify a hashtag as negative, we use FT, the fraction of negative tweets out of the total tweets in which the hashtag is present. We vary FT from 0.1 to 0.5 with an increment step of 0.1. We only retain those hashtags that have a negative sentiment. To encode the tweets, we use the Universal Sentence Encoder (USE). In addition, we experiment with K-means clustering with different centroid update strategies, including no update, mean update, medoid update, and learning rate-based gradient update. We use the scikit-learn library to implement these techniques. We report the experimental outcomes of our study for a confidence score of 0.7, a negative tweet threshold of FT = 0.3, and K-means with centroid update. This particular combination of parameters is selected as it yields the best results in our analysis. In total, we classify 2739 hashtags into five hateful forms.

To detect homophily at the hateful form level, we compute the familiarity between two users using our proposed approach. To compute the similarity between users, we use the USE embeddings of their tweets.

7.3 Experimental results

To compute familiarity within a group of users, we begin by standardizing the familiarity values for each pair against the highest value present within that specific set. This normalization approach facilitates meaningful comparisons across a spectrum of familiarity metrics. Subsequently, we calculate the average of all these normalized pair values, resulting in the group's mean familiarity value. The same process is reiterated to derive the similarity value for the group.
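The group-level aggregation just described can be sketched in a few lines. The function name and the flat list of pairwise scores are illustrative assumptions, since the paper does not show its data layout:

```python
def group_mean_score(pair_values):
    """Normalize each pairwise familiarity (or similarity) score by the
    group's maximum value, then average the normalized scores to obtain
    the group-level mean. `pair_values` is a hypothetical flat list of
    scores for all user pairs in one group."""
    highest = max(pair_values)                       # standardize against the highest pair value
    normalized = [v / highest for v in pair_values]  # each value now lies in [0, 1]
    return sum(normalized) / len(normalized)         # group mean familiarity/similarity

# toy example: four user pairs in one group
print(group_mean_score([0.2, 0.4, 0.8, 0.4]))  # 0.5625
```

The same routine would be applied once to the familiarity scores and once to the similarity scores to obtain the two group-level values that are compared against each other.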
In the subsequent visualizations, we manipulate the hatefulness of individual users and observe the resultant shifts in homophily patterns. The hatefulness of a user is quantified as the proportion of their tweets classified as hateful (based on binary classification). For each designated fraction of hatefulness, we select users whose hatefulness meets or exceeds the specified value. This approach allows us to systematically examine the effects of varying user hatefulness on homophily.

First, we establish, using existing similarity and familiarity metrics, that homophily exists among hateful users, which is essentially research question RQ1. We analyze the relationship between similarity and familiarity using two different existing metrics of familiarity. Our analysis involved varying the proportion of hateful tweets and examining the homophily among the users satisfying that proportion. Our findings, presented in Fig. 3, indicate that as the percentage of hateful tweets increases, the similarity also increases.

Fig. 3 Variation in similarity and familiarity with increasing hatefulness

RQ2 To rigorously assess the efficacy of our proposed familiarity metrics, we kept the similarity metric constant and varied the familiarity metrics independently. To ensure the robustness and validity of our methodology, we employ the Pearson correlation coefficient (r) as a measure of the relationship between familiarity and similarity. The results revealed that the newly introduced familiarity metric yielded a Pearson correlation coefficient of r = 0.6, signifying a moderate positive relationship with similarity. This outperformed the edge-based familiarity metric, which exhibited an r-value of 0.5. Conversely, the mutual friends-based familiarity metric demonstrated a weaker positive relationship, with an r-value of 0.2. The values of r for the Euclidean and Manhattan distance metrics were found to be less than 0.1, albeit greater than zero. As such, we chose not to present these results and instead used the cosine similarity as it performed better.

The results of our analysis in addressing RQ3 are presented in Fig. 4 for both communities. The figure demonstrates the relationship between cosine similarity, computed as a similarity measure, and familiarity for six different similarity metrics. The x-axis of the figures represents the hatefulness of the users, which is quantified as the percentage of hateful tweets they have produced.

The results of our analysis clearly indicate that as the hatefulness of the users increases, the similarity values for all types of features also increase. This finding provides strong evidence for the hypothesis that like-minded individuals tend to congregate and form communities with similar ideologies and views. Of particular interest is the observation that the pattern of increasing similarity with increasing hatefulness is particularly pronounced in the topic-based similarity. This result suggests that latent topics have the capacity to capture the higher-level semantics of the discussions that take place on social media platforms and that they can be effectively utilized to identify and track the spread of hateful content.

To address RQ4, we measure the accuracy of the model. Table 2 demonstrates the effectiveness of the model through its performance metrics across various hate categories. These results indicate that the single model approach is capable of effectively distinguishing between different forms of hate speech and is therefore a reliable solution for inference on a larger corpus.

The aim of RQ5 is to assess the effectiveness of the manually identified seed hashtags for each hateful form and the proposed hashtag encoding scheme. This is a crucial aspect, as the accuracy of hashtag-based hate speech detection primarily depends on the quality of the initial seed hashtags and the ability of the encoding scheme to capture the semantic characteristics of the hashtags. To address this research question, we first encode each seed hashtag using our proposed
scheme and then compute the centroid of each hateful form cluster. We subsequently plot the centroids using a t-SNE plot. Figure 5 shows that the centroids are well-separated, indicating that our manually identified hashtags can differentiate between individual hateful forms. Furthermore, the proposed encoding scheme is capable of capturing the distinctive features of hateful forms.

Fig. 4 Variation in similarity and familiarity with increasing hatefulness

Fig. 5 Centroid of manual clusters of hashtags for each hateful form

Table 2 TARS classification results for detecting hateful forms: an evaluation of the precision, recall and F1-score of the TARS framework in identifying hateful ideologies in social media posts

Form          Precision   Recall   F1-score
Normal        0.9680      0.9837   0.9758
Sexist        0.8846      0.7931   0.8364
Xenophobia    0.9375      0.9375   0.9375
Racist        0.6667      0.8000   0.7273
Homophobia    1.0000      0.9091   0.9524
Communalism   0.7500      1.0000   0.8571
Micro avg     0.9289      0.9385   0.9337
Macro avg     0.8678      0.9039   0.8811
Weighted avg  0.9327      0.9385   0.9342

To investigate RQ6, we employed a t-SNE plot to visualize the expanded clusters, as presented in Figs. 6, 7, and 8, using different variations of clustering. The t-SNE plot provides a visual representation of the high-dimensional data in a two-dimensional space, allowing for a better understanding of the distribution and relationship between the data points. Our analysis indicates that the expanded clusters are well-separated, suggesting that our expansion scheme effectively captures the nuanced differences between individual instances of hateful language.

To further assess the performance of the clustering algorithm, we calculated three metrics: Cohesiveness, Inter-cluster distance, and Silhouette score. The Cohesiveness score measures the degree to which data points within a cluster are similar, while the Inter-cluster distance score measures the distance between clusters. The Silhouette score measures the quality of each cluster and the degree to which each data point belongs to its assigned cluster.
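The three metrics above can be illustrated with a small NumPy sketch. The paper does not give exact formulas for Cohesiveness and Inter-cluster distance, so the definitions below — mean cosine similarity of points to their own centroid, and mean Euclidean distance between centroids — are plausible assumptions rather than the authors' implementation; the Silhouette score follows its standard definition:

```python
import numpy as np

def cluster_quality(X, labels):
    """Illustrative cluster-quality metrics for hashtag embeddings.

    Assumed definitions (not taken from the paper):
      cohesiveness - mean cosine similarity of points to their own centroid
      inter_dist   - mean Euclidean distance between cluster centroids
      silhouette   - standard mean silhouette coefficient
    """
    X, labels = np.asarray(X, float), np.asarray(labels)
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])

    # Cohesiveness: average cosine similarity of each point to its own centroid
    sims = []
    for c, cen in zip(clusters, centroids):
        pts = X[labels == c]
        sims.extend((pts @ cen) / (np.linalg.norm(pts, axis=1) * np.linalg.norm(cen)))
    cohesiveness = float(np.mean(sims))

    # Inter-cluster distance: mean Euclidean distance between centroid pairs
    pair_dists = [np.linalg.norm(a - b)
                  for i, a in enumerate(centroids) for b in centroids[i + 1:]]
    inter_dist = float(np.mean(pair_dists))

    # Silhouette: (b - a) / max(a, b) per point, averaged over all points
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i, c in enumerate(labels):
        own = (labels == c) & (np.arange(len(X)) != i)
        a = D[i, own].mean()
        b = min(D[i, labels == o].mean() for o in clusters if o != c)
        scores.append((b - a) / max(a, b))
    silhouette = float(np.mean(scores))

    return cohesiveness, inter_dist, silhouette
```

With clusters of at least two points this needs O(n²) pairwise distances, which is manageable at the scale of a few thousand hashtag vectors.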
Fig. 6 Expanded clusters of hashtags for each hateful form

Fig. 7 Experiment 1: expanded clusters of hashtags for each hateful form

Fig. 8 Experiment 2: expanded clusters of hashtags for each hateful form

Our results show that the Cohesiveness score is 0.647, indicating that the data points within each cluster are similar. The Inter-cluster distance is 0.246, suggesting that the clusters are well-separated from each other. Finally, the Silhouette score is 0.031, indicating that the clusters are of good quality and that each data point is assigned to its appropriate cluster. Overall, our analysis indicates that the clustering algorithm performed well in identifying and separating instances of hateful language based on their unique features, demonstrating the effectiveness of our expansion scheme in capturing the nuances of different forms of hate speech.

Inference Using the Expanded Clusters
Utilizing the designated hashtags within each cluster, our analysis proceeds to identify the individuals responsible for creating content within each category of hate speech. Notably, it is important to acknowledge that a single user may contribute to multiple categories, given that their tweets might encompass a spectrum of hateful forms. To accomplish this, our approach involves the systematic identification of tweets associated with each hateful form, followed by the attribution of the authors behind these tweets as active participants in generating content for the corresponding category of hate.

For every tweet, we extract the embedded hashtags and subsequently ascertain their alignment with our predefined categories of hate speech. In instances where a tweet contains multiple associated hashtags, it can be affiliated with multiple hate speech categories. Subsequent to this inference process, we present the resulting statistical insights in Table 3.

Table 3 The number of tweets and hashtags for each type of hate speech

Form          T       H       U
Communalism   6115    2202    1289
Homophobia    10989   9946    5766
Racism        1615    1576    690
Sexism        14078   2424    7117
Xenophobia    42960   16542   11635

T refers to the total number of tweets, H to the total number of hashtags, and U to the total number of unique users.

To address RQ7, we compute the mean familiarity and similarity of the users participating in each hateful form using both approaches, TARS and Hashtag. To facilitate comparison, we normalize both similarity and familiarity values by dividing them by their respective maximum values. We plot the normalized metrics for each hateful form in Fig. 9. The results reveal that individuals who engage in racism and communalism hate speech tend to have stronger connections with like-minded individuals, evidenced by the higher values of familiarity and similarity observed in these forms of hate speech. This finding highlights the importance of understanding the dynamics of hate speech
on social media and the role of homophily in the persistence and spread of these forms of hate speech.

Further, both approaches revealed that individuals expressing hate against different behavioural traits, such as racism and communalism, tend to strongly connect with each other. This discovery further strengthens our confidence in the efficacy of the proposed approaches to detect hateful forms. Additionally, the results indicate that individuals who engage in racist and communalist hate speech tend to form stronger connections with others who hold similar hateful beliefs. The clustering of like-minded individuals can contribute to the persistence and spread of this type of hate speech. The findings provide important insights that can inform the development of targeted strategies for addressing hate speech on social media. Furthermore, the observation that different forms of hate speech may exhibit different levels of homophilic behaviour emphasizes the importance of detecting multiple forms of hate speech and understanding their dynamics in the online environment.

Fig. 9 Homophily in hateful classes detected using hashtags: an analysis of the prevalence of association among individuals with similar hateful ideologies

8 Related work

The study of homophily, or the tendency of individuals to associate and form connections with those who are similar to themselves, has been a prominent topic in social network research. Homophily in social networks was first proposed by McPherson et al. (2001), who define homophily as existing between two users if they follow each other because of shared common interests. Subsequently, De Choudhury et al. (2010) establish that homophily plays a vital role in information diffusion and dissemination on social networks. Their work is built on the critical observation that homophily structures the ego networks of individuals and impacts their communication behaviour. Another early work, Weng et al. (2010), detects the existence of reciprocity on Twitter. They show that users sharing common interests tend to follow each other, resulting in high reciprocity on Twitter due to homophily. They use similarity and familiarity together to rank the users.

The role of homophily in information diffusion on social networks is further strengthened by Halberstam and Knight (2016), Aral et al. (2009), and Starbird and Palen (2012). Specifically, Halberstam and Knight (2016) investigate homophily in political information diffusion. They proved their hypotheses on two groups, conservative and liberal, and concluded that "with homophily, members of the majority group have more network connections and are exposed to more information than the minority group". Additionally, they also show that "with homophily and a tendency to produce like-minded information, groups are disproportionately exposed to like-minded information, and the information reaches like-minded individuals more quickly than it reaches individuals of opposing ideologies." Another work, by Aral et al. (2009), demonstrates homophily in contagion for product adoption on dynamic networks. They empirically establish on the Yahoo IM (instant messenger) system that peer influence is not sufficient to explain contagion in product adoption. They find that homophily is responsible for more than half of the contagion. In addition, homophily has also been studied in the context of sustaining online communities, as demonstrated by Ducheneaut et al. (2007), who found that homophily is an essential factor in the sustenance of online gaming guilds. The role of familiarity and similarity has also been explored in various studies, such as Ying et al. (2010), which showed that semantic similarities in user trajectories on mobile networks are a better indicator of user similarity than traditional trajectory similarity measures. Thus, homophily has been extensively studied in the literature and is important for explaining many social phenomena in the virtual world.

Many papers have jointly studied familiarity and similarity for modelling solutions on Twitter.
Afrasiabi Rad and Benyoucef (2014) study communities formed over friendships on the YouTube social network. They observe that communities are formed from similar users on YouTube; however, they do not find large similarity values between friends in YouTube communities. Recently, topical homophily was proposed by Dey et al. (2018), where they show that homophily is the driving factor in the emergence of topics and their life cycle. While there has been a wealth of literature studying homophily and its role in various social phenomena, there has been limited research addressing the issue of homophily in hate speech and different forms of hate speech. Our study fills this gap by exploring the homophilic behaviour of individuals who engage in different forms of hateful speech, such as racism, communalism, sexism, xenophobia, and homophobia.

9 Conclusion

In conclusion, this research has shown that homophily is exhibited by users who produce hateful content on social media platforms. Furthermore, the results indicate that homophily is more pronounced for certain forms of hate speech, such as racism and communalism. This observation is important as it suggests that individuals who engage in racist and communalist hate speech tend to have stronger connections with like-minded individuals, which may contribute to the persistence and spread of this type of hate speech. In addition to providing insight into the dynamics of hate speech, this research also introduces metrics for measuring familiarity and similarity in the context of social media platforms. These metrics can be useful for understanding the nature of online communities and how individuals form connections with each other.

Overall, the results of this research highlight the importance of understanding the underlying dynamics of hate speech on social media platforms. The proposed metrics for measuring familiarity and similarity, along with the observation of homophily in the context of certain forms of hate speech, can be valuable for developing targeted strategies to reduce the prevalence of hate speech online.

Author contributions All authors contributed equally.

Funding None

Data availability The datasets analysed during the current study are publicly available and can be downloaded from https://github.com/ENCASEH2020/hatespeech-twitter and https://www.dropbox.com/sh/ayt6wcjzczhhtwp/AADS7aDFIiIbh-HtCaxdwsHqa?dl=0

Declarations

Conflict of interest The authors have no conflict of interest to declare that is relevant to the content of this article.

Ethics approval Not applicable

References

Afrasiabi Rad A, Benyoucef M (2014) Similarity and ties in social networks: a study of the YouTube social network. J Inf Syst Appl Res 7(4):14
Aral S, Muchnik L, Sundararajan A (2009) Distinguishing influence-based contagion from homophily-driven diffusion in dynamic networks. Proc Natl Acad Sci 106(51):21544–21549
Bhargava M, Mehndiratta P, Asawa K (2013) Stylometric analysis for authorship attribution on Twitter. In: Big Data Analytics, pp 37–47
Davidson T, Warmsley D, Macy M, et al (2017) Automated hate speech detection and the problem of offensive language. In: Eleventh International AAAI Conference on Web and Social Media
De Choudhury M, Sundaram H, John A, et al (2010) "Birds of a feather": does user homophily impact information diffusion in social media? arXiv preprint arXiv:1006.1702
Dey K, Shrivastava R, Kaushik S, et al (2018) Assessing topical homophily on Twitter. In: International Conference on Complex Networks and their Applications, Springer, pp 367–376
Ducheneaut N, Yee N, Nickell E, et al (2007) The life and death of online gaming communities: a look at guilds in World of Warcraft. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp 839–848
Fast E, Chen B, Bernstein MS (2016) Empath: understanding topic signals in large-scale text. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp 4647–4657
Gao H, Wang Z, Ji S (2018) Large-scale learnable graph convolutional networks. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp 1416–1424
Halberstam Y, Knight B (2016) Homophily, group size, and the diffusion of political information in social networks: evidence from Twitter. J Public Econ 143:73–88
Halder K, Akbik A, Krapac J, et al (2020) Task-aware representation of sentences for generic text classification. In: COLING 2020, 28th International Conference on Computational Linguistics
Hamilton WL, Ying R, Leskovec J (2017) Inductive representation learning on large graphs. arXiv preprint arXiv:1706.02216
Kincaid J (1975) Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy enlisted personnel. Research Branch report, Chief of Naval Technical Training, Naval Air Station Memphis
Kipf TN, Welling M (2016a) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907
Kipf TN, Welling M (2016b) Variational graph auto-encoders. arXiv preprint arXiv:1611.07308
Liu Z, Chen C, Yang X, et al (2018) Heterogeneous graph neural networks for malicious account detection. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp 2077–2085
Matamoros-Fernández A, Smith A, Al-Rawi A (2019) Hate speech and covert discrimination on social media: monitoring the Facebook pages of extreme-right political parties in Spain. Policy & Internet 11(3):288–310
Mathew B, Dutt R, Goyal P, et al (2019) Spread of hate speech in online social media. In Proceedings of the 10th ACM Conference on Web Science, pp 173–182
McPherson M, Smith-Lovin L, Cook JM (2001) Birds of a feather: homophily in social networks. Ann Rev Sociol 27(1):415–444
Paik A, Pachucki MC, Tu HF (2023) "Defriending" in a polarized age: political and racial homophily and tie dissolution. Social Networks 74:31–41
Rajadesingan A, Zafarani R, Liu H (2015) Sarcasm detection on Twitter: a behavioral modeling approach. In WSDM, pp 97–106
Ribeiro M, Calais P, dos Santos Y, et al (2017) "Like sheep among wolves": characterizing hateful users on Twitter. In MIS2 Workshop at WSDM'2018
Schlichtkrull M, Kipf TN, Bloem P, et al (2018) Modeling relational data with graph convolutional networks. In European Semantic Web Conference, Springer, pp 593–607
Starbird K, Palen L (2012) (How) will the revolution be retweeted? Information diffusion and the 2011 Egyptian uprising. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp 7–16
Veličković P, Cucurull G, Casanova A, et al (2017) Graph attention networks. arXiv preprint arXiv:1710.10903
Weng J, Lim EP, Jiang J, et al (2010) TwitterRank: finding topic-sensitive influential twitterers. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pp 261–270
Xu S, Zhou A (2020) Hashtag homophily in Twitter network: examining a controversial cause-related marketing campaign. Comput Hum Behav 102:87–96
Ying JJC, Lu EHC, Lee WC, et al (2010) Mining user similarity from semantic trajectories. In Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Location Based Social Networks, pp 19–26

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
