Why so Emotional? An Analysis of Emotional Bot-Generated Content on Twitter
Ema Kušen (1) and Mark Strembeck (1,2,3)
(1) Vienna University of Economics and Business (WU Vienna), Vienna, Austria
(2) Secure Business Austria (SBA), Vienna, Austria
(3) Complexity Science Hub (CSH), Vienna, Austria
ema.kusen@wu.ac.at, mark.strembeck@wu.ac.at
Keywords: Bot Behavior, Emotion Analysis, Emotional Conformity, Temporal Patterns, Twitter.
Abstract: In this paper, we present a study on the emotions conveyed in bot-generated Twitter messages as compared
to emotions conveyed in human-generated messages. Social bots are software programs that automatically
produce messages and interact with human users on social media platforms. In recent years, bots have
become quite complex and may mimic the behavior of human users. Prior studies have shown that
emotional messages may significantly influence their readers. Therefore, it is important to study the effects
that emotional bot-generated content has on the reactions of human users and on information diffusion over
online social networks (OSNs). For the purposes of this paper, we analyzed 1.3 million Twitter accounts that
generated 4.4 million tweets related to 24 systematically chosen real-world events. Our findings show that:
1) bots emotionally polarize during controversial events and even inject polarizing emotions into the Twitter
discourse on harmless events such as Thanksgiving, 2) humans generally tend to conform to the base
emotion of the respective event, while bots contribute to the higher intensity of shifted emotions (i.e.
emotions that do not conform to the base emotion of the respective event), 3) bots tend to shift emotions to
receive more attention (in terms of likes and retweets).
Kušen, E. and Strembeck, M.
Why so Emotional? An Analysis of Emotional Bot-generated Content on Twitter.
In Proceedings of the 3rd International Conference on Complexity, Future Information Systems and Risk (COMPLEXIS 2018), pages 13-22.
ISBN: 978-989-758-297-4
Copyright © 2018 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved
Recent studies indicated that a single bot may generate as many as 500 tweets per day (Kollanyi et al., 2016) and bot-generated content may reach up to 19% of the tweets on particular topics (Bessi and Ferrara, 2016), leading to a threat of negatively affecting the public opinion. For example, Ferrara et al. discussed a number of malicious consequences that might arise from bot activities, including perception altering, destroying user reputation, or manipulating the users' opinions (Ferrara et al., 2016). Thus, given the high volume of bot-generated content, it is important to study the potential impact of bots on emotional content disseminated through OSNs.

Previous studies mainly focused on the presence of bot accounts in the social media discussion on a single real-world event, thus making the findings difficult to generalize (Dickerson et al., 2014; Bessi and Ferrara, 2016).

For this paper, we analyzed a data-set consisting of 4.4 million Twitter messages related to 24 systematically chosen events. In particular, we analyzed the data-set for the presence and intensity of the eight basic emotions identified by Robert Plutchik (Plutchik, 2001). The messages in our data-set have been sent from 1.3 million distinct Twitter accounts, 35.2 thousand of which we identified as bots.

Our analysis shows that human and bot accounts exhibit distinct behavioral patterns with respect to the emotions they spread. While humans tend to conform to the base emotion of an event (e.g., express sadness or fear during negative events), bot accounts disseminate a more heterogeneous set of emotions. The distinction between bot and human behavior is especially evident during polarizing events, where bot accounts purposefully pick sides, i.e. they follow a strategic agenda to influence human users.

The remainder of the paper is organized as follows. In Section 2 we summarize related work. Section 3 outlines our research procedure, followed by a report on our results in Section 4. We further discuss our findings in Section 5 and conclude the paper in Section 6.

2 RELATED WORK

As discussed in (Abokhodair et al., 2015; Chu et al., 2010; Kollanyi et al., 2016; Mascaro et al., 2016), bot accounts differ from human accounts in their tweeting frequency. However, Chavoshi et al. suggested that particular bots may also delete some of their tweets to reach a tweet generation rate that is comparable to human accounts (Chavoshi et al., 2017b). The difference between bots and human accounts is especially evident when observing temporal tweeting patterns, as human accounts tend to predominantly tweet during weekdays, whereas bots are equally active during weekdays and on weekends (Chu et al., 2010). Another distinction between Twitter bots and humans lies in the follower-followee ratio. In a study involving 500,000 Twitter accounts, Chu et al. found that bot accounts tend to follow a lot of users, but have only a few followers themselves (Chu et al., 2010).

Moreover, it has been reported that Twitter bots often have an agenda, e.g., by trying to persuade or manipulate Twitter users. To achieve this, bots typically use content-related features, such as URLs and hashtags, to promote their messages. For example, they may boost the perceived importance of a specific topic by (re)tweeting a certain URL (Chu et al., 2010; Gilani et al., 2017), use hashtags, or even identify and mention potentially interested target users (@username) to mobilize people for action. In fact, (Savage et al., 2016) indicated that bot accounts can capture the attention of human Twitter users who subsequently engage in a discussion on the bot-generated topic and further promote bot-generated content.

Multiple studies indicated that bot accounts may endanger democratic elections by swaying the voters' opinions, spreading misinformation, or even amplifying the perceived influence of a specific political candidate (Ferrara, 2017). Ratkiewicz et al. studied bot activities related to US politics and found that bots are responsible for generating thousands of tweets that contain links and strategically mention a few popular users (Ratkiewicz et al., 2011). These users in turn receive tweets sent by bot accounts and spread the respective tweets to their followers. Thereby, Ratkiewicz et al. found empirical evidence which confirms that bots may generate information cascades. Moreover, Kollanyi et al. found that bot accounts indeed have an impact on the political discourse over Twitter (Kollanyi et al., 2016). By analyzing the 2016 US Presidential Elections, Kollanyi et al. found that bots systematically combined pro-Trump hashtags with neutral and pro-Clinton hashtags such that by the time of the election, 81.9% of the bot-generated content involved some pro-Trump messaging.

The importance of studying sentiments communicated by bots and human accounts has been addressed by Dickerson et al., who suggested that humans tend to express positive opinions with a higher intensity, as compared to bots (Dickerson et al., 2014). Moreover, the authors found that humans tend to disagree more with the base sentiment of the event they studied (the 2014 Indian election). Based on this finding, (Everett et al., 2016) indicated that bot-generated mes-
sages which disagree with the opinion of a crowd are deceptive, reaching a high likelihood of 78% to trick people into believing a particular bot message was actually generated by a human account. However, some of the prior findings mentioned above could not be confirmed in our analysis (see Section 5).

Table 1: List of events analyzed in our study.

Domain           Event                         Nr. tweets
Negative (N=1,490,495; 34%, RT=76.38%)
Politics         1) Erdogan's threats to EU           804
                 2) US anti-Trump protests        381,982
Pop culture      3) Death of Leonard Cohen         89,619
                 4) Death of Colonel Abrams         1,253
War & terrorism  5) Aleppo bombings               995,561
                 6) Seattle shooting                   73
started their activities (Chavoshi et al., 2016a), and is able to identify synchronized bot behavior (Chavoshi et al., 2017a). In total, we used DeBot to analyze 1,317,555 distinct Twitter user accounts, 35,247 of which have been identified as bots – giving us an overall percentage of 2.67% of bot accounts in our data-set.

Phase 5. In the final step, we analyzed our data-set. Since it includes events related to three different base emotions (i.e. events that are either positive, negative, or polarizing, see also Table 1), the goal of this paper is to study how emotions conveyed by Twitter bots compare to emotions spread by human accounts. More specifically, we are interested in the impact of bots on the diffusion of emotional content. To this end, Section 4 reports our findings in three parts: 1) relative intensities for each of the eight basic emotions as conveyed by bots and human accounts, 2) temporal patterns in tweeting of emotional content by bots and human accounts, and 3) user reactions on emotional tweets sent by bots and human accounts.

4 ANALYSIS RESULTS

4.1 Intensities of Emotions Conveyed in Tweets Authored by Human and Bot Accounts

Figures 2-4 show the relative presence and intensity for each of the eight emotions during positive, negative, and polarizing events. In the figures, we visualize positive emotions (trust, joy, anticipation[5]) in green, negative emotions (anger, disgust, sadness, and fear) in red, and the conditional emotion surprise, which can, by default, neither be classified as positive nor negative, in yellow. In Figures 2-4, the scores for each emotion e are averaged over the sentence count S and divided by the tweet count (N):

$\frac{1}{N} \sum_{i=1}^{n} \frac{e_i}{S_i}$

Our results show that humans in general tend to conform to the base emotion of the respective event (e.g., predominantly positive emotions are sent during positive events, and predominantly negative emotions during negative events).

Twitter bots, however, exhibit a different behavioral pattern with respect to the emotions they convey in their messages. In particular, during negative events humans exhibit a larger difference (d) between positive and negative emotions (d_{n-p}=0.192) as compared to bot accounts (d_{n-p}=0.005) (see Figure 2). The same observation can be made for tweets sent during positive events (see Figure 3), where the difference between positive and negative emotions sent by human accounts is d_{p-n}=0.666, while the difference drops to d_{p-n}=0.282 for bot accounts. During polarizing events (e.g., political campaigning or other controversial topics) one can expect a mixture of emotions. Therefore, as expected, humans and bots alike express positive as well as negative emotions in polarizing events (see Figure 4). As an interesting finding we observed that human accounts are more negatively inclined during polarizing events (d_{n-p}=0.0189), while bot accounts tend to send more tweets receiving a positive emotion score during polarizing events (d_{n-p}=-0.102).

However, note that especially in polarizing events, a positive emotion score does not necessarily convey a positive message but is usually biased towards one of the polarizing opinions (see Section 5). Thus, the positive emotion score often results from a bot that purposefully "picked a side" to promote. An example of such a biased bot-generated tweet from our data-set reads: "#ObamaFail I'll be so happy to see this joke move out of the White House!! #VoteTrumpPence16".

To further examine how well emotions communicated by bot accounts correlate with those expressed by human accounts, we converted the intensities of each emotion into a ranked list for each emotion category (positive, negative, polarizing) and obtained Kendall's rank coefficient τ. The results indicate that in general human and bot accounts tend to spread comparative emotions during positive events (Kendall's τ is a strong positive 0.85) as well as during negative events (Kendall's τ is a moderate positive 0.5). However, we found a larger distinction between humans and bots during polarizing events (Kendall's τ is a weak positive 0.14).

In order to provide more insight into this observation, we examine the impact of retweets on the overall emotion scores in our data-sets. In general, we found that humans as well as bots predominantly sent retweets. During positive events, 68.80% of the tweets generated by human accounts are retweets,

[5] We classify anticipation as a positive emotion because Spearman's correlation coefficient r has shown that anticipation correlates strongly with positive emotions (joy, trust) and only weakly with negative emotions (anger, fear, sadness, disgust). For example, for tweets related to positive events, anticipation strongly correlated with trust (r=0.69), but only weakly with fear (r=0.31). We observed the same pattern for negative and polarizing events. In contrast, surprise did not exhibit a strong correlation with either positive or negative emotions. Therefore, we treat surprise as a separate category.
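The scoring and ranking procedure described above can be sketched as follows. This is a simplified illustration rather than the authors' implementation: the toy lexicon, the sample tweets, and all function names are hypothetical (the paper's actual emotion extraction follows Kušen et al., 2017a), and the tau computation ignores ties (tau-a).

```python
# Sketch: per-emotion score (sum_i e_i/S_i, divided by N) and Kendall's tau
# between the emotion rankings of two account groups. Toy data throughout.
EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def tweet_score(sentences, lexicon, emotion):
    """e_i / S_i: per-sentence lexicon hit rate, averaged over the S_i sentences."""
    total = 0.0
    for sentence in sentences:
        words = sentence.lower().split()
        hits = sum(emotion in lexicon.get(w, ()) for w in words)
        total += hits / max(len(words), 1)
    return total / max(len(sentences), 1)

def corpus_scores(tweets, lexicon):
    """Average each emotion's score over the N tweets of one event class."""
    n = max(len(tweets), 1)
    return {emo: sum(tweet_score(t, lexicon, emo) for t in tweets) / n
            for emo in EMOTIONS}

def kendall_tau(x, y):
    """Kendall's tau-a over two equally long score lists (no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Toy usage: compare the emotion rankings of two account groups.
lexicon = {"happy": {"joy"}, "scared": {"fear"}}
human = corpus_scores([["i am happy today"], ["so scared now", "happy happy"]], lexicon)
bot = corpus_scores([["so scared now"]], lexicon)
tau = kendall_tau([human[e] for e in EMOTIONS], [bot[e] for e in EMOTIONS])
```

In practice one would use a tie-aware implementation such as scipy.stats.kendalltau (tau-b) instead of the naive tau-a loop above.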
while 87.9% of the messages sent from bot accounts consist of retweets.

[Figures 2 and 3: bar charts of the relative scores for each of the eight emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, trust), as conveyed by human and bot accounts.]
[Figure 4: bar chart of the relative scores for each of the eight emotions during polarizing events, as conveyed by human and bot accounts.]
Table 2: Results of Welch's two sample t-test with a 95% confidence level of two samples (bots and humans respectively).
Numbers in brackets indicate degrees of freedom. Statistically significant results are shown in bold.
Polarizing Positive Negative
(Nhuman = 1757663; Nbot = 54910) (Nhuman = 1108577; Nbot = 7010) (Nhuman = 1453591; Nbot = 36904)
Anger
t (with RT) t(57858)=-4.3941, p<0.05 t(1115600)=5.6, p<0.05 t(1490500)=-26.82, p<0.05
t (without RT) t(121030)=-6.88, p<0.05 t(69787)=0.752, p>0.05 t(100350)=-14.30, p<0.05
Disgust
t (with RT) t(1490500)=-16.35, p<0.05 t(1115600)=2.34, p<0.05 t(38559)=-0.86, p>0.05
t (without RT) t(121030)=-13.26, p<0.05 t(69787)=-1.05, p>0.05 t(9791)=-1.45, p>0.05
Sadness
t (with RT) t(1812600)=-37.86, p<0.05 t(1115600)=3.27, p<0.05 t(1490500)=-60.42, p<0.05
t (without RT) t(121030)=-15.37, p<0.05 t(69787)=-0.15, p>0.05 t(100350)=-24.37, p<0.05
Fear
t (with RT) t(1812600)=-34.08, p<0.05 t(7083.3)=3.5, p<0.05 t(1490500)=-70.48, p<0.05
t (without RT) t(121030)=-16.42, p<0.05 t(69787)=1.95, p>0.05 t(100350)=-29.63, p<0.05
Trust
t (with RT) t(58244)=4.63, p<0.05 t(1115600)=-17.03, p<0.05 t(1490500)=-17.1, p<0.05
t (without RT) t(100350)=0.32, p>0.05 t(69787)=-11.92, p<0.05 t(100350)=-10.15, p<0.05
Joy
t (with RT) t(1812600)=10.87, p<0.05 t(1115600)=-35.4, p<0.05 t(1490500)=-17.97, p<0.05
t (without RT) t(100350)=-1.02, p>0.05 t(69787)=-24.41, p<0.05 t(100350)=-8.16, p<0.05
Anticipation
t (with RT) t(58354)=1.09, p<0.05 t(1115600)=-14.83, p<0.05 t(1490500)=-23.73, p<0.05
t (without RT) t(100350)=-3.13, p<0.05 t(69787)=-7.58, p<0.05 t(100350)=-12.74, p<0.05
Surprise
t (with RT) t(58337)=-1.98, p<0.05 t(1115600)=3.78, p<0.05 t(1490500)=-10.91, p<0.05
t (without RT) t(100350)=-0.32, p>0.05 t(69787)=0.14, p>0.05 t(100350)=-6.37, p<0.05
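The statistics in Table 2 follow the standard Welch formulas: the t statistic for samples with unequal variances, with the Welch-Satterthwaite approximation for the degrees of freedom. A minimal stdlib sketch with invented sample data; in practice a call such as scipy.stats.ttest_ind(bots, humans, equal_var=False) computes the same statistic.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom for samples with unequal variances and sizes."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented emotion-score samples for the two account groups.
bots = [0.10, 0.25, 0.15, 0.30, 0.20]
humans = [0.05, 0.10, 0.08, 0.12, 0.09, 0.11]
t, df = welch_t(bots, humans)
```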
and/or amplify a certain sentiment (positive or nega-

[Figure 5: reactions (likes/retweets) on emotional tweets; legend includes "positive emotions, human account".]
Table 3: Summary of user reactions (mean and standard deviation) on emotional content disseminated by bot and human accounts. Bot-related table entries which received more attention in terms of liking or retweeting as compared to human-generated tweets are printed in bold.

              Polarizing          Positive            Negative
Positive
RT (human)    6142.62±18921.07    5727.02±17087.09    1502.73±4890.3
Like (human)  1.19±76.18          1.48±77.44          1.06±40.57
RT (bot)      2629.78±11332.41    1389.35±5674.46     404.8±1209.07
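Mean±standard-deviation entries of the kind shown in Table 3 are plain group summaries. A stdlib sketch over hypothetical (account_type, reaction_kind, count) records; the field names and sample data are invented for illustration.

```python
from collections import defaultdict
from statistics import mean, stdev

def reaction_summary(records):
    """Group reaction counts by (account_type, reaction_kind) and report
    mean and sample standard deviation, as in a Table 3-style summary."""
    groups = defaultdict(list)
    for account_type, kind, count in records:
        groups[(account_type, kind)].append(count)
    return {key: (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
            for key, vals in groups.items()}

# Invented records: retweet counts of tweets sent by humans and bots.
records = [
    ("human", "RT", 10), ("human", "RT", 20), ("human", "RT", 30),
    ("bot", "RT", 5), ("bot", "RT", 7),
]
summary = reaction_summary(records)
```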
forth an interesting insight into two distinct types of user reactions on emotional bot-generated content – thereby also indicating a potential for bot accounts to emotionally manipulate Twitter users (note that in our data-set no neutral bot-generated tweet received more attention than emotionally-neutral tweets generated by human accounts).

5 DISCUSSION

Our findings suggest that people, unlike bots, tend to conform to the base emotion of an event. This finding is in line with "offline" social studies (Heath, 1996) which showed that people tend to pass along (word-of-mouth) positive messages during positive events and negative messages during negative events. By comparing our findings to the ones in the related work (see Section 2), we cannot confirm the findings indicated in (Dickerson et al., 2014), according to which people disagree more with the base sentiment of a particular event. One possible explanation for this result is that for our study we analyzed tweets from 24 different events while the authors of (Dickerson et al., 2014) studied a single event (the 2014 Indian election) only.

As noted in (Everett et al., 2016), bots may try to deceive human users by diverging the sentiment conveyed in their tweets from the base emotion of the respective event. Thus, the observation that bots differ from human accounts by not complying with the base sentiment of the event may be explained by a bot's agenda to deceive human users. In fact, we also observed a deviation from the base sentiment in positive and polarizing events. More specifically, we found that bots tend to be more positive during polarizing events and more negative during positive events.

Identifying emotionally polarizing bots during polarizing events can serve as an indicator for an attempt of opinion swaying, i.e. the bots pick a side that they promote in their tweets. In our study, we have shown that bot-generated retweets have an impact on the perceived emotionality in the Twitter discourse. In particular, our results indicate that in negative events positive emotions are amplified by the effects of retweets, similar to an amplification of positive emotions during polarizing events.

Below we show examples of bot messages with positive emotion scores during polarizing events. Related to the 2016 US presidential elections, the bots in our data-set disseminated messages such as:

• "#ObamaFail I'll be so happy to see this joke move out of the White House!! #VoteTrumpPence16",
• "So proud of my daughter! She just voted for @realDonaldTrump #Millennials4Trump #Women4Trump #VoteTrumpPence16 #America".

While related to the 2016 Austrian presidential elections, bots disseminated messages such as:

• "Save your country, take back control and stop Islamisation. We support Austria's Hofer in tomorrow's election. #bpw16".

Since messages that support one particular candidate make up the vast majority (99.37%) of bot-generated tweets in the subset containing positive messages (87.8% of those messages are retweets), we can conclude that bots clearly have a strategic agenda during polarizing events.

Our findings further indicate that bots tend to spread (retweet) more negative emotions during positive events (86.41% retweets) as compared to humans. Observing the bot-generated tweets in our data-set,
the respective messages tend to either 1) express a negative opinion about a specific topic, such as:

• "This was not as good as the last one. It's hard to ink when there is a lot of black #FantasticBeasts",
• "That explains the retarded haircut. I hate his mother even more. #FantasticBeasts",
• "Nico Rosberg articulates the F1 season and his resignation but offers no real clues as to why #NicoRosberg",

or 2) surprise the prospective readers by injecting negative content about a particular subject. For example, the human-generated Thanksgiving tweets in our data-set are predominantly positive, while bots injected topic-wise unrelated negative tweets carrying a Thanksgiving hashtag, e.g.:

• "Sissy Mitt Romney signed Massachusetts gun ban #thanksgiving #Trump #MAGA".

6 CONCLUSION

We systematically collected 4.4 million tweets related to 24 real-world events that are either positive (e.g., birthday of a celebrity), negative (e.g., terror attacks), or polarizing (e.g., political campaigning). For each of the 4.4 million tweets we applied an emotion-extraction procedure (Kušen et al., 2017a) that provides intensity scores for the eight basic emotions according to Plutchik's wheel of emotions. In total, the tweets in our data-set have been generated by 1.3 million unique user accounts, 35.2 thousand of which were identified as bots via DeBot (Chavoshi et al., 2016b; Chavoshi et al., 2016a; Chavoshi et al., 2017b).

In order to examine how bots and human accounts compare to each other in terms of the emotions they spread, we examined the relative presence of emotions, as communicated by both types of accounts. Moreover, since previous studies have shown that bots tend to exhibit a higher retweet frequency than human accounts, we adjusted the scores for the effects of retweets. Our findings suggest that, in general, humans conform to the base emotion of the respective event, while bots contributed to the higher intensity of shifted emotions (e.g., negative emotions during positive events). Our study shows that bots tend to shift emotions in order to receive more attention (in terms of likes or retweets) and that they often follow a specific agenda. In particular, we showed that bots tend to emotionally polarize during controversial events (such as presidential elections). Furthermore, we found that bots inject shifted emotions into topic-wise unrelated Twitter discussions (e.g., messages related to the 2016 US presidential election that include Thanksgiving hashtags). Given such observations, emotions sent by an OSN account may serve as a valuable indicator of automated accounts.

We also performed a time-series analysis and identified temporal patterns which distinguish human and bot accounts in terms of their emotionality. Finally, we showed that emotional bot-generated tweets tend to get more likes/retweets if a bot spreads a tweet conveying shifted emotions (i.e. emotions that do not comply with the base emotion of the respective event). Interestingly, emotionally neutral tweets authored by bots did not receive much attention in terms of likes and retweets.

In our future work, we plan to further investigate the effects of basic as well as derived emotions on the diffusion of information in OSNs. Moreover, we plan to investigate whether the same patterns found on Twitter hold for other OSN platforms, such as Facebook.

ACKNOWLEDGEMENTS

We thank Nikan Chavoshi from the Department of Computer Science, University of New Mexico, for her kind assistance with the bot detection via DeBot.

REFERENCES

Abokhodair, N., Yoo, D., and McDonald, D. W. (2015). Dissecting a social botnet: Growth, content and influence in Twitter. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '15, pages 839–851, New York, NY, USA. ACM.

Bessi, A. and Ferrara, E. (2016). Social bots distort the 2016 US presidential election online discussion. First Monday, 21(11).

Chavoshi, N., Hamooni, H., and Mueen, A. (2016a). DeBot: Twitter bot detection via warped correlation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pages 817–822.

Chavoshi, N., Hamooni, H., and Mueen, A. (2016b). Identifying correlated bots in Twitter. In Spiro, E. and Ahn, Y.-Y., editors, Social Informatics: 8th International Conference, SocInfo 2016, Bellevue, WA, USA, November 11-14, 2016, Proceedings, Part II, pages 14–21, Cham. Springer International Publishing.

Chavoshi, N., Hamooni, H., and Mueen, A. (2017a). On-demand bot detection and archival system. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW '17 Companion, pages 183–187, Republic and Canton of Geneva,
Switzerland. International World Wide Web Conferences Steering Committee.

Chavoshi, N., Hamooni, H., and Mueen, A. (2017b). Temporal patterns in bot activities. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW '17 Companion, pages 1601–1606, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.

Chu, Z., Gianvecchio, S., Wang, H., and Jajodia, S. (2010). Who is tweeting on Twitter: Human, bot, or cyborg? In Proceedings of the 26th Annual Computer Security Applications Conference, ACSAC '10, pages 21–30, New York, NY, USA. ACM.

Dickerson, J. P., Kagan, V., and Subrahmanian, V. S. (2014). Using sentiment to detect bots on Twitter: Are humans more opinionated than bots? In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014), pages 620–627.

Eagle, N. and Pentland, A. (2006). Reality mining: Sensing complex social systems. Personal and Ubiquitous Computing, 10(4):255–268.

Everett, R. M., Nurse, J. R. C., and Erola, A. (2016). The anatomy of online deception: What makes automated text convincing? In Proceedings of the 31st Annual ACM Symposium on Applied Computing, SAC '16, pages 1115–1120, New York, NY, USA. ACM.

Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday, 22(8):1–33.

Ferrara, E., Varol, O., Davis, C., Menczer, F., and Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7):96–104.

Fredrickson, B. L. (2001). The role of positive emotions in positive psychology: The broaden-and-build theory of positive emotions. The American Psychologist, 56:218–226.

Gilani, Z., Farahbakhsh, R., and Crowcroft, J. (2017). Do bots impact Twitter activity? In Proceedings of the 26th International Conference on World Wide Web Companion, WWW '17 Companion, pages 781–782, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.

Heath, C. (1996). Do people prefer to pass along good or bad news? Valence and relevance of news as predictors of transmission propensity. Organizational Behavior and Human Decision Processes, 68(2):79–94.

Howard, P., Duffy, A., Freelon, D., Hussain, M., Mari, W., and Mazaid, M. (2011). Opening closed regimes: What was the role of social media during the Arab Spring? Project on Information Technology and Political Islam, pages 1–30.

Kim, H. S., Lee, S., Cappella, J. N., Vera, L., and Emery, S. (2013). Content characteristics driving the diffusion of antismoking messages: Implications for cancer prevention in the emerging public communication environment. Journal of the National Cancer Institute. Monographs, 47:182–187.

Kollanyi, B., Howard, P., and Woolley, S. (2016). Bots and automation over Twitter during the U.S. election. Data Memo. Oxford, UK: Project on Computational Propaganda, 2016(4):1–5.

Kramer, A. D. I., Guillory, J. E., and Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24):8788–8790.

Kušen, E., Cascavilla, G., Figl, K., Conti, M., and Strembeck, M. (2017a). Identifying emotions in social media: Comparison of word-emotion lexicons. In Proc. of the 4th International Symposium on Social Networks Analysis, Management and Security (SNAMS) (co-located with IEEE FiCloud 2017). IEEE.

Kušen, E., Strembeck, M., Cascavilla, G., and Conti, M. (2017b). On the influence of emotional valence shifts on the spread of information in social networks. In Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 321–324. ACM.

Mascaro, C., Agosto, D., and Goggins, S. P. (2016). One-sided conversations: The 2012 presidential election on Twitter. In Proceedings of the 17th International Digital Government Research Conference on Digital Government Research, dg.o '16, pages 112–121, New York, NY, USA. ACM.

Plutchik, R. (2001). The nature of emotions. American Scientist, 89(4).

Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Flammini, A., and Menczer, F. (2011). Detecting and tracking political abuse in social media. In Proc. 5th International AAAI Conference on Weblogs and Social Media (ICWSM), pages 297–304.

Savage, S., Monroy-Hernandez, A., and Höllerer, T. (2016). Botivist: Calling volunteers to action using online bots. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, pages 813–822, New York, NY, USA. ACM.

St Louis, C. and Zorlu, G. (2012). Can Twitter predict disease outbreaks? BMJ, 344.

Thai, M. T., Wu, W., and Xiong, H., editors (2016). Big Data in Complex and Social Networks. CRC Press, Taylor & Francis Group.

Tsugawa, S. and Ohsaki, H. (2015). Negative messages spread rapidly and widely on social media. In Proceedings of the 2015 ACM on Conference on Online Social Networks, pages 151–160.

Van den Broeck, J., Argeseanu Cunningham, S., Eeckels, R., and Herbst, K. (2005). Data cleaning: Detecting, diagnosing, and editing data abnormalities. PLoS Medicine, 2(10).