
18. Flaming and trolling

Claire Hardaker

In: C. R. Hoffmann and W. Bublitz (eds.). (2017). Pragmatics of Social Media, 493–522. Berlin/Boston: De Gruyter Mouton. DOI 10.1515/9783110431070-018

Abstract: In the last two decades, pragmatic explorations of flaming and trolling in computer-mediated communication have gained momentum. However, although there is little doubt that “attacks, assaults and contemptuous remarks” (Jucker and Taavitsainen 2000: 73) have always been commonplace, some researchers (cf. Moor, Heuvelman and Verleur 2010; Nitin, Bansal and Kanzashi 2011) have recently argued that certain types of conflict, such as flaming and trolling, are particularly native to social media. To this end, this chapter aims to provide new insights into current pragmatic research into flaming and trolling, including how these terms are defined and deployed, case studies that illuminate how these behaviors are accounted for by existing literature, and the current challenges that face these fields and their development into the future.

1. Introduction

In recent years, the media has pounced on what I have, in previous research, loosely
termed negatively marked online behaviors (NMOB) (Hardaker 2010, 2013, 2015;
Hardaker and McGlashan 2016). NMOB is behavior that, for whatever reason,
others consider to fall outside of the bounds of social decorum for that particular
online interaction. It could involve something as mild as raising a topic that falls
outside of the scope of a given forum, or more seriously, posting crude jokes on a
tribute page set up to honor a deceased person. At the criminal end of the scale, it
might consist of repeatedly sending threats via emails or text messages, or sharing
someone’s personal, private details on public platforms.
The media’s interest in NMOB is unsurprising. The sheer breadth, depth, and range of unpleasant online behaviors provide an endless source of attention-grabbing headlines and exposés. Likewise, the people behind that behavior, whether characterized as trolls, cyberstalkers, or online predators, make for simplistic, almost cartoon-like villains that the vast majority of media consumers can readily position themselves against.
This fervor of interest in online abuse, though now so pervasive, is a
relatively new phenomenon. The internet could be described as well-established
across numerous countries by the 1990s, yet as late as 2010, antisocial online
behavior was virtually ignored by the news, academia, and even to an extent, the
online platforms that were home to those behaviors themselves. Even at the time
of writing, there is still surprisingly little linguistic research on NMOB, and what
interest we can find tends to be distributed unevenly. We can readily find interest
in topics such as spam (e.g. Barron 2006; Stivale 1997), cyberbullying (e.g. Strom
and Strom 2005; Topçu, Erdur-Baker and Çapa-Aydin 2008), cyberstalking (e.g.
Bocij 2004; Whitty 2004), aggressive video games (e.g. Dill and Dill 1998; Scott
1995; van Schie and Wiegman 1997) and computer-related depression (e.g. Kraut
et al. 1998). Likewise, flaming had arguably arrived by the turn of the millennium,
with interest from fields as diverse as psychology (Collins 1992; Lea et al. 1992),
linguistics (Arendholz 2013; Herring 1994; Jucker and Taavitsainen 2000), media
and cultural studies (Millard 1997; O’Sullivan and Flanagin 2003), sociology (Lee
2005) and information science (Kayany 1998).
Trolling, however, struggled to gain validity as a field of interest until as late
as 2010. For instance, by the end of the 1990s, there had been a handful of articles
on the politics (Tepper 1997) and deceit (Donath 1999) found in online practices.
By 2005 new research was being published about online deception (Placks 2003; Utz 2005; Zhou et al. 2004), but very little directly about trolling (see, however, Herring et al. 2002). Finally, by 2010, research dealing specifically with trolling (e.g. Hardaker 2010; Shachaf and Hara 2010; Shin 2008) started to appear in greater numbers. And by 2015, the growth had become exponential, spanning
fields as diverse as Artificial Intelligence (Dlala et al. 2014), computing (Cheng,
Danescu-Niculescu-Mizil and Leskovec 2015; Fichman and Sanfilippo 2015),
communication studies (Brabazon 2012), cultural studies (Phillips 2015), media
(Binns 2011; McCosker 2014), politics (Virkar 2014), psychology (Buckels, Trapnell and Paulhus 2014; Maltby et al. 2015), public relations (Weckerle 2013) and of course, linguistics (Hardaker 2013, 2015; Hardaker and McGlashan 2016; Hopkinson 2013).
Why such reluctance to confront what might be viewed as an online epidemic?
One problem may be a lingering sense that computer-mediated communication
(CMC) is somehow not as worthy a subject of investigation as offline interaction,
and some research has compounded this devaluation by dichotomising virtual and
real-life activity (e.g. Baym 1996: 342; Chiou 2006: 547; Strom and Strom 2005:
41). A secondary problem from the academic perspective may be with

[…] the variability in the perceptions of norms and expectations underlying evaluations
of behaviour as polite, impolite, over-polite and so on, and thus inevitably discursive
dispute or argumentativity in relation to evaluations of im/politeness in interaction. Yet
with the exception of work by Locher (2006) and Graham (2007, 2008), there has been
little research on im/politeness in various forms of computer-mediated communication
from this perspective. (Haugh 2010: 8)

These problems are not limited to the analyst. CMC, by its very nature, is both relatively new and changing extremely quickly. Analysts and users alike may struggle to evaluate the appropriacy of online utterances in light of their own norms and expectations. With this in mind, this chapter moves next into a brief overview of
CMC and the peculiarities that this context affords. The current state of the art of
academic research on flaming and trolling in CMC in general and social media in
particular is discussed in Sections 3 and 4 respectively. Finally, Section 5 looks
towards promising future directions.

2. Computer-mediated communication

The overarching term computer-mediated communication is not necessarily clear-cut, so it is worth briefly defining. Firstly, computer may intuitively seem to index the prototypical desktop machine, but it can also refer to smartphones, smartwatches, tablets, games consoles, and more besides. Our interest is also in communication via computers, rather than with or from the computer, as covered by fields such as human-computer interaction (HCI) and artificial intelligence (AI). In other words, for the purposes of this chapter, we will define CMC as communication occurring via any mediating, networked technology (December 1997: 5; Ferris 1997; Herring 1996, 2003: 612), regardless of where that communication falls on the scale from almost instantaneous (synchronous CMC) to delayed (asynchronous CMC).
In early research, CMC was generally viewed from a technologically deterministic (Bolter and Grusin 1998) perspective. Strong technological determinism positions technology as in control, particularly of its place and development in culture and society. As a very small example of this, the headline How Facebook ruined my holiday: The Internet and mobile technology make it increasingly difficult to switch off (Kirsh 2012) casts Facebook, the internet, and mobile technology in the role of an autonomously self-aware, even malignant, actor, whilst the user is merely the passive victim.
Researchers such as Kraut et al. (1992: 375) argued that CMC not only
impeded the formation of new offline social ties, but even damaged those already
in existence. In later research, they suggested that “greater use of the internet is associated with declines in participants’ communication with family members in the household, declines in the size of their social circle, and increases in their depression and loneliness” (Kraut et al. 1998: 1017). Turkle (1990) describes the phenomenon of the subjective computer, in which we project onto a new technology our own fears and desires. Tyrannical dictatorships are more likely to see the threat of revolution and insurgency in CMC, whereas a democratic and open leadership will find in it an opportunity for more voices to be heard, and for the free communication of ideas and knowledge. Latterly, CMC research shifted its attention from the technology itself to the user in context (Androutsopoulos 2006: 420). This notion of self-determinism posits that we use, or allow ourselves to be affected by, technology. For example, the sentence I ruined my life (via CMC) casts the user as the primary actor, responsible for her own actions (Chandler 1995).


In short, as we will see in the coming sections, the current focus of CMC
research is on exploring the diversity, creativity, and meaning of linguistic choices,
the construction of online identities, and the interplay between these two.
Not all the diversity, creativity, and constructions are positive, however. In
fact, almost from its first days, CMC became a fertile ground for misbehavior and
crime, whether in the form of sharing illegal images, finding potential victims, or
just being extremely unpleasant to others from the safety of a distant keyboard. As
both Donath (1999) and Sternberg (2000) observed, as a direct result of this, many
CMC platforms are built and managed with conflict at the forefront of almost
every aspect. The evidence for this can be found in the technology, in the form of block, ban, or ignore features. Such features are usually also documented in the site’s own literature: “An extensive description of killfile techniques in a group’s FAQ is a kind of virtual scar-tissue, an indication that they have had previous trouble with trolls or flame-wars” (Donath 1999: 48).
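As a rough illustration of the mechanism Donath describes, the following is a minimal sketch of a killfile-style filter: posts from listed senders are silently dropped before display. This is a toy example only; the post structure and field names are hypothetical, not drawn from any actual Usenet client.

```python
# Minimal sketch of a killfile-style filter. The post structure and
# field names are hypothetical, for illustration only.
killfile = {"troll@example.com", "flamer@example.net"}

posts = [
    {"sender": "alice@example.org", "body": "Has anyone read Donath (1999)?"},
    {"sender": "troll@example.com", "body": "You are all idiots."},
]

# Drop any post whose sender appears in the killfile before display.
visible = [p for p in posts if p["sender"] not in killfile]

for post in visible:
    print(f'{post["sender"]}: {post["body"]}')
```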
In the space of a short chapter, it is impossible to do justice to all the factors
that play into abusive online behavior, so only three major aspects – anonymity,
meaning, and format – are covered below.

2.1. Anonymity, pseudonymity, and disclosure


Probably the earliest avenue of investigation into online abuse involved analyzing
the way in which anonymity affected behavioral outcomes. In simplistic terms, depending on the platform in question, CMC can offer users anything from complete anonymity (one changes one’s username very frequently or interacts without signing in), through pseudonymity (one maintains a long-term username that does not identify that person offline, but that accrues its own history), to full disclosure (one uses an identifier that would allow others to find them offline) (Bernstein et al. 2011).
Research into online identity has revealed that, almost regardless of a site’s imposed level of disclosure, users will themselves choose just how much personal information they wish to convey (Chester and Bretherton 2007). Anonymity and pseudonymity both facilitate the exploration of identities that may be deemed inconsistent with users’ offline selves (Bucholtz 1999; Bucholtz and Hall 2005), and as a consequence, Vinagre (2008: 321) observed that this can encourage users to feel that they are beyond the reach of consequences:
Sometimes people share very personal things about themselves. They reveal secret
emotions, fears, wishes. They show unusual acts of kindness and generosity, sometimes
going out of their way to help others. We may call this benign disinhibition. However,
the disinhibition is not always so salutary. We witness rude language, harsh criticisms,
anger, hatred, even threats. Or people visit the dark underworld of the Internet—places
of pornography, crime, and violence—territory they would never explore in the real
world. We may call this toxic disinhibition. (Vinagre 2008: 321)


As long ago as 380 BC, Plato (2007: 2.359c-2.360d) used the story of the Ring of Gyges as an insight into the ways in which visibility, invisibility, and anonymity could encourage a shepherd to murder a king. More recently, Lea and Spears (1991) observed how anonymity could influence group decision-making processes, whilst Reicher, Levine and Gordijn (1998) considered the ways that it affected power relations within groups.
Particularly on very large social networks, little prevents users from carrying out online abuse with fabricated, cloned, or even their own accounts if social stigma does not concern them (Chester and O’Hara 2009; Phillips 2002; Zarsky 2004). Siegel et al. (1986) elaborate on the ways that anonymity can foster a sense of impunity, a loss of self-awareness, and a likelihood of acting upon normally inhibited impulses – an effect known as deindividuation (1986: 161). Psychologically, interactants may give less consideration to the recipient’s feelings. This, according to Douglas and McGarty (2001: 399), is manifested in NMOB like flaming and trolling. When we add to this “the potential for reaching a diverse global audience, consisting of hundreds of cultures, it is unsurprising that conflict is a common phenomenon in Usenet” (Baker 2001).
This disjuncture between offline and online communication, however, is not the only anvil on which conflict is wrought. Even the communication of straightforward meanings can be more complex than it first appears.

2.2. Richness, assumptions, and meanings


Herring (2003) describes text-based CMC as “free from competing influences from
other channels of communication and from physical context” (2003: 612). Whilst
this claim is perhaps a little strong, it does highlight a supposed advantage of
CMC – that others cannot leap to conclusions about us before they have read what
we have to say. However, this apparent advantage can become a serious drawback.
Zdenek (1999) suggests that CMC is “clearly at a disadvantage, because [users] do not have access to the wide array of social cues that FTF [face-to-face] speakers do” (1999: 390). In other words, users who are heavily accustomed to face-to-face interaction may find themselves communicating in an environment that does not readily convey all the supporting metadata about the words, such as the tone of voice, gestures, or facial expressions. It is easy to forget this, or to assume that one’s intention will be far more obvious than it really is, and the result is a greatly increased potential for incorrect inferences.
In precisely this vein, much early research into CMC implicitly held up face-to-face interaction as the gold standard from which all other communicative forms, including CMC, deviated. For instance, Kraut et al. (1992) argued that face-to-face was expressively and interactively ‘richer’ than CMC. Baker develops this further when he argues:


To a “newbie,” posting to Usenet can be a daunting experience. It is impossible to know who will read the message, or to determine how they will react. Also, unlike face-to-face interaction, gestural, facial or prosodic cues are absent, and so many posters rely on smilies (Elmer-Dewitt 1994, Sanderson 1993) to function as indicators of emotion. (Baker 2001)

As above, a typical focus in CMC research is on what CMC lacks in comparison to face-to-face interaction (e.g. instant audiovisual feedback). Little, if anything, is said about the online features that are impossible offline, such as the ability to share multimedia, to overtly and anonymously rate user interaction, and to fully filter out undesired content.
Perhaps because of our heavy reliance on the face-to-face schema, however,
current research highlights that even experienced users can over-estimate their
online communicative abilities, and this over-estimation increases the potential for
miscommunication (Herring 2003: 612; Kruger et al. 2005; Zdenek 1999: 390).
A complicating factor, already alluded to above, is that CMC is dynamic, and is
continually developing new genres and styles (Abbasi and Chen 2008: 812), some
of which cannot exist offline (e.g. vlogs; fan pages; synchronous, worldwide social
networks).
CMC has also developed a rich body of cues, including symbols, imagery, style, formatting, greetings, signatures, memes, and animations (known as GIFs, after the Graphics Interchange Format). As a result, we can find research into the ways that these and many besides can express complex and subtle relational, social, and pragmatic information, such as affect (Subasic and Huettner 2001), rank (Hara, Bonk and Angeli 2000), opinion (Nigam and Hurst 2004), genre knowledge (Yates and Orlikowski 2002), ingroup norms, power relations and positioning (Henri 1992; Panteli 2002).
One glaring gap in the literature relates to innovation and memes. In their efforts to enrich the communicative experience and convey precise meanings, users will often circumvent interactional limitations by creating their own linguistic and multimodal varieties (cf. the lolcat, doge, and leetspeak varieties, as well as the emoticons known as emoji), or imbuing existing features with new meanings (Vaughan and Gawne 2011; Wilson and Peterson 2002). These can become standardised, valued markers which form powerful micro-cultures. Knowledge of how to appropriately use features can be a key index of ingroup membership, and successful innovations can even become mainstream commercial ventures. For instance, emojis can now be bought as a range of merchandise. Despite this, it may be that the very informality, popularity and transience of these features militates against them being perceived as ‘serious’ topics of study.


2.3. Interactivity, format, and transience

A significant proportion of our time online is dedicated not simply to consuming static content, but to interacting with others, including via video chats, private messages, emails, public posts, blog comments and far more besides. Inevitably, all such communication entails a potential for miscommunication. From this, conflict and disharmony can escalate into offline consequences, such as legal action, violence, and even murder. It might then seem easier to simply abolish online interaction altogether, but there are several potential benefits to offering interactivity. Sites with busy communities can generate enormous revenue from advertising, commission, and merchandise. Cothrel (2000) investigates how users can be an excellent source of free content for media outlets, determine the development of a brand (e.g. Walkers’ ‘Do us a flavour!’ campaign) and give invaluable feedback. Users may also provide free trouble-shooting and technical support to other users, thereby reducing the load (and expense) on the retailer.
However, there is relatively little research, certainly within linguistics, that investigates the ways that the transience of some content (e.g. platforms where interactions are automatically deleted after a short period of time) can shape antisocial online behavior. Similarly, there is little to be found on the ways that the format can impact the spread of online conflict. For instance, by its very nature, the prototypical bulletin-board format (e.g. the comments sections of many news outlets, blogs and sites such as 4chan, Yahoo! Answers, Mumsnet, YouTube and GitHub) promotes a form of interaction in which one user posts an item to many people (one-to-many), and many are invited to respond to that one (many-to-one). By contrast, threaded or nested environments (e.g. Slashdot, Reddit and Usenet) promote interactions where many people post messages for the many to read (many-to-many). In short, on a threaded forum, each comment may have a greater number of readers, meaning that NMOBs may gain a wider audience and therefore greater uptake. Empirical research into such matters, though, is in short supply.
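To make the structural contrast concrete, the following minimal sketch models the two formats as data structures. This is purely illustrative; the field names and example content are invented for this chapter’s purposes, not taken from any actual platform.

```python
# Bulletin-board format: one top-level item and a flat list of replies
# aimed back at the original poster (one-to-many, then many-to-one).
flat_thread = {
    "post": "Original article",
    "replies": [
        {"post": "reply 1", "replies": []},
        {"post": "reply 2", "replies": []},
    ],
}

# Threaded/nested format: every reply can attract replies of its own,
# so each comment may reach (and provoke) its own audience (many-to-many).
nested_thread = {
    "post": "Original article",
    "replies": [
        {"post": "reply 1", "replies": [
            {"post": "reply to reply 1", "replies": []},
        ]},
        {"post": "reply 2", "replies": []},
    ],
}

def count_posts(node: dict) -> int:
    """Recursively count every post reachable in a thread."""
    return 1 + sum(count_posts(reply) for reply in node["replies"])

print(count_posts(flat_thread), count_posts(nested_thread))  # 3 4
```

On the flat model, a hostile comment is only ever one step from the original post; on the nested model it can seed an arbitrarily deep subtree of responses, which is one way of thinking about the wider uptake suggested above.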

3. Flaming

Before reviewing the literature, it is interesting to look back into the history of the
word flaming, because, though it may sound or feel new, this concept has been
with us for some time.

3.1. Etymology
A quick glance at any reasonably thorough dictionary shows that flame (v.) was
used as early as the 1500s to describe a violent, passionate outburst. As might
be expected of an ancient word that describes one of the earliest elemental con-

concepts, the metaphorical extension of the word flame sits within a constellation of
other metaphorical extensions, all dedicated to conveying the nuances of anger and
provocation. For instance, a fiery, hot-tempered, explosive person might fan the
flames by making heated, inflammatory or incendiary comments, and we might
simmer over an insult, burn with indignation, boil with rage, or even blow our top
like an erupting volcano. In short, high temper has long been associated with high
temperature, and when we turn to the modern era, CMC simply appears to have
co-opted this ancient fire-as-rage metaphor and applied it to the new phenomenon
of hasty, vitriolic and excessive online outbursts.
When we turn back the pages of academic research, however, we find a different story. As already mentioned above, unlike trolling, flaming has received more academic attention (see, for instance, Avgerinakou 2008; Chester 1996; Herring 1994; Kayany 1998; Lea et al. 1992; Millard 1997), and greater interest typically results in a proliferation of definitions, rather than a coalescing agreement on one accepted understanding. Despite this greater interest and the long history that this metaphor has beyond the realms of CMC, though, as recently as fifteen years ago, the concept was still regarded in a somewhat fuzzy manner. For instance, under the heading, 20th century adolescents: Sounding and flaming, Jucker and Taavitsainen (2000) describe flaming thus:
The other institutionalized form of insults is the practice of flaming on the internet. It appears to be particularly common in news groups, where a large number of participants can submit email postings under the cover of anonymity. In this context, flaming is considered to be bad style and is rejected by the code of behavior on the internet, the so-called netiquette.1 (Jucker and Taavitsainen 2000: 90)
This description really doesn’t tell us what flaming is at all, beyond the fact that it is considered “bad style”, and the example that Jucker and Taavitsainen (2000) subsequently provide comes from alt.flame, a newsgroup dedicated specifically to flaming. Jucker and Taavitsainen (2000) acknowledge that, by the very nature of such a group, the flamewars carried out there are probably “of an entirely ludic nature” (Jucker and Taavitsainen 2000: 91). In short, this tells us very little about what we might think of as genuine flaming. Only a year later, however, Baker (2001) gives us a more thorough definition:
Antagonistic postings are known as flames (Siegel et al. 1986) and prolonged, escalating conflicts are often referred to as flame wars. In flame wars flames can give rise to other flames, involving more and more posters, some who may be angry that the flame war is taking over the newsgroup. The tone of flames is intentionally aggressive and numerous methods of attack are used, ranging from intellectualized debate, through biting sarcasm to scatological abuse. (Baker 2001)

1. This also rather worryingly seems to suggest the existence of only one universal netiquette!


And only a few years later, Johnson, Cooper and Chin (2008) describe flaming as
“the antinormative hostile communication of emotions […] that includes the use
of profanity, insults, and other offensive or hurtful statements” (Johnson, Cooper
and Chin 2008: 419).

3.2. Case studies: Chaplin and Chambers


Beyond definitions, it is illuminating to look at two brief examples of flaming,
both from the UK, to see how the literature applies to this behavior in practice.
Our first case involves Gary Chaplin, an executive for recruitment business Stark
Brooks. In November 2011, Chaplin, along with approximately 4,000 other people, received the following email from a client named Emmanouil Katsampoukas:

(1)  Email from Katsampoukas to Chaplin


Dear Sirs,
My name is Manos Katsampoukas and I am interested in finding a job in the banking/
marketing sector in the UK. Please find attached my CV. Further information available
upon request. Looking forward to hearing from you.
Kind Regards
Manos Katsampoukas.
For Chaplin, Katsampoukas’ email appears to have been provocation enough, and
using the pseudonym Richard Vickers, he reacted as follows:

(2)  Email from Chaplin to Katsampoukas


Emmanouil – I think I speak for all 4000 people you have emailed when I say, ‘Thanks
for your CV’ – it’s nice to know you are taking this seriously and taking the time to
make us all feel special and unique.
If you are not bright enough to learn how to ‘bcc’ and thus encourage cock-jockey
retards to then spam everyone on the list (yes Dan McCarthy from One Search I’m
talking about you – you opportunistic thundercunt) then please fuck off …you are too
stupid to get a job, even in banking.
I get enough retarded spam from idiots – I don’t need the Dan ‘fucktard’ McCunthy’s
of the world thinking they are being smart and original by spamming back to your 4000
best friends. (PS – is ‘One Search’ what you’ve successfully completed on this year?).
Yours hitting the delete button. Have a nice day!

Intentionally or otherwise, Chaplin appears to have clicked “Reply All” and sent
this response not only to Katsampoukas, but to all the other 4,000 recipients of the
original email as well. Chaplin’s identity was eventually traced through his ISP,
and as a consequence, Stark Brooks asked him to resign (Atkinson 2011).


Whilst this is only one example, we find, as the literature suggests, language in
Chaplin’s email that could be characterized as “intentionally aggressive” (Baker
2001) due to the “profanity, insults, and other offensive or hurtful statements”
(Johnson, Cooper and Chin 2008: 419).
In another case, an irritated reaction that found its outlet in sarcastic humor
resulted not only in the producer of the content being fired, but also in an extended
legal battle and the involvement of multiple celebrities.
In January 2010, as snow and bad weather closed in around Doncaster, Robin
Hood Airport began to issue alerts about possible delays and closures. One
would-be passenger was trainee accountant Paul Chambers (then 26), who was
planning to fly to Belfast to finally meet face-to-face with Sarah Tonner (@crazycolours). As the weather worsened, Chambers tweeted several times:
(3) Paul Chambers @pauljchambers 06 Jan 2010
@Crazycolours: I was thinking that if it does then I had decided to resort to terrorism
(4) Paul Chambers @pauljchambers 06 Jan 2010
@Crazycolours: That’s the plan! I am sure the pilots will be expecting me to demand a
more exotic location than NI

We no longer have Sarah’s tweets, but we might infer that she had advocated
taking control of the aircraft. Having already mentioned a penchant for terrorism,
Chambers then tweeted his six hundred or so followers with the following:
(5) Paul Chambers @pauljchambers 06 Jan 2010
Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your shit together
otherwise I’m blowing the airport sky high!!

Despite the airport manager who found it, the senior airport official who was told about it, and even a police officer investigating it all considering the tweet a joke rather than a credible threat, a week later four South Yorkshire police officers arrested Chambers at his workplace on suspicion of making a hoax bomb threat. As a result, Chambers lost his job, and though he defended the tweet as a sarcastic joke borne of frustration, he was ultimately convicted of “sending a public electronic message that was grossly offensive or of an indecent, obscene or menacing character contrary to the Communications Act 2003”, leaving him with a fine and a lifelong criminal record.
Over the next two and a half years Chambers lodged multiple appeals that
were increasingly vocally backed by celebrities including Stephen Fry. Finally, in
the High Court, before the country’s most senior judge, Chambers’ conviction was
overturned – a very late acknowledgement, perhaps, that his ‘threat’ to bomb an
airport was really nothing more than a careless moment of irritated online venting.
If we are to draw out a common theme from both the available data and literature, it is that flaming is an over-reaction to some sort of provocation, whether that is an expletive-laden rant in reply to an unsolicited email or a sarcastic bomb threat in response to a delayed flight. More simply, unlike trolling, which we turn to shortly, flaming is reactive, rather than proactive.

3.3. Current challenges


As noted throughout this chapter, research into flaming is further ahead than that on trolling, though this does not mean that such research is without its problems or challenges. The most prominent issue, which recurs with trolling, is that of disambiguating a behavior from other possible behaviors in the first place.
Within the remit of NMOB, we can include flaming, trolling, cyberbullying,
cyberstalking, and more besides, and it is no mean feat to consistently distinguish
them from each other. To exemplify this, for a moment we will step outside of
flaming and consider Bocij’s (2004) definition of cyberstalking:
[Cyberstalking is] a group of behaviors in which an individual, group of individuals or organization, uses information and communication technologies to harass another individual, group of individuals, or organization. Such behaviors may include, but are not limited to, the transmission of threats and false accusations, identity theft, damage to data or equipment, computer monitoring, solicitation of minors for sexual purposes, and any form of aggression. Harassment is defined as a course of action that a reasonable person, in possession of the same information, would think causes another reasonable person to suffer emotional distress. (Bocij 2004: 14, emphasis mine)

It is difficult to see how this definition could be used in such a way that it would not also capture flaming, trolling, cyberbullying, cyberharassment, and sex offenders grooming children online, as well as any CMC contribution perceived to be objectionable or distressing for other reasons (e.g. posting a video of animal abuse). The problem is little better when we consider legislation and policy guidance. In the UK, for instance, the House of Lords’ Communications Committee published its first report on social media and criminal offences. In this, trolling is described as the “intentional disruption of an online forum, by causing offence or starting an argument” (2014: ch2 § 9c). However, this extremely simplistic definition doesn’t differentiate between someone disrupting a forum because they have (or think they have) a genuine grievance and someone doing so simply for the sake of amusement.
Similarly, in guidelines on prosecuting cases involving communications sent
via social media, the Crown Prosecution Service alludes to flaming thus:
Examples of cyberstalking may include:
–  Threatening or obscene emails or text messages.
–  Spamming (where the offender sends the victim multiple junk emails).
–  Live chat harassment or ‘flaming’ (a form of online verbal abuse).
–  Leaving improper messages on online forums or message boards.
–  Sending electronic viruses.
–  Sending unsolicited email.
–  Cyber identity theft. (CPS 2016, emphasis mine)


To describe flaming as a form of online verbal abuse confuses the boundaries between this and most other kinds of NMOB. How is flaming different, then, from trolling, cyberbullying, and so forth? We find similar issues in other jurisdictions.
In the US, states that do have statutes for NMOB differ widely in how clearly that
behavior is captured. For example, § 2 of North Carolina’s (2000) cyberstalking
provision states that it is unlawful to:
(2) Electronically mail or electronically communicate to another repeatedly, whether or not conversation ensues, for the purpose of abusing, annoying, threatening, terrifying, harassing, or embarrassing any person.
(3) Electronically mail or electronically communicate to another and to knowingly make any false statement concerning death, injury, illness, disfigurement, indecent conduct, or criminal conduct of the person electronically mailed or of any member of the person’s family or household with the intent to abuse, annoy, threaten, terrify, harass, or embarrass. (2000)
This legislation could be used to cover NMOB from repeatedly annoying a recipient (i.e. this could capture spamming, flaming, or trolling) through to terrifying them by lying about their loved ones (i.e. cyberbullying or cyberstalking).
To summarize, throughout academia, policy, legislation, and the media, there
is little consensus about these terms and their scope. This point is regrettably made
far more clearly in the following section.

4. Trolling

Just as we began the section on flaming with a brief glance into the history of
the word, so it is insightful to do the same with trolling. We find, however, a less
clear-cut answer.

4.1. Etymology
The current definition of trolling may have derived from one of two distinct routes.
The first captures troll as a noun, and finds its roots in the late fourteenth century.
In Old Norse and Scandinavian mythology, a tröll was a large, strong, nasty creature that possessed supernatural powers, but that would also turn to stone in the
sunlight (Jakobsson 2006: 1; MacCulloch 1930: 285–286). These ancient myths
persist in folklore tales such as the Norwegian fairytale The Three Billy Goats
Gruff. The second possibility captures troll as a verb, and dates back at least as far
as the 1600s. In this respect, trolling derives from fishing, and involves drawing
baited fishing lines through the water.2 This variant also has longstanding metaphorical extensions that include luring others along with some form of bait, and exhaustively searching for something.

2. This is not to be confused with trawl-fishing, which involves dragging nets.
Currently, users are as happy to invoke terms relating to mythology for the person (e.g. get back under your bridge, don’t feed the troll) as terms relating to fishing for the act (e.g. biting, baiting, netting, and hooking), and the reality is that we will probably never know which history was being drawn on when the first person uttered troll in some semblance of its modern, online sense. Certainly, there is little enough evidence to give us much hope of deducing it from the earliest online postings. One matter that is clear, however, is that this term has been far more heavily influenced by the media than flaming has. This is important to note, since scholars are not impervious to widespread, mainstream narratives.
When we look to the media, we find that interest was initially slow: prior to 2010, there were only scattered reports on trolling (e.g. Black 2006; Cox 2006; Moulitsas 2008; Thompson 2009). Where it was discussed, typical definitions described trolling as the posting of incendiary comments designed to provoke conflict: “Hiding behind the pseudonymity of a Web alias, trolls disrupt useful discussions with ludicrous rants, inane threadjackings, personal insults, and abusive language” (Naraine 2007: 146). Brandel (2007) adds that “[a] troll is a person who posts with the intent to insult and provoke others. […] The goal is to disrupt the normal traffic of a discussion group beyond repair” (Brandel 2007: 32).
Heffernan (2008) highlights the unprovoked nature of trolling, along with
another goal – amusement at another’s expense:

Consider this question from David Hume: “Would any man, who is walking alone, tread
as willingly on another’s gouty toes, whom he has no quarrel with, as on the hard flint
and pavement?” […] Internet trolls regularly tread on gouty toes. They trick vulnerable
people with whom they have no quarrel; they upset those people; they humiliate them;
they break their hearts; they mess with them. They do it for something Hume didn’t
perfectly name: the lulz—the spiteful high. (Heffernan 2008)

The earliest academic attention to the subject matter also largely created definitions from intuition, the media, and online ephemera like The Troller’s FAQ (1996). The result is that the term has been, and continues to be, used as an all-encapsulating term. For example, Herring et al. (2002: 372) and Turner et al. (2005) describe trolling as luring others into frustratingly useless, circular discussion that is not necessarily overtly argumentative. Donath (1999: 45) and Utz (2005: 50) suggest that trollers can intentionally disseminate poor advice, thereby provoking corrections from others. Tepper (1997: 41) explains how trolling can define ingroup/outgroup membership: those who ‘bite’ signal novice, outgroup status, whilst ingroup members will identify the troller, will not be baited, and may even mock those who are. Donath (1999) and Dahlberg (2001) suggest that trolling is a one-sided game of deception played on unwitting others: “The troll attempts to
pass as a legitimate participant, sharing the group’s common interests and concerns” (Donath 1999: 45). Then, once the troll has developed their false identity and been accepted into the group, they will set about disrupting the forum whilst trying to conceal their true intent (Dahlberg 2001).
The main point here is that whilst some of these definitions overlap to an extent, all seem to be describing a collection of symptoms, rather than one coherent behavior or motive that is responsible for those choices of action. In an effort to create an empirically informed definition, I analysed 3,727 user discussions of trolling drawn from an eighty-six million word Usenet corpus and concluded that trolling is “the deliberate (perceived) use of impoliteness/aggression, deception, and/or manipulation in CMC to create a context conducive to triggering or antagonizing conflict, typically for amusement’s sake” (Hardaker 2013: 79).
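For readers unfamiliar with corpus methods, the sketch below illustrates the first step of such an analysis: retrieving candidate messages that explicitly mention trolling from a corpus of plain-text posts. This is a toy example, not the pipeline actually used in Hardaker (2013); the directory layout and the simple keyword match are invented for illustration.

```python
import re
from pathlib import Path

# Match 'troll' and its inflections (trolls, trolling, troller(s), ...).
TROLL_RE = re.compile(r"\btroll(?:s|ing|ed|er|ers)?\b", re.IGNORECASE)

def find_troll_mentions(corpus_dir: str) -> list[str]:
    """Return the filenames of posts that mention trolling at all.

    Assumes one plain-text message per .txt file; a real corpus study
    would then hand-code each hit, discarding the fishing and mythology
    senses before analysing how users themselves define the behavior.
    """
    hits = []
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if TROLL_RE.search(text):
            hits.append(path.name)
    return hits

print(len(find_troll_mentions("usenet_corpus")))  # hypothetical directory
```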
In short, one distinction to be drawn between flaming and trolling, as hinted at in the previous section, is that whilst flaming may be an over-reaction to a provocation, trolling proactively strives to be a provocation in its own right.

4.2. Case studies: Brenda Leyland


Since 2010, there has been a string of prosecutions related to cases that the media has typically described as trolling. For instance, on 29th October 2010, 36-year-old Colm Coss pleaded guilty at Manchester City Magistrates’ Court to improper use of a public electronic communications network after leaving offensive messages and jokes on memorial pages set up to commemorate deceased loved ones, including children. He was sentenced to eighteen weeks in prison. Similarly, in September 2011, Sean Duffy, then aged 26, also pleaded guilty to the same charge at Reading Magistrates’ Court after posting an offensive image to the tribute page of a teenager who had been accidentally shot.
Deliberately targeting memorial and tribute sites with mockery is not the only type of behavior that is picked up by the media as trolling, however. Other, more sinister behavior also earns this name. In June 2013, 60-year-old Frank Zimmerman was given a suspended 26-week sentence after emailing abuse to a string of celebrities, including Lord Alan Sugar, Louise Mensch MP, and journalist Terence Blacker. In January 2014, John Nimmo and Isabelle Sorley were jailed for eight and twelve weeks respectively for sending journalist and feminist activist Caroline Criado-Perez Twitter abuse after she successfully petitioned the Bank of England to ensure that a notable historic female figure would be represented on British banknotes. And in September 2014, Peter Nunn was given an eighteen-week custodial sentence for sending abuse to Stella Creasy MP after she had supported Criado-Perez’s campaign. Some of the abuse in this latter case involved rape, death, and bomb threats.
It is crucial to note, however, as has been alluded to above, that not all cases of so-called trolling are this clear-cut, but to understand the complexity we need
a far greater amount of context and background. For this, we turn to the case of
Brenda Leyland.
In 2007, three-year-old Madeleine McCann disappeared from the holiday resort of Praia da Luz. Her parents, medical doctors Gerry and Kate McCann, were dining at a nearby restaurant at the time, whilst Madeleine and her younger siblings slept in their hotel bedroom. At around 10pm, Kate McCann went to check on her children and discovered that Madeleine was gone. Despite extensive searches, she has never been found. During the early parts of the inquiry, the McCanns were questioned by police about her disappearance, and media outlets were quick to voice their suspicions and theories about how the McCanns might have been involved. Meanwhile, though the platform was only a year old at the time, users of the new social media site, Twitter, also took an interest in the story. Whilst some invested hours trading guesses about the truth behind Madeleine’s disappearance, others spent their time sending the McCanns threats of violence, murder, and the abduction of their other children.
Responding to the media was relatively straightforward: the McCanns took legal action and were ultimately awarded substantial damages, along with front-page apologies. However, social media is a different and much more difficult platform to regulate, and almost a decade later, havens of like-minded people still interested in the Madeleine McCann case thrive. This brings us to 2014, and the sleepy civil parish of Burton Overy in Leicestershire.
Brenda Leyland was educated at a convent school, went to church, enjoyed gardening and photography, and was involved in the annual village scarecrow competition. Not all was perfect, however. A sixty-three-year-old mother of two sons, she appears to have been estranged from the eldest, and her marriage had ended in 2001. According to expert evidence at the inquest, Leyland had a history of attempted suicide and was receiving both therapy and medication to help her deal with bouts of severe depression and anxiety. A consultant psychiatrist who had treated Leyland in the past also stated that she had lifelong unstable emotional personality traits.
On Twitter, Leyland had another identity: @sweepyface. The bio for sweepyface’s Twitter account simply read “Researcher”, and after a few dormant years, it suddenly became active in November 2013. From that point to September 2014, sweepyface sent roughly 4,600 tweets, and throughout 2014 alone, the number of people following the account roughly doubled, from 93 to 183. The content of sweepyface’s tweets skewed in a very particular direction: 87% of them contained the word or hashtag McCann. Meanwhile, references to the parents by their first names or initials occurred an average of once every ten tweets.
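Figures like these are the product of simple frequency counts over the account’s timeline. A minimal sketch of the calculation, assuming the tweets have already been collected as a list of strings (the three example tweets here are abbreviated from those quoted below, and the code is illustrative only):

```python
import re

# Hypothetical input: the account's tweets as plain strings.
tweets = [
    "#mccann It is and has always been, about Madeleine .",
    "I have never heard a pro say poor child, left alone",
    "#crimewatch I think you and the Met know as millions of us do",
]

# Count tweets containing the word or hashtag McCann, case-insensitively.
mccann_re = re.compile(r"#?mccann", re.IGNORECASE)
n_mccann = sum(1 for t in tweets if mccann_re.search(t))

print(f"{100 * n_mccann / len(tweets):.0f}% of tweets mention McCann")
```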
Within the online community of those campaigning about the McCanns,
sweepyface fell into a group that might be crudely titled the anti-McCanns – that
is, users who believe the McCanns to be responsible, to whatever degree, for their
daughter’s disappearance. By contrast, the pro-McCanns, as might be expected,
believe the parents innocent of most or all wrongdoing. Both groups employ the #McCann hashtag as a method of following discussion on this topic, and through it, sweepyface and others could instantly jump to the latest tweets worldwide on the subject.
How did sweepyface behave online? She was certainly not complimentary towards the McCanns, and on multiple occasions she cast highly critical, personal aspersions on them (see example 10, for instance). However, these tweets talk about the McCanns, rather than at them, and there is also a lot of evidence to support the argument that sweepyface genuinely believed that she was pursuing justice:
(6) Sweepyface, 11:05:45, Fri 29 Nov 2013
#crimewatch  I think you and the Met know as millions of us do that the McCanns may
be complicit in Madeleines fate, don’t let us down !!
(7) Sweepyface, 18:13:19, Mon 23 Dec 2013
I have never heard a pro say ” poor child, left alone, scared, not feeling well that day”
all I hear is poor #mccann s how they r suffering
(8) Sweepyface, 18:30:45, Mon 23 Dec 2013
#mccann  So many people think the children were sedated, because the twins never
stirred for almost 12 hours, lights on, shouting crying?
(9) Sweepyface, 09:47:14, Tue 24 Dec 2013
#mccann  It is and has always been, about Madeleine . A victim on so many levels, we
will help fight for as long as it takes 4 her
(10) Sweepyface, 15:21:34, Mon 08 Sep 2014
I think Kate #mccann  sees herself as a modern day Eva Peron beautiful, suffering,
instead of a booze filled nymphomaniac
To briefly summarise some 4,600 tweets, sweepyface frequently described the McCanns as neglectful parents, strongly objected to their ongoing media appearances, and complained that they were profiting from their daughter’s disappearance. She also tweeted police forces, crime investigation programmes, and media outlets, campaigning for them to reopen the investigation. When pro-McCann users and others challenged her, she called them unpleasant names, disputed their evidence, and occasionally blocked them. In short, sweepyface appears to have fixated on both the topic and on some controversial theories surrounding it, and her conduct was annoying to some and offensive to others. However, as best as can be inferred from the data, the intention to cause conflict for the sake of amusement – as we find in many definitions of trolling – appears to be missing.
Finally, in September 2014, sweepyface’s online world collided with Brenda Leyland’s offline world. It began when sweepyface noticed that Sky reporter Martin Brunt had started to follow her. Sweepyface even tweeted him directly:
(11) Sweepyface, 21:02:14, Mon 29 Sep 2014
#mccann Just noticed @skymartinbrunt is following me, why Martin won’t you
investigate some of these facts and show neutrality ?
(12) Sweepyface, 21:08:22, Mon 29 Sep 2014
#mccann  @skymartinbrunt  Martin, any thoughts on the connection between Amy
Tierney, the witness and Basil ? how does that work ?


Rather than responding publicly, Brunt contacted Leyland in private. Three days later, on Thursday 2nd October, Sky News launched an exposé on “the McCann trolls”. As well as radio reports and online articles, the investigation consisted of an eleven-minute video report posted on YouTube.3 A shortened version of this report was repeated on the main Sky News channel cycle throughout the day. In that excerpt, Brenda Leyland is doorstepped by a camera crew led by Martin Brunt:
(13) Sky News, 18:12, Thu 2nd Oct 2014: ‘Evil’ Trolls In Hate Campaign Against McCanns: video (04m 23s)4. Excerpt: 00m 00s to 01m 11s.
Brunt, voiceover: [Shaky footage as the camera person heads towards a car that Leyland is exiting.] This woman uses Twitter to attack the parents of Madeleine McCann. On the internet, she’s anonymous. Not anymore. [Leyland has been walking their way. As they meet, she looks somewhat bewildered.]
Brunt: [Some speech obscured by voiceover.] …I’m Martin Brunt from
Sky News.
Leyland: Well I’m just about to go out.
Brunt: Well we’ve caught you. Can we talk to you about your Twitter?
Leyland: No.
Brunt: And your attacks on the McCanns.
Leyland: No.
Brunt: Erm. Why are you attacking them so regularly?
Leyland: Look I’m just going out with a friend. Okay?
Brunt: But why are you using your Twitter account …
Leyland: Excuse me. [Turns and walks back towards her car.]
Brunt: … to attack the McCanns?
Leyland: [Stops and turns back.] I’m entitled to do that Martin.
Brunt: You know you’ve been reported to the police, to Scotland Yard.
[Camera gets in front of Leyland.] They’re considering, er, a whole
file of Twitter accounts [camera gets closer to Leyland] and that …
Leyland: That’s fair enough.
Brunt: … what supporters say is a campaign of abuse against the McCanns.
Leyland: Okay well I’m going out. [Leyland turns and walks back towards her car. She looks back briefly as if responding to something further, but the audio has cut to the voiceover at this point.]
Brunt, voiceover: On Twitter, she uses the name sweepyface with a profile picture of
a pet. She tweets many times a day, and mostly about the McCanns.
In one message she spread rumours about the couple’s marriage.
In another, she hoped Madeleine’s parents would suffer forever.
But sweepyface is not the worst. Almost from the day Madeleine
vanished seven years ago, her family have been targeted with vile
messages on the internet.

3. https://www.youtube.com/watch?v=qkAzz8Pwdvc
4. http://news.sky.com/story/1345871/evil-trolls-in-hate-campaign-against-mccanns


In the report below this video, Brunt writes:


The Metropolitan Police is investigating a catalogue of vile internet abuse targeting the family of Madeleine McCann including death threats, Sky News can reveal. Officers are in talks with the Crown Prosecution Service after being handed a dossier of more than 80 pages of Tweets, Facebook posts and messages on online forums aimed at Kate and Gerry McCann. […] One troll – who uses the Twitter identity “Sweepyface” and has posted dozens of anti-McCann messages using the #mccann hashtag – was confronted by Sky News. When asked about her use of social media to attack the couple, she replied: “I’m entitled to.”
Though Sky News does not give her offline name, Leyland’s face and voice are given the prime slot at the start of the video, and she features heavily in Sky’s reports on “the McCann trolls” throughout the day. The following tweet from her sweepyface account is also used and widely requoted:
(14) Sweepyface, 12:30:20, Fri 29 Nov 2013
#mccann  Q ” how long must the Mccanns suffer” answer ” for the rest of their miser-
able lives”

This tweet appears amongst a wide range of other content from various sources, some containing threats and incitement to violence towards the McCanns. The identities of those other users are kept anonymous, however, leaving Leyland as the sole face and identity of “the McCann trolls”. Within hours, other media outlets make the connection between sweepyface and Brenda Leyland. Her identity, age and village are all published, and by the next day, across most of the media, she is being vilified as a troll.
On Saturday 4th October, two days after the exposé, Leyland is found dead in a room at the Marriott Hotel, Leicester. An inquest immediately opens, and some five months later, on 20th March 2015, Coroner Catherine Mason records a verdict of suicide. As part of the inquest, Detective Sergeant Steven Hutchings of Leicestershire Police verifies that none of Leyland’s tweets had amounted to a criminal offence.
This case exemplifies, to an extreme degree, the problem raised repeatedly throughout the chapter: namely, that the definitions of terms relating to negatively marked online behaviors are unclear and inconsistent, and because of this, they can be badly misapplied. To take the term troll specifically, the societal stigma that goes with being branded such is serious enough to make an otherwise unknown individual front-page news. However, this very term is also being semantically stretched, particularly by the media, to such an extent that it is catching cases in which the accused individual is arguably not a troll at all. In short, there are serious, real-world consequences to the ways in which we define and apply terms like troll, and these need much more investigation.


4.3. Current challenges

The issue of terminology is not the only challenge faced by those researching
trolling. Given that trolling, as a field of enquiry, is virtually brand new, it would
be possible to list almost any aspect as a future challenge. However, there is one
defining element of trolling, when contrasted with flaming, that we find across the
literature: deception.
Literature in the field of im/politeness is currently grappling hard with the place and scope of intentions in our understandings and productions of socially (un)acceptable behavior (see, for instance, Arundale 2008; Haugh 2008; Spencer-Oatey and Xing 2006). But how does this relate to trolling or deception? Brown and Levinson (1987) noted very early on that someone might express themselves such that “there is more than one unambiguously attributable intention so that the actor cannot be held to have committed [them]self to one particular intent” (Brown and Levinson 1987: 69).
Whilst Brown and Levinson are talking about politeness, this can equally apply to any form of interaction, from courtroom examination to political interviews to online trolling. As in the definitions proposed in § 4.1 above, a troll may “attempt to pass as a legitimate participant” (Donath 1999: 45) and then, “after developing their false identity and becoming accepted within a group, the troll sets about disrupting proceedings while trying to maintain his or her cover” (Dahlberg 2001). If indeed a troll chooses a covert strategy, should they be challenged, they may excuse their behavior as accidental, unwitting, incidental, and so on, whilst denying any intention to offend. Similarly, just as a troll might disguise their intentions to deliberately offend, a target might be disingenuous about their interpretations of that behavior, and pretend to be amused rather than offended as a way to save face.
In summary, the debate involving the role of intentions has received some considerable attention both within and even outside of pragmatics,5 but virtually none of it considers deception as part of that process.
Nefarious deception (that is, deception motivated by personal gain or harming
others, as opposed to that motivated by kindness or politeness) reaches beyond
intentions, however. It can also encapsulate the formation of an identity that is
inconsistent with one’s offline self, if done with malicious purposes in mind – an
aspect that is heavily facilitated by the anonymity that CMC can offer. This may
even stretch as far as creating multiple accounts deliberately made to look like
separate people, so that the user can cause greater disruption by the appearance of strength in numbers (Chester and O’Hara 2009; Phillips 2002; Zarsky 2004). Little has been done in this area from linguistics.

5. See, for instance, Bach (1987), Davis (1998, 2007, 2008), Gibbs (1999, 2001), Green (2007, 2008), Jaszczolt (2005, 2006), Keysar (2007), Recanati (1986), Saul (2001), Searle (1983, 1990), Thompson (2008).
Finally, deception also captures straightforward lies, such as giving advice that
the troll knows to be dangerous or incorrect in the hopes of causing hurt or harm,
or promoting themselves as trustworthy, to gain greater traction within the group.
Whilst there is very little work within linguistics on what we might crudely call
‘narrative deception’ (as opposed to intention deception, or identity deception)
fortunately there is a wealth of excellent work to be found in psychology6 which
may help move this area forward.

5. Future directions

From almost every angle, research into flaming and trolling is very new. At the
time of writing, the field that these topics sit within – CMC – has been established
for barely thirty-five years (see, for instance, Kaye 1985; Kerr and Hiltz 1982;
Kiesler, Siegel and McGuire 1984). Flaming made its debut some ten years after
that (Lea et al. 1992), whilst trolling took another decade to arrive (Herring et al.
2002). The result, then, is that almost every possible avenue qualifies as a future
direction, but there are some that are worth repeating or raising afresh.
Perhaps the most pressing issue, as discussed throughout this chapter, is clarity
on what terms like flaming and trolling mean. It is unsurprising for new fields to
go through a period of turmoil as concepts become established and certain
consensuses are reached. However, in this particular area of research, a lack of
clarity can have extremely serious, real-world impacts, as in the case of Brenda
Leyland. Clear definitions also become paramount when issuing legal guidance,
and even more so when drafting legislation itself. Determining that a behavior is
illegal relies heavily on being able to identify that behavior in the first place, and
here, linguists have much to offer, particularly from the fields of forensic
linguistics, corpus linguistics (e.g. through the observation of large-scale patterns),
and pragmatics.
Further, despite the now firmly established field of impoliteness, which includes
seminal work by the likes of Culpeper (1996), Spencer-Oatey (2005), and Bousfield
(2008), there is surprisingly little research into either flaming or trolling that seems
to draw upon it (see, however, Graham 2007, 2008; Herring 1994; Locher 2006).
In sum, aligning impoliteness research with research into negatively marked
online behaviors – particularly flaming – could benefit both, since they have
marked similarities, such as the variability in perceptions of what constitutes
impoliteness, trolling, or flaming in the first place, and the evaluations of degrees
of hostility made by the participants (Graham 2007: 743). Similarly, research into
flaming and trolling may be substantially supported by work in psychology,
criminology, sociology, and computing.
In short, for those interested in researching negatively marked online behavior,
a wide vista of possibilities is open for exploration. If there remains any lingering
sense that CMC is somehow not as worthy a subject of investigation as offline
interaction, hopefully this chapter has shown that it is not only a serious area, but
also one that should be ventured into carefully, and with a great deal of sensitivity.

Acknowledgments

This work was supported by the Economic and Social Research Council through
two grants: the Twitter Rape Threats and the Discourse of Online Misogyny project
(grant reference ES/L008874/1) and the ESRC Centre for Corpus Approaches to
Social Science (grant reference ES/K002155/1).

References

Abbasi, Ahmed and Hsinchun Chen
2008 Cybergate: A design framework and system for text analysis of computer-mediated communication. MIS Quarterly 32(4): 811–837.
Akehurst, Lucy, Günter Köhnken, Aldert Vrij and Ray Bull
1996 Lay persons’ and police officers’ beliefs regarding deceptive behavior. Applied Cognitive Psychology 10: 461–473.
Androutsopoulos, Jannis
2006 Introduction: Sociolinguistics and computer-mediated communication. Journal of Sociolinguistics 10(4): 419–438.
Arendholz, Jenny
2013 (In)Appropriate Online Behavior. A Pragmatic Analysis of Message Board Relations. Amsterdam/Philadelphia: Benjamins.
Arundale, Robert B.
2008 Against (Gricean) intentions at the heart of human interaction. Intercultural Pragmatics 5(2): 229–258.
Atkinson, Jane
2011 £200,000 Exec Axed after Telling Jobseeker: **** Off. The Sun December 10th. http://www.thesun.co.uk/sol/homepage/news/3990329/200000-exec-axed-after-telling-jobseeker-off.html.
Avgerinakou, Anthi
2008 Contextual Factors of Flaming in Computer-Mediated Communication. Edinburgh: Heriot-Watt University.
Bach, Kent
1987 On communicative intentions: A reply to Recanati. Mind and Language 2: 141–154.
Baker, Paul
2001 Moral panic and alternative identity construction in Usenet. Journal of Computer-Mediated Communication 7(1). http://jcmc.indiana.edu/vol7/issue1/baker.html 08/12/09.
Barron, Anne
2006 Understanding spam: A macro-textual analysis. Journal of Pragmatics 38: 880–904.
Baym, Nancy
1996 Agreements and disagreements in a computer-mediated discussion. Research on Language and Social Interaction 29(4): 315–345.
Bernstein, Michael S., Andrés Monroy-Hernández, Drew Harry, Paul André, Katrina Panovich and Greg Vargas
2011 4chan and /b/: An analysis of anonymity and ephemerality in a large online community. Association for the Advancement of Artificial Intelligence: 1–8.
Binns, Amy
2011 Don’t feed the trolls: Managing troublemakers in magazines’ online communities. Mapping the Magazine 3. http://www.people.vcu.edu/~dgolumbia/classes/1314.2.spr2014/engl391/resources/Trolls.pdf.
Black, Lisa
2006 It’s a troll’s ‘life’ for some: Online games raise addiction concerns. Chicago Tribune November 30th: 1.
Bocij, Paul
2004 Cyberstalking: Harassment in the Internet Age and How to Protect Your Family. Westport: Praeger.
Bolter, Jay David and Richard Grusin
1998 Remediation: Understanding New Media. Cambridge, MA: The MIT Press.
Bousfield, Derek
2008 Impoliteness in Interaction. Amsterdam/Philadelphia: Benjamins.
Brabazon, Tara
2012 Digital Dialogues and Community 2.0: After Avatars, Trolls and Puppets. Oxford: Chandos Publishing.
Brandel, Mary
2007 Blog trolls and cyberstalkers: How to beat them. Computerworld May 28th: 32.
Brown, Penelope and Stephen C. Levinson
1987 Politeness: Some Universals in Language Use. Cambridge: Cambridge University Press. (Original edition 1978. Reprint 1987.)
Bucholtz, Mary
1999 “Why be normal?”: Language and identity practices in a community of nerd girls. Language in Society 28(2): 203–223.
Bucholtz, Mary and Kira Hall
2005 Identity and interaction: A sociocultural linguistic approach. Discourse Studies 7(4–5): 585–614.
Buckels, Erin E., Paul D. Trapnell and Delroy L. Paulhus
2014 Trolls just want to have fun. Personality and Individual Differences 67: 97–102.
Chandler, Daniel
1995 Technological or Media Determinism. http://www.aber.ac.uk/media/Documents/tecdet/tecdet.html.
Cheng, Justin, Cristian Danescu-Niculescu-Mizil and Jure Leskovec
2015 Antisocial behavior in online discussion communities. Proceedings of ICWSM 2015. http://arxiv.org/abs/1504.00680.
Chester, Andrea
1996 Braving the flames: Aggression and the Internet. Paper presented at the Media and Ethnic Conflict Conference, November 1996, Melbourne, Australia.
Chester, Andrea and Di Bretherton
2007 Impression management and identity online. In: Adam Joinson, Katelyn Y. McKenna, Tom Postmes and Ulf-Dietrich Reips (eds.), Oxford Handbook of Internet Psychology, 223–236. Oxford: Oxford University Press.
Chester, Andrea and Agi O’Hara
2009 Image, identity, and pseudonymity in online discussions. International Journal of Learning 13(12): 193–204.
Chiou, Wen-Bin
2006 Adolescents’ sexual self-disclosure on the Internet: Deindividuation and impression management. Adolescence 41(163): 547–561.
Collins, Mauri P.
1992 Flaming: The relationship between social context cues and uninhibited verbal behaviour in computer-mediated communication. http://www.mediensprache.net/archiv/pubs/2842.htm.
Cothrel, Joseph P.
2000 Measuring the success of an online community. Strategy and Leadership 28(2): 17–21.
Cox, Ana Marie
2006 Making mischief on the Web. Time December 16th. http://www.time.com/time/magazine/article/0,9171,1570701,00.html 03/02/10.
CPS
2016 Guidelines on Prosecuting Cases Involving Communications Sent via Social Media. Prosecution Policy and Guidance: Legal Guidance March 3rd. http://www.cps.gov.uk/legal/a_to_c/communications_sent_via_social_media/.
Culpeper, Jonathan
1996 Towards an anatomy of impoliteness. Journal of Pragmatics 25: 349–367.
Cyberstalking
2000 North Carolina: § 14-196.3. http://www.ncga.state.nc.us/EnactedLegislation/Statutes/HTML/BySection/Chapter_14/GS_14-196.3.html.
Dahlberg, Lincoln
2001 Computer-mediated communication and the public sphere: A critical analysis. Journal of Computer-Mediated Communication 7(1). http://jcmc.indiana.edu/vol7/issue1/dahlberg.html 22/08/10.
Davis, Wayne
1998 Implicature. Intention, Convention, and Principle in the Failure of Gricean Theory. Cambridge: Cambridge University Press.
Davis, Wayne
2007 How normative is implicature? Journal of Pragmatics 39: 1655–1672.
Davis, Wayne
2008 Replies to Green, Szabo, Jeshion, and Siebel. Philosophical Studies 137: 427–445.
December, John
1997 Notes on defining computer-mediated communication. CMC Magazine January. http://www.december.com/cmc/mag/1997/jan/december.html 19/07/08.
Dill, Karen E. and Jody C. Dill
1998 Video game violence: A review of the empirical literature. Aggression and Violent Behavior: A Review Journal 3: 407–428.
Dlala, Imen Ouled, Dorra Attiaoui, Arnaud Martin and Boutheina Ben Yaghlane
2014 Trolls identification within an uncertain framework. Proceedings of the 2014 IEEE 26th International Conference on Tools with Artificial Intelligence, 1011–1015. http://arxiv.org/abs/1501.05272.
Donath, Judith S.
1999 Identity and deception in the virtual community. In: Marc A. Smith and Peter Kollock (eds.), Communities in Cyberspace, 29–59. London: Routledge.
Douglas, Karen M. and Craig McGarty
2001 Identifiability and self-presentation: Computer-mediated communication and intergroup interaction. British Journal of Social Psychology 40(3): 399–416.
Ekman, Paul
1996 Why don’t we catch liars? Social Research 63(3): 801–817.
Elmer-Dewitt, Philip
1994 Bards of the Internet. Time July 4th: 66–67.
Ferris, Pixy
1997 What is CMC? An overview of scholarly definitions. CMC Magazine January. http://www.december.com/cmc/mag/1997/jan/ferris.html.
Fichman, Pnina and Madelyn Rose Sanfilippo
2015 The bad boys and girls of Cyberspace: How gender and context impact perception of and reaction to trolling. Social Science Computer Review 33(2): 163–180.
Gibbs, Raymond W.
1999 Intentions in the Experience of Meaning. Cambridge: Cambridge University Press.
Gibbs, Raymond W.
2001 Intentions as emergent products of social interactions. In: Bertram Malle, Louis Moses and Dare Baldwin (eds.), Intentions and Intentionality, 105–122. Cambridge, MA: MIT Press.
Graham, Sage Lambert
2007 Disagreeing to agree: Conflict, (im)politeness and identity in a computer-mediated community. Journal of Pragmatics 39: 742–759.
Graham, Sage Lambert
2008 A manual for (im)politeness?: The impact of the FAQ in an electronic community of practice. In: Derek Bousfield and Miriam A. Locher (eds.), Impoliteness in Language: Studies on Its Interplay with Power in Theory and Practice, 324–352. Berlin/New York: de Gruyter.
Green, Mitchell
2007 Self-Expression. Oxford: Oxford University Press.
Green, Mitchell
2008 Expression, indication, and showing what’s within. Philosophical Studies 137: 389–398.
Hara, Noriko, Curtis J. Bonk and Charoula Angeli
2000 Content analysis of online discussion in an applied educational psychology course. Instructional Science 28: 115–152.
Hardaker, Claire
2010 Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions. Journal of Politeness Research: Language, Behaviour, Culture 6(2): 215–242. doi: 10.1515/jplr.2010.011.
Hardaker, Claire
2013 “Uh.....not to be nitpicky,,,,,but…the past tense of drag is dragged, not drug.”: An overview of trolling strategies. Journal of Language Aggression and Conflict 1(1): 57–85.
Hardaker, Claire
2015 “I refuse to respond to this obvious troll”: An overview of responses to (perceived) trolling. Corpora 10(2): 201–229.
Hardaker, Claire and Mark McGlashan
2016 “Real men don’t hate women”: Twitter rape threats and group identity. Journal of Pragmatics 91: 80–93.
Haugh, Michael
2008 Intention in pragmatics. Intercultural Pragmatics 5(2): 99–110.
Haugh, Michael
2010 When is an email really offensive?: Argumentativity and variability in evaluations of impoliteness. Journal of Politeness Research 6: 7–31.
Heffernan, Virginia
2008 Trolling for ethics: Mattathias Schwartz’s awesome piece on Internet poltergeists. New York Times July 31st.
Henri, France
1992 Computer conferencing and content analysis. In: Anthony R. Kaye (ed.), Collaborative Learning through Computer Conferencing: The Najaden Papers, 115–136. Berkeley, CA: Springer.
Herring, Susan C.
1994 Politeness in computer culture: Why women thank and men flame. Cultural Performances: Proceedings of the Third Berkeley Women and Language Conference, 278–294. Berkeley Women and Language Group. http://ella.slis.indiana.edu/~herring/politeness.1994.pdf.
Herring, Susan C. (ed.)
1996 Computer-Mediated Communication: Linguistic, Social and Cross-Cultural Perspectives. Amsterdam/Philadelphia: Benjamins.
Herring, Susan C.
2003 Computer-mediated discourse. In: Deborah Schiffrin, Deborah Tannen and Heidi E. Hamilton (eds.), The Handbook of Discourse Analysis, 612–634. Oxford: Blackwell.
Herring, Susan C., Kirk Job-Sluder, Rebecca Scheckler and Sasha Barab
2002 Searching for safety online: Managing “trolling” in a feminist forum. The Information Society 18: 371–384.
Hopkinson, Christopher
2013 Trolling in online discussions: From provocation to community-building. Brno Studies in English 39(1): 5–25.
Jakobsson, Ármann
2006 The good, the bad and the ugly: Bárðar saga and its giants. Paper presented at the 13th International Saga Conference: The Fantastic in Old Norse/Icelandic Literature, 6th–12th August 2006, Durham and York. http://opac.regesta-imperii.de/lang_en/anzeige.php?sammelwerk=The+Fantastic+in+Old+Norse+Icelandic+Literature.+Preprint+Papers.
Jaszczolt, Katarzyna M.
2005 Default Semantics. Foundations of a Compositional Theory of Acts of Communication. Oxford: Oxford University Press.
Jaszczolt, Katarzyna M.
2006 Meaning merger: Pragmatic inference, defaults, and compositionality. Intercultural Pragmatics 3(2): 195–212.
Johnson, Norman, Randolph Cooper and Wynne Chin
2008 The effect of flaming on computer-mediated negotiations. European Journal of Information Systems 17(4): 417–434.
Jucker, Andreas H. and Irma Taavitsainen
2000 Diachronic speech acts: Insults from flyting to flaming. Journal of Historical Pragmatics 1(1): 67–95.
Kayany, Joseph M.
1998 Contexts of uninhibited online behavior: Flaming in social newsgroups on Usenet. Journal of the American Society for Information Science 49(12): 1135–1141.
Kaye, T.
1985 Computer-Mediated Communication Systems for Distance Education: Report on a Study Visit to North America, September/October 1985. Milton Keynes: Open University Institute of Educational Technology.
Kerr, Elaine B. and Starr Roxanne Hiltz
1982 Computer-Mediated Communication Systems: Status and Evaluation. New York/London: Academic Press.
Keysar, Boaz
2007 Communication and miscommunication: The role of egocentric processes. Intercultural Pragmatics 4(1): 71–84.
Kiesler, Sara, Jane Siegel and Timothy W. McGuire
1984 Social psychological aspects of computer-mediated communication. American Psychologist 39: 1123–1134.
Kirsh, Elana
2012 Untangling the Web: How Facebook ruined my holiday. Jerusalem Post May 21st. http://www.jpost.com/Opinion/Columnists/Article.aspx?id=270839.
Kraut, Robert, Jolene Galegher, Robert Fish and Barbara Chalfonte
1992 Task requirements and media choice in collaborative writing. Human-Computer Interaction 7: 375–407.
Kraut, Robert, Michael Patterson, Vicki Lundmark, Sara Kiesler, Tridas Mukopadhaya and William Scherlis
1998 Internet paradox: A social technology that reduces social involvement and psychological well-being. American Psychologist 53(9): 1101–1137.
Kruger, Justin, Nicholas Epley, Jason Parker and Zhi-Wen Ng
2005 Egocentrism over e-mail: Can we communicate as well as we think? Journal of Personality and Social Psychology 89(6): 925–936.
Lea, Martin, T. O’Shea, P. Fung and Russell Spears
1992 ‘Flaming’ in computer-mediated communication: Observations, explanations, implications. In: Martin Lea (ed.), Contexts of Computer-Mediated Communication, 89–112. New York: Harvester Wheatsheaf.
Lea, Martin and Russell Spears
1991 Computer-mediated communication, deindividuation and group decision-making. International Journal of Man-Machine Studies 34: 283–301.
Lee, Hangwoo
2005 Behavioral strategies for dealing with flaming in an online forum. The Sociological Quarterly 46(2): 385–403.
Locher, Miriam A.
2006 Advice Online: Advice-Giving in an American Internet Health Column. Amsterdam/Philadelphia: Benjamins.
MacCulloch, John Arnott
1930 Eddic Mythology: The Mythology of All Races in Thirteen Volumes. Vol. II. Boston: Archaeological Institute of America, Marshall Jones Company.
Maltby, John, Liz Day, Ruth M. Hatcher, Sarah Tazzyman, Heather D. Flowe, Emma J. Palmer, Caren A. Frosch, Michelle O’Reilly, Ceri Jones, Chloe Buckley, Melanie Knieps and Katie Cutts
2015 Implicit theories of online trolling: Evidence that attention-seeking conceptions are associated with increased psychological resilience. The British Psychological Society 1(1): 1–19.
McCosker, Anthony
2014 Trolling as provocation: YouTube’s agonistic publics. Convergence 20(2): 201–217.
Memon, Amina, Aldert Vrij and Ray Bull
2003 Psychology and Law: Truthfulness, Accuracy, and Credibility. 2nd ed. Chichester: John Wiley.
Millard, William B.
1997 I flamed Freud: A case study in teletextual incendiarism. In: David Porter (ed.), Internet Culture, 145–160. New York: Routledge.
Moor, Peter J., Ard Heuvelman and Ria Verleur
2010 Flaming on YouTube. Computers in Human Behavior 26: 1536–1546.
Moulitsas, Markos
2008 Ignore ‘concern trolls’. The Hill January 9th. http://thehill.com/markos-moulitsas/dems-ignore-concern-trolls-2008-01-09.html 06/06/09.
Naraine, Ryan
2007 The 10 biggest Web annoyances. PC World December: 141–148.
Nigam, Kamal and Matthew Hurst
2004 Towards a robust metric of opinion. Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications, 598–603. Menlo Park, CA: The AAAI Press.
Nitin, Chanderwal, Ankush Bansal and Deepak Kanzashi
2011 Understanding perceived flaming tendencies on social networking sites: An exploratory study. Issues in Information Systems 12: 425–435.
O’Sullivan, Patrick B. and Andrew J. Flanagin
2003 Reconceptualizing ‘flaming’ and other problematic messages. New Media and Society 5(1): 67–93.
Panteli, Niki
2002 Richness, power cues and email text. Information and Management 40(2): 75–86.
Phillips, David J.
2002 Negotiating the digital closet: Online pseudonymity and the politics of sexual identity. Information, Communication and Society 5(3): 406–424.
Phillips, Whitney
2015 This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge, MA: The MIT Press.
Placks, Simon James
2003 Interpersonal Deceit and Lie-Detection Using Computer-Mediated Communication. Durham: University of Durham.
Plato
2007 The Republic. 3rd ed. London: Penguin Classics.
Recanati, François
1986 On defining communicative intentions. Mind and Language 1: 213–242.
Reicher, Steve, R. Mark Levine and Ernestine H. Gordijn
1998 More on deindividuation, power relations between groups and the expression of social identity: Three studies on the effects of visibility to the in-group. British Journal of Social Psychology 37: 15–40.
Rubin, Victoria L.
2010 On deception and deception detection: Content analysis of computer-mediated stated beliefs. ASIST 2010, October 22nd–27th, 1–10. Pittsburgh, PA. http://onlinelibrary.wiley.com/doi/10.1002/meet.14504701124/pdf.
Sanderson, David W.
1993 Smileys: Express Yourself Sideways. Sebastopol, CA: O’Reilly and Associates.
Saul, Jennifer
2001 Critical studies: Wayne A. Davis, Conversational implicature: Intention and convention in the failure of Gricean theory. Noûs 35: 630–641.
Searle, John R.
1983 Intentionality. Cambridge: Cambridge University Press.
Searle, John R.
1990 Collective intentions and actions. In: Philip R. Cohen, Jerry L. Morgan and Martha E. Pollack (eds.), Intentions in Communication, 401–415. Cambridge, MA: Bradford Books.
Shachaf, Pnina and Noriko Hara
2010 Beyond vandalism: Wikipedia trolls. Journal of Information Science 36(3): 357–370.
Shin, Jiwon
2008 Morality and Internet behavior: A study of the Internet troll and its relation with morality on the Internet. In: Karen McFerrin, Roberta Weber, Roger Carlsen and Dee Anna Willis (eds.), Proceedings of Society for Information Technology and Teacher Education International Conference 2008, 2834–2840. Chesapeake, VA: AACE.
Siegel, Jane, Vitaly J. Dubrovsky, Sara Kiesler and Timothy W. McGuire
1986 Group processes in computer-mediated communication. Organizational Behaviour and Human Decision Processes 37(2): 157–187.
Social Media and Criminal Offences
2014 Communications Committee – First Report. House of Lords Select Committee. London.
Spencer-Oatey, Helen D. M.
2005 (Im)politeness, face and perceptions of rapport: Unpackaging their bases and interrelationships. Journal of Politeness Research: Language, Behaviour, Culture 1(1): 95–119.
Spencer-Oatey, Helen D. M. and Jianyu Xing
2006 Impoliteness in intercultural interaction: Conventions, intentionality and the role of the interpreter. Paper presented at Linguistic Impoliteness and Rudeness: Confrontation and Conflict in Discourse, 3rd–4th July 2006, University of Huddersfield.
Sternberg, Janet
2000 Virtual misbehavior: Breaking rules of conduct in online environments. Proceedings of the Media Ecology Association 1: 53–60.
Stivale, Charles J.
1997 Spam: Heteroglossia and harassment in Cyberspace. In: David Porter (ed.), Internet Culture, 133–144. New York: Routledge.
Strom, Paris S. and Robert D. Strom
2005 When teens turn cyberbullies. The Education Digest December: 35–41.
Subasic, Pero and Alison Huettner
2001 Affect analysis of text using fuzzy semantic typing. IEEE Transactions on Fuzzy Systems 9(4): 483–496.
Tepper, Michele
1997 Usenet communities and the cultural politics of information. In: David Porter (ed.), Internet Culture, 39–54. New York: Routledge.
Thompson, Clive
2009 Clive Thompson on the taming of comment trolls. Wired Magazine March 23rd. http://www.wired.com/techbiz/people/magazine/17-04/st_thompson 30/03/09.
Thompson, Robert J.
2008 Grades of meaning. Synthese 161: 283–308.
Topçu, Çigdem, Özgür Erdur-Baker and Yesim Çapa-Aydin
2008 Examination of cyberbullying experiences among Turkish students from different school types. CyberPsychology and Behavior 11(6): 643–648.
Turkle, Sherry
1990 The psychology of personal computers. In: Tom Forester (ed.), The Information Technology Revolution, 182–201. Oxford: Blackwell.
Turner, Tammara Combs, Marc A. Smith, Danyel Fisher and Howard T. Welser
2005 Picturing Usenet: Mapping computer-mediated collective action. Journal of Computer-Mediated Communication 10(4). http://jcmc.indiana.edu/vol10/issue4/turner.html 05/01/11.
Unknown
1996 The Troller’s FAQ. http://www.altairiv.demon.co.uk/afaq/posts/trollfaq.html 10/01/01.
Utz, Sonja
2005 Types of deception and underlying motivation: What people think. Social Science Computer Review 23(1): 49–56.
van Schie, Emil G. M. and Oene Wiegman
1997 Children and video games: Leisure activities, aggression, social integration, and school performance. Journal of Applied Social Psychology 27: 1175–1194.
Vaughan, Jill and Lauren Gawne
2011 I can has language play: Construction of language and identity in Lolspeak. Australian Linguistics Society Annual Conference December 7th. http://vimeo.com/33318759.
Vinagre, Margarita
2008 Politeness strategies in collaborative e-mail exchanges. Computers and Education 50(3): 1022–1036.
Virkar, Shefali
2014 Trolls just want to have fun: Electronic aggression within the context of e-participation and other online political behaviour in the United Kingdom. International Journal of E-Politics 5(4): 21–51.
Vrij, Aldert
2000 Detecting Lies and Deceit. The Psychology of Lying and the Implications for Professional Practice. Chichester: Wiley.
Vrij, Aldert, Katherine Edward, Kim P. Roberts and Ray Bull
2000 Detecting deceit via analysis of verbal and nonverbal behavior. Journal of Nonverbal Behavior 24(4): 239–263.
Weckerle, Andrea
2013 Civility in the Digital Age: How Companies and People Can Triumph over Haters, Trolls, Bullies, and Other Jerks. London: Pearson.
Whitty, Monica T.
2004 Cyberstalking. NSW Crime Division: Criminology Research Council.
Wilson, Samuel M. and Leighton C. Peterson
2002 The anthropology of online communities. Annual Review of Anthropology 31: 449–467.
Yates, JoAnne and Wanda J. Orlikowski
2002 Genre systems: Structuring interaction through communicative norms. Journal of Business Communication 39(1): 13–35.
Zarsky, Tal Z.
2004 Thinking Outside the Box: Considering Transparency, Anonymity, and Pseudonymity as Overall Solutions to the Problems of Information Privacy in the Internet Society. Unpublished PhD thesis. New York: Columbia Law School.
Zdenek, Sean
1999 Rising up from the mud: Inscribing gender in software design. Discourse and Society 10(3): 379–409.
Zhou, Lina, Judee K. Burgoon, Jay F. Nunamaker and Doug Twitchell
2004 Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communications. Group Decision and Negotiation 13(1): 81–106.
