


In: Psychology of Victimization ISBN: 978-1-61470-505-5
Editor: Audrey N. Hutcherson ©2011 Nova Science Publishers, Inc.

Chapter 1

FACT OR FICTION?:
DISCRIMINATING TRUE AND FALSE
ALLEGATIONS OF VICTIMIZATION

Kristine A. Peace*, Krista L. Brower, and Ryan D. Shudra


Grant MacEwan University

ABSTRACT
In forensic settings, decision-makers are continually challenged with evaluating the
credibility of reports of victimization. Unfortunately, judgments concerning the
truthfulness of allegations can be misguided, and few standardized approaches to
discriminating the veracity of such reports exist. In many cases, evaluators heavily rely
on erroneous cues to deception, such as gut instinct, witness demeanor, eye contact,
witness confidence, and whether a claim fits with the “typical” crime victimization
scenario. As such, empirical literature has emphasized a focus on verbal behaviours
(including content) as a function of veracity, as these may be less susceptible to
interpretive bias. When confronted with claims of victimization, analysis of the content of
the alleged victim’s statement is of utmost importance for investigators, and may be the
only source of evidence available. While advocating a reliance on empirical evidence, we
observe that few studies concerning truthful and false allegations of victimization within
the criminal justice system have been conducted; yet, a multitude of legal cases have
been consumed by sorting fact from fiction within allegations. In this paper, we provide
an overview of empirical research on the characteristics of true and false allegations of
trauma. The general consensus from extant literature is that truthful claims of trauma
contain an abundance of detail, emotional components, perceptual detail, unstructured
production, contextual embedding, spontaneous corrections, unusual details, descriptions
of interactions, and admissions of lack of memory, relative to false allegations. However,
recent evidence suggests that these features may not be maintained over time, such that
discriminating the truthfulness of historical claims of trauma may be particularly difficult.
The concern over how these claims are maintained in memory over time is addressed,
and is especially relevant given that verbal statements typically are provided as evidence
in legal cases over extended durations. In addition, empirical evidence may conflict with
pre-existing beliefs of law enforcement officers. We review current methods that have
been utilized in detecting deception in these contexts, including Criteria-Based Content
Analysis (CBCA), Reality Monitoring (RM), Scientific Content Analysis (SCAN),
Linguistic Inquiry and Word Count (LIWC), Memory Assessment Procedure (MAP), and
the Assessment Criteria Indicative of Deception (ACID) system. The purpose,
procedures, relevant studies, and discrimination rates concerning each form of content
analysis are discussed. We describe the factors that influence the content of narrative
reports of trauma, as well as the limitations they may impose on successful
discrimination. Finally, we conclude by reviewing the effectiveness of existing statement
analysis techniques, the implications of these methods for eliciting allegations of trauma,
and the importance of developing successful discrimination tools to better identify false
allegations.

Keywords: false allegations, credibility assessment, statement analysis, trauma, victimization

* Correspondence should be addressed to Dr. Kristine Peace at Grant MacEwan University, Department of
Psychology, City Centre Campus, Rm 6-329H, 10700 – 104 Avenue, Edmonton, AB, Canada, T5J 4S2, or
contact by phone at 780-633-3651, or e-mail at PeaceK@macewan.ca.

“Now, these do either wholly contrive and invent the untruths they utter, or so alter and
disguise a true story that it ends in a lie. When they disguise and often alter the same
story, according to their own fancy, 'tis very hard for them, at one time or another, to
escape being trapped, by reason that the real truth of the thing, having first taken
possession of the memory, and being there lodged impressed by the medium of
knowledge and science, it will be difficult that it should not represent itself to the
imagination, and shoulder out falsehood, which cannot there have so sure and settled
footing as the other; and the circumstances of the first true knowledge evermore running
in their minds, will be apt to make them forget those that are illegitimate, and only
forged by their own fancy. In what they wholly invent, forasmuch as there is no contrary
impression to jostle their invention there seems to be less danger of tripping; and yet even
this by reason it is a vain body and without any hold, is very apt to escape the memory, if
it be not well assured.”
Michel de Montaigne, Essays, Book the First, Chapter IX: Of Liars (1580)

1. GENERAL INTRODUCTION
Throughout history, many false claims of victimization have occurred in legal settings
(e.g., Yuille, Tymofievich, & Marxsen, 1995). Given the complexity of evaluating the
credibility of a complainant’s statement, false allegations present a significant challenge for
the legal system. With this recognition, psychology has a long tradition of investigating
witness credibility, dating to the early 1900s with Hugo Münsterberg’s On the Witness Stand
(1908). In this text, he argued that a psychological analysis of memory can assist in the
discernment of truth in legal contexts. One source quotes him as stating: “To deny that the
experimental psychologist has indeed possibilities of determining the ‘truth-telling’ process is
just as absurd as to deny that the chemical expert can find out whether there is arsenic in a
stomach or whether blood spots are of human or animal origin” (as cited in Hale, 1980, p.
118). Although more than a century has passed since Münsterberg’s controversial publication
(which was ultimately vilified in the legal community; e.g., Goldstein, 1980), difficulties in
determinations of credibility continue to persist in our legal system (Vrij, Granhag, & Porter,
2010). In the 2001 inquiry into Thomas Sophonow’s wrongful murder conviction in
Manitoba, Canada, Justice Cory highlighted this point. He concluded that (1) witnesses often
provide dishonest testimony, including about their own criminal victimization, and (2)
injustices have resulted from errors by “honest, right-thinking eyewitnesses.” He strongly
advocated the need for a better understanding of deception and memory errors among
witnesses or alleged crime victims (Justice Cory, November 4, 2001). Some recent cases
illustrate this point. In one prominent Canadian case, the Nova Scotian government awarded
various sums of money to former residents of the Shelburne School for Boys and the Youth
Training Centre in Truro following claims of historical abuse by staff (see Kaufman, 2002;
Porter, Campbell, Birt, & Woodworth, 2003). Many of these claims were later found to be
false allegations motivated by monetary gain. On June 6, 2006, legal actions launched by
former teachers and guards against the Nova Scotia government were finally put to rest. Over
$7 million was to be paid out to the 79 staff members who were falsely accused of
abusing students to compensate for over 10 years of hardship caused by these claims (CBC
News, June 6, 2006).
While individuals may make false allegations of victimization for financial gain,
attention-seeking and revenge also are strong motivators (Peace & Masliuk, 2011). For
example, former alderwoman Dar Heatherington of Lethbridge, Alberta, was convicted of
public mischief by sending anonymous threatening letters to herself and falsely claiming to
have been abducted, sexually assaulted, and abandoned in Las Vegas (Graveland, 2004;
Regina v. Heatherington, 2004a, 2004b). Similarly, the claim that 20-year-old Audrey Seiler
was abducted at knifepoint on March 27, 2004, in Wisconsin was discounted by police as
fabricated, partially due to inconsistencies in her reports of victimization. The college
student was charged with faking her own disappearance, and an investigation continued into a
previous claim made by Seiler that she was assaulted and left unconscious on the street by an
unknown assailant in February of that year, prior to her alleged abduction (CNN, July 1,
2004). Jennifer Wilbanks, known as the “Runaway Bride”, is another well-known example of
fabricated victimization. Wilbanks mysteriously disappeared just prior to her wedding in
April 2005. After a massive 4-day manhunt, Wilbanks called police from a payphone in
Albuquerque, New Mexico, alleging she had been abducted from Atlanta and sexually
assaulted. She later recanted during police questioning and was charged with making a false
statement and falsely reporting criminal behaviour (CNN, May 26, 2005). She pleaded no
contest to the charges and received probation, community service, and fines for the upheaval
she caused (Associated Press, June 22, 2005). These last few cases exemplify successful
credibility assessments where false allegations of victimization were discounted without
lengthy delays. Unfortunately, many false claims remain undetected for years or even decades
(e.g., Porter, Campbell, et al., 2003).
Despite the media focus on false allegations and the obvious need to accurately identify
malingered ‘memories’ in both criminal and civil cases (Porter, Peace, Douglas, & Doucette,
in press), there has been little scientific consideration of fabricated claims of victimization.
Although some studies have addressed the characteristics of true and mistaken (but honest)
memories for traumatic experiences, few have focused on the qualities of intentionally
fabricated (deceptive) reports of trauma and victimization (e.g., Porter, Yuille, & Lehman,
1999). Some research indicates that reports of fabricated childhood experiences made by
adults differ qualitatively from true experiences. Fabricated reports tend to be more detailed,
contain more repeated details, and are associated with higher ratings of vividness and
confidence (e.g., Porter et al., 1999). However, little research has addressed allegations of
trauma in adulthood (see Peace & Porter, 2011). Also relevant to victimization claims is the
manner in which allegations are maintained over time, both in memory and symptom
reporting. Unfortunately, little research has addressed the relative consistency of truthful and
deceptive claims of trauma (e.g., Granhag & Strömwall, 2000a). Evaluation of such claims is
of critical importance and reflects real-life high-stakes situations involving deception (Porter
& ten Brinke, 2010). The consequences of such false claims of victimization include
extensive resources required for the investigations, distress or trauma for falsely accused
persons, and increased skepticism concerning claims by genuine victims of criminal events.
Estimates of false allegations of victimization have ranged from 2-40% of all cases (e.g.,
Benedek & Schetky, 1985; Everson & Boat, 1989; Green, 1986; Halliday, 1988; Jones &
McGraw, 1987; Kanin, 1994; Littman & Szewczyk, 1983; MacLean, 1979; Mann, 1985;
McDowell, 1992; Sheridan & Blaauw, 2004; Spencer & Flin, 1990; Thoennes & Tjaden,
1990). For example, in their analysis of 350 abuse referrals, Anthony and Watkeys (1991)
found that 63 of the 153 unsubstantiated claims (i.e., 41%) were false allegations made either
by the child or the parent (the latter often during custody disputes). In another study, Penfold
(1995) evaluated allegations of sexual abuse arising in approximately 2% of child custody
disputes and reported fabrication rates of 8% to 16.5%. Even members of the Criminal
Lawyers Association have stated that sexual assaults and domestic violence are “very easy to
fabricate” because they are private experiences (Bradley, May 9, 2006). Although one may
assume that it is easy to sort fact from fiction, no “Pinocchio’s nose” has been found (e.g.,
DePaulo et al., 2003; Vrij, Granhag, et al., 2010) and the extent of convictions stemming from
false allegations remains unclear (Zaparniuk, Yuille, & Taylor, 1995).
While studying this issue has clear importance for the basic understanding of memory
(e.g., retention of emotional events over time) and deception, it also has applied forensic
relevance. Although courts of law traditionally have relied heavily on the testimony of
witnesses in formulating their decisions of guilt and innocence, it is now recognized that both
intentional deception and mistaken recollections are all too common in the courtroom (as
noted by Justice Cory in the Sophonow Inquiry; Kaufman, 2002; Gutheil & Sutherland,
1999). For example, in numerous cases of sexual abuse and assault, it has been established
that the evidence was based on false memories (Loftus, 2003a) and intentional fabrications
(Raitt & Zeedyk, 2003). As a result, investigators have turned their attention to various
techniques involved in deception detection (Vrij, Granhag, et al., 2010). In many cases,
evaluators heavily rely on erroneous cues to deception, such as gut instinct, witness demeanor
and confidence, eye contact, and whether a claim fits with the “typical” crime victimization
scenario (Porter & ten Brinke, 2010). As such, analysis of the content of the alleged victim’s
statement is of utmost importance for investigators, and may be the only source of evidence
available. Unfortunately, judgments concerning the truthfulness of allegations can be
misguided by emotion and the traumatic nature of claims (e.g., Gunnell & Ceci, 2010; Brewer &
Burke, 2002; Peace & Porter, 2011), and few standardized approaches to discriminating the
veracity of such reports exist. The purpose of this chapter is to review literature on the
features of true and false allegations, with a focus on claims of trauma, and the predominant
content-based techniques used to assess statements (e.g., Criteria-Based Content Analysis,
Reality Monitoring, Scientific Content Analysis, Linguistic Inquiry and Word Count, the
Memory Assessment Procedure, and Assessment Criteria Indicative of Deception). Further,
we address factors that influence the content of narratives and may limit the effectiveness of
statement analysis procedures (e.g., level of experience and familiarity, preparation,
motivation, and coaching). Ultimately, the development of reliable techniques for detecting
false allegations will prevent miscarriages of justice (Vrij, 2008), including re-victimization
of genuine claimants when faced with the disbelief of criminal justice personnel. It is only
through detailed review and empirical study that we gain a better understanding of credibility
assessment as applied to victims.

2. CONCEPTUALIZATIONS OF FALSE ALLEGATIONS


“And, after all, what is a lie? 'Tis but the truth in masquerade.”
Lord Byron (cf. Nanavati, 2010, p. 3)

2.1. Defining Deception and False Allegations

The manner in which lies are constructed by the deceiver and presented to another person
is of critical importance in the criminal justice system. Various definitions of lying have been
offered. In an early definition, Augustine argued in On Lying (395/1956) that “a lie consists in
speaking a falsehood with the intention of deceiving”. This differs from later definitions
posited by the thirteenth-century theologian Thomas Aquinas, who argued that any communication of
false information, independent of whether one knows it is false or not, is an act of deception
(see Ford, 2006). The definition of lying by modern scholars considers the speaker, receiver,
and their interaction: “a successful or unsuccessful deliberate attempt, without forewarning, to
create in another a belief which the communicator considers to be untrue” (Vrij, 2008, p. 15).
This definition acknowledges that lying goes beyond the intent to deceive and must consider
the liar’s belief that the information is incorrect (Memon, Vrij, & Bull, 2003). For the purpose
of this chapter, lies are defined broadly and refer to verbal behaviour that is “intended to
conceal, misrepresent, or distort truth or information for the purpose of misleading others”
(Porter, 1998, p. 169). The types of lies examined in this chapter are not the everyday deceit
(i.e., white lies) that we all engage in on some level (e.g., “Oh, I’m doing good thanks”;
DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Ekman, 2001; King, 2006). Rather, we
have focused on deceptive behaviour that has serious consequences in legal and clinical
settings. In particular, false allegations of traumatic events, and criminal or otherwise
unacceptable behaviour, have been witnessed throughout history. Such false allegations are
exemplified below with reference to cases ranging from a single claim by one individual against another to
more complex, wide-reaching “contagious” claims involving numerous complainants and
accused persons. While by no means an encyclopaedic listing of such cases, those provided
are intended to give representative examples that lead into a discussion of the modern
phenomenon.

2.2. Historical False Allegations

It appears that the social climate in fifteenth-century Italy encouraged many questionable
accusations of crimes or moral indiscretions. For example, in Florence, allegations were made
anonymously by placing one’s written story in a tamburo (wooden box) located in front of the
Palazzo Vecchio. Although such allegations typically were unfounded, they served as the
primary evidence in many legal cases. In 1476, Leonardo da Vinci himself, along with four
others, was accused of engaging in a homosexual affair via the tamburo. Although later
cleared of the charges, many such allegations ended in convictions during this time (Nuland,
2005). One of the best-known historical examples of false allegations stemming from mass
hysteria was the “witch trials” of Europe and America during the 1400s-1700s. Among the best-
documented cases were the 1692 Salem trials in Massachusetts, which began with the
“strange” behaviour of several young girls who were alleged to be under the influence of a
witch. Upon pressure from town and church authorities, they made allegations against three
women for engaging in witchcraft. Three allegations soon turned into over 400 accusations
(Hiltunen & Doty, 2002). Hiltunen (1996) conducted a discourse analysis of legal documents
collected during the trials for 26 cases. He noted that the false claims contained vivid
information and had a sense of “authenticity” in their linguistic detail. On the other hand, the
accused women’s denials of witchcraft were disbelieved based on the characteristics of their responses
during interrogations (often under threat of or actual torture). Defendants were asked leading
questions, and their denials were considered to be “inconsistent” – lending to the assessment
that they were being deceptive (Hiltunen, 1996). These perceived inconsistencies in the
rebuttals of the accused women led to unabashed belief in the false allegations of four young
girls and subsequent discrediting of anything said by defendants (Hiltunen & Doty, 2002).
Similarly, Sjöberg (1997, 2002) discussed an interesting set of historical false allegations
involving the witch panic in a small parish in Rättvik, Sweden. In 1671, Nils Larsson alleged
that he had been repeatedly abducted, taken to Blåkulla (a traditional village of folklore), and
was the victim of perverted sexual practices and satanic rituals. In total, 60 children testified
to mass satanic activity and convictions were obtained, resulting in a death sentence in some
cases. Oddly, the claims made by Nils Larsson himself were disbelieved. Sjöberg (2002)
reports they were “too fantastical to be true” (p. 134) and that it was the features of his
narrative that allowed investigators to discredit his claims. Problems with Larsson’s claims
were perceptions that they were too colourful and extensive, and contained many descriptions
of interactions. Interestingly, these characteristics (i.e., abundance of detail, spontaneous
reproduction, coherence, contextual embedding) are often considered to be indicative of
truthfulness in modern approaches to credibility assessment (e.g., Sjöberg, 2002; Vrij, 2005).
The Larsson case demonstrates that the specific narrative features of false claims may appear
credible, but the story as a whole does not contain the “ring of truth”. This appears to be one
of the few cases during the widespread witch panic across Europe and America in which a
thorough inquiry was conducted prior to burning people at the stake. Modern claims of mass
satanic ritual abuse and child sex rings are reminiscent of the days of the witch trials (e.g.,
Bottoms, Shaver, & Goodman, 1996; Qin, Goodman, Bottoms, & Shaver, 1998).

2.3. Historical Approaches to Credibility Assessment

There have been numerous approaches to detecting deceit throughout history. Since
biblical times, methods of detecting deception have varied widely across the world (Korn,
1997). The underlying premise of ancient measures was that liars exhibited physical or
physiological changes that “proved” them to be lying. For example, it was believed that
individuals experienced salivary inhibition (“dry mouth”) when under the stress of being
deceptive (Ford, 2006). In Arabia, multiple suspects of a crime were required to lick a hot
iron and the one who was most severely burned was considered guilty. In ancient China,
accused parties had to chew on grains of rice during questioning and spit these out to
determine if they were dry (guilty) or moist (innocent). During the Inquisition, the British
also employed deception detection measures involving a “trial piece” of bread and cheese that
suspects were required to swallow. If particles of food remained in the mouth or throat, guilt
was assumed (Trovillo, 1939). These ancient methods based on salivary inhibition and
physiological responses are still evidenced in some parts of the world today (e.g., Ford, 2006;
Kleinmuntz & Szucko, 1984).
Physical characteristics and behaviour also were important considerations throughout the
history of credibility assessment. For example, an Egyptian scroll from 900 BC described
signs of distress (i.e., running hands through hair, facial discolouration), evasiveness, and
rubbing one’s big toe in the sand to be indicators of deceptive behaviour (e.g., Hocking,
Bauchner, Kaminski, & Miller, 1979; Trovillo, 1939). Detection of witchcraft during the
Inquisition involved examination for “stigmata” or physical marks on the body. In Great
Britain, credibility assessments were made during “trials by ordeal”. Suspected witches were
bound at their hands and feet and thrown into a lake of water. Based on the “sink or float”
principle (i.e., immorality and impurity were buoyant), witches would float whereas innocent
suspects would sink into the water (and often drown during the trial; see Kramer &
Sprenger’s The Malleus Maleficarum, 1486/1979). In the early 1900s, a “truth serum”
consisting of a cocktail of barbiturates (e.g., sodium pentothal, sodium amytal, scopolamine)
was developed under the assumption that barbiturates compelled individuals to provide
truthful and accurate information (Piper, 1993). Although there is little empirical evidence
supporting the use of “truth serums”, some investigative agencies continue to use barbiturates
in eliciting confessions (Bartol & Bartol, 1999, 2004).

2.4. Credibility Assessment in the Courtroom

Historically, then, allegations often were accepted as genuine based on belief in the
morality of individuals – why would any upstanding citizen fabricate claims, especially when
there is nothing to be gained? In addition, methods for detecting deception were ambiguous
and often resulted in false convictions. Today, some scholars tenaciously hold on to the
opinion that people rarely fabricate claims of trauma or criminal victimization (e.g., Brown,
Scheflin, & Hammond, 1998; Terr, 1994). The courts (in general) also have demonstrated a
bias towards believing allegations based on “repressed memories” (see Porter, Peace, et al., in
press). However, the judiciary has acknowledged that unfounded claims of any sort should be
approached with skepticism. Lord Justice Salmon in Regina v. Henry and Manning (1969)
stated that with respect to allegations of rape, “Human experience has shown that in these
courts girls and women do sometimes tell an entirely false story which is very easy to
fabricate, but extremely difficult to refute” (as cited in Raitt & Zeedyk, 2003, p. 463). Several
Canadian cases have demonstrated that judges recognized the need to assess the credibility of
claims of victimization, especially in “he said, she said” cases (e.g., Connolly & Gordon, in
press; Magner, 2000; Porter, Campbell, et al., 2003). For example, allegations of sexual
assault were correctly identified as false in Regina v. Lukasik (1982), Regina v. Hudon
(1996), and Regina v. Ambrose (2000), and the alleged victims were convicted of perjury
and/or public mischief.
Given the importance of credibility assessment in the criminal justice system, it is critical
to evaluate how cues to deception (and false allegations in particular) are utilized by the
courts. Unfortunately, judicial rulings reflect the prevailing view that credibility assessment is
a matter of “common sense” (e.g., Porter & ten Brinke, 2009; Read & Desmarais, 2009a,
2009b). For example, the Supreme Court has ruled that “credibility is a matter within the
competence of lay people” (Regina v. Marquard, 1993, p. 248), and that determinations of
truth are based on “… the demeanour of the witness and the common sense of the jury”
(Regina v. Francois, 1994). That being said, empirical studies have repeatedly demonstrated
that laypersons are poor deception detectors (Bond & DePaulo, 2006; Vrij, 2008). Most
people perform around the level of chance (50%) for accurately detecting lies (Vrij, Granhag,
et al., 2010), and demonstrate a truth bias in decision-making (e.g., Bradford & Goodman-
Delahunty, 2008; Levine, Sun Park, & McCornack, 1999; Peace & Sinclair, in press). However,
recent research suggests that when confronted with lies of a highly emotional or provocative
nature, deception detection abilities are further impaired. Peace, Porter, and Almon (in press)
evaluated observers’ ability to discriminate true (corroborated) and false allegations of sexual
assault. They found that observers performed poorly at this task (i.e., 45% accuracy),
demonstrated a strong truth bias (i.e., they were more likely to judge reports as true), and used
liberal decision criteria (i.e., participants were less stringent in the criteria used to judge what
is “truth”) likely due to fear of incorrectly classifying a genuine rape as a false allegation. As
such, we may be ‘sidetracked’ by the emotionality of victimization claims, diminishing the
accuracy of our veracity judgments (e.g., Ask & Granhag, 2007a; Campbell & Porter, 2002;
Kaufmann, Drevland, Wessel, Overskeid, & Magnussen, 2003).
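The findings above combine three signal-detection quantities: overall accuracy, a truth bias (the proportion of statements judged true), and a liberal decision criterion. A minimal sketch, using hypothetical counts invented for illustration (not the data from Peace, Porter, and Almon), shows how below-chance accuracy can coexist with a strong truth bias and a negative (liberal) criterion:

```python
# Illustrative signal-detection sketch with hypothetical counts.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (z-score) transform

# Hypothetical judgments over 100 true and 100 false allegations
hits = 65            # true allegations correctly judged true
false_alarms = 75    # false allegations mistakenly judged true

hit_rate = hits / 100
fa_rate = false_alarms / 100

accuracy = (hits + (100 - false_alarms)) / 200   # overall accuracy
truth_bias = (hits + false_alarms) / 200         # proportion judged "true"
criterion_c = -(z(hit_rate) + z(fa_rate)) / 2    # SDT criterion; c < 0 = liberal

print(f"accuracy    = {accuracy:.0%}")     # 45% -- below chance
print(f"judged true = {truth_bias:.0%}")   # 70% -- truth bias
print(f"criterion c = {criterion_c:.2f}")  # -0.53 -- liberal criterion
```

With these invented counts, most errors come from calling false allegations true: observers set a lenient threshold for "truth", consistent with the fear of misclassifying a genuine claim.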
Given that police, judges, and jurors are regularly called upon to evaluate the credibility
of witnesses and victims, do they perform any better? The short answer is no (Vrij, 2004). In
fact, professionals within the criminal justice system who are assumed to have advanced lie
detection abilities often perform more poorly (e.g., Garrido, Masip, & Herrero, 2004; Kassin,
Meissner, & Norwick, 2005; Porter, Woodworth, & Birt, 2000) or are comparable to
laypersons (e.g., Ekman & O’Sullivan, 1991; Ekman, O’Sullivan, & Frank, 1999; Hartwig,
Granhag, Strömwall, & Vrij, 2004; O’Sullivan & Ekman, 2004). Not only is detection
accuracy comparable (see Aamodt & Custer, 2006), so are the cues that both laypersons and
professionals believe are accurate indicators of deception (e.g., Anderson, DePaulo, Ansfield,
Tickle, & Green, 1999; Porter, Campbell, Stapleton, & Birt, 2002; Strömwall & Granhag,
2003a; Vrij & Semin, 1996). While deception detection may be somewhat more accurate for
high-stakes lies (e.g., Mann, Vrij, & Bull, 2004; Vrij & Mann, 2001a; see Porter & ten
Brinke, 2010), this has not been consistently demonstrated in professional populations
(Bradford & Goodman-Delahunty, 2008). For example, Vrij and Mann (2001b) had police
officers evaluate emotional pleas for the safe return of a loved one from suspects later proved
to be guilty of killing their relative. In this case, officers’ ability to discern fact from fiction
was not above chance levels (Vrij & Mann, 2001b). Further complicating this matter, judges
are often highly confident in their ability to detect deception, yet this is unrelated to their
accuracy (see Porter & ten Brinke, 2009). This is especially problematic when veracity
decisions are based on the relative credibility of witnesses. For example, in Regina v. Malik and
Bagri (2005), the judge concluded that “weighing the credibility of a number of witnesses
who testified” was enough to determine guilt.
So what cues are utilized in judicial decision-making? Research in forensic and
experimental contexts has confirmed a heavy reliance on statement and/or testimonial
consistency as a determinant of credibility (e.g., Brewer & Burke, 2002; Granhag &
Strömwall, 1999; Granhag & Strömwall, 2000a; Granhag & Strömwall, 2004; Malloy &
Lamb, 2010; Seelau & Wells, 1995). For example, Greuel (1992) asked police officers to
indicate what cues they believed were associated with deception, specifically focusing on
cases of sexual assault. Results indicated that statement inconsistency was the highest rated
cue to deception. In addition, research manipulating the consistency of statements in mock-
juror scenarios has repeatedly demonstrated that exposure to inconsistent statements leads to
greater perceptions of inaccuracy, lowered confidence in the witness’s testimony, and drastic
reductions in the perceived credibility of the witness (e.g., Berman & Cutler, 1996; Berman,
Narby, & Cutler, 1995; Goodman, Batterman-Faunce, Schaaf, & Kenney, 2002; Leippe &
Romanczyk, 1989; Semmler & Brewer, 2002). For example, Brewer, Potter, Fisher, Bond,
and Luszcz (1999, Study 1) studied potential jurors’ reactions to eyewitness inconsistencies,
nervousness, ignored questions, gaze aversion, levels of confidence, exaggerations, coherence
of report, and added details. All participants rated the extent to which each of these items was
suggestive of inaccuracy. While their findings indicated that all such behaviours led to juror
suspicion about the accuracy of the account, inconsistency between statements was
considered the strongest indicator of inaccuracy (Brewer et al., 1999). In fact, witnesses are
often discredited on the basis of inconsistencies (e.g., Glissan, 1991; Salhany, 1991; Stuesser,
1993; Wellman, 1986). Although consistency serves as a “cue to deception”, it is unclear how
this is assessed. For example, the police officers in Greuel’s (1992) study all saw the same
testimony in a rape case; however, half of the officers rated the statements as consistent
whereas half found them to be inconsistent (see also Granhag & Strömwall, 2000b, 2001a).
This may result from variations in the way testimonial consistency is construed (e.g.,
contradictions, omissions, added details) (Brewer & Burke, 2002). Despite variations in the
way individuals perceive the degree of consistency, consistent statements are viewed as
credible whereas inconsistent statements are discredited – known as the consistency heuristic
(e.g., Fisher, Vrij, & Leins, in press; Granhag & Strömwall, 2000a). In fact, Granhag and
Strömwall consider our reliance on the consistency heuristic alarming given the lack of empirical
research addressing the consistency of truthful and deceptive statements (Granhag &
Strömwall, 2001b).
While forensic psychologists have played an active role in educating the legal system on
memory and testimonial biases (e.g., Read & Desmarais, 2009b; Wells et al., 1998), memory
continues to be poorly understood in this context. Judges and juries often hold beliefs that
memories of traumatic events will be accurately and consistently retained over time with little
variation (e.g., Granhag & Strömwall, 2001b; Read & Desmarais, 2009a). In turn, if victims
or witnesses (or perpetrators) alter their accounts of an incident, their memory is perceived to
be unreliable and their testimony discounted (Malloy & Lamb, 2010). Professionals and
laypersons alike often perceive the “truth” as being steadfast and deception as more variable
(e.g., King, 2006), despite empirical evidence to the contrary (e.g., Fisher et al., in press;
Peace & Porter, 2011). In addition, strategies for legal cross-examination often are focused on
statement consistency and are well-documented in legal texts (e.g., Salhany, 1991; Stuesser,
1993). Glissan (1991) stated that an inconsistent report can destroy the credibility of a witness
and advised lawyers to seek out such alterations in statements. For example, in Regina v.
Heatherington (2004a, 2004b), Alberta Court Judge Caffaro cited Heatherington’s
inconsistent and vague statements as indicators of her deceptive behaviour. While this does
not always lead to misjudgment (i.e., Heatherington’s claims of abduction and sexual assault
were never proven; also see Hartwig, Granhag, Strömwall, & Kronkvist, 2006), it is
detrimental when a genuine victim or witness is re-victimized by disbelief (e.g., Ahrens,
2006; Campbell, 2006; Malloy & Lamb, 2010). Further, when individuals are asked to name
the best strategy for deceiving someone, they report that “telling a story
that is consistent and does not contradict itself” is the most effective approach (Hines et al.,
2010). Thus, deceptive individuals may provide claims that are consistent, and subsequently
judged as credible (Fisher et al., in press).
Although testimonial consistency is weighted heavily in veracity determinations, the
amount of detail provided within testimony also is considered in judicial decisions (e.g.,
Sporer & Sharman, 2006). Anderson et al. (1999) found that participants
relied on the verbal cue of statement detail most frequently.
consistency and plausibility, this verbal cue cluster was used most often in judgments of
truthfulness and was the most accurate predictor of both truths and lies (Anderson et al.,
1999). Similarly, Hines et al. (2010) reported that 80% of their sample indicated statement
detail to be related to perceived credibility. Interestingly, the sample was split between those
who believed maximal detail (35%) and those who believed moderate-to-minimal detail (28.5%) was
indicative of truthfulness (Hines et al., 2010). In real world forensic contexts, veracity may be
determined both on the amount and quality of detail. For example, in Regina v. Norman
(2005), the claimant’s (R.C.) allegation of sexual assault was determined to be truthful even
though she had previously made a false allegation of sexual assault against a former
boyfriend. In his decision, Justice De Filippis indicated that, besides physical evidence,
several factors influenced his decision to accept the allegation as genuine:

“The amount of detail provided by the complainant about the assaults is a significant
indicator of reliability. The assaults were not described in vague or generic terms. For
example, the complainant recounted how the defendant sprayed her with a water bottle
not once but three times. She pointed out that during one of these incidents he took a
drink of water and spat it at her. She recalled clearly that when she struck her nose after
falling out of bed, the defendant said, sarcastically, “baby, you’re so clumsy” and pushed
her head into the wall. Similarly, a lying witness need not, and would not, have pointed
out that the defendant was looking for a cigarette during one of the assaults or to have
testified that he asked whether she wanted her other breast bruised just before he
ejaculated” (para. 28)

It is clear that De Filippis also considered spontaneous corrections and unusual details in
his decision-making (R. v. Norman, 2005; see Characteristics of True and False Allegations
section in this chapter). Research with mock judges found that only 3% of participants
mentioned spontaneous corrections or the inclusion of additional detail as important to their
perception that a statement was credible (Hines et al., 2010). However, laypersons' generally
poor understanding of cues to deception may explain why so few can articulate which cues
they would rely on (e.g., Anderson et al., 1999; Read & Desmarais, 2009a).
Research has found that judges, lawyers, and police believe that truthful statements are
more consistent and more detailed over time than deceptive reports, but are generally unable
to identify what constitutes consistency (e.g., Strömwall & Granhag, 2003b). While there is
little dispute about details (e.g., Vrij, 2005), findings remain mixed with respect to
consistency (e.g., Granhag & Strömwall, 2002). As a result, legal decision-makers may be
relying on inaccurate cues, leading to drastic errors and/or delays in the pursuit of justice. For
example, in S. L. B. v. G. A. (2004), an appeal was made to overturn a prior decision where
Justice Sanderman dismissed an allegation of sexual assault as being false due to inconsistent
statements given by the alleged child victim. Although the appeal was denied, the veracity of
this claim is unclear and testimony should not be dismissed solely on the basis of
inconsistencies.
However, there is hope. Some members of the judiciary have identified and advocated
the need to examine empirically-based cues to both deception and truthfulness (e.g., Porter &
ten Brinke, 2009), and there is some evidence of this occurring in several cases within the last
decade (e.g., R. v. Heatherington, 2004a, 2004b; R. v. Norman, 2005). In addition, training
about empirically based cues to deception tends to improve accuracy in laypersons and
professionals alike (e.g., Akehurst, Bull, Vrij, & Köhnken, 2004; Bradford & Goodman-
Delahunty, 2008; Hartwig et al., 2006; Porter, Juodis, ten Brinke, Klein, & Wilson, 2010;
Porter & ten Brinke, 2009; Porter et al., 2000; Vrij, 1994; Vrij, Edward, & Bull, 2001a; Vrij,
Granhag, et al., 2010). Modern legal approaches to false allegations support the need for
deception detection methods in the courtroom (ten Brinke & Porter, in press). We have long
since surpassed the trials by ordeal of medieval England, dry rice tests of Ancient China, and
the hot iron tests of Arabia (e.g., Ford, 2006; Wrightsman & Porter, 2006). However, modern
approaches to deception detection may still be fallible (Vrij, Granhag, et al., 2010). These can
generally be categorized into three domains: physiological, behavioural, and verbal (e.g.,
DePaulo et al., 2003; Lykken, 1998; Memon et al., 2003), the latter of which is the focus of
this chapter. Several methods of verbal or written statement analysis have been derived
through empirical research, all of which aim to effectively discriminate the veracity of an
allegation (e.g., Masip, Sporer, Garrido, & Herrero, 2005). While the predominant techniques
will be addressed later in this chapter, it first is necessary to review general features of
genuine and deceptive claims.

3. CHARACTERISTICS OF TRUE AND FALSE ALLEGATIONS


“Oh what a tangled web we weave, When first we practise to deceive!”
Sir Walter Scott (1810), Marmion. Canto vi. Stanza 17.

While there have been a variety of techniques applied to the assessment of statement
credibility (see next section), the basic premise of each remains the same: truthful and
deceptive experiences are characteristically different (Vrij, Granhag, et al., 2010). Features of
truths and lies often have been assessed on content and linguistic features (Vrij, 2008).
However, other aspects of statement analysis, such as plausibility, subjective memory
assessments, and consistency, also should be taken under consideration and are addressed
here. Regardless of the source of these differences, numerous studies have reported that
certain cues (independent of technique) are strong predictors of veracity (e.g., Lipian, Mills,
& Brantman, 2004; Vrij, 2008). Although these cues are rarely examined in isolation of each
other, we wanted to briefly discuss the most predominant cues as well as their empirical
support.

3.1. Content-Based Features

In general, truthful events/traumas are based in reality and should be highly detailed with
contextual and perceptual information (e.g., Peace & Porter, 2011; Porter & Peace, 2007). On
the other hand, fabricated events/traumas have no basis in reality, thus perceptual and
contextual information must be generated and is unlikely to leave a lasting “impression” in
memory (e.g., Mitchell & Johnson, 2009). In order to assess this, much attention has been
focused on the content of truthful and deceptive claims (e.g., Barnier, Sharman, McKay, &
Sporer, 2005; Davidson, 1990; Larsson & Granhag, 2005; Vrij, 2008). As a result, several
empirically-supported cues that assist in determining veracity have been revealed.

3.1.1. Amount and Type of Detail


One of the most supported content-based cues to deception concerns the amount of detail
provided within a statement (Masip et al., 2005; Vrij, 2008). Extant research using
populations of undergraduates (e.g., Bell & Loftus, 1989; Blandón-Gitlin, Pezdek, Lindsay, &
Hagen, 2009; Colwell, 2007; Memon, Fraser, Colwell, Odinot, & Mastroberardino, 2009;
Vrij, Leal, Mann, & Granhag, 2011), children (e.g., Clemens et al., 2010; Larsson & Granhag,
2005; Peterson, 2007, 2010), child victims (e.g., Alexander et al., 2005; Orbach & Lamb,
1999; Roberts & Lamb, 2010; Steller, Wellershaus, & Wolf, 1988), adult victims (e.g.,
Parker & Brown, 2000; Raitt & Zeedyk, 2003; Peace & Porter, 2011), witnesses (e.g., Yuille
& Cutshall, 1989; Zaparniuk et al., 1995), and suspects (e.g., Hartwig et al., in press; Mann,
Vrij, & Bull, 2002; Porter & Yuille, 1995, 1996; Vrij & Mann, 2001a) have all reported detail
to be indicative of veracity (e.g., Vrij, 2008). Specifically, research stemming from several
content-analysis techniques has demonstrated that truthful accounts are consistently more
detailed relative to fabricated accounts (e.g., Granhag & Vrij, 2005). However, there have
been a few studies where the amount of detail did not significantly discriminate veracity. For
example, in a longitudinal study on narratives of truthful and fabricated trauma, Peace (2006)
found that the amount of detail was not a strong predictor of veracity during the initial ‘recall’
session but became a more useful cue to credibility over time. This likely resulted from an
increased investment and ease of providing detail in fabricated narratives during the initial
phase (lie generation), but increased difficulty in maintaining this level of detail from memory
in subsequent recall sessions (also see Colwell, 2007). As such, liars may provide an “over
the top” elaborated story, but cannot “keep the story straight” so the lie becomes less detailed
over successive retellings (e.g., Arntzen, 1983; Dudukovic, Marsh, & Tversky, 2004; Porter,
Peace, & Emmett, 2007). In other words, participants experienced “fabrication deflation”
relative to their truthful events (e.g., Polage, 2004). It also is possible that detail differences
may be obscured in studies where participants are given detailed information about a
supposed “witnessed” event that they are to lie about, relative to truthful witnesses who
experienced the event (Vrij, 2008). For example, Vrij, Mann, Kristen, and Fisher (2007)
failed to find that truths were more detailed than lies concerning a staged theft participants
were witness to. However, participants in the lie-group were given a detailed information
sheet describing the witness scenario. This provided them with the necessary information to
generate lies of sufficient detail so as not to be discernable from truths on this criterion (Vrij
et al., 2007). Studies that have employed a similar methodology also have failed to reveal
detail differences (Vrij, 2008). That said, other researchers have reported that while truthful
accounts may be longer (i.e., response length), they may not be more detailed (e.g., Mogil,
2005). To address these issues, some researchers have taken to analyzing statement detail as a
proportion of the total number of words available (e.g., Carpenter, 1990; Porter & Yuille,
1996; Sporer, 1997). This has generally been referred to as the type-token ratio (TTR), where
the number of distinctive words/details (types) is divided by the total number of words
(tokens) to obtain the ratio (Singleton, 2002). While liars may provide a lot of detail in their
statements (although still often less than truth-tellers), their statements are generally more stereotypical
(e.g., Osgood, 1960) and carefully phrased (e.g., Hollien, 1990; Vrij & Akehurst, 1998), such
that lexical diversity is lower. While this principle appears to hold true, some studies have
reported higher type-token ratios for lies relative to truths (e.g., Carpenter, 1990; Colwell,
Hiscock, & Memon, 2002). However, there are several explanations for these outcomes. In
the Colwell et al. (2002) study, participants were distorting testimony concerning a witnessed
staged theft so as not to implicate the thief, thus still basing much of their testimony on what
they had actually witnessed. Further, Carpenter (1990) argued that knowledge of the event
details can artificially inflate type-token ratios (see note above about event information; Vrij
et al., 2007). Finally, liars may provide fewer unique details (types) but also fewer words
overall (tokens), which then translates into a larger type-token ratio (e.g., Dulaney, 1982). For
example, a liar may provide 40 unique details in a 50-word statement (TTR 0.8), relative to
40 unique details in a 100-word truthful statement (TTR 0.4). Other research evaluating high-
stakes lies has found no difference between lies and truths in TTR (e.g., Porter & Yuille,
1996). As a result, Vrij (2008) stated that it may be more the quality of details, rather than the
quantity, that is indicative of truthfulness.
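The type-token arithmetic above is easy to make concrete. The short Python sketch below is an illustration of the ratio as described in the text, not code from any cited study; the whitespace tokenizer (and the use of placeholder words) is a simplifying assumption:

```python
def type_token_ratio(statement: str) -> float:
    """Type-token ratio (TTR): number of distinct words (types)
    divided by the total number of words (tokens). Uses a naive
    whitespace tokenizer for illustration only."""
    tokens = statement.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Reproducing the worked example from the text: a shorter deceptive
# statement can yield a HIGHER ratio than a longer truthful one with
# the same number of unique words, simply because it has fewer tokens.
short_lie = " ".join(f"w{i}" for i in range(40)) + " w0" * 10    # 40 types / 50 tokens
long_truth = " ".join(f"w{i}" for i in range(40)) + " w0" * 60   # 40 types / 100 tokens

print(round(type_token_ratio(short_lie), 2))   # 0.8
print(round(type_token_ratio(long_truth), 2))  # 0.4
```

Because the ratio shrinks as statements grow longer, TTR comparisons are most meaningful between statements of similar length, one plausible reason for the mixed findings reviewed above.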
Focus on the quality of details also includes information concerning the context of the
claim. Contextual embedding refers to details about the location, time, and surrounding
events that establish the overall ‘context’ of an experience (Vrij, 2008). Similar to the amount
of detail criterion, much research has demonstrated that truths contain a significantly greater
degree of contextual embedding relative to lies (e.g., Barnier et al., 2005; Colwell, 2007;
Gnisci, Caso, & Vrij, 2010; Larsson & Granhag, 2005; Masip et al., 2005; Memon et al.,
2010; Peace & Porter, 2011; Sporer, 1997). To the authors’ knowledge, only a handful of
studies have reported this criterion to be more prevalent in deceptive claims (e.g., Alonso-
Quecuty, 1992, 1996; Manzanero & Diges, 1996; Ruby & Brigham, 1998). However, the lack
of comparability between truths and lies, and the small number of statements assessed (6
truths, 6 lies; Ruby & Brigham, 1998), limits the reliability of such conclusions. That being
said, some scholars have failed to find significant differences in the amount of contextual (or
spatial/temporal) embedding available in statements (e.g., Akehurst, Köhnken, & Höfer,
2001; Gödert, Gamer, Rill, & Vossel, 2005; Parker & Brown, 2000; Santtila, Roppola, Runtti,
& Niemi, 2000; Strömwall, Bengtsson, Leander, & Granhag, 2004; Vrij & Heaven, 1999). It
appears as though the lack of differences in these studies resulted from use of low-stakes lies,
unrealistic simulation paradigms or videos, and restricted verbal abilities within the sample.
While studies have been criticized on their use of insignificant or low-stakes lies (e.g., Sporer,
1997; Vrij & Heaven, 1999; see discussion by Porter & ten Brinke, 2010), other research that
has evaluated the characteristics of victim and witness statements have reported more time
and place details (i.e., contextual information) during both immediate (i.e., initial report
generation) and delayed recalls (e.g., Erdmann, Volbert, & Böhm, 2004; Landry & Brigham,
1992; Peace & Porter, 2011; Porter, Peace, et al., 2007; Rassin & van der Sleen, 2005; Steller
et al., 1988; Yuille & Cutshall, 1989). Given that 62% of studies confirm the presence and
directionality of contextual embedding, it is considered a reliable predictor of truthfulness
(Vrij, 2008).
Another type of detail that appears reliable in the assessment of veracity is the
reproduction of conversation within a statement (DePaulo et al., 2003). This criterion is
argued to address naturally occurring aspects of an event, such as conversation or speech
(Steller & Köhnken, 1989). In particular, this criterion relates to how participants reproduce
dialogue and replicate utterances (Zaparniuk et al., 1995; Vrij, 2008). For example,
statements that include “I said ‘Who’s there?’, and I heard someone say ‘Drop your purse and
get on the ground right now’” would be coded as reproducing conversation. In contrast,
statements such as “He asked me how I was feeling” do not meet this criterion (Vrij, 2008).
Extant research has repeatedly found that statements containing reproduced conversation are
more likely to be truthful (e.g., Akehurst et al., 2004; Granhag, Strömwall, & Landström,
2006; Landry & Brigham, 1992; Parker & Brown, 2000; Tye, Amato, Honts, Kevitt, &
Peters, 1999; Vrij, Edward, Roberts, & Bull, 2000). Overall, 60% of studies confirm the
presence of this criterion (Vrij, 2008), making it another important verbal indicator of
veracity.
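As a rough illustration of how the direct-versus-indirect speech distinction above might be operationalized, the hypothetical sketch below flags statements containing a quote-delimited utterance. This is a naive proxy of our own devising, not part of any published coding scheme; actual criterion coding is performed by trained human raters:

```python
import re

def contains_reproduced_conversation(statement: str) -> bool:
    """Flag direct quoted speech (e.g., I said 'Stop') while ignoring
    indirect reports (e.g., He asked me how I was feeling). Crude
    proxy: look for any quote-delimited utterance. Contractions or
    stray quotation marks in real transcripts could fool this pattern."""
    return re.search(r"[‘'“\"][^’'”\"]+[’'”\"]", statement) is not None

print(contains_reproduced_conversation(
    "I said 'Who is there?', and I heard him say 'Drop your purse'"))  # True
print(contains_reproduced_conversation(
    "He asked me how I was feeling"))                                  # False
```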
Finally, the extent to which unusual details are provided within a statement also may be a
good predictor of veracity (DePaulo et al., 2003). While somewhat less reliable than the
previous detail cues (i.e., 44% support from previous studies), distinctive or unusual details
tend to exist more in truthful relative to deceptive reports (e.g., Erdmann et al., 2004; Landry
& Brigham, 1992; Ruby & Brigham, 1998; Santtila et al., 2000; Steller et al., 1988). Further,
Parker and Brown (2000) reported that genuine rape victims described sexual acts with much
more breadth and distinctiveness relative to false allegations. Although these studies have
confirmed this relation, further research is warranted as many studies tend not to code for this
criterion (see Vrij, 2008). Conversely, Sporer (1997) reported that deceptive stories contained
more unusual details relative to truths; however, this study suffered from a lack of ground
truth to the stories and concerned ‘personally important events’ relative to witnessed or
perpetrated events. Granhag et al. (2006) also found that children reported more unusual
details in an imagined story about encountering a man beside a parked car relative to children
who actually encountered the man. It is possible that the unusual detail criterion may be
more variable in children’s statements due to increased levels of imagination inflation and
false memory generation relative to adults (e.g., Garry & Polaschek, 2000; Lamb et al., 1997;
Mazzoni & Memon, 2003). It also may be the case that liars endorse the axiom “truth is
stranger than fiction”, thereby intentionally adding unusual details to their statements. Recent
reports on detecting lies have suggested that investigators should ask unexpected questions
(e.g., Leal, Vrij, Mann, & Fisher, 2010; Liu et al., 2010) and solicit drawings of events (e.g.,
Leins, Fisher, Vrij, Leal, & Mann, in press) in order to outsmart liars and perhaps diminish
the extent to which they intentionally include ‘truth’ criteria such as unusual details (e.g.,
Gnisci et al., 2010; Vrij, Granhag, Mann, & Leal, 2011). Thus, unusual details may be
indicative of truth, but this criterion should not be viewed conclusively. Interestingly, recent
research has demonstrated that perceived credibility is negatively associated with the presence
of bizarre or unusual details, such that truth may not be stranger than fiction when assessed by
mock judges (e.g., Peace, Brower, & Rocchio, 2011).
Evaluations of overall detail (and types of details) present in truths and lies reaffirm that
these cues are strongly associated with truthfulness, especially when detail quality (such as
contextual embedding, reproduction of conversation, and unusual details) is considered. This
is reminiscent of Justice De Filippis’ opinion in R. v. Norman (2005), where the statement
contained details of a type and quantity that would be unlikely in a deceptive claim (see
previous section of this chapter).

3.1.2. Unstructured Production, Coherence, and Spontaneous Corrections


Certain structural features of a statement also have been strongly associated with
truthfulness, such as unstructured production, coherence, and spontaneous corrections (Vrij,
2008). These occur when an individual does not report an event in a chronological order, such
that they may insert details they have forgotten in the middle of explaining the core aspect of
the events, or go back to the beginning to add further explanation before jumping ahead to
events that happened later (e.g., Steller & Köhnken, 1989; Vrij, 2008; Zaparniuk et al., 1995).
This unstructured production often goes hand in hand with making spontaneous corrections
(i.e., adding detail or correcting previously mentioned detail; Vrij, 2008). In turn, reports that
are less structured and contain more spontaneous corrections may be perceived as less
coherent (Peace, 2006). Due to the conceptual overlap between these criteria, we have
evaluated their influence collectively. Studies using content analysis techniques have, in
general, found unstructured production (i.e., less coherent, disorganized, unconstrained)
to be indicative of truthfulness and less prevalent in deceptive statements (Granhag & Vrij,
2005). Truthful narratives have been associated with less coherence and structure related to
variations in memory recall (i.e., spontaneously remembering more details as they come to
mind; e.g., Porter et al., 1999), relative to the overly structured and coherent quality of
deceptive accounts due to rehearsal (e.g., Colwell et al., 2002; Lees-Haley, 1989). For
example, Winkel, Vrij, Koppelaar, and Van der Steen (1991) reported that genuine rape
victims often provide unstructured accounts of their experiences (see also Parker & Brown,
2000). Additional studies that have assessed witnesses and victims also have confirmed this
finding (e.g., Colwell et al., 2009; Erdmann et al., 2004; Köhnken, Schimossek, Aschermann,
& Höfer, 1995; Lamb et al., 1997; Porter et al., 1999; Ruby & Brigham, 1998; Santtila et al.,
2000; Tye et al., 1999; Zaparniuk et al., 1995). However, some research with children
suggests that imagined (i.e., deceptive) events contained more unstructured production,
possibly due to their imaginative abilities and verbal limitations (e.g., Granhag et al., 2006).
Further, truthful statements have been associated with more coherence in several studies (e.g.,
Blandón-Gitlin et al., 2009; Porter & Yuille, 1996). Although promising, coherence as a
veracity cue may be limited by comparison events. In particular, truths may be more coherent
when compared to implanted false memories (e.g., Blandón-Gitlin et al., 2009; Erdmann et
al., 2004; Porter et al., 1999) and suspect confessions in an interrogation context (e.g., Porter
& Yuille, 1996) relative to claims of traumatic victimization. In relation to this, these criteria
may be less predictive of veracity the more frequently an individual has recalled or thought
about the event (Vrij, 2008). For example, Peace and Porter (2011) found that statement
coherence (i.e., structured reporting of event) was not predictive of veracity. They suggest
this result likely occurred due to carry-over effects from their within-subjects sample or
activation of a ‘trauma script’ (based on genuine traumatic experiences) enabling similar
methods of reporting traumatic events regardless of veracity (Peace & Porter, 2011; also see
Anderson, Cohen, & Taylor, 2000). Both victims and witnesses may be required to recall
events repeatedly for legal purposes (e.g., Fisher, 1995; Fisher et al., in press), but also may
think and talk about events frequently with friends and family (e.g., Barnier et al., 2005;
Porter & Peace, 2007). As a result, repeated recall may lead to increased coherence and
structure upon reporting (e.g., Alonso-Quecuty, 1992; Vrij, 2008).
According to the current body of literature, unstructured production and spontaneous
corrections are more often associated with truths (e.g., DePaulo et al., 2003), and a lack of
coherence is mixed in its research support (Vrij, Granhag, et al., 2010). However, frequency
of recollection is highly relevant to consider in regards to the features of both truths and lies,
and warrants further empirical attention. Repeatedly imagined events can contain compelling
“memory” details that make them appear more viable, such as distinct imagery and contextual
information (e.g., Johnson, Foley, Suengas, & Raye, 1988; Masip et al., 2005). For example,
when evaluating the influence of statement repetition, Hernandez-Fernaud and Alonso-
Quecuty (1997) found that, across three statements, contextual and sensory
information increased for both truthful and deceptive accounts but this effect was more
pronounced in truthful statements.

3.1.3. Plausibility
The extent to which a story “sounds plausible” is perhaps one of the most difficult criteria
to quantify, and yet studies have repeatedly found it to be a diagnostic cue to deception
(Granhag & Vrij, 2005). Stemming from research on narrative structures, plausibility appears
to be a summative cue that evaluates how content-based criteria unfold within a claim (e.g.,
Canter, Grieve, Nicol, & Benneworth, 2003). For example, Labov (1972) provided a detailed
structure of narrative stories that is particularly relevant to accounts of autobiographical
events. In general, he suggested that narratives are composed of six components: (1) abstract
– initial summary of story; (2) orientation – contextual details including time, place, people,
actions; (3) complicating action – the central or main part of the story and associated
behavioural reactions; (4) evaluation – assessment of the importance of event and attitude; (5)
outcome of the complicating action; and (6) coda – linking the narrative to the present time or
situation (Labov, 1972). Stein and Glenn (1979) elaborated on this model dividing the
components of narratives into setting and episodic facets. The setting facet involved the
following: (1) abstract – summary of preceding actions and tone setting, and (2) setting –
details about the physical, social, and time context of the event. With respect to the episode, it
was argued that the typical narrative contains descriptions of the: (3) initiating event – key
event of the narrative or disruption; (4) attempt – actions resulting from the initiating event;
(5) consequence – the results of the attempt; and (6) reaction – evaluation of feelings about
the event and what the likely consequences are (Stein & Glenn, 1979). So why is this
important? Plausibility judgments are frequently based on whether memories follow this type
of narrative structure or contain the components listed above (Canter et al., 2003). In turn,
assessments of plausibility directly map on to evaluation of the credibility of a claim. Simply
put, when events follow these suggested sequences, they are likely to be perceived as
genuine, and when they deviate they are more likely to be judged as dishonest. Using the
narrative structure model (Stein & Glenn, 1979), Canter et al. (2003) manipulated statements
made by criminal suspects to adhere to the narrative structure or not (e.g., sequence of events
disordered), and also whether a criminal anchor (e.g., once a thief, always a thief) was present
in reports. Their pilot study results indicated that claims following the “typical” narrative
structure were considered more coherent and plausible relative to unstructured statements.
Further, statements without a criminal anchor were deemed more believable. In a follow-up
study, the authors created a measure of plausibility comprising ratings of how coherent,
consistent, credible, and logical the report was. Using the same stimulus
materials as before, they found only a main effect of anchoring – the presence of a criminal
anchor reduced the perceived plausibility of reports. Interestingly, no effects were noted for
structured versus unstructured narratives and plausibility, although there was evidence of a
trend for unstructured accounts to be considered more plausible. This finding was accounted
for when the separate effects of statements of burglaries versus homicides were factored out,
such that the results were in the anticipated direction (Canter et al., 2003).
Of most importance to credibility assessment is the content of these sequences that draw
upon more formalized criteria used in statement analysis techniques (e.g., sensory and
contextual details, logical consistency, reproduction of conversation, temporal order, affective
information, to name a few). In particular, the content of a report speaks heavily to event
plausibility. Pezdek and Taylor (2000) furthered these ideas and provided a theoretical basis
for differences between truthful and fabricated claims. They proposed that claims differ in the
amount of episodic and schematic descriptions provided. In this case, fabricated stories would
be more associated with “story-like” information, where claims fit a general schema for “a
negative or traumatic event” (e.g., Migueles & García-Bajos, 2006; Tuckey & Brewer, 2003).
Although truthful claims also would consist of schematic descriptions of events (e.g., a
“script” memory for robbery where a gun is pulled and fired at the ceiling to get the attention
of everyone in the vicinity), these are argued to also contain more specific details related to
the incident itself (episodic details). Further, the presence of episodic details about events
increases perceptions of plausibility (Pezdek & Taylor, 2000). Two examples of similar
events (an outdoor attack, written by different participants) from a sample of truthful and
fabricated victimization claims collected by the first author of this chapter exemplify this
point:

Example 1: “I had been out at the bar drinkin with a bunch of my friends and decided to
walk home rather than cabbing it as I didn’t have the cash. It was pretty late, maybe 2:30
or so. I decided to cut through the cemetery off Spring Garden to save time getting home.
Plus it was kinda misty and cool out and the graves were all eerie. I guess I was in my
own little world as I fell down close to this big tree. I thought I tripped at first but next
thing I knew there was this guy standing over top of me. I didn’t hear him come up or
anything and was terrified. I kind of jolted up then and tried to scramble away, my shoe
got stuck on a tombstone on the ground, one of those flat grave marker things. That is
when I heard it, when I was trying not to fall on my face again, Joe’s voice (my ex)
telling me he wanted to talk to me, he’d been following me. I started laughing at that
point until he grabbed my shoulders and shook me …”

Example 2: “Late one Tuesday night at around 2:00am, I was walking through the woods
taking a short-cut home. It was a dark and dreary night, and the rain from earlier had
muffled the sounds all around me. The dampness seeped into my bones, I felt so cold and
isolated. Suddenly, off to my left a branch snapped in the woods, and footsteps came
crashing down upon me as my assailant jumped from the darkness and grabbed me …”
(Peace, 2006)

It is evident that both narratives contain trauma schema information related to being
attacked late at night. However, Example 1 (truthful) has more specific details about the
experience whereas Example 2 (fabricated) is more story-like and appears less plausible. This
idea is similar to content-analysis approaches, and factors into the current conceptualization
of plausibility and memory characteristics as assessed in recent research (e.g., Peace & Porter,
2011; Porter, Peace, et al., 2007; Vrij, Leal, et al., 2011). For example, reality monitoring
techniques (reviewed below) code for the “realism” of a statement (i.e., the extent to which a
statement makes sense and is plausible; Vrij, 2008). Studies using this approach have reported
strong associations between high realism and truthfulness (e.g., Höfer, Akehurst, & Metzger,
1996; Sporer, 1997; Sporer & Hamilton, 1996; Sporer & Küpper, 1995; see Sporer, 2004, for
more detail). Previous studies also have found plausibility to be related to perceptions of
credibility, and the accurate detection of deception (e.g., Hiltunen, 1996; Kraut, 1978;
Landström, Granhag, & Hartwig, 2005; Strömwall & Granhag, 2003a, 2003b). For example,
Porter, Peace, et al. (2007) found that deceptive claims of trauma were rated as much less
plausible relative to truthful accounts. Although difficult to define operationally (yet
reliably coded in many studies; e.g., Peace & Porter, 2011), plausibility ratings appear to
capture some quality of the narratives that other characteristics do not to the same extent. That
is, a plausibility rating allows for a determination of whether a narrative has “the ring
of truth”. Although tests of the narrative structure model yielded mixed findings, this
approach can inform the development of assessment tools measuring statement plausibility
and veracity. Based on narrative structural models, statement analysis research, and its
relevance to the legal system, plausibility warrants further attention in the assessment of
credibility.

3.2. Linguistic Features

Apart from assessment of content, scholars recently have turned their attention to the
linguistic properties of truthful and deceptive statements (e.g., Burgoon, Blair, Qin, &
Nunamaker, 2003; Newman, Pennebaker, Berry, & Richards, 2003; Zhou & Zhang, 2008).
These approaches (as discussed below) examine features such as word count; verb, adjective,
and pronoun use; and terms indicative of cognitive and affective processes, among others
(Pennebaker, Chung, Ireland, Gonzales, & Booth, 2007). Although individual variations exist,
certain trends in linguistic content have emerged that distinguish truths from lies.
Lack of self-referencing (i.e., use of first-person pronouns) within a statement is
considered one such “marker” of deception. It has been argued that liars feel a need to
distance themselves from their deceptive behaviour (e.g., Buller, Burgoon, Buslig, & Roiger,
1996; Knapp, Hart, & Dennis, 1974), which results in fewer references to I, me, my, within a
statement. That being said, other scholars have argued that people use fewer first-person
singular pronouns when they are discussing uncomfortable personal topics they would rather
avoid, lacking self-awareness, or put on the defensive (e.g., Feldman Barrett, Williams, &
Fong, 2002; Shapiro, 1989; Vorauer & Ross, 1999). Liars also are believed to utilize more
negative emotion words (such as hate, sad, angry), which may result from attempts to make
claims appear emotional as well as expunging their own guilt over lying (e.g., DePaulo et al.,
2003). In addition, the cognitive complexity of lying is believed to be evidenced in the use of
more exclusive words and fewer motion verbs in order to keep the story as simple and
believable as possible. In order to investigate the linguistic features of true and false claims,
Newman et al. (2003) analyzed opinion statements made by undergraduates concerning
controversial topics (i.e., abortion; Experiments 1-4) and statements of a mock theft
(Experiment 5). Their results indicated that liars used more negative emotion words (i.e., hate,
angry) and motion verbs (i.e., walk, move, go), and fewer first-person singular pronouns (i.e.,
I, my, me), third-person pronouns (i.e., she, their, them), and exclusive (i.e., but, except,
without) words. All of these features appear to be consistently demonstrated across studies,
with the exception of use of third-person pronouns (DePaulo et al., 2003; Vrij, 2008). In
another study, Bond and Lee (2005) found that prisoners’ truths were associated with more
other references (i.e., third-person pronouns) and that lies contained more motion words. In
their study on deceptive communication, Hancock, Curry, Goorha, and Woodworth (2008)
investigated the linguistic features of truths and lies in text-based communications between
partners. Partners conversed via computer-mediated communication and were given four
topics to discuss; one partner was instructed to lie about two of the topics.
Their results indicated that lies contained more words, sensory details, and third-person
pronouns, and fewer first-person pronouns relative to truths (Hancock et al., 2008).
Conversely, in a study of online dating profiles, Toma and Hancock (2010) found that word
count, self-references, and negative emotion words all were significantly lower in lies,
whereas negations (i.e., no, never, not) were more frequent. This latter finding is
consistent with research that has demonstrated that liars may “protest too much” (e.g., Porter,
Peace, et al., 2007). Most recently, Peace and Kuchinsky (2011) found that truthful narratives
of trauma contained more descriptive details (i.e., verbs, adverbs, swears), social processes
(i.e., relational terms such as husband, friend, talk), affective processes (i.e., negative emotion
words), and spoken categories (i.e., non-fluencies or hedges and fillers such as you know,
umm, so yeah), whereas fabricated narratives contained more perceptual details (i.e.,
visual/auditory details) and terms relating to relativity (i.e., terms concerning motion, space,
and time). In addition, liars reported more emotional/affective details over time, making them
more difficult to distinguish from truths (Peace & Kuchinsky, 2011). These results contradict
previous content-based distinctions that truths are associated with more sensory and
contextual detail (Vrij, 2008). However, most studies have not examined claims of
victimization that may be, by their nature, very different from false claims made by suspects
or witnesses to criminal events (Peace & Porter, 2011).
Another line of research has examined linguistic features such as quantity, complexity,
uncertainty, non-immediacy, expressivity, diversity, informality, specificity, and affect (see
Zhou, Twitchell, Burgoon, Qin, & Nunamaker, 2004, for detailed definitions). Using
automated deception detection techniques, researchers have found that deceivers use: fewer
individual references, words that are shorter in length, more modal verbs, higher levels of
non-immediacy, less imagery, but more content diversity and overall words (which includes
higher rates of sensory, spatial, and temporal detail use) (e.g., Qin, Burgoon, & Nunamaker,
2004; Zhou, Twitchell, et al., 2004; Zhou & Zhang, 2008). In fact, Zhou and Zhang (2008)
report some results that directly contradict linguistic and content-based features mentioned
above (i.e., deceptive statements are more detailed, cognitively complex, and associated with
greater expressivity), whereas other cues are consistent with past research (i.e., fewer
self-references, less spontaneous correction, more modal terms and emotion, and less
complexity). Utilizing a different linguistic technique (i.e., SCAN), proponents have reported
a series of linguistic cues (i.e., denial of allegations, ambiguous introductions, spontaneous
corrections, unbalanced statements, lack of emotions, temporal ordering, time and duration
words, lack of first-person pronouns, and changes in language) that are all considered
indicative of deception (e.g., Driscoll, 1994; Korris, 2006; Sapir, 1987, 2000; Smith, 2001).
Additional cues such as sentence-level complexity (i.e., ratio of total words to total sentences)
and informality (i.e., ratio of misspelled words to total words) also are identified as possible
cues to deception (Qin et al., 2004). Using an amalgamation of these linguistic analysis
techniques, Fuller, Biros, and Wilson (2009) analyzed statements provided by suspects in
official military investigations and found that word and verb quantity, as well as the sensory
ratio (i.e., perceptual terms in relation to total words), were most prevalent in deceptive
claims.
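Several of the cues just described are defined as simple ratios over token counts (e.g., complexity as total words to total sentences, or the sensory ratio as perceptual terms to total words). The following is a minimal illustrative sketch only; the mini-lexicons and the sample sentence are hypothetical stand-ins for the validated dictionaries (e.g., LIWC-style word lists) used in the studies cited:

```python
import re

# Hypothetical mini-lexicons for illustration; real research uses
# validated dictionaries containing hundreds of entries per category.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
SENSORY = {"saw", "heard", "felt", "cold", "dark", "loud"}

def linguistic_cues(statement: str) -> dict:
    """Compute a few ratio-based linguistic cues from a statement."""
    sentences = [s for s in re.split(r"[.!?]+", statement) if s.strip()]
    words = re.findall(r"[a-z']+", statement.lower())
    n = len(words) or 1
    return {
        "word_count": len(words),
        # sentence-level complexity: total words / total sentences
        "complexity": len(words) / max(len(sentences), 1),
        # self-reference rate: first-person singular pronouns / total words
        "self_reference_ratio": sum(w in FIRST_PERSON for w in words) / n,
        # sensory ratio: perceptual terms / total words
        "sensory_ratio": sum(w in SENSORY for w in words) / n,
    }

cues = linguistic_cues("I heard footsteps. It was dark and I felt cold.")
```

In practice, the discriminative value of such ratios depends entirely on the dictionaries and reference corpora employed, which is one source of the inconsistencies reported across studies.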
In general, liars tend to use more negative words, fewer self-references, and may produce
reports that contain similar or inflated levels of sensory and spatial detail (e.g., Hancock et al.,
2008; Newman et al., 2003; Vrij, 2008; Zhou & Zhang, 2008). While it does appear that
purely linguistic features of a report can inform veracity judgments, there are far too many
inconsistencies across empirical studies to endorse widespread use of linguistic cues to
determine veracity. Similarly, analysis of content-based cues via linguistics (i.e., words
related to contextual embedding, sensory details, cognitive processes) has yielded mixed
results. For example, Bond and Lee (2005) reported that truthful statements were associated
with more sensory and fewer spatial words relative to deceptive statements. In another study,
Vrij et al. (2007) used linguistic software to code characteristics (i.e., sensory, time, and
spatial details, as well as cognitive mechanisms) of truthful and deceptive claims concerning a
staged eyewitness or criminal event (i.e., witnessing or committing theft of money from a
wallet during an experimental session). Their analyses did not reveal significant differences
between truths and lies. Both of these research groups have concluded that “the jury is still
out” as to whether linguistic analysis can be used to code for established content-based cues
(Bond & Lee, 2005; Vrij et al., 2007). The authors of this chapter concur that much research
needs to be done before linguistic analysis of any sort should be applied to analysis of
victimization claims in criminal contexts.

3.3. Subjective Memory Assessment

Early research into credibility assessment techniques often was based on a participant’s
assessment of their own memory (Sporer, 1997). Primarily stemming from cognitive concepts
of reality and source monitoring (e.g., Johnson, Hashtroudi, & Lindsay, 1993; Johnson &
Raye, 1981), subjective ratings were argued to vary between an event that was experienced
relative to imagined (i.e., deceptive). That is, we should be able to distinguish the source of
our memories by examining the features of our recall. A study by Johnson et al. (1988) was
one of the first to examine this idea in relation to recall of autobiographical events (relative to
previous studies on recall of perceived or imagined words, pictures, or actions; Foley &
Johnson, 1985). Participants recalled perceived events such as a trip to the dentist, a social
occasion, or a library visit, as well as a fantasy or dream they had previously experienced.
Results indicated that perceived events differed from imagined events on a variety of
measures: they were associated with higher levels of sensory details (i.e., visual, sound,
smell), realism, spatial arrangement of environment, and time and place details (i.e., location,
setting, year, season, day, hour). In addition, real events included surrounding details of
memories preceding and following the event being described (e.g., contextual embedding).
Conversely, imagined events were rated as being more intense, consequential (i.e., had more
implications at the time), complex, thought about more often, and less realistic. In a second
study, Johnson et al. (1988) asked participants to indicate what factors influenced their
knowledge or belief that the event happened in reality or was imagined. Responses were
divided on the basis of characteristics about the memory itself (e.g., perceptual details),
supporting memories or information (e.g., pictures of a trip), and general knowledge-based
reasoning (e.g., it must be fantasy because I never went to Spain). As suspected, judgments of
genuine experiences were based on details of the memory itself and the surrounding context,
whereas imagined events were associated with cognitive reasoning about the likelihood of the
event.
Subjective differences between true and false events continue to be evidenced in recent
research using self-discrimination paradigms (e.g., do you think your memory is of a real
event, or one you just imagined?) (see Masip et al., 2005). However, some studies have suggested that the
distinctiveness or type of event may impede our judgment processes. Kelly, Carroll, and
Mazzoni (2002) reported that when making discriminations, people relied on metamemory
processes (e.g., “feelings of knowing”) rather than ratios of real to imagined events presented.
They also noted that distinctiveness interfered with judgments, such that a bias towards the
real is evidenced when distinctive imagined events are presented (Kelly et al., 2002). That is,
if an experience is associated with unusual features, participants would be more likely to
think “that must be real because it stands out in my mind”, even though it was a false event.
This is supported by literature reporting that the veracity of negative events is easier to
distinguish relative to positive experiences, possibly because they are more distinctive in
nature (e.g., Kemp, Burt, & Sheen, 2003). Further, other researchers have suggested that we
show a bias in rating characteristics of fabricated negative events. For example, Kealy,
Kuiper, and Klein (2006) had participants generate made-up and real autobiographical
memories from childhood or adolescence (< 16 yrs), and subsequently rate their own memory
characteristics. Participants’ self-ratings of their memories indicated that real memories were
more accurate, typical, and likely to be true than made-up events. These findings were noted,
in particular, with ratings for pleasant relative to unpleasant events (Kealy et al., 2006).
Recent studies involving subjective assessments of true and false trauma allegations have
revealed similar outcomes. For example, Porter, Peace, et al. (2007) found that when
participants reported true and false claims of traumatic victimization, they subjectively rated
their confidence in the “memory” and the credibility of the story as much lower when
assessing fabricated claims. That being said, participants’ reports of subjective emotionality
and cognition surrounding the events (i.e., turning off emotions, becoming emotional,
detachment, emotional intensity, thinking about the event, and negative feelings when
discussing the event) were greatly inflated in fabricated relative to truthful traumas (Porter,
Peace, et al., 2007). This suggests that while participants may have difficulty generating
plausible memory ratings based on a fake event, they are better able to feign emotion that
would be expected for such claims. Peace and Porter (2011) also found that participants rated
their own truthful traumas as more vivid, having a better overall memory quality, more
coherent, and more credible relative to fabricated narratives.
It is likely that these patterns result from participants’ personal knowledge of the veracity
of events, which are revealed in their subjective ratings, whether intentional or not. This
reflects metamemory processes (such as “feelings of knowing”) and the surrounding context
of recall (i.e., “I know I was never sexually assaulted”) when participants are required to
make assessments of their own memory features (e.g., Garry, Manning, Loftus, & Sherman,
1996; Johnson et al., 1988; Kealy et al., 2006; Kelly et al., 2002). In their review of deception
detection literature, DePaulo et al. (2003) also argued that liars have more difficulty
convincingly embracing their deceptive claims due to moral qualms, personal investment, and
lack of knowledge and experience in the circumstances being lied about. Studies based on
real and imagined events using reality monitoring techniques have frequently found that
participants subjectively rate their own truthful experiences as more realistic, memorable,
believable, and detailed (e.g., Arbuthnott, Geelen, & Kealy, 2002; Kealy et al., 2006; Sporer,
1997). In addition, past research has demonstrated that subjective ratings of negative events
yield differences as individuals are less likely to believe a distressing event they imagined or
fabricated happened to them, and rate them accordingly (e.g., Kemp et al., 2003). It also is
possible that the type of trauma reported (e.g., car accident, criminal victimization, death of a
loved one) may influence the features of narratives and self-rated memory properties. Despite
limitations of some studies, subjective memory assessments are an alternate framework for
conceptualizing differences between genuine and fabricated or imagined events (e.g., Johnson
& Raye, 1981; Suengas & Johnson, 1988). As stated by DePaulo et al. (2003): “Liars usually
make an effort to seem credible; truth tellers more often take their credibility for granted” (p.
78). This effort, evidenced in deflated self-ratings of their memories, may be an indicator of
deception and warrants further investigation in applied forensic contexts. It is possible that
alleged victims in high-stakes situations would respond differently to questions about true and
false experiences (i.e., how confident are you in your memory? how vivid is your memory?),
which may help investigators determine veracity.

3.4. Consistency

Not only are the objective and subjective features that differentiate narratives important,
but also the degree of consistency associated with narratives over time. Due to the legal
significance of testimonial consistency in perceptions of credibility (e.g., Bell & Loftus, 1989;
Brewer et al., 1999; Malloy & Lamb, 2010; Poole & White, 1995), narrative changes over
time are a critical, yet understudied, component of veracity determinations (Fisher et al., in
press). There is much wisdom in the old adage “The wheels of justice grind slowly.”
Typically, in the criminal justice system, there are lengthy delays between a complainant’s
initial allegation of a crime and his/her testimony in the court (e.g., Euale & Turtle, 1999;
Porter, Campbell, et al., 2003). Further, witnesses to a crime often provide multiple
statements about the event and are re-interviewed on several occasions prior to testimony
(Fisher et al., in press). How might a report of a false traumatic victimization change over
time relative to a report based on a true experience? How well can a complainant making a
false allegation maintain this story over months or years in an attempt to appear credible? In
making predictions about the consistency of true and fabricated reports over time,
consideration should be given to the fact that they are based on very different
phenomenological experiences. On one hand, a true report is based on a traumatic memory or
a re-experiencing of the sensory and emotional aspects of an event typically requiring no
conscious attempt at “getting the story straight.” On the other, false reports are based on
cognitive inventions, or lies, with which the liar must actively maintain a coherent narrative.
In the latter case, “getting the story straight” is imperative (Vrij, 2008). Only recently has
research on credibility assessment begun to address the effect that time delay and multiple
interviews or recall attempts have on memory for genuine and imagined or fabricated
experiences (e.g., Granhag & Strömwall, 2001b; Granhag & Strömwall, 2002; Granhag et al.,
2006; Peace & Porter, 2011). In general, consistency is used as a cue to justify decisions that
a person is being deceptive but also a rationale for why someone is judged to be truthful
(Granhag & Strömwall, 2001a, 2001b). While past research appears to have reached a
consensus that inconsistencies undermine perceived credibility, the paucity of empirical data
on the retention of truthful and deceptive claims of trauma has led to discrepant viewpoints on
how memory recall will vary over time (e.g., Erdmann et al., 2004; Granhag et al., 2006;
Masip et al., 2005).
One perspective is that fabricated or deceptive claims will be less consistent over time
relative to genuine experiences, especially in terms of the salient or central aspects of events
(e.g., persons, actions; Greuel, 1992; Undeutsch, 1989). Wagenaar, van Koppen, and
Crombag (1993) evaluated criteria of truthful and fabricated claims and argued that truth can
be inferred from consistency in consecutive statements. Deceptive statements are less likely
to contain relevant and detailed information reported in a consistent manner. In an interview
with the National magazine, Dr. Stephen Porter noted that “allowing time for memory to fade
makes it more difficult for people to keep their stories straight” (Dotto, 2004). In a series of
studies on memory for lying, Polage (2004) found that deceptive items were less believable
over a five week interval relative to truthful items – what she calls “fabrication deflation”.
Research by Mazzoni and Memon (2003) provides an explanation of why this may occur.
Repeated imagination and rehearsal may inflate the details of events that were never
experienced (imagination inflation), such that the details of deceptive experiences would not
remain consistent over time. For example, when evaluating the influence of statement repetition,
Hernández-Fernaud and Alonso-Quecuty (1997) found overall that across three statements,
contextual and sensory information increased for both truthful and deceptive accounts, but
this effect was more pronounced in truthful statements. Some studies suggest that the more
time you give someone to fine-tune a lie, the more “fantastic” it may become in successive
retellings (e.g., Dudukovic et al., 2004; Johnson et al., 1993). This is in line with research that
has reported fabricated claims have an exaggerated or “over the top” quality (e.g., Porter,
Peace, et al., 2007) – where elaborations and exaggerations tend to lead to inconsistencies
across reports. Recall of “fiction” is likely to show more room for error or fallibility over time
relative to events based in reality.
Alternately, other researchers argue that truthful and deceptive statements may be equally
consistent over time and change at similar rates across recall (e.g., Granhag & Strömwall,
2000a; Granhag, Strömwall & Jonsson, 2003). This rests on the idea that in order for
fabricated claims to be successful, the liar must have a good memory and engage in more
rehearsal of deceptive details. Essentially, this perspective reflects French philosopher
Montaigne’s argument that “He who is not sure of his memory should not undertake the trade
of lying” (Essays, Of Liars, 1946). Because of the cognitive load of such a task (e.g., Vrij,
2004), he/she may evidence more fluctuation or a greater drop in consistency between initial
recalls, but this will level off as the story is solidified over successive retellings. With
genuine experiences, the malleability of memory is associated with providing more, less, or
changed details over time (e.g., Loftus, 2003b; Turtle & Yuille, 1994). Thus, consistency for
truthful experiences also will decline over time. For example, Granhag and Strömwall (2002)
evaluated verbal and nonverbal cues to deception over successive interrogations. Participants
were exposed to a realistic mock-crime scenario and instructed to provide truthful or
deceptive reports of what they witnessed. They were interrogated about their memories on
three occasions over the span of 11 days. Overall, deceptive claims were associated with
significantly fewer words (although not details), and this decreased across interrogations.
Further, levels of consistency for truthful and fabricated claims were relatively equivalent for
information reported across interrogations (Granhag & Strömwall, 2002). Therefore, it is
possible that consistency may change at a similar rate for both event types. Lipian et al.
(2004) argued that although the words of a statement may change, the story may not.
Therefore, it is feasible that both truthful and fabricated claims may decrease in levels of
consistency while still “keeping the story straight”. Related to this is the possibility that
genuine and deceptive reports may become more similar over time due to a “synthesis of
experiences” (Loftus, 2003b), influencing the comparative consistency between reports
(Erdmann et al., 2004).
The third perspective is that deceptive statements are maintained more consistently than
reports based on genuine experience. Canadian lawyers interviewed in the National
commented that event recall may appear scripted when people are lying, especially if the
same narrative is provided on multiple occasions (Dotto, 2004). This is argued to result from
the fact that liars are focused on remembering what they have said in previous statements,
whereas truth-tellers base their reports on their memory for the event and may show more
variation (Granhag & Strömwall, 2000a). For example, Granhag and Strömwall (2002) stated
that rehearsal strategies (and the frequency of rehearsal) employed by liars may promote more
consistent responding which differs from increased variation that can result from repeated
recall of a genuine memory (see also Lees-Haley, 1989). Variations in consistency and recall
in genuine memory reports relative to deceptive claims also have been recognized by
proponents of certain content-analysis techniques (i.e., CBCA). In an analysis of real and
false allegations of rape, Parker and Brown (2000) found that genuine victims reported a
range of emotions and an unstructured narrative. In contrast, false allegations appeared
manufactured, statements were highly structured and scripted over time, and descriptions
lacked the “ring of truth” present in truthful allegations. It could be argued that deceptive
claims often are based on a schema of traumatic victimization and could appear more stable
over time due to the repeated activation of a “trauma-script” (e.g., Anderson et al., 2000).
In an attempt to address these perspectives, a recent study by Peace and Porter (2011)
evaluated how truthful and deceptive allegations of traumatic victimization were retained
over an extended six month interval. Participants recalled both true and false traumas over
three recall sessions, and answered direct questions about their experiences (i.e., clothing,
location, day of week, duration, emotions before and after, weapon, age, calendar date) at
each experimental session. Analysis of participants’ responses to the consistency questions
revealed that while both narrative types decreased in consistency over time, truthful claims of
trauma were maintained in memory more consistently across all time intervals relative to
false allegations of trauma. Studies have reported that fabricated narratives of trauma are less
consistent relative to truthful narratives (see Polage, 2004), but few (if any) have found
similar rates of change in narratives. Further, consistency scores were able to correctly
discriminate veracity at a rate of 70% overall, with classification values of 76% over extended
intervals (Peace & Porter, 2011). In fact, it appeared as if consistency becomes more
predictive of veracity over time (see also Fisher et al., in press). In a follow-up study, Peace,
Kasper, and Forrester (2011) examined detailed consistency of the statements made by
participants in the previous study, where each report was assessed for consistent details
reported, omissions (i.e., details left out of reports), commissions (i.e., details added in to
reports), and contradictions (i.e., information inconsistent with previous details). Preliminary
analysis of these features has indicated that truthful traumas contain more consistent details,
omissions, and commissions, and fewer inconsistent details overall. That being said, the
number of consistent details mentioned over time decreased more for genuine than false
claims of trauma. These findings suggest that truthful reports of victimization may contain
more variations in recall over time, relative to a more “scripted” false allegation (Peace,
Kasper, et al., 2011). In light of these discrepant viewpoints and the lack of conclusive
empirical research on the retention of deceptive statements, it is difficult to state the
directionality of this cue (Fisher et al., in press). That said, it is (unfortunately) one of the
most heavily relied upon indicators of inaccuracy and/or deception within the criminal justice
system (Brewer et al., 1999; Malloy & Lamb, 2010).
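The detail-level coding scheme used by Peace, Kasper, et al. (2011) (consistent details, omissions, commissions) can be approximated with simple set operations once a human coder has reduced each statement to discrete detail labels. A minimal sketch follows; the detail labels are hypothetical, and contradictions are omitted because they require semantic judgment rather than set comparison:

```python
def compare_statements(first: set, second: set) -> dict:
    """Classify coded details across two reports of the same event:
    consistent = reported in both statements; omissions = details left
    out of the second report; commissions = details newly added to it."""
    return {
        "consistent": sorted(first & second),
        "omissions": sorted(first - second),
        "commissions": sorted(second - first),
    }

# Hypothetical coded details from two recall sessions of one event.
time1 = {"night", "cemetery", "tripped", "ex-boyfriend", "shoe stuck"}
time2 = {"night", "cemetery", "ex-boyfriend", "grabbed shoulders"}
result = compare_statements(time1, time2)
```

Counts of each category across successive sessions would then feed the kind of consistency scores that discriminated veracity at roughly 70% in the research described above.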

4. TECHNIQUES FOR DISCRIMINATING VERACITY


“It is singular how soon we lose the impression of what ceases to be constantly before us.
A year impairs, a lustre obliterates. There is little distinct left without an effort of
memory, then indeed the lights are rekindled for a moment - but who can be sure that the
imagination is not the torch-bearer?” (Lord Byron, 1821-22/1979)

While the assessment of deception has been investigated via physiological, nonverbal,
and verbal measures (Ford, 2006; Vrij, Granhag, et al., 2010), our focus is on specific
techniques that have assessed the content or information provided in true and false statements.
There have been several procedures developed that claim to accomplish such a task, with
varying empirical support and discrimination accuracy (Porter & ten Brinke, 2010). All
approaches are conceptually similar in that they aim to discriminate truthful and
fabricated/deceptive/imagined accounts based on their features (e.g., Schooler, Gerhard, &
Loftus, 1986). A review of the six most common techniques is provided below.

4.1. Criteria-Based Content Analysis

A popular technique for determining the veracity of an allegation is Criteria-Based Content Analysis (CBCA; Vrij, 2008). CBCA is a component of the Statement Validity
Assessment (SVA), a practical tool designed in Germany to evaluate allegations of sexual
abuse (Bartol & Bartol, 1999). Although originally created to assess children’s statements,
this technique also has been extended to adult populations (e.g., Köhnken et al., 1995). The
approach has received substantial empirical support for its use in adult populations of
witnesses and suspects of a crime (e.g., Akehurst et al., 2001; Landry & Brigham, 1992;
Porter & Yuille, 1995, 1996; Ruby & Brigham, 1998; Sporer, 1997; Vrij, Edward, & Bull,
2001b, 2001c; Vrij, Akehurst, Soukara, & Bull, 2002; Zaparniuk et al., 1995). Determining
the relative truth or deception of an allegation becomes essential during the course of an
investigation, even more so when an alleged perpetrator is being brought to trial. The CBCA
as a component of Statement Validity Assessment is immensely valuable as a tool to determine veracity. Although a well-understood process today, research on statements of
autobiographical events was rare until the early 1990s (see Vrij, 2005, for review).

4.1.1. History/Background
The development of this statement analysis technique stemmed from German testimonial
traditions and research (see Mittermaier, 1834). Binet’s (1900) research on suggestibility in
children and the implications of variations of memory reporting as a function of the questions
posed (e.g., leading questions) led German psychologist William Stern to conduct research on
the reliability of testimony (Stern, 1902, 1903, 1904; Yarmey, 1984). The experimental
paradigm applied by Stern involved free recall followed by questions about the details of a
witnessed scene (presented as a picture to the subject). Examining the accuracy of memory
for scenes, he reported that “perfectly correct remembrance is not the rule but the exception”
(Stern, 1902, p. 327). The implications of these experiments on eyewitness testimony were
evident, and research efforts became focused on forensic settings (Vrij & Akehurst, 1998). In
particular, attention was directed to cases of child sexual abuse due to the lack of evidence
often available and uncertainty regarding the reliability of children as eyewitnesses.
Undeutsch (1989) argued that witness statements are the most important evidence in
determining case outcomes. This shift in focus to the psychology of testimony and memory
early in the twentieth century came to be known as the Aussage period (Bartol & Bartol,
1999). Child sexual abuse remained the focus of much research on memory reliability and
testimony as children were believed to have difficulty distinguishing reality from imagination.
Between 1900 and 1930, German courts became increasingly skeptical about accepting
allegations as psychological experts were questioning the reliability of memory (Stern, 1910;
Undeutsch, 1967). Further, the courts remained resistant to psychological experts for many
years (e.g., Wigmore, 1909), but recognition that psychological research could inform legal
processes pertaining to credibility flourished in the mid-1950s. In 1954, the Supreme Court of
the Federal Republic of Germany ruled that psychological or psychiatric testimony on the
reliability of witnesses’ accounts would be permitted in court, especially for cases reliant
upon memory evidence (e.g., Bartol & Bartol, 1999; Porter, Campbell, et al., 2003). The
framework that led to the development of a formalized system of statement analysis was
primarily established by psychologist Udo Undeutsch. The foundation rested on what is
known as the Undeutsch Hypothesis (1967; see also Undeutsch, 1982, 1984, 1989). This
states that memories for experienced events will differ from those that have not been
experienced (e.g., imagined or fabricated reports) in both content and quality. Theoretical
support has been derived for this hypothesis (e.g., Köhnken, 1989, 1996, 1999, 2002; Steller,
1989), and it forms the basis of the CBCA content analysis approach (see below). The SVA
procedure consists of three parts: a structured interview, criteria-based content analysis
(CBCA), and a validity checklist (Vrij, 2005). First, during the interview stage, the
interviewer builds rapport with the witness and then a detailed memory report of the
experience in question (e.g., sexual abuse) is elicited using free recall and direct questioning
(Raskin & Esplin, 1991a, 1991b). Statements obtained during the interview(s) are then
compared for the presence of criteria on the CBCA argued to be indicative of “truth”. The
final aspect of the procedure is a validity checklist, which includes evaluation of the
appropriateness of language, interview characteristics (e.g., leading/suggestive), motives to
report, and an evaluation of consistency of statements (e.g., Steller & Köhnken, 1989;
Undeutsch, 1989; Yuille & Farr, 1987). Thus, SVA is the overall credibility assessment
procedure that includes rapport-building, the interview setting, and demeanour, whereas
CBCA is more specific to addressing the detailed content characteristics of a statement
(Sporer, 1997; Steller, 1989; Vrij, 2005). Inclusion of all three components allows
investigators to make credibility determinations as follows: incredible, probably incredible,
indeterminate, probably credible, and credible (Bradford, 1994). Scholars around the world
have devoted considerable efforts to the study of statement analysis using this framework (see
Vrij, 2005, for review).

4.1.2. Components of CBCA


As described, CBCA is a component of SVA that was developed to determine the
veracity of statements (or in this case, allegations). The CBCA has been used in various
empirical and practical settings to determine the credibility of statements (e.g., Blandón-Gitlin
et al., 2009; Lamb et al., 1997; Porter & Yuille, 1996; Sporer, 1997). Further, application of
CBCA techniques has been identified as the core part of the SVA procedure (e.g., Vrij et al.,
2002) and is heavily relied upon in credibility assessment decisions (e.g., Gumpert &
Lindblad, 2001). The underlying assumptions of the CBCA are that lying is cognitively
complex and liars are concerned with the impression that they make (Vrij & Mann, 2006). As
a result, the way in which people relate an experience will vary dependent on the veracity of
the account (Vrij, 2008). Predicated on years of historical and scholarly research (e.g.,
Arntzen, 1983; Leonhardt, 1930; Mittermaier, 1834; Trankell, 1972; Undeutsch, 1967),
nineteen criteria (also referred to as “reality criteria”) were identified in the late 1980s and
have been used in the systematic assessment of statements (Köhnken & Steller, 1988; Steller
& Köhnken, 1989). The CBCA criteria are listed in Table 1. Each criterion is argued to be
present to a greater extent in truth-telling situations relative to deceptive statements (e.g., Vrij,
2005; Vrij, Akehurst, Soukara, & Bull, 2004a; Zaparniuk et al., 1995). At a basic level, scores
on all criteria are summed, and high CBCA scores are considered indicative of a credible
report. However, it has been emphasized that while high scores suggest truthfulness, the
absence of these criteria should not invariably suggest deception (Bradford, 1994). Through
experience, researchers using CBCA techniques outside the context of child sexual abuse
claims have determined that some of the criteria are irrelevant (Vrij, 2008). For example,
Marxsen, Yuille, and Nisbet (1995) identified that all 19 CBCA criteria need not be applied
(nor are applicable) to all cases. Further, criteria 10 (accurately reported details
misunderstood), 17 (self-depreciation), and 18 (pardoning the perpetrator) appear to involve
elements that are common in child sexual abuse claims but may not be associated with
adolescent or adult reports of other types of traumatic victimization (e.g., Porter & Yuille,
1996; Zaparniuk et al., 1995). Overall, Marxsen et al. (1995) argued that the first five criteria
and two of the remaining 14 should be present in a credible statement. Thus, it has been
argued that CBCA assessments boil down to five core criteria (details, spontaneous
reproduction, amount of detail, contextual imbedding, and interactive descriptions) and the
rest do not add much more predictive validity in veracity determinations (Sporer, 1997).
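The basic scoring logic described above (sum per-criterion ratings; read higher totals as more consistent with truthfulness) can be sketched in a few lines of code. This is an illustrative sketch only: CBCA is applied by trained human raters, the 0-2 rating scale reflects a common research convention rather than a fixed standard, the criterion names shown are a subset chosen for illustration, and because no validated cut-off score exists the sketch deliberately reports only a total rather than a truth/lie verdict.

```python
# Illustrative sketch of CBCA-style tallying; not an operational tool.
# Trained raters score each criterion (commonly 0 = absent, 1 = present,
# 2 = strongly present); totals are summed, and higher totals are read as
# more consistent with truthfulness. Low totals should NOT be read as
# proof of deception (Bradford, 1994).

SAMPLE_CRITERIA = [  # subset of the 19 criteria, for illustration only
    "logical_structure",
    "unstructured_production",
    "quantity_of_details",
    "contextual_embedding",
    "descriptions_of_interactions",
]

def cbca_total(ratings: dict) -> int:
    """Sum the per-criterion ratings; unrated criteria count as 0 (absent)."""
    return sum(ratings.get(name, 0) for name in SAMPLE_CRITERIA)

ratings = {"logical_structure": 2, "quantity_of_details": 2,
           "contextual_embedding": 1}
print(cbca_total(ratings))  # prints 5
```

Note that the summation step is the only part of CBCA that is mechanical; assigning the per-criterion ratings is itself an expert judgment, which is one source of the inter-rater reliability concerns discussed below.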
Table 1. Techniques for Discriminating Veracity

CBCA: (1) coherence; (2) spontaneous reproduction; (3) sufficient detail; (4) contextual imbedding; (5) descriptions of interactions; (6) reproduction of conversation; (7) unexpected complications; (8) unusual details; (9) peripheral details; (10) accurately reported details misunderstood; (11) related external associations; (12) accounts of subjective mental state; (13) attribution of perpetrator's mental state; (14) spontaneous corrections; (15) admitting lack of memory; (16) raising doubts about one's own testimony; (17) self-depreciation; (18) pardoning the perpetrator; (19) details characteristic of the act.

RM (MCQ): (1) clarity/vividness; (2) sensory information; (3) spatial information; (4) time information; (5) reconstructability; (6) emotions and feelings; (7) realism/plausibility; (8) cognitive operations; (9) amount of detail; (10) familiarity; (11) observer/participant perspective; (12) consequentiality; (13) talked/thought about event; (14) peripheral details; (15) confidence in memory accuracy.

LIWC: (1) word count (LP); (2) words/sentence (LP); (3) dictionary words (LP); (4) words > 6 letters (LP); (5) total function words (LP); (6) pronoun usage (LP); (7) verb usage (LP); (8) swear words (LP); (9) social processes (PP); (10) affective processes (PP); (11) cognitive processes (PP); (12) perceptual processes (PP); (13) biological processes (PP); (14) relativity (PP); (15) personal concerns (e.g., work, home, religion, death); (16) spoken categories (e.g., assent, nonfluencies, fillers).

SCAN: (1) change in language; (2) placing of emotions; (3) improper pronoun use; (4) lack of conviction/memory for incident; (5) no denial of allegations; (6) out of sequence information; (7) social introduction; (8) spontaneous corrections; (9) structure of the statement; (10) tense change; (11) subjective and objective time; (12) unimportant information; (13) unnecessary connection/missing information.

ACID: (1) vividness; (2) spontaneity; (3) response length (FR); (4) external details (FR); (5) contextual details (FR); (6) internal details (FR); (7) response length (MS); (8) external details (MS); (9) contextual details (MS); (10) internal details (MS); (11) admission of error; (12) coherence; (13) type-token ratio (TTR); (14) affective details.

MAP: (1) amount of detail & number of words; (2) emotional components (self and other); (3) time and place details; (4) coherence; (5) relevancy; (6) plausibility; (7) memory perspective; (8) anxiety/stress; (9) talking about event; (10) thinking about event; (11) memory vividness; (12) memory quality; (13) memory coherence; (14) emotional intensity; (15) confidence in memory accuracy; (16) perceived credibility; (17) lack of memory; (18) memory changes over time.

Note: LP = linguistic processes; PP = psychological processes; TTR = type-token ratio.
As such, CBCA criteria have been condensed into five main categories of assessment: (1)
general characteristics (i.e., quantity of details, logical structure, unstructured production);
(2) specific contents (i.e., contextual embedding, descriptions of interactions, reproduction of
conversation, accurately reported details misunderstood, related external associations,
accounts of subjective mental state, attributions of perpetrator’s mental state); (3) content
peculiarities (i.e., unexpected complications during event, unusual details, superfluous
details); (4) motivation-related contents (i.e., spontaneous corrections, admitting lack of
memory, raising doubts about one’s own testimony, self-deprecation, pardoning the
perpetrator); and (5) offence-specific elements (i.e., details characteristic of the offence) (e.g., Sporer, 1997; Yuille & Farr, 1987; Vrij, 2008). Some of these (such as 4 and 5) are more
specific to the context of child sexual abuse cases where the criteria were originally applied
and used extensively. Others argue that all criteria are critically important in the assessment
of credibility and should be combined for a well-rounded analysis of statements (Undeutsch,
1989; Steller & Köhnken, 1989). In addition, Köhnken (1999) has argued that CBCA criteria
are theoretically related to cognitive and motivational factors. For example, Criteria 1-13 are
considered too difficult to successfully falsify due to the cognitive complexity of the task
(e.g., providing a lot of detail consistently over time). Criteria 14-18 are more likely to occur
in truths for motivational reasons in which deceivers engage in impression management and
appear to over-compensate based on how they believe truthful individuals act (e.g., not admitting
lack of memory, doubts, or correcting themselves). Each criterion is important in assessing
the veracity of a claim (Köhnken, 2002). However, a detailed evaluation of the discriminatory
power of each criterion is necessary to ultimately determine what should be relied upon in
CBCA (Vrij, 2008). Some researchers have argued that this approach is a highly useful tool
for distinguishing between truthful and fabricated claims (e.g., Honts, 1994), whereas others
have reported that more research is required before any conclusive claims can be made (e.g.,
Ruby & Brigham, 1997, 1998; Wells & Loftus, 1991).

4.1.3. Discrimination Rates


One of the first studies to examine the discriminative power of CBCA criteria utilized
transcripts of confirmed (n = 20) and unconfirmed (n = 20) cases of child sexual abuse and
found that all cases were correctly classified based on the SVA/CBCA procedure (Raskin &
Esplin, 1991a, 1991b). Although the 100% classification rate has not been replicated and this
study was criticized for methodological issues (e.g., only one rater, unconfirmed cases may
not be deceptive), it prompted a wealth of empirical research on the classification potential of
SVA in general and CBCA criteria in particular. While some research has reported levels of
classification below chance (e.g., Erdmann et al., 2004; Landry & Brigham, 1992), others
have found that CBCA demonstrates moderate success in determining veracity. For
example, Zaparniuk et al. (1995) had participants view a videotape of a robbery or a minor
theft (true condition) or hear a verbal description of the event (false condition). All
participants were asked to recall the event shortly afterward, and individuals in the false
condition were instructed to report their memory as if they had actually witnessed the event.
Statements were then assessed using CBCA criteria and rules on which to base decisions (i.e.,
the presence or absence of a certain set of defined criteria). Overall, use of CBCA was able to
successfully discriminate truthful from false statements in the majority of cases, with an
overall rate of 78% correct identification. That said, the statements utilized in this study could
be criticized on the basis of their true/false conditions: the false group merely provided a deceptive account of having seen the event, yet had still heard an auditory script of what happened (Zaparniuk et al., 1995). In their study on truthful and deceptive
statements made by suspects and witnesses in criminal events (i.e., thefts), Porter and Yuille
(1996) found truths could be reliably differentiated from deceptive reports with a
classification rate of 78.3%. However, it is important to note that the majority of CBCA
criteria did not significantly discriminate between reports (only amount of detail, coherence,
and admissions of lack of memory). Some researchers have reported that CBCA is more
effective for evaluating witness statements than suspect statements, with sometimes only
small differences in CBCA scores being noted for suspects (e.g., Porter & Yuille, 1995; Vrij
et al., 2002; Zaparniuk et al., 1995). However, this finding is likely due to differences in
interview style with witnesses (information gathering/open-ended) as compared to suspects
(interrogative/ specific-questions) (Vrij, 2008). One of the only studies to examine truthful
and fabricated claims of highly traumatic (and directly experienced) events was conducted by
Parker and Brown (2000). They used CBCA and validity checklists associated with the SVA
procedure to determine differential characteristics in genuine (56%) versus deceptive (37%)
allegations of rape. Overall, they found that the veracity of a rape allegation could be
correctly classified in 87.5% of cases, and that certain CBCA criteria (i.e., unstructured
production, quantity of detail, description of interactions, reproduction of conversation,
subjective mental state, and attribution of perpetrator’s mental state) were most predictive of
truthfulness. Further, CBCA techniques were more accurate in correctly discriminating true
from false rape allegations than judgments made by police officers (Parker & Brown, 2000).
Similarly, Vrij, Edward, et al. (2000) found that application of CBCA techniques to
judgments made concerning the veracity of videotapes (with a focus on coding verbal relative
to non-verbal behaviour) improved judgment accuracy to a rate of approximately 80% correct
identification.
Overall, accuracy rates for the correct classification of truthful and deceptive statements
have varied widely using CBCA. The lowest rate of discriminability using CBCA was 47%
(Landry & Brigham, 1992), while the highest rates have ranged between 93% (Köhnken et al., 1995) and
100% (Raskin & Esplin, 1991b). Qualitative reviews of the first 16 and 37 studies (Vrij,
2002a, 2005) using CBCA techniques have yielded several conclusions. In general, CBCA
criteria are met more frequently when individuals provide truthful statements as compared to
deceptive accounts of an event. There also are some variations within CBCA criteria as to
which are more effective predictors of truthfulness than others (e.g., amount of detail,
unstructured production, coherence). In their meta-analysis on cues to deception, DePaulo et
al. (2003) found significant effect sizes across all CBCA studies on criteria such as amount of
detail, plausibility, logical structure, repeated details, spontaneous corrections, and admitted
lack of memory. Other CBCA criteria failed to be meaningful predictors of veracity.

4.1.4. Criticisms
Several criticisms of SVA (and CBCA in particular) have been proffered by scholars
(e.g., Ruby & Brigham, 1997). Some have argued that SVA is more of an art than an
empirically validated technique (e.g., Davies, 1994). Variations arising from experiential,
motivational, and coaching factors (to be discussed later in this chapter) also have been
considered limitations of the approach (Vrij, 2005). Apart from these, one of the primary
criticisms of CBCA is the lack of a sound theoretical framework for the approach. Sporer
(1997) argued that although the Undeutsch Hypothesis predicted differences with respect to
genuine and deceptive claims (i.e., how they differ), there is no theoretical basis as to why
these reports should differ. There also are few specifications as to when differences in
statements would be observed, with the exception of the length of the statements (e.g., at least
one page; Merckelbach, 2004; Raskin & Steller, 1989). However, as noted by Sporer (1997),
certain criteria used to assess statements may be dependent on the type and complexity of
event (e.g., Blandón-Gitlin, Pezdek, Rogers, & Brodie, 2005). Expansion of the criteria and
application to a broad scope of credibility assessment domains (other than child sexual abuse
cases) requires a theoretical basis. This also is directly related to the content of the CBCA
criteria. Some have argued that this technique is biased in that it only seeks truth-verification
and has no criteria specific to lying (e.g., Lucas & McKenzie, 1999; Purdioux, 2010; Rassin,
1999). Even Undeutsch (1989) himself stated: “But a useful technique for the evaluation
of eyewitness testimony ought to work in both directions: it should be as useful for the tracing
of possible errors or falsehood as for the verification of a truthful and reliable account” (p.
103). Further, the absence of CBCA criteria should not be indicative of deception (e.g.,
Bradford, 1994). Due to unspecified cut-off scores for labelling an account as definitively
truthful or fabricated, and uncertainty regarding the relative weighting of some criteria (i.e.,
are some more indicative of truthfulness than others?), scholars have suggested that CBCA
should not be the only measure used in credibility assessments (e.g., Bradford, 1994; Davies,
1994; Pezdek et al., 2004). Deficient operational and behavioural definitions of the criteria, along with sometimes only moderate levels of inter-rater agreement (e.g., Anson, Golding, & Gully, 1993), are also considered problematic in the application of CBCA within the criminal justice system. Although it has utility, many criticisms have been levelled at statement analysis
based on CBCA with regard to its formulation and implementation. CBCA was derived from
practice in the courts and relies on bottom-up reasoning due to its lack of theoretical
underpinnings (see Masip et al., 2005; Sporer, 1997). Therefore, caution is advised in the
application of CBCA to forensic practice (e.g., Ruby & Brigham, 1997; Vrij, 2002a, 2008).

4.2. Reality Monitoring

The Reality Monitoring (RM) approach (Johnson & Raye, 1981) has been proffered as an
alternative content analysis approach for credibility assessment (e.g., Sporer, 1997). This
method also evaluates differences between truthful and false verbal responses. Not unlike the
Undeutsch Hypothesis, proponents of the RM approach specify that qualitative differences
exist between experienced versus imagined (e.g., fabricated, false) events. However, RM
provides a rationale as to why genuine versus deceptive experiences differ. Numerous studies
have used this approach in evaluating memories for autobiographical experiences (e.g.,
Alonso-Quecuty, 1992, 1996; Alonso-Quecuty, Hernández-Fernaud, & Campos, 1997; Höfer
et al., 1996; Manzanero & Diges, 1996; Roberts, Lamb, Zale, & Randall, 1998; Vrij,
Akehurst, Soukara, & Bull, 2004b; Vrij, Edward, et al., 2000; Vrij, Harden, Terry, Edward, &
Bull, 2001). Only recently has attention turned towards the application of RM to forensic
settings (e.g., Porter et al., 1999; Sporer, 1997; Vrij, 2005).

4.2.1. History/Background
Long before modern perspectives on reality monitoring, observations of differences in
“memories” for genuine and imaginary experiences were offered. Aristotle (c. 335 BC/1908
AD) believed that perceptions, or sensory stimulations, formed the basis of memory. He
argued that “the act of perception stamps in, as it were, a sort of impression of the percept (on
the soul), just as persons who make an impression with a seal” (p. 65). As a result, perceived
(i.e., genuine) experiences made an impression upon the mind according to the sensory
characteristics of the event. Further, Plato in his work The Republic discussed the idea of
reality and imagination being akin to a divided line in our perceptual world (360 BC/1998).
Based on these views, it could be argued that experiences of internal origin (i.e., imagined
events) would not bear the same physical mark of perception, and would fail to leave a
distinct “stamp” upon the soul of memory as they are but a ‘shadow’. In fact, eighteenth-century philosopher David Hume (1737/1886) was one of the first to consider and elaborate
the ideas of Aristotle and Plato in comparing the qualities of memory and imagination. While
he endorsed a “spatial” conception of memory, Hume acknowledged that the mind was
sufficiently powerful to generate images akin to memories for perceived events. He argued
that events experienced in reality left a durable mark upon memory distinct from products of
the imagination (Hume, 1737/1886, p.173). In turn, Hume argued that genuine memories
were “lively and strong”, and characterized by a “steadiness, order, and coherence” that did
not exist in the imagination (Hume, 1739/1964, p. 138). Hume believed that memories (as
reproductions of original events) were more coherent and logical than imagination (see Porter
et al., 1999 for a modern validation of this view), whereas imagined events were subject to a
high degree of creativity and re-arrangement (Copleston, 1964). Collectively, these early
views suggested that the qualities of memory and imagination may differ; memory reflects
events as originally perceived whereas imagination is a creative reconstruction of reality that
leaves only an impermanent stamp on the mind. Shaped by these philosophical foundations on the
functioning of memory and later development of cognitive psychology, the process by which
real (perceived) and imagined events are discriminated and potentially confused in memory
formed the basis for the reality monitoring framework (Johnson & Raye, 1981).

4.2.2. Components of RM
Reality monitoring refers to the strategies employed by individuals to discriminate
memories from external (perceived/experienced) or internal (imagined) sources, in other
words, reality from imagination (Johnson & Raye, 1981). RM is based upon the
characteristics of our memories for events, and subsequent reality judgments (Johnson, 1988;
Johnson et al., 1988). Accordingly, memories of genuinely experienced events should be rich
in perceptual details (e.g., colours, sounds), contextual details (e.g., time and place), and
affective information (e.g., feelings during event). Conversely, imagined events should
contain more information about the cognitive processes involved in creating the imagery
(e.g., sensory, perceptual, or reflective processes) (e.g., Arbuthnott et al., 2002; Johnson,
1983; Larsson & Granhag, 2005). Cognitive operations are argued to occur more frequently
in imagined events as a result of deliberate imagining, searching for pieces of memory,
intention, planning, and making conclusive statements (Johnson & Raye, 1998). As such, the
origin of a “memory” is reflected in its characteristics, with internally generated (imagined)
events differing from externally generated (perceived) events (Johnson & Raye, 1981). Thus,
a memory rich in spatial and visual details but lacking in reflective construction and cognitive
operations (i.e., records of organization, elaboration and retrieval such as “I think I remember
it being on a Tuesday”) would be considered to be externally generated and therefore real (see
also Johnson et al., 1993). Further, the inclusion of additional information in memory based
on knowledge/reasoning of past experiences may assist in determining that an event was
perceived rather than imagined (Johnson et al., 1988).
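The reality-monitoring judgment described above amounts to weighing the perceptual, contextual, and affective richness of an account against evidence of cognitive construction. A minimal sketch follows, under the assumption (ours, for illustration; the RM framework specifies no formal decision rule) that each attribute class has already been rated on a simple numeric scale:

```python
# Hedged illustration of the reality-monitoring heuristic: accounts rich in
# perceptual, contextual, and affective detail are attributed to an external
# (perceived) source, whereas accounts dominated by cognitive operations
# ("I think I remember it being on a Tuesday") are attributed to an internal
# (imagined) source. The simple difference rule below is an illustrative
# assumption, not a validated scoring procedure.

def rm_source(perceptual: int, contextual: int, affective: int,
              cognitive_ops: int) -> str:
    """Classify a memory report as externally or internally generated."""
    external_evidence = perceptual + contextual + affective
    if external_evidence > cognitive_ops:
        return "external (perceived)"
    return "internal (imagined)"

print(rm_source(perceptual=3, contextual=3, affective=2, cognitive_ops=1))
# prints "external (perceived)"
```

In practice, of course, such attribute ratings come from questionnaire instruments such as the MCQ described below, and the attribution is probabilistic rather than a hard threshold.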
In order to empirically examine their model, “reality” criteria were developed and
measured using the original 39-item self-report Memory Characteristics Questionnaire
(MCQ; Johnson et al., 1988). This questionnaire surveyed subjective properties of
autobiographical experiences assessing the perceptual, contextual, affective, and cognitive
characteristics of memory. Factor analysis of the MCQ has yielded eight subscales of memory
characteristics: clarity/vividness, sensory experiences, spatial information, time information,
emotions and feelings, reconstructability of the story, realism, and cognitive operations
(Sporer, 1997). A modified version of the MCQ, the Judgment of Memory Characteristics Questionnaire (JMCQ; Sporer & Küpper, 1995), was developed to objectively assess
the features of memory. Larsson and Granhag (2005) offer definitions of how some of the
RM criteria may be used: “visual (e.g., “I saw a frog”), audio (e.g., “I heard that he sang a
song”), sensory (e.g., “The ice cream was cold”), affective (e.g., “I was happy when we
played”), spatial (e.g., “The boy put his toy on the table”) and temporal information (e.g., “He
sang before he left”), as well as cognitive operations (e.g., “I remember thinking he was
pretty”).” (p. 50). Overall, the MCQ and subsequent modifications (e.g., Suengas & Johnson,
1988) were used to determine the properties of memory that are relied upon in making reality
monitoring decisions (see Table 1 for the listing of RM criteria). In addition, several studies
have extended the reality monitoring approach using imagined events to apply to studies of
false memories and deception (e.g., Porter et al., 1999; Sporer, 1997; Vrij, Evans, Akehurst,
& Mann, 2004). Vrij, Akehurst, et al. (2004a) argued that this framework could be applied to
truthful (experienced) and fabricated (imagined) events. The focus of RM in forensic contexts
often involves not the assessment of one's own memories, but the extension of evaluations to other
people’s memories of truthful and fabricated events (Barnier et al., 2005). This also has been
termed interpersonal source monitoring (Johnson, Bush, & Mitchell, 1998), and may be a
useful technique to address the reliability of a memory or witness statement in criminal and
civil contexts (e.g., Larsson & Granhag, 2005).
Extending the RM approach to deception detection, truthful statements are those resulting
from events that have been experienced whereas deceptive claims are internally generated
fabrications (e.g., Sporer & Sharman, 2006). Therefore, truths are based on memory for
events that were perceived whereas lies have no basis in reality and exist only in the
imagination (Alonso-Quecuty, 1995). As such, the principles that apply to perceived and
imagined memories should also be applicable to truthful and fabricated statements (e.g.,
variations in sensory, contextual, semantic, and cognitive processes). Some have further
differentiated imagined and deceptive accounts, a distinction that could influence outcomes
pertaining to memory characteristics (Barnier et al., 2005). Imagined events have been
associated with the creation of false memories (i.e., created but believed to be true) under
conditions such as “memory work” and suggestive questioning (e.g., Hyman & Pentland,
1996; Porter et al., 1999; Read & Lindsay, 2000). However, deceptive accounts are associated
with an awareness that a lie is being told, and are rarely accompanied by the belief that
deceptive events happened in reality (e.g., Garry et al., 1996). Raitt and Zeedyk (2003) noted
that, regardless of whether an account is genuine, false, or intentionally fabricated, one way to discredit victims is to
call into question their memory for events. However, due to the relatively recent application
of RM to forensic psychology, standardized criteria and methods of applying RM are still in
development (e.g., Masip et al., 2005; Memon et al., 2010; Vrij & Mann, 2004).

4.2.3. Discrimination Rates


As described, RM is still a relatively new approach to determining the credibility of
statements and as such, few studies have been conducted to establish its utility in this context.
However, a notable few have used RM and reported varying accuracy rates. Sporer and
Küpper (1995) had participants provide a report of an autobiographical event as well as a self-
generated event, with a one-week interval between the two accounts. This study was the first
to expand upon and group RM criteria into eight dimensions for assessment: sensory
experiences, clarity and vividness, spatial and location details, temporal or time details,
emotions/feelings, realism, reconstructability, and cognitive operations. Based on these
criteria, the reported rates of correct classification were 70% for deceptive accounts and 68%
for truthful statements. Ratings conducted by naïve coders relying on intuition rather than RM
criteria were not significantly different from chance. Vividness and contextual details were
found to be higher in truthful statements, but this effect was only evident after a time delay of
one week. Overall, truthful claims were associated with more temporal and realistic
information and less sensory information relative to deceptive statements (Sporer & Küpper,
1995). Similar findings were reported by Sporer (1997), where classification analysis found
discrimination rates of 75% accuracy for truthful events and 67.5% accuracy for deceptive
statements. Biland, Py, and Rimboud (1999) found no effects for sensory and contextual
information, contrary to RM predictions, but did find verbal hedges and self-references to be
associated with truthful (relative to deceptive) statements. Only
sound and temporal details were found to be greater in truthful claims across studies
conducted by Vrij, Edward, et al. (2001b, 2001c) and Granhag, Strömwall, and Olsson
(2001). In another study, Granhag and his colleagues analyzed statements made by children
who watched a magic show versus those who merely pretended they had watched one, over a
series of interviews. The authors found notably high rates of correct classification for
truthful (91%) and fabricated (81%) claims based on statements made in the initial interview
(overall 85%). Although the rates were still high (73% true, 85% false, 79% overall) for
discrimination at the time of the second interview, it is interesting to note that there is a trend
for truthful and fabricated claims to become less distinct over time (see Granhag &
Strömwall, 2009). Whether this operates as a function of deterioration in the memory itself, or
inflation in deceptive claims with repetition that serves to consolidate details (e.g., Suengas &
Johnson, 1988), or a combination of both is yet unclear. Sporer (2004), in a review of 10
studies utilizing RM, cites accuracy rates from 61–91% for truthful accounts and 61–81%
for deceptive accounts. More recently, Memon et al. (2010) examined “true” or “invented”
claims based on two variations of the RM framework utilized in other studies: Version 1
involving affective, external and contextual details (Colwell, Hiscock-Anisman, Memon,
Rachel, & Colwell, 2007); Version 2 involving spatial, visual, temporal and auditory details
(Vrij et al., 2007). In the “true” condition, participants witnessed a heated exchange between
the experimenter and a confederate (posing as a computer technician) over the technician
having dropped and ruined a laptop computer. In the invented condition, no computer
technician entered the room and these participants did not witness any argument. Following
this, participants were asked to recall what they had witnessed (true condition) or make up a
witness statement based on a script of what happened (invented condition). The statements
were assessed using both RM versions, and found to discriminate reports relatively equally:
truth classifications were 57% (Version 1) and 61% (Version 2), whereas lie classifications
were 75% and 78% (respectively) (Memon et al., 2010). Further research needs to be
conducted to determine a more standardized accuracy rate, but we can extrapolate from the
above findings that RM tends to be able to discriminate truthful from deceptive narratives in
about 75% of statements and appears especially advantageous for detecting lies (e.g., Gnisci
et al., 2010; Roberts & Lamb, 2010; Sporer, 1997).

4.2.4. Criticisms
Masip et al. (2005) noted that criteria such as sensory details (i.e., visual and auditory),
contextual and time information, and realism were good discriminators. Further, research
using the RM approach has noted perceptual and contextual differences between real and
imagined events (e.g., Porter et al., 1999; Vrij, Akehurst, et al., 2004b). However, the
proposed deceptive marker of increased cognitive operations has not been consistently
evidenced (e.g., Höfer et al., 1996; Sporer, 1997; Vrij, Kneller, & Mann, 2000). Vrij,
Akehurst, et al. (2004a) noted that cognitive operations may not be revealed when
participants are asked to lie about factual information or events in which they were not
personally involved. They found that when lying about personally relevant experiences,
deceptive statements did contain more indications of cognitive operations. Comparisons of
studies applying RM to deception detection also may yield contradictory results because of
differences in experimental paradigms, sample sizes, operational definitions of RM criteria,
scoring, and statistical analyses (Masip et al., 2005). In addition, different variables may
influence variations in RM criteria, such as: (1) mode of presentation – studies that have used
directly experienced events appeared to show more support for some RM predictions; (2)
preparation and delay – mixed findings have been yielded with varying delays (minutes/days)
with some reporting more contextual and sensory information in deceptive accounts after a
delay (e.g., Alonso-Quecuty, 1992) whereas others found more contextual information in
truthful reports (e.g., Sporer, 1997); and (3) repeated statements – some studies report no
change in characteristics with repetition (e.g., Granhag & Strömwall, 2000b, 2001b) whereas
others reported increases in sensory, contextual, and cognitive operations over time (more so
for deceptive accounts; Alonso-Quecuty & Hernandez-Fernaud, 1997). Findings using this
approach have not been consistent (Masip et al., 2005). The limitation of previous studies
arises, in part, from a failure to consider directly experienced and/or
personally salient events (with a few notable exceptions; e.g., Porter & Yuille, 1996; Sporer,
1997). Most studies have used mundane, neutral, or mildly negative events as the subject
matter of the statements. While it is true that we may fabricate frequently in everyday life
situations and interactions (e.g., DePaulo et al., 2003; Memon et al., 2003), fabrications
involving highly emotionally charged and traumatic events may contain distinctly different
qualities from truthful memories of such experiences. Mimicking emotion associated with
traumatic events may be more difficult since we presumably experience trauma far less
commonly than other events. The nature of memory for genuine claims of trauma is still hotly
contested (e.g., McNally, 2003), leaving an unclear picture of the retention of deceptive
traumas relative to genuine claims. Despite these contradictions, the discriminative power of
the RM approach is promising. RM techniques, in general, are useful in discriminating real
from imagined events in statements made by children and adults (e.g., Gnisci et al., 2010;
Larsson & Granhag, 2005; Memon et al., 2010; Vrij, 2002a, 2002b).

4.3. Scientific Content Analysis

The development of ‘forensic linguistics’ has prompted much research concerning the
application of linguistic techniques to criminal investigations (e.g., Coulthard, 1994;
McMenamin, 2003). The Scientific Content Analysis (SCAN) procedure is used to detect
deception within a statement via detailed analysis of the linguistic features of the narrative
(Sapir, 1987, 2000). This technique differs somewhat from those mentioned previously as it
has been primarily utilized in investigative settings where someone has been accused of
wrongdoing or is suspected of making a false claim and is interviewed/interrogated by a
trained SCAN interviewer (Korris, 2006).

4.3.1. History/Background
The SCAN system was introduced by Avinoam Sapir (1987, 2000), based on his previous
experience as an Israeli police officer, polygrapher, and code breaker in the army (Leo, 2008;
Elliot, 2011). According to the Laboratory for Scientific Interrogation (LSI;
http://www.lsiscan.com), which is Sapir’s American-based company that markets SCAN, this
system was developed by conducting extensive research on interpersonal and linguistic
communication. Purportedly, SCAN has been implemented worldwide in law enforcement
and private industry with high rates of success (see Shearer, 1999). For example, Sapir has
been cited as stating that SCAN is more than 95% accurate and can quickly distinguish truths
from lies (Leo, 2008). The premise is similar to other techniques: statements based on
invention or fantasies differ qualitatively from actual experiences (Smith, 2001). In addition,
SCAN considers the structure, content, and omission of information to be the most important
indicators of determining veracity (Lesce, 1990; Sapir, 1987, 2000). Due to SCAN’s focus on
statement structure and content, relative to subjective factors such as nonverbal behaviour,
Sapir (1987) argued that SCAN is likely to be scientific and objective. In fact, advocates
argue that SCAN can analyze the features of a statement and determine veracity independent
of corroborating evidence (Leo, 2008). The SCAN technique is currently used in investigative
settings in several nations including America, Canada, Australia, and Israel (Smith, 2001).

4.3.2. Components of SCAN


The SCAN system of deception detection begins with eliciting a written statement from a
witness or suspect (Leo, 2008; Sapir, 1987, 2000). Specifically, the individuals are
encouraged to provide a handwritten statement in their own words, without interruption or
influence from the investigator (Vrij, 2008). They are instructed to write about what they did
during a certain time period (i.e., please write down everything that you did from the time you
woke up this morning to the time you went to sleep), and to provide context to the event so
that the investigators can understand what happened (Adams, 1996; Korris, 2006; Sapir,
1987, 2000). This is accomplished via completion of the Victim Inquiry Effective Witness
(VIEW) questionnaire that asks examinees a series of open ended questions to solicit
responses in their own words (Sapir, 1995; Smith, 2001). The questions are designed to tap
into Sapir’s (1987) view that it is “impossible to lie twice”, with questions designed to elicit
the same information in different ways. The administration of the VIEW can be done on an
individual’s own time and does not need to be completed in the presence of an investigator
(Korris, 2006; Sapir, 1995). Following receipt of these responses, an investigator trained in
SCAN interpretation completes the SCAN coding system. If the investigator has reason to
believe that there are aspects of the statement that lack truth, they can further discuss these
elements with the examinee as a starting point in an interview (Vrij, 2008).
The SCAN coding system is reliant upon assessment of 13 different statement criteria
(see Table 1; Sapir, 1987). The first criterion relates to changes in vocabulary and
terminology that can reveal a change in the mind of the statement provider, with a focus on
descriptions of family, people, transportation, communication, and weapons. Take, for
example, a hypothetical scenario where a man is suspected of drunk driving and claims he has
been involved in a traffic accident with a second car (later admitted to be fictitious). When
questioned about details of the other vehicle, he refers to “the vehicle”, “the car”, and finally
“the dark coloured car” within the same statement (Smith, 2001). Further, changes in tense
(especially shifts into the passive voice) could be indicative of deception. According to Sapir (1987),
accurate statements are written in the first-person singular, past tense. The justification for both
these criteria is that consistent use of language in a statement is evidence of a truthful
statement, whereas testimonial changes indicate deception (Sapir, 1987). Third, the placement
of emotions within a statement can be indicative of veracity (Vrij, 2008). If emotions are
inappropriately placed in a statement (especially at the story climax or when a weapon is
mentioned), the SCAN system would consider this feature indicative of deceptive responding
(Smith, 2001). Fourth, improper use of pronouns (i.e., omission of first-person narrative ‘I’)
may indicate a lack of commitment to the story and weaken the credibility of the narrative
(Sapir, 1987). Pronouns (i.e., words that add connectivity to a statement such as ‘I’, ‘he’, ‘we’
and ‘their’) provide important cues signaling responsibility for a situation (Smith, 2001).
Ultimately, the SCAN technique states that a lack of pronouns suggests that the writer is
trying to absolve ownership of the experience (Vrij, 2008). Related to this, Sapir (1987) also
stated that if the writer is ambiguous about the subjects of the story, they may be trying to
distance themselves from the story. Vague general statements can be characterized as a failure
to introduce people into the story (i.e., using “we” when those involved are not clearly
identified, or changing from using names to “them”), and suggests a lack of social
involvement and introduction. Further, a lack of conviction or memory concerning the
incident (i.e., “I don’t remember”, “There was a talk show on I think”) may suggest that the
claimant is intentionally trying to be vague and does not have the necessary details to provide
a complete narrative (Smith, 2001). This criterion may be coupled with failing to deny
allegations; truthful individuals will directly deny involvement (i.e., “Plain and simple – I did
not do it. I don’t know who did it”; Smith, 2001), whereas those engaging in deception tend
not to do so (Sapir, 1987). The eighth criterion concerns information that is reported out of
sequence. Sapir (1987) argued that deviation from the logical progression of a story may be
indicative of deception. Further, he specified that ambivalent sentences (i.e., “excuse me”,
“should I continue?”) or phrases that seem to give a rationale for the description (i.e.,
“because”, “since”) should be interpreted as possible deception (Sapir, 1987). Spontaneous
corrections (i.e., changes or words crossed out in a statement) also are considered deceptive,
as Sapir (1987) believed that a truthful statement should contain no corrections. Tenth, the
SCAN system evaluates the structure of a statement for balance (Vrij, 2008). It is argued that
statements should adhere to the following composition: 20% introduction, 50% central event,
and 30% conclusion. Statements that deviate from this structure are more likely to involve
deceptive tactics (Sapir, 1987). The next criterion involves analysis of subjective and
objective time. Sapir (1987) referred to subjective time as the amount of text that is written to
explain a certain period of objective time, and that these should correspond if the statement is
to be believed. For example, if an examinee uses 10 lines to explain a 20 minute period but
only uses a few lines to explain 3 hours, the latter part of the statement is more likely to be
deceptive (Smith, 2001). Emphasis on irrelevant or unimportant information reflects what
part of a statement a suspect/witness is focusing on, and should be targeted by the SCAN
analyst in the interview for elaboration (Sapir, 1987). Finally, SCAN evaluates “signals of
linguistic sensitivity” in the form of unnecessary connections and missing information (i.e.,
things the writer may not have wanted the reader to know). For example, a man claimed he
had been assaulted by his wife with a wine bottle, and provided a story with many gaps and
exclusion of specific material. It was later revealed that, in actuality, he had assaulted his wife
and she had been acting in self-defense, and that avoiding specific details was necessary in
order to maintain his lie (Smith, 2001). It is important to note that unlike other techniques that
identify criteria present in truthful reports, the majority of SCAN criteria are argued to be
indicative of deception (Sapir, 1987, 2000). In conclusion, most of the SCAN criteria concern
analysis of textual changes, missing information, and the structure of a statement.
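The structural balance criterion lends itself to a simple computation. The sketch below is our own illustration rather than part of the SCAN protocol: the section boundaries, word counts, and the idea of summarizing deviation as a single number are all hypothetical choices for demonstrating how a statement might be scored against Sapir's proposed 20/50/30 composition.

```python
# Hypothetical sketch: scoring a statement against the 20/50/30 balance
# criterion attributed to SCAN. Section word counts are supplied by the
# analyst; this is not part of any official SCAN procedure.

EXPECTED = {"introduction": 0.20, "central_event": 0.50, "conclusion": 0.30}

def balance_deviation(word_counts):
    """Return the total absolute deviation from the 20/50/30 composition.

    word_counts: dict mapping section name -> number of words in that section.
    A deviation of 0.0 means the statement matches the expected structure
    exactly; larger values indicate greater departure from it.
    """
    total = sum(word_counts.values())
    if total == 0:
        raise ValueError("empty statement")
    return sum(
        abs(word_counts[section] / total - expected)
        for section, expected in EXPECTED.items()
    )

# Example: 100 intro words, 150 central-event words, 250 conclusion words.
# Proportions are 0.20 / 0.30 / 0.50, so the event section is 0.20 under
# target and the conclusion 0.20 over target.
dev = balance_deviation(
    {"introduction": 100, "central_event": 150, "conclusion": 250}
)
```

Note that even under SCAN's own logic the threshold at which a deviation becomes "deceptive" is unspecified, which is one concrete face of the standardization problem discussed in the criticisms below.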

4.3.3. Discrimination Rates


The SCAN technique has not been well-studied in the empirical literature, in part due to
Sapir’s (1987, 2000) objection to research validation (see also Korris, 2006). However, there
have been five studies to date (to our knowledge) that have evaluated SCAN’s ability to
discriminate narrative veracity. The first study was conducted by Driscoll (1994), who
utilized SCAN to analyze 30 handwritten statements made by criminal suspects prior to
taking a polygraph test. Participants were instructed to do their best to tell the truth and
provide a statement from the beginning to the end of the event. The SCAN analyst correctly
identified 73% (8/11) of truthful statements and 95% (18/19) of deceitful statements. In addition,
the criterion with the greatest discriminatory power was a failure to deny allegations; this
single criterion accurately classified 77% of truth-tellers and 88% of liars (Driscoll, 1994).
That being said, high rates of misclassification for truthful statements and the lack of ground
truth of the narratives limit any conclusions that can be drawn from these rates (Vrij, 2008). To
address the issue of ground truth, Porter and Yuille (1996) studied participants that either
committed a mock theft or not. Each participant gave an oral statement of the event, and these
were analyzed using three SCAN criteria: structure of the statement, unnecessary
information/missing information, and tense change. Their results indicated little difference
between truthful and deceptive statements, and that SCAN was ineffective at correctly
discriminating veracity (Porter & Yuille, 1996). While results may have been limited by use
of oral statements and restricted criteria, they suggest that SCAN may only be useful in
particular settings by highly trained analysts. The third SCAN study partially addressed this
issue by evaluating performance across three groups of SCAN-trained police officers, relative
to one group of seasoned detectives (relying on experience) and one group of new police
officers (relying on intuition), both with no SCAN experience (Smith, 2001). All groups
assessed 27 written statements and were asked to decide whether they were truthful,
deceptive, or inconclusive. The first four groups were able to correctly identify at least 80%
of the truthful statements, and 65% of the deceptive statements. The members of the fifth
group were only able to accurately classify 45% of the deceptive statements. While there
appeared to be some utility to the SCAN techniques, Smith (2001) reported that experienced
police officers could perform just as well as those using SCAN techniques. Fourth, Bachenko,
Fitzpatrick, and Schonwetter (2008) analyzed several SCAN criteria (i.e., lack of commitment
to the statement, preference for negative expressions, noun/verb inconsistencies) in a set of
statements, and claimed that false propositions are correctly identified over 75% of the time.
Finally, Nahari, Vrij, and Fisher (in press) utilized the SCAN procedure to analyze truths,
outright lies, and concealment lies. They reported that SCAN performed poorly at classifying
both outright lies (58.5%) and concealment lies (48.8%) when compared with truths in two
classification models (Nahari et al., in press).

4.3.4. Criticisms
Collectively, the empirical evidence does not fully support the use of SCAN as a “lie
detection” tool (e.g., Carroll, 2009; Nahari et al., in press; Vrij, 2008). While trained SCAN
analysts state that it is intended more as an “investigative tool”, they often are heralded as
human lie detectors themselves (Korris, 2006). Related to this, SCAN suffers from a lack of
standardization due to the subjectivity of those employing this technique, although all analysts
are required to have completed the training protocol (Vrij, 2008). That said, SCAN does provide investigators
with important informational resources early in an investigation (Smith, 2001). Unfortunately,
the SCAN technique is a set of individual criteria that does not appear to be utilized as a
cohesive system, such that investigators may place more emphasis on differing criteria (Vrij,
2008). Researchers also have argued that standardization problems arise from the lack of
theoretical underpinning and direct contradiction with linguistic theory (e.g., Gordon &
Fleisher, 2006; Heydon, 2011; Shuy, 1998). Further, some of the criteria in the SCAN
technique are in opposition to much empirical literature. For example, SCAN argues that
spontaneous corrections and admissions of lack of memory are indicative of deception,
whereas many studies have confirmed that these features are associated with truthfulness and
reflect recall based on memory rather than imagination (e.g., Blandón-Gitlin et al., 2009;
Porter, Peace, et al., 2007; Porter et al., 1999; Vrij, 2005, 2008; Vrij & Mann, 2006). In fact,
some scholars argue that SCAN is nothing more than a pseudoscientific technique based on
testimonials, post-hoc illuminations, and overstated claims (e.g., Leo, 2008; Shearer, 1999).
This is coupled with active discouragement of research on the SCAN technique, and the
paucity of experimental or field studies on the system (Vrij, 2008). As a result, the application
of SCAN is not recommended at this time and claims of its “success” are unwarranted
without empirical support (e.g., Granhag & Strömwall, 2009; Heydon, 2009; Leo, 2008;
Nahari et al., in press; Shearer, 1999).

4.4. Linguistic Inquiry and Word Count

In response to literature that suggests lies differ linguistically from truths (e.g., DePaulo,
Rosenthal, Rosenkrantz, & Green, 1982; Porter & Yuille, 1996), more formalized methods of
linguistic assessment have been developed. Many of the existing techniques (e.g., CBCA,
RM, SCAN) are time consuming, and formal training is an expensive endeavor (e.g., Sapir,
1987, 2000). As such, automated text-based assessment methods are gaining
popularity and have been applied to studies concerning social communication (e.g., Gonzales,
Hancock, & Pennebaker, 2010; Hancock, Beaver, et al., 2010) as well as deception (e.g.,
Hancock, 2009; Hancock et al., 2008; Hancock, Woodworth, & Goorha, 2010). Utilizing a
system that can analyze the linguistic features of speech under varying conditions is
advantageous and provides much information about human behaviour (Pennebaker et al.,
2007).

4.4.1. History/Background
Initially developed in 1993 to study the nuances of the language of disclosure (see
Pennebaker, 1997), the Linguistic Inquiry and Word Count (LIWC) is a text analysis software
program that analyzes the linguistic features of written or spoken statements. Since its release,
the LIWC has been revised twice to update both the software and the dictionary of target
words analyzed by the program (LIWC2001: Pennebaker, Francis, & Booth, 2001;
LIWC2007: Pennebaker et al., 2007). It has been widely utilized in a variety of studies
examining language use in relation to various outcome measures including: personality
characteristics (e.g., Bosson, Swann, & Pennebaker, 2000; Gill, Oberlander, & Austin, 2006),
physical and mental health (e.g., Campbell & Pennebaker, 2003; Chung & Pennebaker, 2007;
Pennebaker, Mayne, & Francis, 1997), social and romantic relations (e.g., Fitzsimmons &
Kay, 2004; Groom & Pennebaker, 2005), intelligence (e.g., Klein & Boals, 2001), disorders
(e.g., Lyons, Mehl, & Pennebaker, 2006; Rude, Gortner, & Pennebaker, 2004), and emotional
disclosure (e.g., Cohn, Mehl, & Pennebaker, 2004; Pasupathi, 2007; Pennebaker, 2003;
Pennebaker & Campbell, 2000). More recently, the LIWC has garnered attention as a possible
deception detection technique (e.g., Bond & Lee, 2005; Hancock et al., 2008; Peace &
Kuchinsky, 2011; Vrij et al., 2007), and recognition has been given to the implications of the
language of trauma for claims of victimization (e.g., Pennebaker & Stone, 2004).

4.4.2. Components of LIWC


The LIWC program is designed to quickly and efficiently analyze individual or multiple
statements by examining textual samples on a word-by-word basis (Pennebaker et al., 2007).
In the most recent version (LIWC2007), each word is compared against a database of more
than 4,500 target words and/or word stems, which are further divided into 80 linguistic
dimensions. These dimensions include: 4 general descriptor categories (e.g., total word
count, words per sentence), 22 standard linguistic dimensions (e.g., pronouns, articles,
auxiliary verbs), 32 psychological construct categories (e.g., affect, cognition, perception,
biological processes), 7 personal concern categories (e.g., leisure, home, work), 3
paralinguistic dimensions (e.g., fillers, nonfluencies), and 12 punctuation types (e.g.,
commas, periods, question marks) (Pennebaker et al., 2007). See Table 1 for a listing of the
major categories within the LIWC. According to this procedure, dictionary words may be
categorized in one or more of the subcategories. For example, Pennebaker et al. (2007) state
that “the word cried is part of five word categories: sadness, negative emotion, overall affect,
verb, and past tense verb” (p. 4). As a result, the scale scores for each of these subcategories
would be increased if this word were present. This method of coding within the LIWC is
based on the premise that structural, cognitive, and emotional components in an individual’s
speech vary depending on personal or situational factors (Pennebaker et al., 2007). In
particular, this chapter is concerned with the application of LIWC to deceptive contexts.
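The word-by-word matching procedure described above can be sketched in a few lines. The dictionary below is a toy stand-in for the proprietary LIWC2007 lexicon (the real program compares each word against more than 4,500 entries); following the program's conventions, entries ending in an asterisk are word stems, and a single word may increment several category counts at once, as with "cried" in the example from Pennebaker et al. (2007). Category names and the stand-in entries are our assumptions for illustration only.

```python
# Minimal sketch of LIWC-style word-by-word coding with a toy dictionary.
# Real LIWC output is expressed as the percentage of total words falling
# into each category, which is what liwc_scores returns here.
import re
from collections import Counter

TOY_DICTIONARY = {
    "cried": {"sadness", "negemo", "affect", "verb", "past"},
    "happ*": {"posemo", "affect"},   # stem entry: matches happy, happiness, ...
    "i": {"pronoun", "self"},
}

def liwc_scores(text):
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for entry, categories in TOY_DICTIONARY.items():
            if entry.endswith("*"):
                matched = word.startswith(entry[:-1])
            else:
                matched = word == entry
            if matched:
                # one word can raise several category counts at once
                counts.update(categories)
    return {cat: 100.0 * n / len(words) for cat, n in counts.items()}

# Six words; "i" (twice) hits pronoun/self, "cried" hits five categories,
# and "happy" matches the "happ*" stem, so affect is credited twice.
scores = liwc_scores("I cried, then I felt happy.")
```

Note how the purely lexical matching ignores context: the same mechanism that makes the program fast is what produces the phrase-level coding errors discussed in the criticisms section below.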

4.4.3. Discrimination Rates


One of the first studies to utilize the LIWC for deception detection involved classification
of true and false opinion statements in a series of experiments (Newman et al., 2003). In
Experiments 1-4, college students were asked to either lie or tell the truth about controversial
or opinion-laden topics, such as abortion or feelings about their friends. In Experiment 5,
undergraduate students engaged in a mock theft (or did not steal anything), were accused of
the crime, and were instructed to deny all allegations. All statements were analyzed using the
LIWC, and their results revealed that veracity could be successfully discriminated 67% of the
time (deceptive 68%; truthful 66%), which was significantly greater than the at-chance level
performance (52%) of human judges (Newman et al., 2003). Hancock et al. (2008) analyzed
truths and lies within 264 computer-mediated dyadic exchanges between conversational
partners. Although these authors do not report rates of discriminative ability of the LIWC in
their sample, they did find that liars provided more sensory and overall detail, used more
other-oriented pronouns, and fewer self-references (Hancock et al., 2008). In a recent
evaluation of truthful and deceptive dating profiles, Toma and Hancock (2010) found the
LIWC to classify deceptive profiles more accurately (65.8%) than truthful profiles (61.5%).
Bond and Lee (2005) extended these findings to a criminal population, using the LIWC to
analyze truthful and deceptive statements given by prisoners about a series of crime-related
video stimuli they viewed. Using signal detection analysis, they reported a hit rate of 71.1%
(i.e., lies correctly identified as lies) and a correct rejection rate of 64.5% (i.e., truths correctly
classified as not lies) (Bond & Lee, 2005). In another forensic-related study, Fuller et al.
(2009) analyzed field data involving statements from “persons of interest” (i.e., suspects) in
official military investigations. Corroborating evidence was obtained for all investigated
events used in their study, enabling the researchers to establish the ground truth of each
statement, resulting in a total of 366 statements (79 deceptive, 287 truthful) analyzed via the
LIWC. Their results indicated that veracity was correctly classified in approximately 70% of
statements, and that automated deception detection techniques may be a valuable tool for
investigators (Fuller et al., 2009). Recently, Peace and Kuchinsky (2011) evaluated the
linguistic features of true and false allegations of trauma recalled over time, and found that
the LIWC was able to correctly determine veracity for 63.3% of statements during initial
recall and 62.3% at the time of the second recall. In general, LIWC research has suggested
that the following linguistic cues are useful for determining statement veracity: (a)
word/detail quantity, (b) emotion words, (c) pronoun use, and (d) indicators of cognitive
operations and complexity (Hancock et al., 2008). With classification rates of approximately
65-70% (e.g., Adelson, 2004), the LIWC could be a valuable and efficient tool for detecting
deception in a variety of contexts (e.g., Dzindolet & Pierce, 2005; Toma & Hancock, in
press).
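The signal detection summary reported by Bond and Lee (2005) reduces to two proportions: treating "lie" as the signal, the hit rate is the share of lies classified as lies, and the correct rejection rate is the share of truths classified as truths. The sketch below illustrates that computation; the function name and the example labels are made up for demonstration and are not data from the original study.

```python
# Illustrative computation of hit rate and correct rejection rate for a
# lie-detection classifier, with "lie" treated as the signal category.

def detection_rates(actual, predicted):
    """actual/predicted: parallel lists of 'lie' or 'truth' labels."""
    lies = [p for a, p in zip(actual, predicted) if a == "lie"]
    truths = [p for a, p in zip(actual, predicted) if a == "truth"]
    hit_rate = lies.count("lie") / len(lies)            # lies called lies
    correct_rejection = truths.count("truth") / len(truths)  # truths called truths
    return hit_rate, correct_rejection

# Hypothetical classifier output over eight ground-truth statements:
actual    = ["lie", "lie", "lie", "lie", "truth", "truth", "truth", "truth"]
predicted = ["lie", "lie", "lie", "truth", "truth", "truth", "truth", "lie"]
hit, cr = detection_rates(actual, predicted)
```

Reporting both rates separately, as Bond and Lee (2005) did, matters because an overall accuracy figure can mask a technique that is much better at catching lies than at clearing truth-tellers (or vice versa).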

4.4.4. Criticisms
Despite the reported success of these studies, LIWC requires more empirical research for
validation in credibility assessment contexts. In many studies, the sensitivity of the LIWC
may be limited by participants’ lack of motivation and/or emotional involvement in their lie.
For example, while Bond and Lee (2005) used a sample of prisoners, the deception scenario
was associated with little consequence for unsuccessful lying. Further, the salience and
personal relevance of the lie was minimal; prisoners were providing truthful and deceptive
statements concerning videos they had watched (Bond & Lee, 2005). That said, the Fuller et
al. (2009) study produced comparable results and is generalizable to high-stakes deception
contexts (e.g., Vrij, 2008). Application of LIWC to true and false allegations is limited (e.g.,
Peace & Kuchinsky, 2011), and warrants further investigation. The other major limitation of
linguistic-based approaches is the lack of context given to words (Bond & Lee, 2005). A
linguistic analysis may reveal patterns of word use, but fail to interpret the context or
significance of those words. In particular, words that only “make sense” at the phrase- or
statement-level may be inaccurately coded, as the LIWC only considers the presence or absence
of individual textual elements (Bond & Lee, 2005). For example, phrases such as “down in
the dumps”, “she was in a bad mood”, “he is under a lot of pressure” would all be coded as
containing relativity-space details that reflect a literal interpretation of being down, in, or
under something. The added difficulty is that such colloquialisms are heavily used in our
speech (e.g., Pennebaker et al., 2007; Sornig, 1981). Alternate approaches to examining
verbal cues to deception (such as CBCA, RM, SCAN, and the MAP) do evaluate the
contextual features of statement content. In fact, content-based memory coding may
discriminate between truths and lies differently than linguistic-based approaches (see Peace &
Kuchinsky, 2011).

4.5. Memory Assessment Procedure

Despite the wealth of research on statement analysis techniques, few well-rounded and
thorough approaches to assessing memory characteristics have been derived. The Memory
Assessment Procedure (MAP) was developed by Porter et al. (1999) and modified in later
research when applied to a credibility context (e.g., Peace & Porter, 2011; Porter, Peace, et
al., 2007), with consideration of the need to balance reliability and utility of the measure
(Woolnough & MacLeod, 2001).

4.5.1. History/Background
The MAP was devised by Porter et al. (1999) based on a combination of reality
monitoring and statement analysis criteria. Using elements of the Memory Characteristics
Questionnaire (MCQ; Johnson et al., 1988), as well as from Criteria-Based Content Analysis
(CBCA; Steller & Köhnken, 1989), the MAP provides a global assessment of memory
characteristics. This includes both phenomenological characteristics (subjectively rated by
participants) and qualitative characteristics (objectively assessed by trained coders) (see Table
1). The original procedure involved 12 factors rated subjectively (5) or objectively (7),
including: vividness, confidence, sensory components, amount of detail, repeated details, and
coherence (see Porter, 1998, and Porter et al., 1999, for a detailed discussion of memory
criteria). Former approaches tend to rely on either objective or subjective assessment features
of narratives, whereas the MAP provides a combined approach to studying narrative
differences. In the original research, participants were asked to recall several negative
childhood experiences in a series of interviews involving suggestion and guided imagery
techniques (Porter et al., 1999). Participants reported genuine and fabricated childhood
events, and were repeatedly interviewed and asked to recall a suggested event that had not
been experienced (as confirmed by the participants’ parents). Utilizing the MAP, Porter et al.
(1999) reported differences in the characteristics of memories for real, false
(created/implanted), and fabricated experiences. Specifically, reports of genuine experiences
were rated as more detailed, moderately coherent, vivid, clear, and were associated with
participants admitting gaps in their memories. False memories were rated as coherent, but
were less vivid and held with less confidence. Fabricated reports were associated with an
exaggerated ‘over-the-top’ quality: they were associated with high stress ratings, many
repeated details, high levels of vividness and clarity, but very few reports of lapses in
‘memory’ (Porter et al., 1999). This method of assessing memory has been used in
subsequent research evaluating the characteristics of memories for traumatic (e.g., sexual and
non-sexual traumas, homicide and violent crimes) and positive emotional experiences (e.g.,
Peace & Porter, 2004; Peace, Porter, & ten Brinke, 2008; Porter & Birt, 2001; Porter & Peace,
2007; Woodworth, Porter, ten Brinke, Doucette, Peace, & Campbell, 2009). More recent
studies adapted this procedure to a credibility context and evaluated MAP characteristics as a
function of event veracity (e.g., Peace & Bouvier, 2008). In particular, we modified the MAP
to include an assessment of plausibility (objective) and perceived credibility (subjective)
items, based on narrative structural models, statement analysis research, and its relevance to
the legal system. Previous research has found plausibility to be related to perceptions of
credibility, and the accurate detection of deception (e.g., Hiltunen, 1996; Kraut, 1978;
Landström et al., 2005; Strömwall & Granhag, 2003a). Upon testing, Porter, Peace, et al.
(2007) reported that false allegations of trauma were associated with fewer contextual details
and were rated as less plausible relative to truthful traumas. Further, the coherence of reports
and relevance of details did not help to discriminate veracity as both scored similarly. In a
follow-up study, Peace and Porter (2011) solicited truthful and false allegations of trauma on
three occasions to evaluate both the characteristics and consistency of claims. We found that
truthful traumas were associated with more detail (including contextual and emotional
information) and were rated as more plausible than false allegations. In addition, although
consistency decreased for both narrative types over time, truthful traumas remained more
consistent over a 6 month interval relative to fabricated traumas (Peace & Porter, 2011).
Overall, these findings suggest that the memory characteristics of truthful, falsely recalled, and
deceptive events differ reliably, and that the MAP is sensitive to these variations.

4.5.2. Components of MAP


The MAP is not a psychological scale but rather a series of arbitrary criteria that provide
objective information about the content of autobiographical narratives. The criteria address
how a person reports a memory in narrative format (which can be objectively coded, such as
amount of detail and coherence, references to emotional state, etc.) and how a person
subjectively experiences a memory (items such as the memory’s vividness, sensory
components, and stress level) which are rated by participants. Raters using the MAP evaluate
the following objective features of narratives: (1) Number of details (i.e., the number of
distinctive pieces of information provided); (2) Emotional components (i.e., the number of
emotional words used in reference to self and others); (3) Time and place details (i.e., the
number of details provided indicating the time and location(s) of the experience); (4)
Coherence (i.e., rating from 1 [not at all] to 7 [extremely] of the logical flow of the beginning,
middle, and end of the narrative); (5) Relevance (i.e., rating from 1 [not at all] to 7
[extremely] of the extent to which information provided related to the event being described);
(6) Plausibility (i.e., rating from 1 [not at all] to 7 [extremely] of the extent to which the
report is realistic, logical, likely to occur in real life); and (7) Number of words. Previous
research established a high level of inter-rater reliability for all seven criteria (e.g., Peace &
Porter, 2004). This procedure also assesses multiple subjective features of narratives using the
Emotional Memory Survey (EMS; Porter et al., 1999). The EMS allows individuals to
provide subjective ratings of their experiences from 1 (not at all/poor/none) to 7
(extremely/very/many times) on the following criteria: (1) anxiety/stress; (2) talking about
event; (3) thinking about event; (4) vividness/clarity; (5) overall quality; (6) coherence; (7)
sensory details; (8) confidence in accuracy of memory; (9) perceived credibility; (10)
“turning off” emotions; (11) becoming emotional; (12) detached/distanced; and (13)
emotional intensity. These measures allow for a detailed examination of narrative features
(and self-reported emotionality) as a function of trauma veracity, and a test of whether reliable
indicators of credibility can be identified.
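
As a concrete illustration, the seven objective criteria above could be recorded in a structure like the following. The field names and validation are our own sketch, assuming 1-7 scales for the three rated items; this is not an official MAP coding form.

```python
from dataclasses import dataclass

# Sketch of a record for the seven objective MAP criteria; field names
# are illustrative shorthand, not an official coding scheme.
@dataclass
class MapObjectiveScores:
    num_details: int           # distinctive pieces of information
    emotional_components: int  # emotional words about self and others
    time_place_details: int    # details fixing time and location(s)
    coherence: int             # 1 (not at all) to 7 (extremely)
    relevance: int             # 1-7: relatedness of content to the event
    plausibility: int          # 1-7: realism/likelihood of the report
    num_words: int

    def __post_init__(self) -> None:
        # Enforce the 1-7 rating scale on the three rated criteria.
        for name in ("coherence", "relevance", "plausibility"):
            value = getattr(self, name)
            if not 1 <= value <= 7:
                raise ValueError(f"{name} must be rated 1-7, got {value}")

record = MapObjectiveScores(num_details=42, emotional_components=6,
                            time_place_details=9, coherence=5,
                            relevance=6, plausibility=4, num_words=310)
print(record.plausibility)  # 4
```

Separating counted criteria (details, words) from 1-7 rated criteria in this way mirrors the distinction the MAP draws between frequency-based and scaled objective judgments.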

4.5.3. Discrimination Rates


Only one study, to date, has evaluated the discriminative power of the MAP in a
credibility context. Dissertation research conducted by the first author of this chapter involved
an in-depth investigation of true and false allegations of trauma over time (3 recall periods),
and evaluated whether the MAP had utility in distinguishing report veracity (Peace, 2006).
Overall, both objective and subjective components of the MAP correctly discriminated
between truthful and fabricated narratives of trauma at a rate of 67% at Time 1, 72.4% at
Time 2, and 72.3% at Time 3. The MAP features classified truthful traumas (70.1%, 74.1%,
and 74.7%, respectively) at higher rates than fabricated traumas (63.8%, 70.6%, and 69.9%,
respectively). This finding is consistent with previous research on deception detection (e.g.,
Sporer, 1997; Vrij, Akehurst, et al., 2004b), and may result from a greater prevalence of
specific criteria (e.g., amount of detail) in truthful relative to fabricated reports (e.g., Lucas
& McKenzie, 1999). Taken as a whole, these findings are comparable to those demonstrated
in previous studies using assessments such as CBCA and RM that have evidenced an overall
rate of discrimination of approximately 72% (Granhag & Vrij, 2005). The MAP also
improved in discriminability between Recalls 1 and 2 (approximately 3 months apart),
whereas other techniques have been shown to decrease in sensitivity over time (e.g., Erdmann
et al., 2004). For example, Strömwall and Granhag (2005) found that discrimination ability of
the RM technique decreased over a week interval, with correct classification rates of 85% at
the initial interview and 79% at the second. Other studies, however, have reported increased
discriminative power of content-analysis techniques over successive recalls (e.g., Granhag et
al., 2006; Strömwall et al., 2004; Vrij, 2004); this likely results from a consolidation of details
with repetition (e.g., Suengas & Johnson, 1988; Vrij, 2004).
Assessment of the utility of the MAP in veracity determinations also included an
investigation of what features of narratives had the highest discriminative ability (i.e., which
were predictive of veracity; Peace, 2006). Ratings of plausibility (objective) and credibility
(subjective) were the strongest indicators of narrative type across all three time periods in this
study, supporting past research suggesting that liars tell less convincing stories (DePaulo
et al., 2003). Objective ratings of time and place details (contextual information) and number
of words also served as reliable predictors of narrative type over time (Peace, 2006), and
these features are more common in truthful relative to fabricated reports (Granhag & Vrij,
2005). Similarly, the amount of detail has been consistently evidenced as an indicator of
deception (Vrij, 2005). In the Peace (2006) study, the amount of detail was not a strong
predictor of veracity at Time 1; however, it became more useful as a cue to credibility over
time (Times 2 and 3). This likely resulted from increased investment and ease of providing
detail in fabricated narratives during the initial phase (generation), but increased difficulty in
maintaining this level of detail (from memory) at Times 2 and 3. Related to this, self-rated
coherence also emerged as a predictor of veracity at Times 2 and 3 (Peace, 2006). It is likely
that as participants had more difficulty remembering the details they previously provided in
their fabricated narratives, they felt their stories were less coherent and “put together” relative
to their truthful traumas. In other words, liars experienced “fabrication deflation” relative to
their truthful events (e.g., Polage, 2004). Overall, objective features assessed by the MAP
(e.g., plausibility, time and place details, amount of detail, and number of words) were strong
and reliable predictors of veracity.

4.5.4. Criticisms
One of the major criticisms concerning the Memory Assessment Procedure is that its
discriminative ability has been tested in only one study, albeit across three separate
applications over the recall intervals (Peace, 2006). The authors intend to utilize the MAP
further and compare discrimination rates across multiple content-analysis techniques within
one sample. A further limitation is that initial testing of this technique was conducted on a
sample that self-selected a wide-range of traumatic events, and that traumas lacked
substantiation (i.e., establishing ground truth) (Peace, 2006). Unfortunately, the majority of
experimental studies on credibility suffer from a lack of ground truth (Vrij, 2004), with the
exception of simulation or video studies which lose the personal salience associated with
memory for trauma (e.g., Landström, Granhag, & Hartwig, 2007). Future studies using the
MAP in populations where traumatic victimization can be supported (or false allegations
admitted) would enhance the application of this procedure.

4.6. Assessment Criteria Indicative of Deception

Similar to the Memory Assessment Procedure (MAP), the Assessment Criteria Indicative
of Deception (ACID) system evaluates credibility criteria drawn from research on CBCA,
RM, and interpersonal deception theory (Colwell, 2007). What differentiates this technique is
the application of these criteria in an investigative interview setting (Colwell et al., 2002).

4.6.1. History/Background
The ACID system is a recent addition to content-based deception detection techniques
and aims to provide a holistic assessment of witness or suspect statements (Mogil, 2005).
This technique relies on a similar theoretical (cognitive) framework as RM, where recall of
genuine experiences is based on perception, temporally ordered information (often
spontaneously inserted in a narrative), and contextual embedding (e.g., Johnson et al., 1988).
Conversely, false experiences are accessed through cognitive operations (i.e., use of our
imagination) and display evidence of internal thought processes in attempts to create a logical
and plausible story (Purdioux, 2010). ACID is predicated upon findings that deceptive
narratives are often less detailed and lack spontaneous corrections (e.g., Bond & DePaulo,
2008; DePaulo et al., 2003; DePaulo, Stone, & Lassiter, 1985) due to the cognitive demand
associated with providing a believable false claim (e.g., Vrij & Mann, 2004, 2006; Vrij et al.,
2008). This approach utilizes reality interview (RI) techniques, which combine cognitive
interviewing principles (i.e., multiple recalls) with increased cognitive demand and an
evaluation of impression management behaviours (Colwell et al., 2002; Colwell, 2007).
Within the interviewing context, research has confirmed that high levels of cognitive load and
self-monitoring of impression are related to impaired deceptive abilities (e.g., Colwell,
Hiscock-Anisman, Memon, Woods, & Michlik, 2006; Hines et al., 2010; Vrij, Fisher, Mann,
& Leal, 2006). Although this technique is not steeped in philosophical traditions of centuries
past, its empirical foundations appear solid.

4.6.2. Components of ACID


The overall purpose of the ACID technique appears to be an assessment of statement
content for vividness and spontaneity (Colwell, Hiscock-Anisman, Memon, et al., 2007).
Several studies have examined both vividness and spontaneity as major factors, composed of
various other specific criteria (i.e., internal, external, and contextual detail) (e.g., Hines et al.,
2010). In particular, vividness is assessed on the basis of affective reactions and the amount of
various types of detail (i.e., sensory and contextual detail; Ansarra et al., 2009; Suckle-Nelson
et al., 2010), whereas spontaneity refers to enhanced or added detail through the use of
mnemonics and multiple retrieval attempts (Hines et al., 2010). Overall, criteria include
external, internal and contextual details, as well as response length, that are assessed during
free recall and following mnemonic techniques (see Table 1). External details concern
information gained from the senses (i.e., visual and auditory details about who was there,
what happened, and where it happened), whereas internal details refer to cognitive
operations, thoughts, and subjective mood. Contextual details pertain to time and place
information, and describe relations between individuals involved in an event (i.e.,
temporal/spatial relationships) (Colwell, 2007; Colwell, Hiscock-Anisman, Memon, et al.,
2007). Some recent ACID studies also have incorporated evaluation of affective details (i.e.,
emotions of the individual; Memon et al., 2010), coherence (i.e., lacking in contradictions
within context) and type-token ratio (TTR; i.e., the ratio of unique words relative to overall
words in a statement) (Suckle-Nelson et al., 2010). Finally, admission of mistakes/errors has
been used as an ACID criterion in previous research and relates to revealing mistakes in
memory or accurate details (Colwell, 2007). Ultimately, the ACID procedure is designed to
better differentiate truthful and deceptive accounts by using techniques that facilitate recall in
honest subjects and make it more difficult (by increased cognitive load and need for
impression management) for dishonest subjects (Hines et al., 2010).
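
The type-token ratio mentioned above is straightforward to compute: the number of unique words (types) divided by the total number of words (tokens). A minimal sketch, assuming naive whitespace tokenization:

```python
def type_token_ratio(statement: str) -> float:
    """Unique words (types) divided by total words (tokens); naive
    lowercase whitespace tokenization for illustration."""
    tokens = statement.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# "the man took the bag": 4 unique words over 5 total = 0.8
print(type_token_ratio("the man took the bag"))  # 0.8
```

Higher ratios indicate more varied vocabulary; note that TTR is sensitive to statement length, so comparisons are most meaningful between statements of similar length.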

4.6.3. Discrimination Rates


Although not yet well-studied, the available research to date provides support for the
ACID approach (e.g., Colwell, Hiscock-Anisman, Leach, Corbett, & Urez, 2007; Colwell,
Hiscock-Anisman, Memon, et al., 2007; Memon et al., 2010). In one of the first studies to
formally test the ACID procedure, Colwell (2007) had participants engage in a mock theft or
return a stolen item, and then respond either honestly or dishonestly (such that they avoided
self-incrimination). Overall, he reported that of the nine ACID criteria used in the study
(i.e., Criteria 3-9 in Table 1), only external and contextual details during free recall did not
serve as significant predictors of veracity. His results yielded classification rates (including
error variance) of 78.9% in general, with discrimination of deceptive statements (89.5%)
being more accurate than honest statements (68.4%; Colwell, 2007). In a follow-up study,
Colwell and colleagues (2007) evaluated the ACID in relation to deception detection for
video relative to in-person (i.e., live actor) theft scenarios. They found that inclusion of both
vividness and spontaneity criteria produced the highest rates of classification, with
classification rates (including cross-validation error adjustments) of 62% for videos and 75%
for simulated in-person events (Colwell, Hiscock-Anisman, Memon, et al., 2007). Recently,
Hines et al. (2010) reported an overall discrimination rate of 80% using ACID criteria.
Suckle-Nelson et al. (2010) also have confirmed the accuracy of ACID, such that veracity
was correctly identified in approximately 80% of reports overall (75% truthful; 85.7%
deceptive), and that this value increased to approximately 89% when the gender of the
statement writer was accounted for. In addition, proponents of this technique have found that
training tends to aid individual laypersons in deception detection. Colwell et al. (2009)
utilized principles of the ACID system to train raters about features associated with truthful
(i.e., more detailed, vivid, spontaneous) and deceptive (i.e., less detailed, vivid, and more
rigid, carefully phrased, and consistent) responses. They found that individuals trained in the
ACID technique were able to correctly classify honest and deceptive reports at a rate of 77%,
relative to 57% discrimination for the untrained group. In general, studies that have utilized a
combination of ACID criteria have revealed classification accuracy rates ranging from
77% to 95% (e.g., Ansarra et al., 2009; Colwell, 2007; Colwell et al., 2002; Colwell,
Hiscock-Anisman, Memon, et al., 2007; Memon et al., 2010; Suckle-Nelson et al., 2010).
That being said, other research has failed to find differences between honest and dishonest
reports across the main ACID variables of coherence and amount of detail (Mogil, 2005).
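
When truthful and deceptive statements are equally represented in a sample, the overall classification rates reported in these studies are simply the mean of the two class-wise rates; for example, Colwell's (2007) 68.4% honest and 89.5% deceptive hit rates average to roughly the 78.9% overall figure. A trivial sketch of that arithmetic, assuming balanced groups:

```python
def overall_rate(honest_rate: float, deceptive_rate: float) -> float:
    """Overall classification rate assuming equally sized honest and
    deceptive groups: the unweighted mean of the class-wise rates."""
    return (honest_rate + deceptive_rate) / 2

# Colwell (2007): 68.4% honest, 89.5% deceptive -> roughly 78.9% overall
print(round(overall_rate(0.684, 0.895) * 100, 1))
```

If the groups were unbalanced, the class-wise rates would instead need to be weighted by group size, which is worth checking when comparing overall figures across studies.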

4.6.4. Criticisms
Like many of the preceding techniques, the ACID system shows promise, but the results
of studies thus far should be interpreted with caution. Specifically, ACID may have greater
discriminatory power assessing deception in suspect rather than witness statements. Previous
studies of this technique have primarily used scenarios where participants are asked to lie
about engaging in or having witnessed a mock crime (e.g., Colwell, Hiscock-Anisman,
Memon, et al., 2007; Suckle-Nelson et al., 2010). Generating a false claim of personal
victimization may produce different linguistic features (e.g., Peace & Kuchinsky, 2011) that
may not be detected by the ACID criteria. Further, while cognitive load and impression
management associated with false allegations is likely to be high, these claims are often
associated with an added level of emotional investment that may not be present in suspect
statements (e.g., Porter, Peace, et al., 2007; Porter & Yuille, 1995). As a result, the
discriminatory power of ACID in this context is unclear, and should be addressed by future
studies utilizing this technique and comparing its efficacy with more well-supported
procedures (i.e., CBCA/RM). In relation to this, while the studies mentioned above have
employed staged scenarios, future work also must consider the evaluation of witnesses or victims of
direct and personally salient events (e.g., Peace & Porter, 2011). Experiential differences also
may explain inconsistencies between ACID and other content-analysis techniques, such as a
lack of differentiation according to the amount of detail (e.g., Mogil, 2005) and that statement
consistency is greater in deceptive reports (relative to other research that has found truthful
allegations are more consistent; Peace & Porter, 2011).

4.7. Comparison of Approaches

Techniques such as ACID and MAP, that combine criteria from several approaches, may
be advantageous in their ability to tap into the various cues to both truth and deception. While
few studies have addressed comparison of these specific techniques, much research
comparing CBCA and RM approaches has been conducted (see Masip et al., 2005).

The difference between CBCA and RM approaches is that RM provides a theoretical
rationale as to why truthful and generated memories differ. We experience real events through
our sensory and perceptual processes, and these are argued to be reflected in memories of
such events (Johnson & Raye, 1981; Sporer, 1997, 2004). Conversely, fabricated events are
conjured from the imagination and are more likely to contain cognitive representations of the
event, such as thoughts or rationales (e.g., Johnson et al., 1993; Johnson & Raye, 1998;
Johnson & Suengas, 1989). A further difference between CBCA and RM is that RM offers
criteria for assessing not only truths but also falsehoods (Vrij, Akehurst, et al., 2004b). Many
studies in both forensic and non-forensic contexts have applied CBCA and RM criteria in
credibility assessment protocols in order to compare the relative effectiveness of approaches
(e.g., Vrij, 2005). Some research has garnered support for CBCA techniques demonstrating
heightened efficacy relative to RM. For example, Santtila, Alkiora, Ekholm, and Niemi
(1999) examined truthful and deceptive statements of negative events (i.e., animal attack,
injection) in children and used logistic regression analysis to determine rates of
discriminability of CBCA and RM criteria. Overall, CBCA was able to distinguish truthful
(69.1%) and deceptive (63.8%) accounts better than RM criteria (61.7% and 66.2%,
respectively). However, a study by Porter and Yuille (1996) evaluated statements of
perpetrators and innocent suspects in a staged “crime” and found that only a few criteria (i.e.,
detail, lack of memory, logical structure) derived from the CBCA approach demonstrated
utility in differentiating truthful and deceptive statements. Conversely, several studies have
compared CBCA and RM assessment procedures across real and imagined events as a
function of the number of recalls (e.g., Granhag et al., 2006; Strömwall et al., 2004). In these
studies, RM served to better discriminate between truthful and imagined events, even after
repeated recalls of an experience. Vrij, Akehurst, et al. (2004a) had participants play a game
with the experimenter, during which they were interrupted by a confederate who came in and wiped some
important information off the blackboard. Participants were later interviewed and instructed
to provide truthful or deceptive accounts (see Vrij et al., 2002; Vrij, Akehurst, et al., 2004b
for similar methodologies). Their results indicated that both CBCA and RM were able to
discriminate truthful and deceptive claims based on their characteristics, with RM performing
slightly better. In addition, knowledge of CBCA criteria lessened the discriminative power of
the technique – truthful and deceptive claims were much more similar with coaching (Vrij,
Akehurst, et al., 2004a). Similarly, Vrij, Akehurst, et al. (2004b) found that both CBCA and
RM techniques successfully discriminated between real and deceptive statements, such that
genuine experiences were associated with higher scores on both measures. They also reported
that cognitive operations (indicative of deception in the RM framework) are more prevalent
when individuals are lying about a personally experienced event. Further, RM was better at
identifying liars versus truth-tellers relative to CBCA (Vrij, Akehurst, et al., 2004b).
Recently, Nahari et al. (in press) also confirmed that RM techniques outperformed SCAN for
discrimination of veracity, with overall rates of 70.7% classification for RM and 58.5% for
SCAN. The use of RM in tandem with CBCA has produced noteworthy correct classification
scores to date: 82.5% truthful, 75% fabricated (see Masip et al., 2005, and Vrij, 2005, for
review). Similar attempts to classify truthful and fabricated claims according to either CBCA
or RM criteria have yielded lower estimates of correct classification. For example, Vrij,
Edward, et al. (2000) reported a rate of 64.1% for fabricated and 70.6% for genuine events
based on several RM sources of information used to differentiate reports (i.e., visual, sound,
spatial, temporal, affective, cognitive). However, use of a combination of these approaches
has revealed promising results (Vrij, 2008).
Other studies have combined criteria from both CBCA (Steller & Köhnken, 1989) and
RM (Johnson & Raye, 1981) approaches into new assessment techniques. For example,
Memon et al. (2010) examined veracity based on two schemes that were variants of ACID
and RM/CBCA. Version 1 included evaluation of external, contextual, affective, and internal
details, whereas Version 2 assessed visual, spatial, temporal, and auditory details, as well as
cognitive operations. Discrimination accuracy for Version 1 (57% true; 75% invented) and
Version 2 (61% true; 78% invented) were not significantly different (Memon et al., 2010). In
a recent study, Peace and Kuchinsky (2011) compared the relative effectiveness of MAP
(memory-based) and LIWC (linguistic-based) criteria for distinguishing veracity in a sample
of true and false trauma allegations. While both the MAP (66.8%) and LIWC (63.3%)
discriminated at rates above chance, a combination of both procedures revealed a
classification accuracy of 73.1%. Further, memory-based criteria (i.e., MAP) became more
predictive over time whether utilized alone or in combination with the LIWC (Peace &
Kuchinsky, 2011).
There also has been a movement to utilize linguistic techniques or coding procedures to
assess memory-based features (e.g., Burgoon & Qin, 2006; Qin et al., 2004; Zhou, Twitchell,
et al., 2004). Zhou and Zhang (2008) reported that linguistic cues (e.g., detail quantity,
language complexity, informality, spontaneous corrections, affect) from both the LIWC
(Pennebaker et al., 2007) and another automated system entitled the Dictionary of Affect (see
Zhou, Burgoon, Nunamaker, & Twitchell, 2004) may be important discriminators of
truthfulness. For example, Qin et al. (2004) found that detail quantity, sentence level
complexity, and informality were promising cues. Specifically, truths were associated with
greater detail and informality, whereas liars used more complex sentences in order to create a
credible impression (Qin et al., 2004). When Fuller et al. (2009) combined the LIWC with
other linguistic techniques (i.e., constructs drawn from interpersonal deception theory, reality
monitoring, and the self-presentational perspective), they reported a classification rate of
74%. Their research suggests that automated deception detection techniques, combined with
other content-based protocols, may be a valuable tool for investigators.

5. FACTORS THAT INFLUENCE CONTENT


“The truth is rarely pure, and never simple”
(Oscar Wilde, The Importance of Being Earnest, 1895/1985)

Although the discriminatory power of the above-mentioned techniques has
demonstrated some potential, it also is important to recognize that a variety of factors can
influence content-analysis scores, decreasing the utility of such tools (e.g., Granhag &
Strömwall, 2009; Köhnken, 1999). As such, the provision of truthful or deceptive allegations is
rarely ever without external influence. We have focused our review on the following major
factors: level of experience and familiarity, preparation, motivation, and coaching.

5.1. Level of Experience and Event Familiarity

Past research often has employed methodologies utilizing scenarios that are witnessed via
videotape followed by memory recall tests (e.g., Akehurst et al., 2001; Landström et al.,
2007; see Porter & ten Brinke, 2010). While these studies attempt to balance the ethical
limitations of exposing individuals to trauma versus the artificiality of a laboratory setting,
research that seeks to determine the veracity of victim/witness statements should consider the
level of experience of the participant. While some of the literature on characteristics of truths
has solicited recall of personal truthful events (e.g., Landry & Brigham, 1992; Porter,
Doucette, Woodworth, Earle, & MacNeil, 2008; Yuille, 1988), others have asked participants
to truthfully describe events witnessed in a video (e.g., Höfer, Köhnken, Hanewinkel, &
Bruhn, 1993; Köhnken et al., 1995; Vrij, Edward, et al., 2000). These types of truths may
contain fundamentally different characteristics, yet few studies have addressed this issue. For
example, Akehurst et al. (2001) examined the roles of directly participating in a photography
session or witnessing it via video (truthful conditions), versus reading about the session and
claiming to have been there (fabricated condition). Overall, they noted that fabricated claims
could be distinguished from truthful claims when they were directly experienced, but not
when only observed. Characteristics that reliably discriminated between truthful and
fabricated claims (i.e., were more prevalent in truths) were: reproduction of conversation,
amount of detail, details of interactions, descriptions of mental state, and logical consistency
(Akehurst et al., 2001). Merely having witnessed an event (on video) and reporting truthfully
about it produced lower CBCA scores than directly participating in the event. The authors
argued that these results have “implications for the real world when children and adults use
information from television programmes and videos on which to base false allegations”
(Akehurst et al., 2001, p. 65). Additional research by Alonso-Quecuty and colleagues has
found mixed support for the reality monitoring approach even using live staged events (versus
video or audio-tapes) in child and adult witnesses (e.g., Alonso-Quecuty, 1995; Alonso-
Quecuty et al., 1997; Masip et al., 2005). The level of experience also may have important
implications for the linguistic features of an allegation. For example, narrative research on the
characteristics of direct and indirect speech has indicated that how we quote others or
describe events can vary depending on exposure to the original stimulus (e.g., Banfield,
1973). As such, more studies should address true and false allegations that arise or are based
on indirect relative to direct experiences. In relation to this, statements that are judged by live
observers (relative to video observers) are perceived as more credible and convincing
(Landström et al., 2007).
Familiarity with an event also is relevant to how individuals describe it. For example,
Pezdek et al. (2004) found that CBCA scores were inflated when children reported on painful
medical procedures that were familiar to them rather than unfamiliar. In a follow-up study,
Blandón-Gitlin et al. (2005) had children report on real and fabricated events that were either
familiar or unfamiliar (e.g., sewing a button on a shirt). Their results replicated those of the
previous study – CBCA scores were higher for familiar experiences and were not influenced
by the veracity of the event (Blandón-Gitlin et al., 2005). Although the target event in the
latter study is far removed from traumatic victimization, the implications of the findings are
important to consider when evaluating statements. Take a hypothetical scenario where an
alleged victim is claiming she was sexually assaulted by her ex-boyfriend. Her claim will be
based on her previous experiences with the accused, and refined so that the claim is plausible
within the contextual environment of his life. This is particularly problematic given that a
deceptive claim is embedded within truthful facts about his life (e.g., Vrij, Granhag, et al.,
2010), with which she is very familiar, making the veracity of the allegation difficult to discern.
Further, witnessing or being a victim of past trauma may provide a ‘schema or script’ for
false claims (e.g., Pezdek & Taylor, 2000; Tuckey & Brewer, 2003) that appear believable.
As such, familiarity with the type of event being claimed is important for evaluators to
consider when assessing the truthfulness of a claim.
5.2. Preparation and Rehearsal
Within the context of providing a false allegation, most (if not all) individuals will spend
time preparing the deceptive account based purely on imagination, fictional events from
television or novels, or from some grain of truth exaggerated into falsehood (Manzanero &
Diges, 1996). For false claimants, their goal is to tell a plausible ‘story’ and keep the facts
straight (e.g., Vrij et al., 2007; Vrij, 2008). As a result, preparation and rehearsal of the lie
prior to making a false allegation may alter the features of deceptive accounts (DePaulo et al.,
2003; Manzanero & Diges, 1996). For example, O’Hair, Cody, and McLaughlin (1981)
examined verbal and nonverbal features of prepared and spontaneous lies. Those individuals
who prepared a lie had shorter responses and shorter response latencies, in addition to engaging
in more smiling, head nodding, and body movements. However, during spontaneous
deception, liars exhibited more body adaptors than those who were telling the truth (O’Hair et
al., 1981). In addition, research examining response times when telling lies has indicated that
prepared lies have a shorter latency period than spontaneous ones (Sporer & Schwandt,
2006). In regards to content, Alonso-Quecuty (1992) reported that when participants had 10
minutes to prepare a fabricated narrative, their stories were more richly detailed and had more
reality criteria relative to truthful statements. However, it should be noted that with regards to
preparation and delay, mixed empirical findings have been revealed. Some studies have
reported more contextual and sensory information in deceptive accounts after
preparation/delay (e.g., Alonso-Quecuty, 1992, 1993; Sporer & Küpper, 1995), whereas
others have found more contextual information in truthful reports (e.g., Sporer, 1997).
Collectively, preparation likely leads to a rehearsed quality of memory reports, and has been
associated with a ‘rote’ quality and excessive logical consistency (Lees-Haley, 1984). As a
result, extant research has found that prepared lies are more likely to contain that elusive “ring
of truth” and appear more credible than spontaneous lies (e.g., Bond & DePaulo, 2006;
Manzanero & Diges, 1996). However, in the words of Franklin D. Roosevelt, “Repetition
does not transform a lie into a truth.” (October 26, 1939, radio address).
Given that deceptive features of prepared (relative to spontaneous) lies may be obscured,
detection of such lies may be increasingly difficult (Vrij, 2008). To investigate this, Littlepage
and Pineault (1985) showed undergraduates a videotape of six people telling a lie (three
spontaneous, three prepared). They found that spontaneous lies were more easily detected than
planned lies, and suggested that preparation allows individuals to generate a more plausible
story, and mask the cues to deception they may reveal when unprepared (Littlepage &
Pineault, 1985). Similarly, Strömwall, Granhag, and Landström (2007) had undergraduates
assess the veracity of ten truthful and deceptive accounts given by children that were either
prepared or unprepared. Their results indicated that although accuracy judgments were not
higher than chance overall (51.5%), unprepared statements were identified more accurately
(56.6%) than prepared statements (46.1%). Further, they reported that the same pattern
emerges with judgments of adults’ statements; deception was more difficult to detect in
prepared relative to unprepared statements (Strömwall et al., 2007). These findings also
suggest that cues relied upon for detecting deception are inaccurate for prepared lies, leading
people to make erroneous judgments of truthfulness. Further, when lies are rehearsed and
repeatedly recalled across multiple occasions, the gist of their deceptive claims often is
preserved and the challenge of detection is increased (e.g., Granhag & Strömwall, 2009;
Manzanero & Diges, 1996; Peace & Porter, 2011). Techniques such as the ACID interview
(including multiple recalls, mnemonics, unanticipated questions, and reverse-order recall) are
designed to overcome the influence of preparation and rehearsal, and may assist in veracity
determinations by forcing the deceiver to deviate from their “planned script” and reveal cues
to deception (e.g., Hines et al., 2010; Vrij, Granhag, et al., 2011; Vrij, Granhag, et al., 2010).
More research concerning the role of preparation in deception and its detection is warranted
given the implications for the criminal justice system.
5.4. Level of Motivation
The content of statements also may vary according to the level of motivation of the
prevaricator (e.g., Köhnken, 1996, 1999, 2002). Honest persons are less concerned about
appearing credible to others, whereas this is of utmost importance for the liar. As a result,
liars engage in impression management to make themselves appear truthful (Vrij, 2008). This
translates into deceivers behaving according to how they “think” honest people would, fitting
their statements into a form of “stereotypical” report. However, research has noted that
truthful individuals often provide information that is “contrary-to-stereotype” such as
admitting they can’t remember aspects of the event, having a degree of uncertainty about
some components, and making spontaneous corrections to their report (Ruby & Brigham,
1998). Thus, such features tend to be absent when deceivers are motivated to come across as
sincere and honest. Endorsement of behaviours that are
believed to be associated with truthfulness (i.e., truthfulness stereotype) often results in
deceptive claims that are highly detailed and complete, with little logical inconsistency (e.g.,
Ruby & Brigham, 1997, 1998; Vrij et al., 2002). Motivation also varies according to the type
of lie an individual is telling. For example, someone who is making a false allegation of child
sexual abuse should be much more motivated to succeed at lying than someone who is telling
their friend or partner that they like what they are wearing (when, in fact, they do not).
Past research has confirmed verbal and written differences that emerge when individuals
are motivated to appear credible (e.g., Bond & DePaulo, 2006; Porter & Yuille, 1995, 1996;
Porter et al., 1999). In their meta-analysis on cues to deception, Zuckerman and Driver (1985)
reported that highly motivated liars made fewer shifts in position and head movements, more
speech disturbances, and talked more slowly and in a higher pitch than unmotivated liars.
That said, the meta-analysis classified liars as highly motivated if participants were offered a
reward in their respective studies (Zuckerman & Driver, 1985). In a more recent meta-
analysis, DePaulo et al. (2003) reported that motivation increased the frequency with which
cues to deception were revealed, especially when motivations were considered high-stakes
and relevant to the identity of the deceiver rather than pertaining to a monetary reward or
Fact or Fiction?: Discriminating True and False Allegations of Victimization 53

incentive (see also Porter & ten Brinke, 2010). For example, statements in studies with no
explicit motivations to succeed at lying were longer relative to motivated samples (DePaulo et
al., 2003). Essentially, the more motivated someone is to tell a lie and get away with it, the
more deceptive cues they reveal. This paradox has been termed the motivational impairment
effect (MIE), and occurs when liars attempt to simultaneously control all of their verbal and
nonverbal behaviours (DePaulo & Kirkendol, 1989). Vrij (2008) argued that individuals who
are motivated to succeed in lying attempt to engage in more emotional control, which results
in greater cognitive load in monitoring their own behaviour. As a result, motivated liars tend
to display behaviours that are rigid but also contain ‘leakage’ of deception cues. Previous
research has shown that higher levels of motivation for the liar result in lower levels of
successful deceit (see Bond & DePaulo, 2006). For example, DePaulo, Kirkendol, Tang, and
O’Brien (1988) found that liars who were told that conveying their lie was of utmost
importance (relative to little importance) displayed the MIE by exhibiting more cues to
deception. In turn, observers judging the veracity of claims tended to identify motivated liars
more easily as their reports appeared more contrived (DePaulo et al., 1988). However, other
studies have suggested that while motivation may impair nonverbal behaviours, it may
facilitate verbal behaviours. For example, Burgoon and Floyd (2000) manipulated the level of
motivation for individuals lying within a conversational context. They found that motivation
did not impair performance, and that highly motivated participants displayed fewer verbal
cues to deception (i.e., motivation had a facilitative effect). While these outcomes appear
contradictory, it is possible that motivational effects are task-dependent. For example, Pelham
and Neter (1995) found that high motivation facilitated performance for easy tasks and
impaired it for difficult ones. That said, levels of motivation utilized in experimental studies
are rarely comparable to those present in high-stakes situations where individuals are lying
within forensic contexts (Mann et al., 2002; Porter & ten Brinke, 2010).
On another note, level of motivation also is relevant to consider when attempting to
detect deception. Professionals challenged with the task of catching liars often have high
levels of motivation to do so and claim to be proficient at their jobs (e.g., Ask & Granhag,
2007b; Granhag & Strömwall, 2004; O’Sullivan & Ekman, 2004). However, empirical
studies have found that professionals (in general) are not more accurate than laypersons (e.g.,
Ekman et al., 1999), and that highly motivated judges tend to be less accurate (e.g., Ask &
Granhag, 2007b; Porter, McCabe, Woodworth, & Peace, 2007). High motivation is argued to
result in a sort of tunnel vision that causes participants to have low hit rates and high rates of
false alarms because their judgment is clouded (Porter & ten Brinke, 2009). Overall, high
levels of motivation may be detrimental to successful deception and deception detection, and
thus should be considered when evaluating the features of suspected false allegations (see
Granhag & Strömwall, 2009).
5.5. Coaching
Finally, the role of motivation extends to seeking out knowledge of what makes a
statement more “believable” to produce a more credible, but deceptive, report (Vrij et al.,
2002). In particular, people may obtain information that essentially “coaches” them on how to
successfully lie. Past studies evaluating the influence of having information about credibility
criteria have indicated that people can indeed be trained to give credible but deceptive
statements that remain undetected (e.g., Vrij et al., 2002; Vrij, Akehurst, et al., 2004b; Vrij,
Kneller, et al., 2000). For example, Vrij, Kneller, et al. (2000) examined how information or
coaching on CBCA criteria altered fabricated claims. Their results indicated that CBCA
scores were comparable across truthful and fabricated accounts, and that an expert trained in
CBCA could not accurately distinguish between truthful accounts and informed liars (Vrij,
Kneller, et al., 2000). In another study, Vrij, Akehurst, et al. (2004b) divided children and
adults into truthful, light coaching, and heavy coaching conditions. Participants witnessed a
blackboard being erased while they were occupied with a game, and were then told to
provide a truthful witness statement or a deceptive statement based on light/heavy coaching
instructions. Their results indicated that truth-tellers still obtained higher scores than liars
when assessed on CBCA and RM procedures. However, those that were heavily coached
produced higher “truthfulness” scores (Vrij, Akehurst, et al., 2004b). Similarly, Caso, Vrij,
Mann, and De Leo (2006) used a within-subjects design where participants lied or told the
truth (counterbalanced) about an object in a backpack. Deceptive statements were associated
with less detail, fewer interactions, and were unlikely to contain admissions of uncertainty or
memory deficiencies when questioned by an interviewer. However, when coached on CBCA
criteria, they found that participants employed these verbal countermeasures to some extent
in their deceptive claims, reporting more references to their mental state and reproductions of
speech (Caso et al., 2006). Further, coaching about the content of fabricated events or ‘what
should have happened’ has been associated with the production of more plausible claims that
are more difficult to detect (e.g., Poole & Lindsay, 1995; Vrij, Akehurst, et al., 2004b; Vrij,
Kneller, et al., 2000). That is, lies become more sophisticated with preparation (Vrij, 2002a).
This has been particularly problematic in false claims of child abuse in parental custody
hearings (e.g., Lyon, Malloy, Quas, & Talwar, 2008; Vrij, 2002b). Essentially, it has been
argued that informing people (or training/coaching) about credibility criteria “nullifies the
discriminative power of CBCA” (Vrij et al., 2002, p. 277). While safeguards concerning the
effects of coaching have been implemented for child testimony (e.g., Lamb, Hershkowitz,
Orbach, & Esplin, 2008; Westcott, Davies, & Bull, 2002), comparable investigation of
coaching appears to be lacking in adult claimants (e.g., Vrij, Akehurst, et al., 2004b).
6. IMPLICATIONS AND CONCLUSIONS
“If we can catalogue the human genome, why can’t we build a truth machine?”
(James Halperin, The Truth Machine, 1996, p. 68)
Understanding the features and consistency of true and false allegations of victimization
is imperative in veracity determinations (e.g., Peace & Porter, 2011). In many cases, judges,
juries, lawyers, and/or the general public are required to assess whether a victim, witness, or
defendant is telling the truth (e.g., Connolly & Gordon, in press; Ekman et al., 1999;
Kaufmann et al., 2003). Unfortunately, people make poor evaluations when detecting
deception (Porter & ten Brinke, 2010; Vrij, Granhag, et al., 2010). Judicial decision-makers
often are misguided by the nature of traumatic claims presented by an alleged victim (e.g.,
Gunnell & Ceci, 2010; Peace & Porter, 2011; Peace, Porter, et al., in press). However,
empirical studies are beginning to shed some light on content-based cues to truthfulness and
deception (e.g., Fisher et al., in press; Memon et al., 2010). In particular, the amount of detail,
contextual embedding, unstructured production, spontaneous corrections, and reproduction of
conversation all appear to be indicators of truthfulness (e.g., Vrij, 2008). Linguistic studies
also suggest that use of negative words and a lack of self-references (i.e., first-person
pronouns) may be indicative of deception (e.g., Bond & Lee, 2005; Hancock et al., 2008;
Newman et al., 2003). Investigators making veracity determinations also should consider
other characteristics of victim/witness statements apart from content, such as plausibility,
consistency, and the individuals’ subjective assessment of their memory. Although some of
these cues may be difficult to measure (i.e., plausibility), a growing body of research has
supported veracity differences in these characteristics (e.g., Fisher et al., in press; Peace &
Porter, 2011). For example, deceptive claims tend to contain more ‘story-like’ details that are
schematic representations of criminal victimization (e.g., sexually assaulted by a stranger in
the woods), relative to truthful accounts that are thorough, specific, and replete with episodic
details unique to each case (e.g., Pezdek & Taylor, 2000; R. v. Norman, 2005).
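The linguistic markers noted above lend themselves to simple automated tallies. As a purely illustrative sketch (the word lists and `cue_rates` function below are invented stand-ins for this chapter, not the proprietary LIWC dictionaries or any validated instrument), counting self-references and negative words per 100 words might look like:

```python
import re

# Toy dictionaries -- illustrative stand-ins, NOT the proprietary LIWC lexicons.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE = {"hate", "hurt", "afraid", "angry", "terrible", "awful"}

def cue_rates(statement: str) -> dict:
    """Return per-100-word rates of self-references and negative words."""
    words = re.findall(r"[a-z']+", statement.lower())
    if not words:
        return {"self_ref": 0.0, "negative": 0.0}
    n = len(words)
    return {
        "self_ref": 100 * sum(w in FIRST_PERSON for w in words) / n,
        "negative": 100 * sum(w in NEGATIVE for w in words) / n,
    }
```

On this toy scale, a statement with few first-person pronouns and many negative words would lean toward the deceptive profile reported by Newman et al. (2003); in practice, however, such rates are never diagnostic on their own and are combined with many other content cues.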
The presence of extraneous factors that influence statement content also must be taken
into consideration (Vrij, 2008). An alleged victim’s experiences and familiarity with an event
can alter the content and subsequent ‘believability’ of a lie (e.g., Pezdek et al., 2004). Further,
when an individual has time to rehearse and prepare their statement, cues to deception may be
diminished, making a false allegation appear more credible (Bond & DePaulo, 2006). Couple
these factors with the motivation required to successfully “get away” with such high-stakes
lies (i.e., falsely claiming you were victimized) and the process of credibility assessment is
even more challenging. In particular, highly motivated liars may reveal fewer deceptive cues
(e.g., Burgoon & Floyd, 2000) and observers often perform worse the more motivated they
are to catch liars (e.g., Porter & ten Brinke, 2010). As a result, it is important to evaluate
suspected false allegations both on specific and global levels (i.e., specific content versus the
overall context and structure/likelihood of the statement).
We caution investigators that relying on ‘gut instinct’ or the ‘feel’ of a statement may
lead to dangerous decisions (e.g., Porter & ten Brinke, 2010), and encourage the use of
empirically-based cues to improve judgment accuracy (e.g., Vrij, Granhag, et al., 2010).
Within the past 30 years, there has been an explosion of research focusing on deceptive cues
and statement analysis techniques (Vrij, 2008). This chapter reviewed the history,
components, discrimination rates, and criticisms of the predominant techniques used to
evaluate credibility based on the content of statements. Although the “success rates” of these
techniques have varied widely, most report overall discrimination rates between 70% and 80%.
While this seems promising, statement analysis should not be relied upon as the sole indicator
of truth or deception. Some of the techniques reviewed in this chapter have garnered much
empirical support (e.g., CBCA, RM), and further research is warranted in order to determine
the utility of other techniques (e.g., LIWC, MAP, ACID). These techniques may provide
useful tools for investigators and methods to gain further evidence of veracity, so long as they
are utilized appropriately and without undue influence. Reliance on lie detection procedures
that lack empirical research and make claims contrary to extant literature is ill-advised. For
example, unsupported procedures such as voice-stress analysis and SCAN are becoming more
widely used by police forces in North America, with unforeseen consequences (e.g., Nahari et
al., in press; Vrij, Granhag, et al., 2010). Unquestioning acceptance of “new techniques” can
lead to erroneous judicial decisions and miscarriages of justice (e.g., Porter & ten Brinke,
2009).
The authors have demonstrated that the topic of true and false allegations is complex, and
remains a significant challenge for police, psychologists, lawyers, judges, and juries.
Although various content-based techniques have been developed, insufficient empirical
evidence exists to be able to reliably discern fact from fiction. Future research on each of
these techniques, including their application to automated deception detection paradigms
(e.g., Toma & Hancock, in press; Zhou & Zhang, 2008; Vrij et al., 2010) may not ultimately
lead to a Truth Machine (Halperin, 1996), but will lead to improvements in our ability to
determine the features and veracity of claims of victimization.
REFERENCES
Adams, S. H. (1996, October). Statement analysis: What do suspects’ words really reveal?
FBI Law Enforcement Bulletin, 12-20. Retrieved from http://www.fbi.gov/stats-services/publications/law-enforcement-bulletin/leb.
Adelson, R. (2004). Detecting deception. APA Monitor, 35, 70.
Ahrens, C. E. (2006). Being silenced: The impact of negative social reactions on the
disclosure of rape. American Journal of Community Psychology, 38, 263-274.
Akehurst, L., Bull, R., Vrij, A., & Köhnken, G. (2004). The effects of training professional
groups and lay persons to use criteria-based content analysis to detect deception. Applied
Cognitive Psychology, 18, 877-891.
Akehurst, L., Köhnken, G., & Höfer, E. (2001). Content credibility of accounts derived from
live and video presentations. Legal and Criminological Psychology, 6, 65-83.
Alexander, K. W., Quas, J. A., Goodman, G. S., Ghetti, S., Edelstein, R. S., Redlich, A. D., et
al. (2005). Traumatic impact predicts long-term memory for documented child sexual
abuse. Psychological Science, 16, 33-40.
Alonso-Quecuty, M. L. (1992). Deception detection and reality monitoring: a new answer to
an old question? In F. Lösel, D. Bender, & T. Bliesener (Eds.), Psychology and law:
International perspectives (pp. 328-332). Berlin: Walter de Gruyter.
Alonso-Quecuty, M. L. (1993). Psicología forense experimental: el efecto de la demora en la
toma de declaración y el grado de elaboración de la misma sobre los testimonios
verdaderos y falsos [Experimental forensic psychology: the effect of delay and
preparation of a statement upon truthful and deceptive testimonies]. In M. García (Ed.),
Psicología Social Aplicada en los Procesos Jurídicos y Políticos (pp. 81-88). Seville:
Eudema.
Alonso-Quecuty, M. L. (1995). Detecting fact from fallacy in child and adult witness
accounts. In G. Davies, S. Lloyd-Bostock, M. McMurran and C. Wilson (Eds.),
Psychology, Law and Criminal Justice. International Developments in Research and
Practice (pp. 74-80). Berlin: Walter de Gruyter.
Alonso-Quecuty, M. L. (1996). Detecting fact from fallacy in child and adult witness
accounts. In G. Davies, S. Lloyd-Bostock, M. McMurran, & C. Wilson (Eds.),
Psychology, law, and criminal justice: International developments in research and
practice (pp. 74-80). Berlin: Walter de Gruyter.
Alonso-Quecuty, M. L., & Hernández-Fernaud, E. (1997). Tócala otra vez Sam: repitiendo
las mentiras [Play it again Sam: Retelling a lie]. Estudios de Psicología, 57, 29-37.
Alonso-Quecuty, M. L., Hernández-Fernaud, E., & Campos, L. (1997). Child witnesses:
Lying about something heard. In S. Redondo, V. Garrido, J. Perez, & R. Barbaret (Eds.),
Advances in psychology and law (pp. 129-135). Berlin, DE: Walter de Gruyter.
Aamodt, M. G., & Custer, H. (2006). Who can best catch a liar?: A meta-analysis of
individual differences in detecting deception. The Forensic Examiner, 15, 6-11.
Anderson, S. J., Cohen, G., & Taylor, S. (2000). Rewriting the past: Some factors affecting
the variability of personal memories. Applied Cognitive Psychology, 14, 435-454.
Anderson, D. E., DePaulo, B. M., Ansfield, M. E., Tickle, J. J., & Green, E. (1999). Beliefs
about cues to deception: Mindless stereotypes or untapped wisdom? Journal of
Nonverbal Behavior, 23, 67-89.
Ansarra, R., Colwell, K., Hiscock-Anisman, C., Hines, A., Cole, L., & Kondor, S. (2009).
Augmenting ACID with affective details to assess credibility. Manuscript under review.
Anson, D. A., Golding, S. L., & Gully, K. J. (1993). Child sexual abuse allegations:
Reliability of criteria-based content analysis. Law and Human Behavior, 17, 331-341.
Anthony, G., & Watkeys, J. (1991). False allegations in child sexual abuse. Children and
Society, 5, 111-122.
Arbuthnott, K. D., Geelen, C. B., & Kealy, K. L. K. (2002). Phenomenal characteristics of
guided imagery, natural imagery and autobiographical memories. Memory and Cognition,
30, 519–528.
Aristotle. (1908). Treatise on memory and recollection. (J. I. Beare, Trans.). Oxford, UK:
Clarendon. (Original work written c. 335 B.C.)
Arntzen, F. (1983). Psychologie der Zeugenaussage: Systematik der Glaubwürdigkeitsmerkmale [Psychology of witness statements: A system of credibility criteria]. Munich, Germany: C. H. Beck.
Ask, K., & Granhag, P. A. (2007a). Hot cognition in investigative judgments: The differential
influence of anger and sadness. Law and Human Behavior, 31, 537-551.
Ask, K., & Granhag, P. A. (2007b). Motivational bias in criminal investigators’ judgments of
witness reliability. Journal of Applied Social Psychology, 37, 561-591.
Associated Press, The. (2005, June 22). Wilbanks: What she was thinking. Retrieved July 17,
2006, from http://www.cbsnews.com/stories/2005/06/22/national/main703401.shtml.
Augustine. (1956). On Lying. (P. Schaff ,Trans.). Grand Rapids, MI: W. Eerdmans. (Original
work written c. 395).
Bachenko, J., Fitzpatrick, E., & Schonwetter, M. (2008). Verification and implementation of
language-based deception indicators in civil and criminal narratives. Proceedings of the
22nd international conference on computational linguistics (Coling 2008), pp. 41–48.
Banfield, A. (1973). Narrative style and the grammar of direct and indirect speech.
Foundations of Language, 10, 1-39.
Barnier, A. J., Sharman, S. J., McKay, L., & Sporer, S. L. (2005). Discriminating adults’
genuine, imagined, and deceptive accounts of positive and negative childhood events.
Applied Cognitive Psychology, 19, 985-1001.
Bartol, C. R., & Bartol, A. M. (1999). History of forensic psychology. In A. K. Hess & I. B.
Weiner (Eds.), Handbook of forensic psychology (2nd ed., pp. 3-23). New York, NY: John
Wiley.
Bartol, C. R., & Bartol, A. M. (2004). Psychology and law: Theory, research, and application
(3rd ed.). Belmont, CA: Wadsworth.
Bell, B. E., & Loftus, E. F. (1989). Trivial persuasion in the courtroom: The power of (a few)
minor details. Journal of Personality and Social Psychology, 56, 669–679.
Benedek, E. P., & Schetky, D. H. (1985). Allegations of sexual abuse in child custody and
visitation disputes. In D. H. Schetky & E. P. Benedek (Eds.), Emerging issues in child
psychiatry and the law (pp. 145-156). New York, NY: Brunner/Mazel.
Berman, G. L., & Cutler, B. L. (1996). Effects of inconsistencies in eyewitness testimony on
mock-juror decision making. Journal of Applied Psychology, 81, 170-177.
Berman, G. L., Narby, D. J., & Cutler, B. L. (1995). Effects of inconsistent eyewitness
statements on mock-jurors’ evaluations of the eyewitness, perceptions of defendant
culpability and verdicts. Law and Human Behavior, 24, 581-594.
Binet, A. (1900). La suggestibilité [Suggestibility]. Paris: Schleicher Frères.
Biland, C., Py, J., & Rimboud, S. (1999). Évaluer la sincérité d'un témoin grâce à trois techniques d'analyse, verbales et non verbales [Evaluating a witness' sincerity with three verbal and nonverbal techniques]. Revue Européenne de Psychologie Appliquée, 49, 115-121.
Blandón-Gitlin, I., Pezdek, K., Lindsay, D. S., & Hagen, L. (2009). Criteria-based content
analysis of true and suggested accounts of events. Applied Cognitive Psychology, 23,
901-917.
Blandón-Gitlin, I., Pezdek, K., Rogers, M., & Brodie, L. (2005). Detecting deception in
children: An experimental study of the effect of event familiarity on CBCA ratings. Law
and Human Behavior, 29, 187-197.
Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and
Social Psychology Review, 10, 214-234.
Bond, C. F., Jr., & DePaulo, B. M. (2008). Individual differences in judging deception:
Accuracy and bias. Psychological Bulletin, 134, 477-492.
Bond, G. D., & Lee, A. Y. (2005). Language of lies in prison: Linguistic classification of
prisoners’ truthful and deceptive natural language. Applied Cognitive Psychology, 19,
313-329.
Bosson, J. K., Swann, W. B., Jr., & Pennebaker, J. W. (2000). Stalking the perfect measure of
implicit self-esteem: The blind men and the elephant revisited? Journal of Personality
and Social Psychology, 79, 631-643.
Bottoms, B. L., Shaver, P. R., & Goodman, G. S. (1996). An analysis of ritualistic and
religion-related child abuse allegations. Law and Human Behavior, 20, 1-34.
Bradford, R. (1994). Developing an objective approach to assessing allegations of sexual
abuse. Child Abuse Review, 3, 93-101.
Bradford, D., & Goodman-Delahunty, J. (2008). Detecting deception in police investigations:
Implications for false confessions. Psychiatry, Psychology and Law, 15, 105-118.
Bradley, S. (2006, May 9). Sex-crime stats tough to analyze. The Chronicle Herald. Retrieved
June 1, 2006, from http://www.apmlawyers.com/news-40.htm.
Brewer, N., & Burke, A. (2002). Effects of testimonial inconsistencies and eyewitness
confidence on mock-juror judgments. Law and Human Behavior, 26, 353-364.
Brewer, N., Potter, R., Fisher, R. P., Bond, N., & Luszcz, M. A. (1999). Beliefs and data on
the relationship between consistency and accuracy of eyewitness testimony. Applied
Cognitive Psychology, 13, 297-313.
Brown, D., Scheflin, A. W., & Hammond, C. (1998). Memory, trauma treatment, and the
law. New York, NY: Norton.
Buller, D. B., Burgoon, J. K., Buslig, A., & Roiger, J. (1996). Testing Interpersonal
Deception Theory: The language of interpersonal deception. Communication Theory, 6,
268-289.
Burgoon, J. K., Blair, J. P., Qin, T., & Nunamaker, J. F., Jr. (2003). Detecting deception through linguistic analysis. Lecture Notes in Computer Science, 2665, 91-101.
Burgoon, J. K., & Floyd, K. (2000). Testing for the motivational impairment effect during
deceptive and truthful interaction. Western Journal of Communication, 64, 243-267.
Burgoon, J. K., & Qin, T. (2006). The dynamic nature of deceptive verbal communication.
Journal of Language and Social Psychology, 25, 76-96.
Byron, Lord. (1979). Detached thoughts, No. 51. In L. A. Marchand (Ed.), Byron's letters and journals (Vol. 9, pp. 1821-1822).
Campbell, R. (2006). Rape survivors’ experiences with the legal and medical systems: Do
rape victim advocates make a difference? Violence Against Women, 12, 30-45.
Campbell, R. S. & Pennebaker, J. W. (2003). The secret life of pronouns: Flexibility in
writing style and physical health. Psychological Science, 14, 60-65.
Campbell, M. A., & Porter, S. (2002). Pinpointing reality: How well can people judge true
and mistaken emotional childhood memories? Canadian Journal of Behavioural Science,
34, 217-229.
Canter, D. V., Grieve, N., Nicol, C., & Benneworth, K. (2003). Narrative plausibility: The
impact of sequence and anchoring. Behavioral Sciences and the Law, 21, 251-267.
Carpenter, R. (1990). The statistical profile of language behavior with Machiavellian intent or
while experiencing caution and avoiding self-incrimination. Annals of the New York
Academy of Sciences, 606, 5-16.
Carroll, R. T. (2009). L.S.I. SCAN - Too good to be true. Retrieved March 29, 2011 from
http://www.skepdic.com/refuge/scan.html.
Caso, L., Vrij, A., Mann, S., & De Leo, G. (2006). Deceptive responses: The impact of verbal
and non-verbal countermeasures. Legal and Criminological Psychology, 11, 99-111.
CBC News (2006, June 6). Long legal battle finally over. Retrieved June 7, 2006, from
http://www.cbc.ca/newsatsixns/archives/2006_jun_w1.html.
Chung, C. K., & Pennebaker, J. W. (2007). The psychological functions of function words. In
K. Fiedler (Ed.), Social communication (pp. 343-359). New York, NY: Psychology Press.
Clemens, F., Granhag, P. A., Strömwall, L., Vrij, A., Landström, S., Roos af Hjelmsäter, E.,
& Hartwig, M. (2010). Skulking around the dinosaur: Eliciting cues to children's
deception via strategic disclosure of evidence. Applied Cognitive Psychology, 24, 925-
940.
CNN. (2004, July 1). Student who faked abduction given probation. Retrieved July 12, 2004,
from http://www.cnn.com/2004/LAW/07/01/missing.student.sentence/index.html.
CNN. (2005, May 26). 'Runaway bride' charged with making false statement. Retrieved July
17, 2006, from http://edition.cnn.com/2005/US/05/25/wilbanks/index.html.
Cohn, M. A., Mehl, M. R., & Pennebaker, J. W. (2004). Linguistic markers of psychological
change surrounding September 11, 2001. Psychological Science, 15, 687-693.
Colwell, K. (2007). Assessment Criteria Indicative of Deception (ACID): An integrated
system of investigative interviewing and detecting deception. Journal of Investigative
Psychology and Offender Profiling, 4, 167-180.
Colwell, K., Hiscock, C. K., & Memon, A. (2002). Interviewing techniques and the
assessment of statement credibility. Applied Cognitive Psychology, 16, 287–300.
60 Kristine A. Peace, Krista L. Brower, and Ryan D. Shudra
Colwell, K., Hiscock-Anisman, C., Leach, A., Corbett, L., & Urez, E. (2007). Reality
Monitoring criteria and real versus imagined autobiographical memory in children.
Manuscript in preparation.
Colwell, K., Hiscock-Anisman, C., Memon, A., Woods, D., & Michlik, P. M. (2006).
Strategies of impression management among deceivers and truth-tellers: How liars
attempt to convince. American Journal of Forensic Psychology, 24, 31–38.
Colwell, K., Hiscock-Anisman, C., Memon, A., Rachel, A., & Colwell, L. (2007). Vividness
and spontaneity of statement detail characteristics as predictors of witness credibility.
American Journal of Forensic Psychology, 25, 5–30.
Colwell, K., Hiscock-Anisman, C., Memon, A., Colwell, L. H., Taylor, L., & Woods, D.
(2009). Training in Assessment Criteria Indicative of Deception to improve credibility
judgments. Journal of Forensic Psychology Practice, 9, 199-207.
Connolly, D. A., & Gordon, H. M. (in press). He-said-she-said: Contrast effects in credibility
assessments and possible threats to fundamental principles of criminal law. Legal and
Criminological Psychology.
Copleston, F. (1964). A history of philosophy: Modern philosophy: The British philosophers,
Part II: Berkeley to Hume. (Vol. 5). Garden City, NY: Image.
Cory, P. (2001, November 4). The inquiry regarding Thomas Sophonow. Retrieved
September 19, 2006, from http://www.gov.mb.ca/justice/publications/sophonow/.
Coulthard, M. (1994). Powerful evidence for the defense: An exercise in forensic discourse
analysis. In J. Gibbons (Ed.), Language and the law (pp. 414-427). London, UK:
Longman.
Davidson, D. (1990). The structure and content of truth. The Journal of Philosophy, 87, 279-
328.
Davies, G. M. (1994). Statement validity analysis: An art or a science? Commentary on
Bradford. Child Abuse Review, 3, 104-106.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying
in everyday life. Journal of Personality and Social Psychology, 70, 979-995.
DePaulo, B. M., & Kirkendol, S. E. (1989). The motivational impairment effect in the
communication of deception. In J. C. Yuille (Ed.), Credibility assessment (pp. 51–70).
New York, NY: Kluwer Academic/Plenum Publishers.
DePaulo, B. M., Kirkendol, S. E., Tang, I., & O’Brien, T. P. (1988). The motivational
impairment effect in the communication of deception: Replications and extensions.
Journal of Nonverbal Behavior, 12, 177-202.
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H.
(2003). Cues to deception. Psychological Bulletin, 129, 74–118.
DePaulo, B. M., Rosenthal, R., Rosenkrantz, J., & Green, C. R. (1982). Actual and perceived
cues to deception: A closer look at speech. Basic and Applied Social Psychology, 3, 291–
312.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and detecting deceit. In B.
R. Schlenker (Ed.), The self and social life (pp. 323–370). New York: McGraw-Hill.
Dotto, L. (2004, January/February). Liar liar. National, 44-49.
Driscoll, L. N. (1994). A validity assessment of written statements from suspects of criminal
investigations using the SCAN technique. Police Studies, 17, 77-88.
Dudukovic, N. M., Marsh, E. J., & Tversky, B. (2004). Telling a story or telling it straight:
The effects of entertaining versus accurate retellings on memory. Applied Cognitive
Psychology, 18, 125-143.
Dulaney, E. F. Jr. (1982). Changes in language and behavior as a function of veracity. Human
Communication Research, 9, 75-82.
Dzindolet, M. T., & Pierce, L. G. (2005). Using a linguistic analysis tool to detect deception.
Human Factors and Ergonomics Society Annual Proceedings, 5, 563-567.
Ekman, P. (2001). Telling lies: Clues to deceit in the marketplace, politics and marriage (3rd
ed.). New York, NY: Norton.
Ekman, P., & O'Sullivan, M. (1991). Who can catch a liar? American Psychologist, 46, 913-
920.
Ekman, P., O’Sullivan, M., & Frank, M. G. (1999). A few can catch a liar. Psychological
Science, 10, 263-266.
Elliot, R. (2011). The language of deception. Security Management. Retrieved from
http://securitymanagement.com/print/660.
Erdmann, K., Volbert, R., & Böhm, C. (2004). Children report suggested events even when
interviewed in a non-suggestive manner: What are its implications for credibility
assessment? Applied Cognitive Psychology, 18, 589-611.
Euale, J., & Turtle, J. W. (1999). Police foundations: Interviewing and investigation. Toronto,
ON: Emond Montgomery.
Everson, M. D., & Boat, B. W. (1989). False allegations of sexual abuse by children and
adolescents. Journal of the American Academy of Child & Adolescent Psychiatry, 28,
230-235.
Feldman Barrett, L., Williams, N. L., & Fong, G. T. (2002). Defensive verbal behavior
assessment. Personality and Social Psychology Bulletin, 28, 776-788.
Fisher, R. P. (1995). Interviewing victims and witnesses of crime. Psychology, Public Policy,
and Law, 1, 732-764.
Fisher, R. P., Vrij, A., & Leins, D. A. (in press). Inconsistency as a predictor of Memory
Inaccuracy and Lying. In B. S. Cooper, D. Griesel, & M. Ternes (Eds.) Applied issues in
investigative interviewing, eyewitness memory, and credibility assessment. New York,
NY: Springer.
Fitzsimmons, G. M., & Kay, A. C. (2004). Language and interpersonal cognition: Causal
effects of variations in pronoun usage on perceptions of closeness. Personality and Social
Psychology Bulletin, 5, 547-557.
Foley, M. A., & Johnson, M. K. (1985). Confusions between memories for performed and
imagined actions: A developmental comparison. Child Development, 56, 1145-1155.
Ford, E. B. (2006). Lie detection: Historical, neuropsychiatric and legal dimensions.
International Journal of Law and Psychiatry, 29, 159-177.
Fuller, C. M., Biros, D. P., & Wilson, R. L. (2009). Decision support for determining veracity
via linguistic-based cues. Decision Support Systems, 46, 695-703.
Garrido, E., Masip, J., & Herrero, C. (2004). Police officers’ credibility judgments: Accuracy
and estimated ability. International Journal of Psychology, 39, 254–275.
Garry, M., Manning, C. G., Loftus, E. F., & Sherman, S. J. (1996). Imagination inflation:
Imagining a childhood event inflates the confidence that it occurred. Psychonomic
Bulletin & Review, 3, 208-214.
Garry, M., & Polaschek, D. L. L. (2000). Imagination and memory. Current Directions in
Psychological Science, 9, 6-10.
Gill, A. J., Oberlander, J., & Austin, E. (2006). The perception of e-mail personality at zero
acquaintance. Personality and Individual Differences, 40, 497-507.
Glissan, J. L. (1991). Cross-examination: Practice and procedure. Sydney, AU:
Butterworths.
Gnisci, A., Caso, L., & Vrij, A. (2010). Have you made up your story? The effect of suspicion
and liars’ strategies on Reality Monitoring. Applied Cognitive Psychology, 24, 762-773.
Gödert, H. W., Gamer, M., Rill, H.-G., & Vossel, G. (2005). Statement validity assessment:
Inter-rater reliability of criteria-based content analysis in the mock-crime paradigm.
Legal and Criminological Psychology, 10, 225-245.
Goldstein, R. L. (1980). Credibility and incredibility: The psychiatric examination of the
complaining witness. American Journal of Psychiatry, 137, 1238-1240.
Gonzales, A. G., Hancock, J. T., & Pennebaker, J. (2010). Language style matching as a
predictor of social dynamics in small groups. Communication Research, 37, 3-19.
Goodman, G. S., Batterman-Faunce, J. M., Schaaf, J. M., & Kenney, R. (2002). Nearly 4
years after an event: Children’s eyewitness memory and adults’ perceptions of children’s
accuracy. Child Abuse and Neglect, 26, 849–884.
Gordon, N. J., & Fleisher, W. L. (2006). Effective interviewing and interrogation techniques
(2nd Ed.). Boston, MA: Academic Press.
Granhag, P. A., & Strömwall, L. A. (1999). Repeated interrogations – Stretching the
deception detection paradigm. Expert Evidence, 7, 163–174.
Granhag, P. A., & Strömwall, L. A. (2000a). Deception detection: Examining the consistency
heuristic. In C. M. Breur, M. M. Kommer, J. F. Nijboer, & J. M. Reintjes (Eds.), New
trends in criminal investigation and evidence (pp. 309-321). Antwerpen: Intersentia.
Granhag, P. A., & Strömwall, L. A. (2000b). The effects of preconceptions on deception
detection and new answers to why lie-catchers often fail. Psychology, Crime, & Law, 6,
197–218.
Granhag, P. A., & Strömwall, L. A. (2001a). Deception detection: Interrogators’ and
observers’ decoding of consecutive statements. Journal of Psychology: Interdisciplinary
and Applied, 135, 603-620.
Granhag, P. A., & Strömwall, L. A. (2001b). Deception detection based on repeated
interrogations. Legal and Criminological Psychology, 6, 85-101.
Granhag, P. A., & Strömwall, L. A. (2002). Repeated interrogations: Verbal and non-verbal
cues to deception. Applied Cognitive Psychology, 16, 243-257.
Granhag, P. A., & Strömwall, L. A. (2004). The detection of deception in forensic contexts.
Cambridge, UK: Cambridge University Press.
Granhag, P. A., & Strömwall, L. A. (2009). The detection of deceit. In R. N. Kocsis (Ed.),
Applied criminal psychology: A guide to forensic behavioural sciences (pp. 95-120).
Springfield, IL: Charles C. Thomas Publisher, Ltd.
Granhag, P. A., Strömwall, L. A., & Jonsson, A. C. (2003). Partners in crime: How liars in
collusion betray themselves. Journal of Applied Social Psychology, 33, 848-868.
Granhag, P. A., Strömwall, L. A., & Landström, S. (2006). Children recalling an event
repeatedly: Effects on RM and CBCA scores. Legal and Criminological Psychology, 11,
81-98.
Granhag, P. A., Strömwall, L., & Olsson, C. (2001). Fact or fiction? Adults’ ability to assess
children’s veracity. Paper presented at the 11th European Conference on Psychology and
Law, Lisbon, Portugal, June 2001.
Granhag, P. A., & Vrij, A. (2005). Deception detection. In N. Brewer & K. Williams (Eds.),
Psychology and law: An empirical perspective (pp. 43–92). New York, NY: Guilford.
Graveland, B. (2004, January 26). Trial adjourns before it begins. The Globe and Mail.
Retrieved March 19, 2004, from http://www.globeandmail.com.
Green, A. (1986). True and false allegations of sexual abuse in child custody disputes.
Journal of American Academy of Child Psychiatry, 25, 449-456.
Greuel, L. (1992). Police officers’ beliefs about cues associated with deception in rape cases.
In F. Lösel, D. Bender, & T. Bliesener (Eds.), Psychology and law – International
perspectives (pp. 234-239). Berlin: Walter de Gruyter.
Groom, C. J., & Pennebaker, J. W. (2005). The language of love: Sex, sexual orientation, and
language use in online personal advertisements. Sex Roles, 52, 447-461.
Gumpert, C. H., & Lindblad, F. (2001). Expert testimony on child sexual abuse: A qualitative
study of the Swedish approach to statement analysis. Expert Evidence, 7, 279-314.
Gunnell, J. J., & Ceci, S. J. (2010). When emotionality trumps reason: A study of individual
processing style and juror bias. Behavioral Sciences and the Law, 28, 850-877.
Gutheil, T. G., & Sutherland, P. K. (1999). Forensic assessment, witness credibility and the
search for truth through expert testimony in the courtroom. Journal of Psychiatry & Law,
27, 289-312.
Hale, M., Jr. (1980). Human science and social order: Hugo Münsterberg and the origins of
applied psychology. Philadelphia, PA: Temple University Press.
Halliday, L. (1988). Examining false allegations. Campbell River, BC: Ptarmigan Press.
Halperin, J. L. (1996). The truth machine. New York, NY: Ballantine Books.
Hancock, J. T. (2009). Digital deception: The practice of lying in the digital age. In B.
Harrington (Ed.), Deception: Methods, contexts and consequences (pp. 109-120). Palo
Alto, CA: Stanford Press.
Hancock, J. T., Beaver, D. I., Chung, C. K., Frazee, J., Pennebaker, J. W., Graesser, A., &
Cai, Z. (2010). Social language processing: A framework for analyzing the
communication of terrorists and authoritarian regimes. Behavioral Sciences of Terrorism
and Political Aggression, 2, 108-132.
Hancock, J. T., Curry, L., Goorha, S., & Woodworth, M. T. (2008). On lying and being lied
to: A linguistic analysis of deception. Discourse Processes, 45, 1-23.
Hancock, J. T., Woodworth, M., & Goorha, S. (2010). See no evil: The effect of
communication medium and motivation on deception detection. Group Decision and
Negotiation. 19, 327-336.
Hartwig, M., Granhag, P. A., Strömwall, L. A., & Kronkvist, O. (2006). Strategic use of
evidence during police interviews: When training to detect deception works. Law and
Human Behavior, 30, 603-619.
Hartwig, M., Granhag, P. A., Strömwall, L. A., & Vrij, A. (2004). Police officers’ lie detection
accuracy: Interrogating freely versus observing video. Police Quarterly, 7, 429–456.
Hartwig, M., Granhag, P. A., Strömwall, L., Wolf, A. G., Vrij, A. & Hjelmsäter, E. (in press).
Detecting deception in suspects: Verbal cues as a function of interview strategy.
Psychology, Crime & Law.
Hernández-Fernaud, E., & Alonso-Quecuty, M. L. (1997). The cognitive interview and lie
detection: A new magnifying glass for Sherlock Holmes? Applied Cognitive Psychology,
11, 55-68.
Heydon, G. (2009). Are police organizations suspending their disbelief in Scientific Content
Analysis (SCAN)? International Investigative Interviewing Research Group Bulletin, 1,
8-9.
Heydon, G. (2011). The myth of the linguistic lie detector. Unpublished manuscript.
Hiltunen, R. (1996). “Tell me, be you a witch?”: Questions in the Salem witchcraft trials of
1692. International Journal for the Semiotics of Law, 9, 17-37.
Hiltunen, R., & Doty, K. (2002). “I will tell, I will tell”: Confessional patterns in the Salem
Witchcraft Trials, 1692. Journal of Historical Pragmatics, 3, 299-335.
Hines, A., Colwell, K., Hiscock-Anisman, C., Garrett, E., Ansarra, R., & Montalvo, L.
(2010). Impression management strategies of deceivers and honest reporters in an
investigative interview. The European Journal of Psychology Applied to Legal Context,
2, 73-90.
Hocking, J. E., Bauchner, J. E., Kaminski, E. P. & Miller, G. R. (1979). Detecting deceptive
communication from verbal, visual, and paralinguistic cues. Human Communication
Research, 6, 33-46.
Höfer, E., Akehurst, L., & Metzger, G. (1996). Reality monitoring: A chance for further
development of the CBCA? Paper presented at the 6th European Conference on
Psychology and Law, Sienna, Italy, August 1996.
Höfer, E., Köhnken, G., Hanewinkel, R., & Bruhn, C. (1993). Diagnostik und Attribution von
Glaubwürdigkeit [Diagnosis and attribution of credibility]. Final report for research grant
no. KO 882/4-1 of the Deutsche Forschungsgemeinschaft. Kiel, Germany: University of Kiel.
Hollien, H. (1990). The acoustics of crime: The new science of forensic phonetics. New York,
NY: Plenum.
Honts, C. R. (1994). Assessing children’s credibility: Scientific and legal issues in 1994.
North Dakota Law Review, 70, 879–903.
Hume, D. (1737/1886). A treatise of human nature (A. Black, Ed.). London, UK: William
& Charles Tait. (Original work written c. 1737).
Hume, D. (1739/1964). A treatise of human nature. In T. Green & T. H. Grose (Eds.), David
Hume: The philosophical works. Aalen, Germany: Scientia Verlag.
Hyman, I. E., & Pentland, J. (1996). The role of mental imagery in the creation of false
childhood memories. Journal of Memory & Language, 35, 101-117.
Johnson, M. K. (1983). A multiple-entry, modular memory system. In G. H. Bower (Ed.), The
psychology of learning and motivation: Advances in research and theory (Vol. 17, pp.
81-123). New York, NY: Academic Press.
Johnson, M. K. (1988). Reality monitoring: An experimental phenomenological approach.
Journal of Experimental Psychology: General, 117, 390-394.
Johnson, M. K., Bush, J. G., & Mitchell, K. J. (1998). Interpersonal reality monitoring:
judging the sources of other people’s memories. Social Cognition, 16, 199–224.
Johnson, M. K., Foley, M. A., Suengas, A. G., & Raye, C. L. (1988). Phenomenal
characteristics of memories for perceived and imagined autobiographical events. Journal
of Experimental Psychology: General, 117, 371-376.
Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological
Bulletin, 114, 3-29.
Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88, 67-85.
Johnson, M. K., & Raye, C. L. (1998). False memories and confabulation. Trends in
Cognitive Sciences, 2, 137-145.
Johnson, M. K., & Suengas, A. G. (1989). Reality monitoring judgments of other people's
memories. Bulletin of the Psychonomic Society, 27, 107-110.
Jones, D. P. H., & McGraw, J. M. (1987). Reliable and fictitious accounts of sexual abuse of
children. Journal of Interpersonal Violence, 2, 27-45.
Kanin, E. J. (1994). False rape allegations. Archives of Sexual Behavior, 23, 81–93.
Kassin, S. M., Meissner, C. A., & Norwick, R. J. (2005). ‘‘I’d know a false confession if I
saw one’’: A comparative study of college students and police investigators. Law and
Human Behavior, 29, 211–227.
Kaufman, F. (2002). Searching for justice: An independent review of Nova Scotia’s response
to reports of institutional abuse. Retrieved April 26, 2006, from
http://www.gov.ns.ca/just/kaufmanreport/fullreport.pdf
Kaufmann, G., Drevland, G. C. B., Wessel, E., Overskeid, G., & Magnussen, S. (2003). The
importance of being earnest: Displayed emotions and witness credibility. Applied
Cognitive Psychology, 17, 21-34.
Kealy, K. L. K., Kuiper, N. A., & Klein, D. N. (2006). Characteristics associated with real
and made-up events: The effects of event valence, event elaboration, and individual
differences. Canadian Journal of Behavioural Science, 38, 158-175.
Kelly, A., Carroll, M., & Mazzoni, G. (2002). Metamemory and reality monitoring. Applied
Cognitive Psychology, 16, 407-428.
Kemp, S., Burt, C. D. B., & Sheen, M. (2003). Remembering dreamt and actual experiences.
Applied Cognitive Psychology, 17, 577-591.
King, B. (2006). The lying ape: An honest guide to a world of deception. Cambridge, UK:
Icon Books Ltd.
Klein, K., & Boals, A. (2001). Expressive writing can increase working memory capacity.
Journal of Experimental Psychology: General, 130, 520-533.
Kleinmuntz, B., & Szucko, J. J. (1984). Lie detection in ancient and modern times: A call for
contemporary scientific study. American Psychologist, 39, 766-776.
Knapp, M. L., Hart, R. P., & Dennis, H. S. (1974). An exploration of deception as a
communication construct. Human Communication Research, 1, 15-29.
Köhnken, G. (1989). Behavioral correlates of statement credibility: Theories, paradigms and
results. In H. Wegener, F. Lösel, & J. Haisch (Eds.), Criminal behavior and the justice
system: Psychological perspectives (pp. 271-289). New York, NY: Springer-Verlag.
Köhnken, G. (1996). Social psychology and the law. In G. R. Semin & K. Fiedler (Eds.),
Applied social psychology (pp. 257-282). London, UK: Sage Publications.
Köhnken, G. (1999, July). Statement Validity Assessment. Paper presented at the pre-
conference program of applied courses “Assessing credibility” organized by the
European Association of Psychology and Law, Dublin, Ireland.
Köhnken, G. (2002). A German perspective on children’s testimony. In H. L. Westcott, G. M.
Davies, & R. H. C. Bull (Eds.), Children’s testimony: A handbook of psychological
research and forensic practice (pp. 233-244). Chichester, UK: Wiley.
Köhnken, G., Schimossek, E., Aschermann, E. & Höfer, E. (1995). The cognitive interview
and the assessment of the credibility of adults’ statements. Journal of Applied
Psychology, 80, 671–684.
Köhnken, G., & Steller, M. (1988). The evaluation of the credibility of child witness
statements in the German procedural system. Issues in Criminological and Legal
Psychology, 13, 37–45.
Korn, J. H. (1997). Illusions of reality: A history of deception in social psychology. Albany,
NY: State University of New York Press.
Korris, N. (2006). Ultimate disclosure: Linguistic lie detection techniques for audit
professionals. Workshop handout. Edmonton, AB: The Sponsorship Group Ltd.
Kramer, H., & Sprenger, J. (1979). The malleus maleficarum. (M. Summers, Trans.). New
York, NY: Benjamin Blom Inc. (Original work written c. 1486).
Kraut, R. E. (1978). Verbal and nonverbal cues in the perception of lying. Journal of
Personality and Social Psychology, 36, 380–391.
Labov, W. (1972). Language in the inner city: Studies in the Black English vernacular.
Philadelphia, PA: Blackwell.
Lamb, M. E., Hershkowitz, I., Orbach, Y., & Esplin, P. W. (2008). Tell me what happened:
Structured investigative interviews of child victims and witnesses. West Sussex, England:
John Wiley & Sons.
Lamb, M. E., Sternberg, K. J., Esplin, P. W., Hershkowitz, I., Orbach, Y., & Hovav, M.
(1997). Criterion-based content analysis: A field validation study. Child Abuse and
Neglect, 21, 255-264.
Landry, K., & Brigham, J. C. (1992). The effect of training in Criteria-Based Content
Analysis on the ability to detect deception in adults. Law and Human Behavior, 16, 663-
675.
Landström, S., Granhag, P. A., & Hartwig, M. (2005). Witnesses appearing live versus on
video: Effects on observers’ perception, veracity assessments and memory. Applied
Cognitive Psychology, 19, 913-933.
Landström, S., Granhag, P. A., & Hartwig, M. (2007). Children’s live and videotaped
testimonies: How presentation mode affects observers’ perception, assessment and
memory. Legal and Criminological Psychology, 12, 333-348.
Larsson, A. S., & Granhag, P. A. (2005). Interviewing children with the cognitive interview:
Assessing the reliability of statements based on observed and imagined events.
Scandinavian Journal of Psychology, 46, 49-57.
Leal, S., Vrij, A., Mann, S., & Fisher, R. (2010). Detecting true and false opinions: The
Devil’s Advocate approach as a lie detection aid. Acta Psychologica, 134, 323-329.
Lesce, T. (1990). Police products handbook. Englewood Cliffs, NJ: Prentice-Hall.
Lees-Haley, P. R. (1989). Malingering post-traumatic stress disorder on the MMPI. Forensic
Reports, 2, 89-91.
Leins, D., Fisher, R. P., Vrij, A., Leal, S., & Mann, S. (in press). Using sketch drawing to
induce inconsistency in liars. Legal and Criminological Psychology.
Leo, R. A. (2008). Police interrogation and American justice. Cambridge, MA: Harvard
University Press.
Leonhardt, C. (1930). Psychologische Beweisführung in Ansehung existenzstreitiger
Vorgänge [Psychological evidence in the case of doubtful events]. Archiv für die
Gesamte Psychologie, 75, 545-558.
Levine, T. R., Sun Park, H., & McCornack, S. A. (1999). Accuracy in detecting truths and
lies: Documenting the ‘Veracity Effect’. Communication Monographs, 66, 125–144.
Leippe, M. R., & Romanczyk, A. (1989). Reactions to child (versus adult) eyewitnesses: The
influence of jurors’ preconceptions and witness behavior. Law and Human Behavior, 13,
103-132.
Lipian, M. S., Mills, M. J., & Brantman, A. (2004). Assessing the veracity of children’s
allegations of abuse: A psychiatric overview. International Journal of Law and
Psychiatry, 27, 249-263.
Littlepage, C. E., & Pineault, M. A. (1985). Detection of deception of planned and
spontaneous communications. Journal of Social Psychology, 125, 195-201.
Littmann, E., & Szewczyk, H. (1983). Zu einigen Kriterien und Ergebnissen
forensisch-psychologischer Glaubwürdigkeitsbegutachtung von sexuell missbrauchten
Kindern und Jugendlichen [On some criteria and results of forensic-psychological
credibility assessment of sexually abused children and adolescents]. Forensia, 4, 55-72.
Liu, M., Granhag, P. A., Landström, S., Roos af Hjelmsäter, E., Strömwall, L. A., & Vrij, A.
(2010). “Can you remember what was in your pocket when you were stung by a bee?” –
Eliciting cues to deception by asking the unanticipated. Open Criminology Journal, 3,
31-36.
Loftus, E. F. (2003a). Memory in Canadian courts of law. Canadian Psychology, 44, 207-
212.
Loftus, E. F. (2003b). Our changeable memories: Legal and practical implications. Nature
Reviews: Neuroscience, 4, 231-234.
Lucas, R., & McKenzie, I. (1999). The detection of dissimulation: Lies, damned lies and
SVA. International Journal of Police Science and Management, 1, 347–359.
Lykken, D. T. (1998). A tremor in the blood: Uses and abuses of the lie detector. New York,
NY: Plenum.
Lyon, T. D., Malloy, L. C., Quas, J. A., & Talwar, V. A. (2008). Coaching, truth induction,
and young maltreated children’s false allegations and false denials. Child Development,
79, 914-929.
Lyons, E. J., Mehl, M. R., & Pennebaker, J. W. (2006). Linguistic self-presentation in
anorexia: Differences between pro-anorexia and recovering anorexia internet language
use. Journal of Psychosomatic Research, 60, 253-256.
MacLean, N. (1979). Rape and false accusations of rape. Police Surgeon, 15, 29–40.
Magner, E. S. (2000). Proving sexual assault. Behavioral Sciences and the Law, 18, 217-246.
Malloy, L. C., & Lamb, M. E. (2010). Biases in judging victims and suspects whose
statements are inconsistent. Law and Human Behavior, 34, 46-48.
Mann, E. (1985). The assessment of credibility of sexually abused children in criminal court
cases. American Journal of Forensic Psychiatry, 15, 9-15.
Mann, S. M., Vrij, A., & Bull, R. (2002). Suspects, lies, and videotape: An analysis of
authentic high-stake liars. Law and Human Behavior, 26, 365–376.
Mann, S., Vrij, A., & Bull, R. (2004). Detecting true lies: Police officers’ ability to detect
deceit. Journal of Applied Psychology, 89, 137–149.
Manzanero, A. L., & Diges, M. (1996). Effects of preparation on internal and external
memories. In G. Davies, S. Lloyd-Bostock, M. McMurran, & C. Wilson (Eds.),
Psychology, law, and criminal justice: International developments in research and
practice (pp. 56-63). Berlin, DE: Walter de Gruyter.
Marxsen, D., Yuille, J. C., & Nisbet, M. (1995). The complexities of eliciting and assessing
children’s statements. Psychology, Public Policy, and Law, 1, 450-460.
Masip, J., Sporer, S. L., Garrido, E., & Herrero, C. (2005). The detection of deception with
the reality monitoring approach: A review of the empirical evidence. Psychology, Crime
& Law, 11, 99-122.
Mazzoni, G., & Memon, A. (2003). Imagination can create false autobiographical memories.
Psychological Science, 14, 186-188.
McDowell, C. P. (1992). Rape allegations: Sorting the wheat from the chaff. Police Surgeon,
36, 29–39.
McMenamin, G. R. (2003). Forensic linguistics: Advances in forensic stylistics. Boca Raton,
FL: CRC Press.
McNally, R. J. (2003). Remembering trauma. Cambridge, MA: Belknap Press/Harvard
University Press.
Memon, A., Fraser, J., Colwell, K., Odinot, G., & Mastroberardino, S. (2010). Distinguishing
truthful from invented accounts using reality monitoring criteria. Legal and
Criminological Psychology, 15, 177-194.
Memon, A., Vrij, A., & Bull, R. (2003). Psychology and law: Truthfulness, accuracy, and
credibility (2nd ed.). Chichester, UK: John Wiley & Sons Ltd.
Merckelbach, H. (2004). Telling a good story: Fantasy proneness and the quality of fabricated
memories. Personality and Individual Differences, 37, 1371-1382.
Migueles, M., & García-Bajos, E. (2006). Influence of the typicality of the actions in a
mugging script on retrieval-induced forgetting. Psicológica, 27, 119-135.
Mitchell, K. J., & Johnson, M. K. (2009). Source monitoring 15 years later: What have we
learned from fMRI about the neural mechanisms of source memory? Psychological
Bulletin, 135, 638-677.
Mittermaier, C. J. A. (1834/1841). Effect of drunkenness on criminal responsibility (S. T.
Cushing, Trans.). Edinburgh, Scotland. (Original work published in German, 1834).
Mogil, C. G. (2005). Detecting deception using various reality monitoring techniques.
Unpublished Masters Thesis, Pepperdine University, Malibu, California, United States.
Montaigne, M. de. (1946). The essays (G. B. Ives, Ed. & Trans.). New York, NY: The
Heritage Press. (Original work published 1580).
Münsterberg, H. (1908). On the witness stand: Essays on psychology and crime. New York,
NY: Clark Boardman.
Nahari, G., Vrij, A., & Fisher, R. P. (in press). Does the truth come out in the writing? SCAN
as a lie detection tool. Law and Human Behavior.
Nanavati, D. (2010). A brief history of lies. Cornwall, UK: Footsteps Press.
Newman, M. L., Pennebaker, J. W., Berry, D. S., & Richards, J. M. (2003). Lying words:
Predicting deception from linguistic styles. Personality and Social Psychology Bulletin,
29, 665-675.
Nuland, S. B. (2005). Leonardo da Vinci. New York, NY: Penguin.
O'Hair, H. D., Cody, M. J., & McLaughlin, M. L. (1981). Prepared lies, spontaneous lies,
Machiavellianism, and nonverbal communication. Human Communication Research, 7,
325-339.
Orbach, Y., & Lamb, M. E. (1999). Assessing the accuracy of a child's account of sexual
abuse: A case study. Child Abuse & Neglect, 23, 91-98.
Osgood, C. E. (1960). Some effects of motivation on style of encoding. In T. Sebeok (Ed.),
Style in language (pp. 293-306). Cambridge, MA: MIT Press.
O’Sullivan, M., & Ekman, P. (2004). The wizards of deception detection. In P. A. Granhag &
L. Strömwall (Eds.), The detection of deception in forensic contexts (pp. 269–286).
Cambridge, UK: Cambridge University Press.
Parker, A. D., & Brown, J. (2000). Detection of deception: Statement Validity Analysis as a
means of determining truthfulness or falsity of rape allegations. Legal and
Criminological Psychology, 5, 237-259.
Pasupathi, M. (2007). Telling and the remembered self: Linguistic differences in memories
for previously disclosed and undisclosed events. Memory, 15, 258-270.
Peace, K. A. (2006). Truthful versus fabricated accounts of victimization: A prospective
investigation of the consistency of traumatic memory reports of trauma symptom profiles.
Unpublished doctoral dissertation, Dalhousie University, Halifax, Nova Scotia, Canada.
Peace, K. A., & Bouvier, K. A. M. (2008). Alexithymia, dissociation, and social desirability:
Investigating individual differences in the narrative content of false allegations of trauma.
Journal of Offender Rehabilitation, 47, 138-167.
Peace, K. A., Brower, K. L., & Rocchio, A. (2011). Truth may not be stranger than fiction:
The influence of bizarre details and fantasy proneness on perceptions of eyewitness
credibility. Manuscript in preparation.
Peace, K. A., Kasper, R., & Forrester, D. (2011). Tall tales over time: Investigating the
factual consistency of truthful and false allegations of trauma. Manuscript in preparation.
Peace, K. A., & Kuchinsky, M. (2011). Pin the tale on the liar: Investigating the utility of
content-based and linguistic features in credibility determinations over time. Manuscript
in preparation.
Peace, K. A., & Masliuk, K. A. (2011). Do motivations for malingering matter? Symptoms of
malingered PTSD as a function of motivation and trauma type. Psychological Injury and
Law, 4, 44-55.
Peace, K. A., & Porter, S. (2004). A longitudinal investigation of the reliability of memories
for trauma and other emotional experiences. Applied Cognitive Psychology, 18, 1143-
1159.
Peace, K. A., & Porter, S. (2011). Remembrance of lies past: A comparison of the features
and consistency of truthful and fabricated trauma narratives. Applied Cognitive
Psychology, 25, 414-423.
Peace, K. A., Porter, S., & Almon, D. F. (in press). Sidetracked by emotion: Observers’
ability to discriminate genuine and fabricated sexual assault allegations. Legal and
Criminological Psychology.
Peace, K. A., Porter, S., & ten Brinke, L. (2008). Are memories for sexually traumatic
events “special”? A within-subjects investigation of trauma and memory in a clinical
sample. Memory, 16, 10-21.
Peace, K. A., & Sinclair, S. M. (in press). Cold-blooded lie catchers?: An investigation of
psychopathy, emotional processing, and deception detection. Legal and Criminological
Psychology.
Pelham, B. W., & Neter, E. (1995). The effect of motivation on judgment depends on the
difficulty of the judgment. Journal of Personality and Social Psychology, 68, 581-594.
Penfold, P. S. (1995). Mendacious moms or devious dads?: Some perplexing issues in child
custody/ sexual abuse allegation disputes. Canadian Journal of Psychiatry, 40, 337-341.
Pennebaker, J. W. (1997). Writing about emotional experiences as a therapeutic process.
Psychological Science, 8, 162-166.
70 Kristine A. Peace, Krista L. Brower, and Ryan D. Shudra

Pennebaker, J. W. (2003). The social, linguistic, and health consequences of emotional
disclosure. In J. Suls & K. A. Wallston (Eds.), Social psychological foundations of health
and illness (pp. 288-313). Malden, MA: Blackwell Publishing.
Pennebaker, J. W. & Campbell, R. S. (2000). The effects of writing about traumatic
experience. Clinical Quarterly, 9, 17-21.
Pennebaker, J. W., Chung, C. K., Ireland, M., Gonzales, A., & Booth, R. J. (2007). The
development and psychometric properties of LIWC2007. Austin, TX: LIWC.net.
Pennebaker, J. W., Francis, M. E., & Booth, R. J. (2001). Linguistic Inquiry and Word Count
(LIWC): LIWC 2001 manual. Mahwah, NJ: Erlbaum.
Pennebaker, J. W., Mayne, T., & Francis, M. E. (1997). Linguistic predictors of adaptive
bereavement. Journal of Personality and Social Psychology, 72, 863-871.
Pennebaker, J. W., & Stone, L. D. (2004). Translating traumatic experiences into language:
Implications for child abuse and long-term health. In L. J. Koenig, L. S. Doll, A.
O’Leary, & W. Pequegnat (Eds.), From child sexual abuse to adult sexual risk: Trauma,
revictimization, and intervention (pp. 201-216). Washington, DC: American
Psychological Association.
Peterson, C. (2007). Reliability of child witnesses: A decade of research. The Canadian
Journal of Police & Security Services, 5, 142-151.
Peterson, C. (2010). ‘And I was very very crying’: Children’s self-descriptions of distress as
predictors of recall. Applied Cognitive Psychology, 24, 909-924.
Pezdek, K., Morrow, A., Blandón-Gitlin, I., Goodman, G. S., Quas, J. A., Saywitz, K. J., et al.
(2004). Detecting deception in children: Event familiarity affects criterion-based content
analysis ratings. Journal of Applied Psychology, 89, 119–126.
Pezdek, K., & Taylor, J. (2000). Discriminating between accounts of true and false events. In
D. F. Bjorklund (Ed.), False-memory creation in children and adults (pp. 69–91).
Mahwah, NJ: Erlbaum.
Piper, A. (1993). “Truth serum” and “recovered memories” of sexual abuse: A review of the
evidence. The Journal of Psychiatry and Law, 21, 447-471.
Plato. (1998). The Republic, Book VI. (B. Jowett, Trans.). Champaign, IL: Project Gutenberg.
(Original work written c. 360 BC).
Polage, D. C. (2004). Fabrication deflation? The mixed effects of lying on memory. Applied
Cognitive Psychology, 18, 455-465.
Poole, D. A., & Lindsay, D. S. (1995). Interviewing preschoolers: Effects of nonsuggestive
techniques, parental coaching, and leading questions on reports of nonexperienced
events. Journal of Experimental Child Psychology, 60, 129-154.
Poole, D. A., & White, L. T. (1995). Tell me again and again: Stability and change in
repeated testimonies of children and adults. In M. Zaragoza, J. R. Graham, G. C. N. Hall,
R. Hirschman, & Y. S. Ben-Porath (Eds.), Memory and testimony in the child witness
(pp. 24-43). Thousand Oaks, CA: Sage.
Porter, S. (1998). An architectural mind: The nature of real, created, and fabricated
memories for emotional childhood events. Unpublished doctoral dissertation, University
of British Columbia, Vancouver, British Columbia, Canada.
Porter, S., & Birt, A. R. (2001). Is traumatic memory special? A comparison of traumatic
memory characteristics with memory for other emotional life experiences. Applied
Cognitive Psychology, 15, 101–117.
Porter, S., Campbell, M. A., Birt, A. R., & Woodworth, M. T. (2003). “He said, she said”: A
psychological perspective on historical memory evidence in the courtroom. Canadian
Psychology, 44, 190-206.
Porter, S., Campbell, M. A., Stapleton, J., & Birt, A. R. (2002). The influence of judge, target,
and stimulus characteristics on the accuracy of detecting deceit. Canadian Journal of
Behavioural Science, 34, 172-185.
Porter, S., Doucette, N. L., Earle, J., Woodworth, M., & MacNeil, B. (2008). Halfe the world
knowes not how the other halfe lies: Investigation of verbal and non-verbal signs of
deception exhibited by criminal offenders and non-offenders. Legal and Criminological
Psychology, 13, 27-38.
Porter, S., Juodis, M., ten Brinke, L., Klein, R., & Wilson, K. (2010). Evaluation of a brief
deception detection training program. Journal of Forensic Psychiatry & Psychology, 21,
66-76.
Porter, S., McCabe, S., Woodworth, M., & Peace, K. A. (2007). "Genius is 1% inspiration
and 99% perspiration" …or is it? An investigation of the effects of motivation and
feedback on deception detection. Legal and Criminological Psychology, 12, 297-309.
Porter, S., & Peace, K. A. (2007). The scars of memory: A prospective, longitudinal
investigation of the consistency of traumatic and positive emotional memories in
adulthood. Psychological Science, 18, 435-441.
Porter, S., Peace, K. A., Douglas, R. L., & Doucette, N. L. (in press). Recovered memory
evidence in the courtroom: Facts and fallacies. In J. Ziskin, S. Anderer, & D. Faust
(Eds.), Coping with psychiatric and psychological testimony. Oxford University Press.
Porter, S., Peace, K. A., & Emmett, K. (2007). You protest too much, methinks: Investigating
the features of truthful and fabricated reports of traumatic experiences. Canadian Journal
of Behavioural Science, 39, 79-91.
Porter, S., & ten Brinke, L. (2009). Dangerous decisions: A theoretical framework for
understanding how judges assess credibility in the courtroom. Legal and Criminological
Psychology, 14, 119-134.
Porter, S., & ten Brinke, L. (2010). The truth about lies: What works in detecting high-stakes
deception? Legal and Criminological Psychology, 15, 57-75.
Porter, S., Woodworth, M., & Birt, A. R. (2000). Truth, lies, and videotape: An investigation
of the ability of federal parole officers to detect deception. Law and Human Behavior, 24,
643-658.
Porter, S., & Yuille, J. C. (1995). Credibility assessment of criminal suspects through
statement analysis. Psychology, Crime, and Law, 1, 319–331.
Porter, S., & Yuille, J. C. (1996). The language of deceit: An investigation of the verbal clues
to deception in the interrogation context. Law and Human Behavior, 20, 443-458.
Porter, S., Yuille, J. C., & Lehman, D. R. (1999). The nature of real, implanted, and
fabricated memories for emotional childhood events: Implications for the recovered
memory debate. Law and Human Behavior, 23, 517-537.
Purdioux, L. (2010). Deception detection research: A precursory look at research methods
used to investigate hunches, speculations, and theories of the lie. The Forensic Therapist,
9, 3-7.
Qin, J. J., Goodman, G. S., Bottoms, B. L., & Shaver, P. R. (1998). Repressed memories of
ritualistic and religion-related abuse. In S. Lynn & K. McConkey (Eds.), Truth in memory
(pp. 260-283). New York, NY: Guilford Press.
Qin, T., Burgoon, J., & Nunamaker, J. F., Jr. (2004). An exploratory study on promising cues
in deception detection and application of decision tree. Proceedings of the 37th Hawaii
International Conference on System Sciences, 1-10.
Raitt, F. E., & Zeedyk, M. S. (2003). False memory syndrome: Undermining the credibility of
complainants in sexual offences. International Journal of Law and Psychiatry, 26, 453-
471.
Raskin, D., & Esplin, P. (1991a). Statement Validity Assessment: Interview procedures and
content analysis of children’s statements of sexual abuse. Behavioral Assessment, 13,
265-291.
Raskin, D., & Esplin, P. (1991b). Assessment of children’s statements of sexual abuse. In J.
Doris (Ed.), The suggestibility of children’s recollections (pp. 153–164). Washington,
DC: American Psychological Association.
Raskin, D. C., & Steller, M. (1989). Assessing credibility of allegations of child sexual abuse:
Polygraph examinations and statement analysis. In H. Wegener, F. Lösel, & J. Haisch
(Eds.), Criminal behavior and the justice system: Psychological perspectives (pp. 290-
302). New York, NY: Springer-Verlag.
Rassin, E. (1999). Criteria based content analysis: The less scientific road to truth. Expert
Evidence, 7, 265-278.
Rassin, E., & van der Sleen, J. (2005). Characteristics of true versus false allegations of
sexual offences. Psychological Reports, 97, 589-598.
Read, J. D., & Desmarais, S. L. (2009a). Lay knowledge of eyewitness issues: A Canadian
evaluation. Applied Cognitive Psychology, 23, 301-326.
Read, J. D., & Desmarais, S. L. (2009b). Expert psychology testimony on eyewitness
identification: A matter of common sense? In B. L. Cutler (Ed.), Expert testimony on the
psychology of eyewitness identification (pp. 115-141). New York, NY: Oxford University
Press.
Read, J. D., & Lindsay, D. S. (2000). ‘Amnesia’ for summer camps and high school
graduation: Memory work increases reports of prior periods of remembering less. Journal
of Traumatic Stress, 13, 129-148.
Regina v. Ambrose (2000), ABCA 264.
Regina v. Francois (1994), 2 S.C.R. 827.
Regina v. Heatherington (2004a), ABPC 116.
Regina v. Heatherington (2004b), ABPC 191.
Regina v. Henry and Manning (1969), 53 Cr App R 150 at 153.
Regina v. Hudon (1996), 187 A.R. 345 (C.A.).
Regina v. Lukasik (1982), 22 Alta. L.R. (2d) 222 (C.A.).
Regina v. Malik & Bagri (2005), B.C.S.C. 350.
Regina v. Norman (2005), ONCJ 338.
Regina v. Marquard (1993), 4 S.C.R. 223.
Roberts, K. P., & Lamb, M. E. (2010). Reality-monitoring characteristics in confirmed and
doubtful allegations of child sexual abuse. Applied Cognitive Psychology, 24, 1049-1079.
Roberts, K. P., Lamb, M. E., Zale, J. L., & Randall, D. W. (1998, March). Qualitative
differences in children's accounts of confirmed and unconfirmed incidents of sexual
abuse. Paper presented at the biennial meeting of the American Psychology-Law Society,
Redondo Beach, CA.
Ruby, C. L., & Brigham, J. C. (1997). The usefulness of the criteria-based content analysis
technique in distinguishing between truthful and fabricated allegations. Psychology,
Public Policy, and Law, 3, 705-737.
Ruby, C. L., & Brigham, J. C. (1998). Can Criteria-Based Content Analysis distinguish
between true and false statements of African-American speakers? Law and Human
Behavior, 22, 369-388.
Rude, S. S., Gortner, E. M., & Pennebaker, J. W. (2004). Language use of depressed and
depression-vulnerable college students. Cognition & Emotion, 18, 1121-1133.
Salhany, R. E. (1991). Cross examination: The art of the advocate (Rev. Ed.). Toronto, ON:
Butterworths.
Santtila, P., Alkiora, P., Ekholm, M., & Niemi, P. (1999). False confession to robbery: The
rules of suggestibility, anxiety, memory disturbance and withdrawal symptoms. Journal
of Forensic Psychiatry, 10, 399-415.
Santtila, P., Roppola, H., Runtti, M., & Niemi, P. (2000). Assessment of child witness
statements using criteria-based content analysis (CBCA): The effects of age, verbal
ability, and interviewer’s emotional style. Psychology, Crime, & Law, 6, 159-179.
Sapir, A. (1987). The LSI course on scientific content analysis (SCAN). Phoenix, AZ:
Laboratory for Scientific Interrogation.
Sapir, A. (1995). The VIEW guidebook: Verbal inquiry – The effective witness. Phoenix, AZ:
Laboratory of Scientific Interrogation.
Sapir, A. (2000). The LSI course on scientific content analysis (SCAN). Phoenix, AZ:
Laboratory for Scientific Interrogation.
Schooler, J. W., Gerhard, D., & Loftus, E. F. (1986). Qualities of the unreal. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 12, 171-181.
Scott, W. (1810). Marmion: A tale of Flodden Field, a romance in six cantos. Edinburgh,
Scotland: Archibald Constable and Company.
Seelau, S. M., & Wells, G. L. (1995). Applied eyewitness research: The other mission. Law
and Human Behavior, 19, 317–322.
Semmler, C. A., & Brewer, N. (2002). Effects of mood and emotion on juror processing and
judgments. Behavioral Sciences and the Law, 20, 423-436.
Shapiro, D. (1989). Psychotherapy of neurotic character. New York, NY: Basic Books.
Shearer, R. A. (1999, May/June). Statement analysis: SCAN or scam? Skeptical Inquirer.
Sheridan, L. P., & Blaauw, E. (2004). Characteristics of false stalking reports. Criminal
Justice and Behavior, 31, 55-72.
Shuy, R. W. (1998). The language of confession, interrogation, and deception. Thousand
Oaks, CA: Sage Publications.
Singleton, D. (2002). Helping a jury understand witness deception. In J. E. Alatis, H. E.
Hamilton, & A. H. Tan (Eds.), Round table on language and linguistics (pp. 176-189).
Washington, DC: Georgetown University Press.
Sjöberg, R. L. (1997). False allegations of satanic abuse: Case studies from the witch panic in
Rättvik 1670-71. European Child & Adolescent Psychiatry, 6, 219-226.
Sjöberg, R. L. (2002). False claims of victimization: A historical illustration of a
contemporary problem. Nordic Journal of Psychiatry, 56, 132-136.
S. L. B. v. G. A. (2004), ABCA 366.
Smith, N. (2001). Reading between the lines: An evaluation of the scientific content analysis
technique (SCAN). Police Research Series paper 135. London: UK Home Office,
Research, Development and Statistics Directorate.
Sornig, K. (1981). Lexical innovation: A study of slang, colloquialisms and casual speech. In
H. Parret & J. Verschueren (Eds.), Pragmatics & beyond (Vol. II). Amsterdam, The
Netherlands: John Benjamins Publishing Company.
Spencer, J., & Flin, R. (1990). The evidence of children: The law and the psychology. London,
UK: Blackstone.
Sporer, S. L. (1997). The less travelled road to truth: Verbal cues in deception detection in
accounts of fabricated and self-experienced events. Applied Cognitive Psychology, 11,
373-397.
Sporer, S. L. (2004). Reality monitoring and the detection of deception. In P. A. Granhag &
L. Strömwall (Eds.), The detection of deception in forensic contexts (pp. 64–102).
Cambridge, UK: Cambridge University Press.
Sporer, S. L., & Hamilton, S. C. (1996). Should I believe this? Reality monitoring of invented
and self-experienced events from early and late teenage years. Poster presented at the
NATO Advanced Study Institute. Port de Bourgenay, France, June 1996.
Sporer, S. L., & Küpper, B. (1995). Realitätsüberwachung und die Beurteilung des
Wahrheitsgehaltes von Erzählungen: Eine experimentelle Studie [Reality monitoring and
the judgment of credibility of stories: an experimental study]. Zeitschrift für
Sozialpsychologie, 26, 173-193.
Sporer, S. L., & Schwandt, B. (2006). Paraverbal indicators of deception: A meta-analytic
synthesis. Applied Cognitive Psychology, 20, 421–446.
Sporer, S. L., & Sharman, S. J. (2006). Should I believe this? Reality monitoring of accounts
of self-experienced and invented recent and distant autobiographical events. Applied
Cognitive Psychology, 20, 837-854.
Stein, N. L., & Glenn, C. G. (1979). An analysis of story comprehension in elementary school
children. In R. O. Freedle (Ed.), Discourse processing: Multidisciplinary perspectives
(pp. 53-120). New York, NY: Ablex Publishing.
Steller, M. (1989). Recent developments in statement analysis. In J. C. Yuille (Ed.),
Credibility assessment (pp. 135-154). Dordrecht, The Netherlands: Kluwer Academic
Publishers.
Steller, M., & Köhnken, G. (1989). Criteria-based statement analysis. In D. C. Raskin (Ed.),
Psychological methods in criminal investigation and evidence (pp. 217–245). New York:
Springer.
Steller, M., Wellershaus, P., & Wolf, T. (1988). Empirical validation of criteria-based
content analysis. Paper presented at the NATO Advanced Study Institute on Credibility
Assessment, Maratea, Italy.
Stern, W. (1902). Zur Psychologie der Aussage [On the psychology of testimony]. Zeitschrift
für die gesamte Strafrechtswissenschaft, 22, 315-370.
Stern, W. (1903). Beiträge zur Psychologie der Aussage [Contributions to the psychology of
testimony]. Leipzig, DE: Verlag Barth.
Stern, W. (1904). Die Aussage als geistige Leistung und als Verhörsprodukt [The statement
as a mental achievement and product of interrogation]. Beiträge zur Psychologie der
Aussage, 3, 269-415.
Stern, W. (1910). Abstracts of lectures in the psychology of testimony and on the study of
individuality. American Journal of Psychology, 21, 270-282.
Strömwall, L. A., Bengtsson, L., Leander, L., & Granhag, P. A. (2004). Assessing children’s
statements: The impact of a repeated experience on CBCA and RM ratings. Applied
Cognitive Psychology, 18, 653-668.
Strömwall, L. A., & Granhag, P. A. (2003a). How to detect deception? Arresting the beliefs
of police officers, prosecutors and judges. Psychology, Crime & Law, 9, 19-36.
Strömwall, L. A., & Granhag, P. A. (2003b). Affecting the perception of verbal cues to
deception. Applied Cognitive Psychology, 17, 35-49.
Strömwall, L. A., & Granhag, P. A. (2005). Children’s repeated lies and truths: Effects on
adults’ judgments and reality monitoring scores. Psychiatry, Psychology and Law, 12,
345-356.
Strömwall, L. A., Granhag, P. A., & Landström, S. (2007). Children’s prepared and
unprepared lies: Can adults see through their strategies? Applied Cognitive Psychology,
21, 457-471.
Stuesser, L. (1993). An introduction to advocacy. Sydney, AU: The Law Book Company.
Suckle-Nelson, J. A., Colwell, K., Hiscock-Anisman, C., Florence, S., Youschak, K. E., &
Duarte, A. (2010). Assessment Criteria Indicative of Deception (ACID): Replication and
gender differences. The Open Criminology Journal, 3, 23-30.
Suengas, A. G., & Johnson, M. K. (1988). Qualitative effects of rehearsal on memories for
perceived and imagined complex events. Journal of Experimental Psychology: General,
117, 377-389.
ten Brinke, L., & Porter, S. (in press). Discovering deceit: Applying laboratory and field
research in the search for truthful and deceptive behaviour. In B. S. Cooper, D. Griesel, &
M. Ternes (Eds.), Applied issues in investigative interviewing, eyewitness memory, and
credibility assessment. New York, NY: Springer.
Terr, L. C. (1994). Unchained memories: True stories of traumatic memories, lost and found.
New York, NY: Basic Books.
Thoennes, N., & Tjaden, P. G. (1990). The extent, nature, and validity of sexual abuse
allegations in custody/visitation disputes. Child Abuse & Neglect, 14, 151-163.
Toma, C. L., & Hancock, J. T. (2010). Reading between the lines: Linguistic cues to deception
in online dating profiles. Proceedings of the ACM conference on Computer-Supported
Cooperative Work (CSCW 2009), 5-8.
Toma, C. L., & Hancock, J. T. (in press). What lies beneath: The linguistic traces of deception
in online dating profiles. Journal of Communication.
Trankell, A. (1972). Reliability of evidence. Stockholm, Sweden: Beckmans.
Trovillo, P. V. (1939). A history of lie detection. Journal of Criminal Law & Criminology,
29, 848-881.
Tuckey, M. R., & Brewer, N. (2003). How schemas affect eyewitness memory over repeated
retrieval attempts. Applied Cognitive Psychology, 17, 785-800.
Turtle, J. W., & Yuille, J. C. (1994). Lost but not forgotten details: Repeated eyewitness
recall leads to reminiscence but not hypermnesia. Journal of Applied Psychology, 79,
260–271.
Tye, M. C., Amato, S. L., Honts, C. R., Kevitt, M. K., & Peters, D. (1999). The willingness of
children to lie and the assessment of credibility in an ecologically relevant laboratory
setting. Applied Developmental Science, 3, 92-109.
Undeutsch, U. (1967). Beurteilung der Glaubhaftigkeit von Aussagen. In U. Undeutsch (Ed.),
Handbuch der Psychologie Vol. 11: Forensische Psychologie (pp. 26-181). Göttingen,
DE: Hogrefe.
Undeutsch, U. (1982). Statement reality analysis. In A. Trankell (Ed.), Reconstructing the
past: The role of psychologists in criminal trials (pp. 27-56). Deventer, The Netherlands: Kluwer.
Undeutsch, U. (1984). Courtroom evaluation of eyewitness testimony. International Review
of Applied Psychology, 33, 51-67.
Undeutsch, U. (1989). The development of Statement Reality Analysis. In J. C. Yuille (Ed.),
Credibility assessment (pp. 101-119). Dordrecht, The Netherlands: Kluwer Academic
Publishers.
Vorauer, J. D., & Ross, M. (1999). Self-awareness and feeling transparent: Failing to
suppress one’s self. Journal of Experimental Social Psychology, 35, 415-440.
Vrij, A. (1994). The impact of information and setting on detection of deception by police
detectives. Journal of Nonverbal Behavior, 18, 117-136.
Vrij, A. (2002a). Telling and detecting lies. In N. Brace & H. L. Westcott (Eds.), Applying
psychology (pp. 179-241). Milton Keynes: Open University.
Vrij, A. (2002b). Deception in children: A literature review and implications for children’s
testimony. In H. L. Westcott, G. M. Davies, & R. H. C. Bull (Eds.), Children’s testimony:
A handbook of psychological research and forensic practice (pp. 175-194). New York,
NY: John Wiley & Sons.
Vrij, A. (2004). Why professionals fail to catch liars and how they can improve. Legal and
Criminological Psychology, 9, 159-181.
Vrij, A. (2005). Criteria-based content analysis: A qualitative review of the first 37 studies.
Psychology, Public Policy, and Law, 11, 3–41.
Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities. Chichester, UK: John
Wiley & Sons Ltd.
Vrij, A., & Akehurst, L. (1998). Verbal communication and credibility: Statement Validity
Assessment. In A. Memon, A. Vrij, & R. Bull (Eds.), Psychology and law: Truthfulness,
accuracy and credibility (pp. 3-26). Maidenhead, UK: McGraw-Hill.
Vrij, A., Akehurst, L., Soukara, S., & Bull, R. (2002). Will the truth come out? The effect of
deception, age, status, coaching, and social skills on CBCA scores. Law and Human
Behavior, 26, 261-283.
Vrij, A., Akehurst, L., Soukara, S., & Bull, R. (2004a). Detecting deceit via analyses of verbal
and nonverbal behaviour in children and adults. Human Communication Research, 30, 8-
41.
Vrij, A., Akehurst, L., Soukara, S., & Bull, R. (2004b). Let me inform you how to tell a
convincing story: CBCA and reality monitoring scores as a function of age, coaching,
and deception. Canadian Journal of Behavioural Science, 36, 113-126.
Vrij, A., Edward, K., & Bull, R. (2001a). Police officers’ ability to detect deceit: The benefit
of indirect deception detection measures. Legal and Criminological Psychology, 6, 185-
196.
Vrij, A., Edward, K., & Bull, R. (2001b). People’s insight into their own behaviour and
speech content while lying. British Journal of Psychology, 92, 373-389.
Vrij, A., Edward, K., & Bull, R. (2001c). Stereotypical verbal and nonverbal responses while
deceiving others. Personality and Social Psychology Bulletin, 27, 899-909.
Vrij, A., Edward, K., Roberts, K., & Bull, R. (2000). Detecting deceit via analysis of verbal
and nonverbal behavior. Journal of Nonverbal Behavior, 24, 239-263.
Vrij, A., Evans, H., Akehurst, L., & Mann, S. (2004). Rapid judgements in assessing verbal
and nonverbal cues: Their potential for deception researchers and lie detection. Applied
Cognitive Psychology, 18, 283-296.
Vrij, A., Fisher, R., Mann, S., & Leal, S. (2006). Detecting deception by manipulating
cognitive load. Trends in Cognitive Sciences, 10, 141–142.
Vrij, A., Granhag, P. A., Mann, S., & Leal, S. (2011). Outsmarting the liars: Towards a
cognitive lie detection approach. Current Directions in Psychological Science, 20, 28-32.
Vrij, A., Granhag, P. A., & Porter, S. (2010). Pitfalls and opportunities in nonverbal and
verbal lie detection. Psychological Science in the Public Interest, 11, 89-121.
Vrij, A., Harden, F., Terry, J., Edward, K., & Bull, R. (2001). The influence of personal
characteristics, stakes and lie complexity on the accuracy and confidence to detect deceit.
In R. Roesch, R. R. Corrado, & R. J. Dempster (Eds.), Psychology in the courts:
International advances in knowledge (pp. 289-304). London, UK: Routledge.
Vrij, A., & Heaven, S. (1999). Vocal and verbal indicators of deception as a function of lie
complexity. Psychology, Crime & Law, 5, 203-215.
Vrij, A., Kneller, W., & Mann, S. (2000). The effect of informing liars about Criteria-Based
Content Analysis on their ability to deceive CBCA raters. Legal and Criminological
Psychology, 5, 57-70.
Vrij, A., Leal, S., Mann, S., & Granhag, P. A. (2011). A comparison between lying about
intentions and past activities: Verbal cues and detection accuracy. Applied Cognitive
Psychology, 25, 212-218.
Vrij, A., & Mann, S. (2001a). Telling and detecting lies in a high-stake situation: The case of
a convicted murderer. Applied Cognitive Psychology, 15, 187-203.
Vrij, A., & Mann, S. (2001b). Who killed my relative? Police officers’ ability to detect real-
life high-stake lies. Psychology, Crime, and Law, 7, 119-132.
Vrij, A., & Mann, S. (2004). Detecting deception: The benefit of looking at a combination of
behavioral, auditory and speech content related cues in a systematic manner. Group
Decision and Negotiation, 13, 61-79.
Vrij, A., & Mann, S. (2006). Criteria-Based Content Analysis: An empirical test of its
underlying processes. Psychology, Crime, & Law, 12, 337–349.
Vrij, A., Mann, S. A., Fisher, R. P., Leal, S., Milne, R., & Bull, R. (2008). Increasing
cognitive load to facilitate lie detection: The benefit of recalling an event in reverse order.
Law and Human Behavior, 32, 253-265.
Vrij, A., Mann, S., Kristen, S., & Fisher, R. P. (2007). Cues to deception and ability to detect
lies as a function of police interview styles. Law and Human Behavior, 31, 499-518.
Vrij, A., & Semin, G. R. (1996). Lie experts' beliefs about nonverbal indicators of deception.
Journal of Nonverbal Behavior, 20, 65-80.
Wagenaar, W. A., van Koppen, P. J., & Crombag, H. F. M. (1993). Anchored narratives: The
psychology of criminal evidence. New York, NY: St. Martin’s Press.
Wellman, F. L. (1986). The art of cross-examination: With the cross-examinations of
important witnesses in some celebrated cases. New York, NY: Dorset Press.
Wells, G. L., & Loftus, E. F. (1991). Commentary: Is this child fabricating? Reactions to a
new assessment technique. In J. Doris (Ed.), The suggestibility of children's recollections
(pp. 168-171). Washington, DC: American Psychological Association.
Wells, G. L., Small, M., Penrod, S. D., Malpass, R. S., Fulero, S. M., & Brimacombe, C. A.
E. (1998). Eyewitness identification procedures: Recommendations for lineups and
photospreads. Law and Human Behavior, 22, 603–647.
Westcott, H. L., Davies, G. M., & Bull, R. H. C. (2002). Children’s testimony: A handbook of
psychological research and forensic practice. West Sussex, England: John Wiley &
Sons.
Wigmore, J. H. (1909). Professor Münsterberg and the psychology of testimony: Being a
report of the case Cokestone v. Münsterberg. Illinois Law Review, 3, 399-455.
Wilde, O. (1985). The importance of being earnest and other plays (Signet Classic). New
York, NY: Penguin Group. (Original work written c. 1895).
Winkel, F. W., Vrij, A., Koppelaar, L., & Van der Steen, J. (1991). Reducing secondary
victimization risks and skilled police intervention: Enhancing the quality of police rape
victim encounters through training programmes. Journal of Police and Criminal
Psychology, 7, 2-11.
Woodworth, M., Porter, S., ten Brinke, L., Doucette, N. L., Peace, K. A., & Campbell, M. A.
(2009). A comparison of memory for homicide, non-homicidal violence, and positive life
experiences. International Journal of Law and Psychiatry, 32, 329-334.
Woolnough, P. S., & MacLeod, M. D. (2001). Watching the birdie watching you: Eyewitness
memory for actions using CCTV recordings of actual crimes. Applied Cognitive
Psychology, 15, 395-411.
Wrightsman, L. S., & Porter, S. (2006). Forensic psychology (1st Canadian Ed.). Toronto,
ON: Thomson Nelson.
Yarmey, A. D. (1984). Age as a factor in eyewitness memory. In G. L. Wells & E. F. Loftus
(Eds.), Eyewitness testimony: Psychological perspectives (pp. 142-154). Cambridge, UK:
Cambridge University Press.
Yuille, J. C. (1988). The systematic assessment of children's testimony. Canadian
Psychology, 29, 247-262.
Yuille, J. C., & Cutshall, J. (1989). Analyses of the statements of victims, witnesses and
suspects. In J. C. Yuille (Ed.), Credibility assessment (pp. 175-195). Dordrecht, The
Netherlands: Kluwer Academic Publishers.
Yuille, J. C., & Farr, V. (1987). Statement validity analysis: Systematic approach to the
assessment of children’s allegations of sexual abuse. British Columbia Psychology, Fall,
19-27.
Yuille, J. C., Tymofievich, M., & Marxsen, D. (1995). The nature of allegations of child
sexual abuse. In T. Ney (Ed.), True and false allegations of child sexual abuse:
Assessment and case management (pp. 21-46). Philadelphia, PA: Brunner/Mazel.
Zaparniuk, J., Yuille, J. C., & Taylor, S. (1995). Assessing the credibility of true and false
statements. International Journal of Law and Psychiatry, 18, 343-352.
Zhou, L., Burgoon, J. K., Nunamaker, J. F., Jr., & Twitchell, D. (2004). Automating
linguistics-based cues for detecting deception in text-based asynchronous computer-
mediated communication: An empirical investigation. Group Decision and Negotiation, 13,
81-106.
Zhou, L., Twitchell, D., Burgoon, J., Qin, T. & Nunamaker, J. F., Jr. (2004). A comparison of
classification methods for predicting deception in computer mediated communication.
Journal of Management Information Systems, 20, 139-165.
Zhou, L., & Zhang, D. (2008). Following linguistic footprints: Automatic deception detection
in online communication. Communications of the ACM, 51, 119-122.
Zuckerman, M., & Driver, R. E. (1985). Telling lies: Verbal and nonverbal correlates of
deception. In A. W. Siegman & S. Feldstein (Eds.), Multichannel integrations of
nonverbal behavior (pp. 129-147). Hillsdale, NJ: Erlbaum.
