
The Clinical Neuropsychologist, 24: 701–724, 2010
http://www.psypress.com/tcn
ISSN: 1385-4046 print/1744-4144 online
DOI: 10.1080/13854040903482863

UPDATED META-ANALYSIS OF THE MMPI-2 SYMPTOM VALIDITY SCALE (FBS): VERIFIED UTILITY IN FORENSIC PRACTICE

Nathaniel W. Nelson 1,2, James B. Hoelzle 1, Jerry J. Sweet 3,4, Paul A. Arbisi 1,2, and George J. Demakis 5

1 Minneapolis VA Medical Center, Minneapolis, MN; 2 Department of Psychiatry, University of Minnesota, Minneapolis, MN; 3 Northshore University Health System, Evanston, IL; 4 University of Chicago, Pritzker School of Medicine, Chicago, IL; and 5 University of North Carolina Charlotte, NC, USA

Clinical research interest in the symptom reporting validity scale currently known as the
Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity Scale
(FBS) has continued to be strong, with multiple new publications annually in peer-reviewed
journals that publish psychological and neuropsychological assessment research. Related to
this growth in relevant literature, the present study was conducted to update the Nelson,
Sweet, and Demakis (2006b) FBS meta-analysis. A total of 83 FBS studies (43 new
studies) were identified, and 32 (38.5%) met inclusion criteria. Analyses were conducted
on a pooled sample of 2218 over-reporting and 3123 comparison participants. Large
omnibus effect sizes were observed for FBS, Obvious-Subtle (O-S), and the Dissimulation
Scale-Revised (Dsr2) scales. Moderate effect sizes were observed for the following scales:
Back Infrequency (Fb), Gough’s F-K, Infrequency (F), Infrequency Psychopathology
(Fp), and Dissimulation (Ds2). Moderator analyses illustrate that, relative to the F-family
scales, FBS exhibited larger effect sizes when (1) effort was known to be insufficient and
(2) the evaluation was conducted in the context of traumatic brain injury. Overall, current results
summarize an extensive literature that continues to support use of FBS in forensic
neuropsychology practice.

Keywords: Minnesota Multiphasic Personality Inventory-2; MMPI-2; Response validity assessment; Forensic neuropsychology.

INTRODUCTION
The MMPI-2 Symptom Validity Scale, formerly entitled the Fake Bad Scale
(FBS; Lees-Haley, English, & Glenn, 1991), was initially developed as a symptom
reporting validity indicator within the personal injury context to simultaneously
evaluate exaggerated post-injury distress and under-reporting of pre-incident
personality problems. Research since the time of the scale’s initial development
has proliferated, with earlier studies suggesting that it may be differentially sensitive

Address correspondence to: Nathaniel W. Nelson, Ph.D., Minneapolis VA Medical Center, Minneapolis, MN 55417, USA. E-mail: nels5363@umn.edu
Supplementary data (Appendices 1 and 2) are published online alongside this article at: www.psypress.com/tcn
Accepted for publication: November 9, 2009.

© 2010 Psychology Press, an imprint of the Taylor & Francis group, an Informa business

to exaggerated somatic symptoms (e.g., Larrabee, 1998) and other emotional
symptoms unrelated to psychotic disturbance. Critics of FBS subsequently faulted
the scale both for its ‘‘narrow focus’’ (Rogers, Sewell, Martin, & Vitacco, 2003) and
potentially high false positive identification of exaggerated psychological symptoms
(Butcher, Arbisi, Atlas, & McNulty, 2003). The latter paper generated a healthy
debate in 2003 and 2004 among clinical psychologists and neuropsychologists
regarding the merits and limitations of the scale (Arbisi & Butcher, 2004; Greve &
Bianchini, 2004; Lees-Haley & Fox, 2004).
To clarify these issues, Nelson, Sweet, and Demakis (2006b) completed a
meta-analytic investigation of FBS studies that permitted evaluation of over-
reporting (not necessarily malingering) and comparison groups. FBS demonstrated
a large overall effect size between over-reporting and comparison groups (d = 0.96).
Results suggested that the scale performed as well as, if not better than, other scales in
detecting exaggerated psychological symptoms. The authors argued that FBS was
especially useful when applied in the context for which it was originally developed
(i.e., personal injury and forensic neuropsychology settings).
In the years that followed publication of Nelson et al. (2006b), the FBS
literature continued to grow and generally supported its use in neuropsychology
settings (cf. Bianchini, Etherton, Greve, Heinly, & Meyers, 2008; Greve, Bianchini,
Love, Brennan, & Heinly, 2006; Wygant et al., 2007). In fact, survey data from
practitioners provided implicit support for FBS, as it was found to be among the
most frequently relied upon validity scales in clinical neuropsychology (Sharland &
Gfeller, 2007). Although FBS was initially developed independently from the
University of Minnesota Press Test Division, the collective body of evidence
supported a 2007 decision to include FBS as one of the standard MMPI-2 validity
scales (see rationale at www.upress.umn.edu/tests/mmpi2_fbs.html). The FBS
literature also supported the recent development of a revised version (FBS-r or
''Symptom Validity Scale''), which is now included in the MMPI-2 Restructured
Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008).
In spite of this accumulated support of FBS within the scholarly literature,
and acknowledgment by the University of Minnesota Press Test Division regarding
the scale’s value, a minority of MMPI-2 researchers, primarily outside of clinical
neuropsychology, remain steadfast in their criticism of FBS (Butcher, Gass,
Cumella, Kally, & Williams, 2008; Williams, Butcher, Gass, Cumella, & Kally,
2009). Interested readers are referred to a detailed rebuttal by Ben-Porath,
Greve, Bianchini, and Kaufmann (2009) who suggested that the analyses and
conclusions of Butcher et al. were based on faulty premises and a selective review of
FBS literature and findings. Relevant to the present meta-analysis, Butcher et al.
also expressed concerns regarding the prior meta-analytic study (Nelson et al.,
2006b). The interested reader can access a separate document (Appendix 1)
containing an extended rebuttal to Butcher et al. at the publisher’s website for The
Clinical Neuropsychologist (found within http://www.informaworld.com/), where it
is located as a separate file next to the online PDF of this article.
The current study again focuses on empirical evidence by providing an
updated meta-analytic examination of the FBS literature to inform clinical and
forensic application of the scale. Specifically, the current study examines: (1) whether
more recent and cumulative FBS effect sizes are stable relative to those reported in
the Nelson et al. (2006b) meta-analysis, and (2) moderator variables that may
impact FBS effect size magnitudes and those of other validity scales. We believed that
the latter comparisons, which had not been investigated in the 2006 meta-analysis,
could prove useful in understanding the contrasting circumstances and conditions in
which FBS and other validity scales operate.

METHOD
Selection and description of studies
Meta-analysis is a statistical strategy of summarizing results of multiple
samples or studies to produce an overall effect size estimate (Lipsey & Wilson, 2001;
Rosenthal, 1994; Schmidt & Hunter, 2003). Meta-analysis allows the researcher to
address specific hypotheses by ‘‘synthesizing the results of available studies’’
(Wilkinson & the Task Force on Statistical Inference, 1999, p. 594). Methods
employed to identify relevant studies directly impact the outcome of meta-analytic
results. For example, divergent findings from meta-analyses (e.g., Rogers et al.,
2003; Nelson et al., 2006b) may, in part, reflect different levels of restrictiveness used
in developing inclusion and exclusion criteria. Application of ‘‘relaxed criteria’’ may
identify a greater number of studies from which to draw conclusions, but is limited
in that the approach may introduce errors and biases within the meta-analysis based
on less rigorous methodology (Lipsey & Wilson, 2001, p. 18). Application of
‘‘restrictive criteria’’ results in the inclusion of only the most methodologically
sound studies, even if it restricts the extent to which findings may generalize. As in
the Nelson et al. (2006b) study, we incorporated restrictive criteria to reduce
confounds in determining effect size differences between over-reporting and com-
parison groups across MMPI-2 validity scales. We reasoned that since FBS was
developed specifically for forensic use, application of restrictive criteria would
optimally demonstrate the scale’s ability to discriminate between over-reporting and
comparison groups, even if the approach may limit conclusions regarding FBS
utility outside of the forensic context (e.g., routine clinical samples).
Multiple methods were employed to identify potentially relevant FBS studies
published since Nelson et al. (2006b). The primary method of identifying relevant
studies was to search multiple online databases (PsycINFO, Medline, Dissertation
Abstracts International), using search terms such as ‘‘MMPI-2 Fake Bad Scale’’,
‘‘FBS’’, ‘‘malingering’’, ‘‘symptom exaggeration’’, and ‘‘response bias’’. References
from previously published papers and abstracts from national conferences—
American Academy of Clinical Neuropsychology; Division 40 (Clinical
Neuropsychology) of the American Psychological Association; International
Neuropsychological Society; National Academy of Neuropsychology—that inves-
tigated FBS were also reviewed. When published works, dissertations, or abstracts
did not disclose sufficient information to determine whether the study met inclusion
criteria, authors were contacted in an attempt to obtain additional information.
Care was taken to ensure that unpublished dissertations and conference presenta-
tions were not redundant with studies subsequently published.
Figure 1 clearly illustrates that FBS literature has proliferated since its
inception. There has been an especially steep increase in frequency of studies most
recently. The current meta-analysis identified a total of 83 FBS studies
(1991–present), 43 of which were not previously available to Nelson and colleagues.
These 43 new studies constitute 51.8% of the identified FBS literature. Of the 83
studies, 32 met restrictive inclusion criteria; relative to the 19 studies included in the
2006 meta-analysis, this is a net increase of 13 studies (40.6% of those currently
included are new). Table 1 reports the studies considered
relevant to the current meta-analysis, as determined by the same a priori criteria
employed in the previous meta-analysis. Table 2 conveys MMPI-2 validity scale
effect sizes from each study included. Only those scales that had been examined in
the previous study were included. Blanchard, McGrath, Pogge, and Khadivi (2003)
was the only pre-2006 study meeting inclusion criteria that had not been identified
previously.

Figure 1 Frequency of Symptom Validity Scale (FBS) studies by year.
Methods of determining effect size differences for over-reporting and
comparison groups, and for demographic variables, were consistent with method-
ology used previously (see Nelson et al., 2006b, for details). Across the 32 studies
included, the total number of participants was 5341, with 2218 over-reporting
participants and 3123 comparison group participants. Each study contributed a
single effect size to the omnibus effect size generated for each MMPI-2 validity scale.
In the event that an investigation was comprised of multiple groups, various
iterations of over-reporting and comparison group differences were generated.
These multiple effect sizes were then averaged across iterations to identify a single
contributing effect size for the respective study. Notably, in this context,
over-reporting and comparison groups were not mutually exclusive, as some groups
were included as both over-reporting and comparison groups for different weighted
effect sizes within the same study. The Bianchini et al. (2008) study is illustrative.
This study examined MMPI-2 validity scale differences among groups defined as:
(1) pain patients with definite malingered-pain-related disability (MPRD; Bianchini,
Greve, & Glynn, 2005), (2) controls or normals instructed to simulate pain,
(3) patients presenting with incentive only who were not malingering, and
(4) patients without incentive who were not malingering. In this study, the
‘‘incentive only’’ non-malingering group was initially used as a comparison group to
the simulation and definite MPRD groups, and also as an ''over-reporting'' group
when compared with the non-malingering ''no incentive'' group. Consistent with the
methodology incorporated in the initial meta-analysis, rather than delete such studies
because of dependent effect sizes, their effect sizes were included on the assumption
that important information relevant to FBS differences may be obtained.
Table 1 Studies examining the MMPI-2 Symptom Validity Scale (FBS)

Included studies:
Arbisi et al. (2006)þ; Bagby et al. (2000)*; Bianchini et al. (2008)þ; Binder et al. (2006)þ;
Blanchard et al. (2003)þ; Charles (1999)*; Cramer (1995)*; Crawford et al. (2006)þ;
Dearth et al. (2005)*; Dukarm (2006)þ; Efendov et al. (2008)þ; Elhai et al. (2001)*;
Gervais et al. (2007a)þ; Greiffenstein et al. (2002)*; Greiffenstein et al. (2004)*;
Greve et al. (2006)*; Guez et al. (2005)*; Iverson et al. (1995, 2002)*; Larrabee (2003a)*;
Lees-Haley (1992)*; Lees-Haley et al. (1991)*; Martinez et al. (2005)þ; Meyers et al. (2002)*;
Miller & Donders (2001)*; Nelson et al. (2005, 2007b)*; Ross et al. (2004)*; Sellers et al. (2006)þ;
Thomas & Youngjohn (2009)þ; Tsushima & Tsushima (2001)*; Wegman et al. (2005)*;
Whitney et al. (2008)þ; Wygant et al. (2007)þ

Excluded studies (codes refer to the inclusion and exclusion criteria listed below):
Arnold et al. (2005) B3; Bianchini et al. (2006) B3; Boone & Lu (1999) A2, B3; Burandt (2006) B1;
Bury & Bagby (2002) A2; Butcher et al. (2003) A1,2; Crawford (2004) A1,3; Curtiss et al. (2008) B3;
Crespo et al. (2007) (in Spanish); Dean et al. (2008) A2; Demakis et al. (2008) A2;
Downing et al. (2008) A2; Elhai et al. (2000) A3; Eyler et al. (2000) A1,2; Fox et al. (1995) A2, B1;
Gervais et al. (2005) A3; Gervais et al. (2007b) A3, B3; Gervais et al. (2008) A3, B3;
Greiffenstein & Baker (2008) A2, B3; Greve et al. (2008) B3; Grillo et al. (1994) A1,2;
Henry et al. (2006) B3; Henry et al. (2008) A3, B3; Hinojosa (1993) A1; Horwitz et al. (2006) B3;
Larrabee (1998) A2; Larrabee (2003b) A3; Larrabee (2003c) A3; Larrabee (2003d) A3;
Larrabee (2008) A3; Lees-Haley (1997) A2; McCarthy (2004) A2,3; McCarthy & Heilbronner (2005) A2;
Millis et al. (1995) A3; Mitchell (2008) A1,2; Nelson et al. (2006a) A2; Nelson et al. (2007a) A2;
Posthuma & Harper (1998) B2; Putnam et al. (1998) A3; Rawls et al. (2008) A2; Rogers et al. (1995) A2;
Shea (2006) A1; Sisung (2006) B3; Slick et al. (1996) A1; Soetaert et al. (2008) A2;
Staudenmeyer & Phillips (2007) A2; Sweet et al. (2006a) A3; Sweet et al. (2006b) A3;
Van Gaasbeek et al. (2001) A2, B3; Vanderslice-Barr et al. (2008) A1,2; Wygant (2008) A3

A. Inclusion criteria:
1. The study contained appropriate MMPI-2 data (i.e., ns, means, standard deviations) that would
allow computation of effect sizes, or availability of other psychometric data (e.g., t, F, p values) that
would allow for FBS effect size estimations.
2. FBS was reported with adult participants in two or more independent groups. One of these groups
was obtained in a forensic context known to increase the likelihood of over-reporting of symptoms (e.g.,
personal injury, workers’ compensation, disability claims, or a simulation condition designed to mimic a
forensic context, criminal defendant), and there was independent evidence (i.e., not the MMPI-2) that the
level of symptom over-reporting was greater than comparison group(s). Comparison groups were
(a) either evaluated within a context in which the likelihood of over-reporting would not reasonably be
expected (e.g., a normal control group; a clinical group evaluated outside of a forensic setting), or (b) if in
an over-reporting context, there was a likelihood that the comparison group(s) did not over-report
symptoms to the same extent as the over-reporting group.
3. If the same data were included in multiple published studies, it was only represented once. Abstracts
of conference presentations known to have been subsequently published in peer-reviewed journals were
only represented once.
B. Exclusion criteria:
1. Studies involving civil litigants, claimants, or criminal defendants that probably contained an
unknown number of malingerers, which were compared only to other groups of exclusively malingering
civil litigants, claimants, or criminal defendants (thus, precluding the calculation of an over-reporting
effect size), and for which there were no means of determining a differential degree of over-reporting
between groups.
2. Comparison groups evaluated within a context in which under-reporting of symptoms (e.g., per-
sonnel selection, custody evaluation) would be expected.
3. Study designs that employed FBS as an a priori measure of symptom validity to assign group
membership, and study designs that selected the comparison group on the basis of significantly
elevated MMPI-2 scores.
Table Notes:
Every study in the Included studies list met all inclusion criteria, did not meet any of the exclusion
criteria, and contributed an independent mean effect size to the current meta-analysis. Every study in the
Excluded studies list either did not meet one or more of the inclusion criteria or met one or more of the
exclusion criteria.
Studies in bold represent studies published since the previous meta-analysis.
*Denotes studies previously included in Nelson et al. (2006b).
þDenotes studies examined subsequent to Nelson et al. (2006b).

Means for age, education, and gender (% male) were summed across
over-reporting and comparison groups. Studies that did not report these data, or
whose report of demographics was unclear, did not contribute to these variables.
The mean age for the overall over-reporting group was 37.0 (SD = 7.2) years, and
the mean age of the overall comparison group was 39.0 (SD = 6.2) years. The mean level
of education for the over-reporting group was 12.8 (SD = 1.1) years, and 13.5
(SD = 1.1) years for the comparison group. The mean percentage of male participants for
the over-reporting group was 56.2% (SD = 25.2), and 57.8% (SD = 23.1) for the
comparison group. These demographic variables were not significantly different
between over-reporting and comparison groups.
Table 2 Study-specific Overall Weighted Mean MMPI-2 Validity Scale Effect Sizes

Study and samples FBS L F K Fb Fp F-K O-S Ds2 Dsr2

Arbisi et al. (2006)þ (Veteran claimants: simulating PTSD versus honest) 0.51 0.51 1.52 0.37 1.22 1.96
Bagby et al. (2000)* (Depressed patients, depressed simulators, 1.43 2.12 2.37# 0.97 1.67 2.06
non-depressed patients)
Bianchini et al. (2008)þ (Pain: no incentive, incentive only, Definite 1.88 0.36 1.21 0.50 1.44 0.79 1.06 1.11 1.47
MPRD, simulators)
Binder et al. (2006)þ (MCS claimants, 1.60 0.47 0.35 0.01
nonepileptic and epileptic comparison groups)
Blanchard et al. (2003)þ (Psychiatric patients, student psychiatric simulators, 0.98 0.06 2.12 0.91 1.52 3.04# 3.39# 1.64 2.29 1.91
student forensic simulators)
Charles (1999)* (Litigants: honest, equivocal, malingering) 1.01 0.23 1.23 0.40 1.14 0.84 1.06 0.93
Cramer (1995)* (Honest students, psychotic simulator students, 0.98 1.22 1.28 1.11 1.01
neurotic simulator students)
Crawford et al. (2006)þ (Depression: simulating students versus 1.11 0.26 0.48 0.06 0.56 0.29 0.30 0.46
honest patients)
Dearth et al. (2005)* (Head-injured, community simulators, controls) 1.38 1.92 1.32 1.19 1.60
Dukarm (2006)þ (TBI simulators versus controls) 1.77
Elhai et al. (2001)* (Fake PTSD, genuine PTSD) 0.46 1.10 1.41 1.37 0.86 1.03
Efendov et al. (2008)þ (PTSD claimants vs Remitted) 0.72 0.40 0.58 0.13

Gervais et al. (2007)þ (Claimants: good vs. poor effort) 0.60 0.60 0.96 0.55
Greiffenstein et al. (2002)* (Litigant atypical mild head injury, 1.13 0.13 0.14
litigant moderate-severe head injury)
Greiffenstein et al. (2004)* (Litigant severe PTS, litigant 1.08 0.21 0.15 0.03
improbable PTS, non-litigant PTS)
Greve et al. (2006)* (TBI: no incentive, incentive only, 0.96 0.14 0.80 0.52 0.89 0.52 0.75 0.91 0.74
suspect MND, likely MND, probable MND, definite MND)
Guez et al. (2005)* (Whiplash litigant, control) 3.52# 0.88# 1.01 0.38


Iverson et al. (1995, 2002)* (Inmate simulators, controls, 1.02 0.14 0.87 0.41 1.08 1.80
medical patients, substance abusers)
Larrabee (2003a)* (Definite MND, closed head injury litigants) 1.80 0.02 0.53 0.20 0.53 0.39 0.17 0.65 0.27
Lees-Haley (1992)* (Litigant pseudo-PTSD, litigant controls) 1.71 2.03 2.43 3.02#
Lees-Haley et al. (1991)* (Litigant malingerer, litigant non-malingerer, 1.71
MVA simulator, toxic exposure simulator, stress simulator)
Martinez et al. (2005)þ (Litigating malingerers versus non-lit patients) 1.64
Meyers et al. (2002)* (Litigant and 0.62 0.75 0.37 0.72 0.82 0.90
non-litigant cognitive complaint groups)
Miller & Donders (2001)* (Litigant and 0.47
non-litigant head injury groups)
Nelson et al. (2005, 2007b)* (Litigant and 0.63 0.52 0.23 0.27 0.01 0.28
non-litigant cognitive complaint groups)
Ross et al. (2004)* (Litigant and non-litigant head injury groups) 2.84 0.24 0.80 0.25 0.52
Sellers et al. (2006)þ (Simulated cognitive and 2.96#
psych symptoms versus student controls)
Thomas & Youngjohn (2009)þ (TBI, SVT pass versus fail) 1.03 0.09 0.79 0.23 0.44 0.39
Tsushima & Tsushima (2001)* (Litigant and non-litigant patients) 0.60 0.09 0.11 0.16 0.03
Wegman et al. (2005)* (Litigant malingerer and honest litigants) 1.75 0.76 0.31
Whitney et al. (2008)þ (VA outpatients: Good versus poor effort) 0.22 0.49 0.63 0.57
Wygant et al. (2007)þ (Litigants: Good versus poor effort) 0.94 1.60

Note. MCS = multiple chemical sensitivity; MND = malingered neurocognitive dysfunction (Slick et al., 1999); MPRD = malingered pain-related dysfunction;
PTSD = post-traumatic stress disorder; PTS = post-traumatic stress; TBI = traumatic brain injury; *Denotes studies previously included in Nelson et al. (2006b);
þDenotes studies examined subsequent to Nelson et al. (2006b); #Denotes statistical outlier (>2 SD above the effect size population mean).

Select subsamples were excluded from analyses when groups were identified as
having similar incentive to exaggerate symptoms (Bianchini et al., 2008; Blanchard
et al., 2003; Dukarm, 2006; Efendov, Sellbom, & Bagby, 2008). For example, in the
Blanchard et al. (2003) study, validity scale comparisons between the ‘‘forensic
feigners’’ and ‘‘psychiatric feigners’’ were not made, as the groups were conceived of
as having similar incentive to exaggerate symptoms. However, differences were
observed for each group relative to the inpatient sample without known financial
incentives. Effect sizes generated from Efendov et al. (2008) were generated between
the PTSD claimant group and the remitted trauma group, but only when the latter
group completed the MMPI-2 under standard instructions (i.e., not after direction
to simulate PTSD, which would have resulted in comparison of two over-reporting
groups). Similarly, for Bianchini et al. (2008), the definite MPRD group was
not compared with the ‘‘simulated pain’’ group, as both are associated with
over-reporting presentations. In the Dukarm (2006) study, personal communication
with the author (via email, 2/19/09) indicated that an unknown proportion of
participants in the ‘‘TBI only’’ group was also seeking financial remuneration
related to their injuries. The group was therefore excluded, and an effect size was
derived only between the ‘‘TBI simulators’’ group and the student control group.
The Wygant et al. (2007) study included both civil claimant and criminal forensic
samples. It was determined that a portion of the civil claimant group was redundant
with data reported elsewhere (Henry, Heilbronner, Mittenberg, & Enders, 2006;
Henry, Heilbronner, Mittenberg, Enders, & Stanczal, 2008; Nelson, Sweet, &
Heilbronner, 2005, 2007b). As such, comparison was only made between the
criminal forensic samples with insufficient effort versus those with adequate effort.

Meta-analytic methods
Meta-analytic methods were consistent with those conducted in 2006; please
refer to Nelson et al. (2006b) for a more elaborate discussion of these methods.
Briefly, in addition to FBS, MMPI-2 validity scales from the prior study were
re-examined. These included traditional validity scales (L, F, K, Fb) and other
commonly interpreted validity scales including Infrequency Psychopathology scale
(Fp; Arbisi & Ben-Porath, 1995), F-K (Gough, 1950), Obvious minus Subtle Scale
(O-S; Greene, 1991; Wiener, 1948), and revisions of the Dissimulation scale (Gough,
1954, 1957) for the MMPI-2 (Ds2; Dsr2). Standardized unbiased mean differences
were calculated between all over-reporting and comparison groups as described by
Lipsey and Wilson (2001, pp. 48–49). This resulted in an overall effect size (d) that
was then corrected, as necessary, to minimize bias related to small sample sizes via
Hedges’ (1981) unbiased effect size estimate. Standard errors of each effect size were
then computed, and the mean standard error within each study was assigned to the
composite validity scale effect sizes for each study. Effect sizes were transformed via
their relative inverse variance weights (Lipsey & Wilson, 2001, p. 36), which ensures
that studies consisting of larger samples have a greater composite effect size
contribution than studies comprised of smaller samples. Mean effect sizes were then
banded by 95% confidence intervals.
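A brief sketch of these computations follows (Python; the two sets of group means, SDs, and ns are hypothetical rather than values taken from any included study), consistent with the small-sample correction and inverse-variance weighting described above:

import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    # Unbiased standardized mean difference and its standard error
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))  # Hedges' small-sample correction
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, se

def weighted_mean_effect(effects):
    # Inverse-variance weighted mean effect size with a 95% confidence interval
    weights = [1 / se**2 for _, se in effects]
    mean_g = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
    se_mean = math.sqrt(1 / sum(weights))
    return mean_g, (mean_g - 1.96 * se_mean, mean_g + 1.96 * se_mean)

# Hypothetical over-reporting vs. comparison summary statistics for two studies
effects = [hedges_g(85.0, 12.0, 40, 72.0, 11.0, 55),
           hedges_g(90.0, 15.0, 25, 75.0, 14.0, 30)]
omnibus_d, ci_95 = weighted_mean_effect(effects)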
Validity scale effect size distributions were inspected to identify outliers that
may have disproportionately influenced scales of interest. Individual validity scale

effect sizes that were two or more standard deviations above the omnibus effect size
mean were excluded. On this basis, FBS effect sizes were not included for Guez,
Brannstrom, Nyberg, Toolanen, and Hildingsson (2005) and Sellers, Byrne, and
Golus (2006), with magnitudes of 3.52 and 2.96, respectively. Table 2 includes
studies according to effect size contributions and demarcates outliers for each of the
other MMPI-2 validity scales.
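The outlier screen just described can be expressed as a short filter (Python with NumPy; the effect size values are hypothetical, chosen only so that one value exceeds the two-standard-deviation cutoff):

import numpy as np

def screen_outliers(d_values, sd_cutoff=2.0):
    # Flag effect sizes two or more SDs above the scale's mean; return retained values and flags
    d = np.asarray(d_values, dtype=float)
    flagged = d >= d.mean() + sd_cutoff * d.std(ddof=1)
    return d[~flagged], flagged

# Hypothetical FBS effect sizes; the 3.52 here would be flagged and excluded
retained, flagged = screen_outliers([0.95, 1.10, 0.80, 3.52, 1.20, 0.70])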
FBS moderator analyses were then conducted. Insufficient cognitive effort
(i.e., known versus unknown) was selected, given evidence in the neuropsychological
literature that FBS may have a differentially stronger relationship with exaggerated
cognitive symptoms and effort performances than other MMPI-2 validity scales
(cf. Nelson et al., 2007a; Slick et al., 1996). Other study moderators included type of
over-reporting group (i.e., litigant/claimant/defendant, simulator, or combined
groups), type of response bias comparison (i.e., over-reporting group versus no
over-reporting group; over-reporting group versus lesser over-reporting group;
over-reporting group versus combined no over-reporting group/lesser over-
reporting group), gender (e.g., Lees-Haley, 1992), and condition associated with
over-reporting (i.e., TBI, post-traumatic stress, chronic pain, mixed, and unclear).
Publication status was also included as a moderator to ensure that the findings of
published versus unpublished works were not meaningfully different (i.e., to address
the ‘‘file drawer problem’’, see Lipsey & Wilson, 2001).
An additional analysis not previously conducted in 2006 was comparison of
FBS and F-family effect size differences according to moderator variables that are
directly relevant to forensic neuropsychology. These comparisons were made to
determine whether FBS and the F-family might have differential effectiveness
depending on the context of evaluation (e.g., forensic neuropsychology; cf. Wygant
et al., 2007) and the condition examined (e.g., somatic/cognitive symptoms versus
post-traumatic stress).

RESULTS
Composite effect sizes
Table 3 shows composite effect sizes for FBS and the remaining MMPI-2
validity scales. Effect sizes greater than or equal to .80 are typically considered to be
‘‘large’’, .50 to .79 are ‘‘medium’’, and .20 to .49 are ‘‘small’’ effect sizes (Cohen,
1992). Under this assumption, FBS yielded a ‘‘large’’ effect size (.95), as did O-S (1.00)
and Dsr2 (1.03). Medium effect sizes were observed for F (.71), Fb (.68), F-K (.69),
Ds2 (.62), and Fp (.51). Small effect sizes were observed for L (.16) and K (.27).

Moderator analyses
FBS variability was significantly greater than expected from sampling error
alone (Q = 160.5, p < .001), confirming the need for moderator analyses (see Table 4).
Three moderators yielded statistically significant (p < .001) between-category
variability: insufficient cognitive effort, gender, and condition associated with
symptom over-reporting. Of particular interest, studies whose over-reporting groups
were known to have demonstrated insufficient effort showed a larger FBS effect size
difference (d = 1.16) than studies that did not assess effort status (d = 0.83).
Publication status was not a statistically significant moderating variable.
Additionally, FBS effect sizes obtained from the studies not previously considered
in the FBS literature were strikingly consistent with those that originally contributed
to the Nelson et al. (2006b) study (previous effect size d = 0.96; current effect size
d = 0.95).

Table 3 Meta-Analytic Data for MMPI-2 Validity Scales

MMPI-2 Validity Scale k d SE lower 95% CI upper 95% CI QT

Dsr2 27 1.03 .09 .86 1.21 28.2


O-S 36 1.00 .08 .85 1.15 13.2
FBS 75 .95 .04 .87 1.03 160.5**
F 65 .71 .04 .63 .79 231.5**
F-K 47 .69 .06 .58 .80 162.1**
Fb 50 .68 .05 .57 .78 106.3**
Ds2 13 .62 .07 .47 .76 142.2**
Fp 41 .51 .05 .41 .61 133.7**
K 38 .27 .07 .40 .14 25.2
L 38 .16 .07 .03 .29 10.9

Note. k = number of effect sizes; QT = total within-scale heterogeneity. **p < .001.

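The heterogeneity statistics used here (total Q, and its partition into within- and between-category components for a moderator) can be sketched as follows (Python with NumPy; the effect sizes, weights, and moderator labels are hypothetical):

import numpy as np

def q_statistic(g, w):
    # Total heterogeneity: Q = sum of w_i * (g_i - weighted mean)^2
    g, w = np.asarray(g, float), np.asarray(w, float)
    return float(np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2))

def moderator_q(g, w, labels):
    # Partition total Q into within-category (QW) and between-category (QB) components
    g, w, labels = np.asarray(g, float), np.asarray(w, float), np.asarray(labels)
    q_total = q_statistic(g, w)
    q_within = sum(q_statistic(g[labels == c], w[labels == c]) for c in np.unique(labels))
    return q_total, q_within, q_total - q_within  # QT, QW, QB

# Hypothetical study-level effect sizes, inverse-variance weights, and a two-level moderator
g = [1.2, 1.1, 0.9, 0.7, 0.8, 1.3]
w = [40.0, 25.0, 30.0, 50.0, 35.0, 20.0]
effort = ["known", "known", "unknown", "unknown", "unknown", "known"]
qt, qw, qb = moderator_q(g, w, effort)  # QB is tested against chi-square with (categories - 1) df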
Variability for F, F-K, Fb, Fp, and Ds2 was also significant (p < .001) and
therefore warranted follow-up moderator analyses. These supplementary data
(Appendix 2) are published online alongside this article at www.psypress.com/tcn.
Known effort status was a significant moderator for F (Q = 5.48, p < .05), F-K
(Q = 25.56, p < .001), Fb (Q = 8.96, p < .01), and Ds2 (Q = 8.70, p < .01). Overall
effect sizes for known effort status on F and F-K were smaller (.59 and .31,
respectively) than those observed in the unknown effort status condition (.79 and
.90, respectively). Fb and Ds2 showed larger effect sizes in the known insufficient
effort group (.88 and 1.60, respectively) relative to the unknown effort status group
(.55 and .57, respectively). Fp did not yield a significant moderating effect for effort
status.
Notably, whereas FBS did not show differences between litigant and
simulating groups, type of over-reporting sample (i.e., litigant/claimant, simulation,
or combined) was a significant moderating variable for F (Q = 77.41, p < .001), F-K
(Q = 34.58, p < .001), Fb (Q = 40.60, p < .001), Fp (Q = 52.35, p < .001), and Ds2
(Q = 101.64, p < .001). Simulating samples uniformly showed larger effect size
differences relative to litigant samples on the latter scales. Gender was a significant
moderating variable for F (Q = 62.18, p < .001), F-K (Q = 22.22, p < .001), Fb
(Q = 18.81, p < .001), and Ds2 (Q = 101.64, p < .001), with larger effect size
differences demonstrated for majority female samples on F, Fb, and Ds2. A
larger effect size was observed on F-K for predominantly male samples, and Fp did
not yield a significant moderating effect for gender.
Condition associated with over-reporting was a significant moderating variable
for F (Q = 29.40, p < .001), F-K (Q = 45.34, p < .001), Fb (Q = 17.45, p < .01),
Fp (Q = 20.75, p < .001), and Ds2 (Q = 16.60, p < .01). Ds2 showed a large effect size
for traumatic brain injury (1.60) relative to post-traumatic stress (1.03) and various
mixed conditions (.47).

Table 4 Fake Bad Scale Moderator Analyses

Confidence Interval

Moderator^a k d Lower 95% Upper 95% QW QB

Cognitive effort^b 16.06***
Known to be insufficient 33 1.16 1.03 1.30 88.27
Unknown 42 .83 .73 .93 56.17
Type of over-reporting sample 2.46
Litigant or Claimant 40 .93 .84 1.03 126.54
Simulator^c 18 .91 .72 1.11 13.20
Combined Litigant and Simulator^c 17 1.15 .89 1.42 18.30
Gender 19.69***
>50% of sample male 48 .79 .69 .90 54.54
<50% of sample male 17 1.11 .97 1.25 81.85
Unclear 10 1.23 1.01 1.46 4.42
Condition associated with over-reporting 22.26***
Traumatic brain injury 26 1.28 1.12 1.44 64.42
Post-traumatic stress 7 .86 .66 1.05 24.02
Chronic pain 6 .85 .58 1.11 12.71
Mixed^d 33 .82 .71 .94 37.09
Unclear 3 1.01 .60 1.42 0.00
Publication Status 1.26
Published 68 .93 .85 1.02 144.26
Unpublished 7 1.07 .85 1.29 14.98
Pre or Post-Nelson et al. (2006b)
Previously Included 56 .96 .87 1.06 119.11 .29
New to Present Study 19 .92 .78 1.06 41.10

Notes. k = number of effect sizes; QW = within-category heterogeneity; QB = between-category heterogeneity.
^a Each moderator comparison is orthogonal; each study's effect size(s) contribute(s) to only one moderator category.
^b For the known category, studies either (1) included a priori cognitive effort measures and/or the Slick et al. (1999) criteria of malingered neurocognitive dysfunction to establish a likelihood of cognitive malingering in one of the two study groups, or (2) reported post-hoc analyses that discriminated over-reporting and comparison groups on the basis of insufficient/sufficient cognitive effort.
^c For the simulator category, subjects were confederates, often students, enrolled in experimental studies and instructed to feign or exaggerate symptoms.
^d Includes individuals who had multiple conditions, and groups in which individuals had different conditions. These included mixed psychiatric conditions (e.g., depression, anxiety), traumatic brain injury, simulated ''neurosis'' and ''psychosis'', neurotoxicity, etc.
***p < .001.

The majority of the F-family demonstrated larger effect sizes
for symptoms of post-traumatic stress (F = 1.05, F-K = 1.23, and Fp = .96).
Whereas publication status was not a significant moderator for FBS, F-K, or
Fb, significantly larger effect sizes were observed for published versus unpublished
papers on Ds2 (.80 versus .30) and on F (.74 versus .36).
Based on inspection of effect size differences among F, F-K, Fb, Fp, and Ds2
since the prior study in 2006, time of publication (i.e., pre- or post-2006) was also
examined as a moderator on these scales. Figure 2 juxtaposes composite MMPI-2
weighted validity scale effect sizes between Nelson et al. (2006b) and the current
updated meta-analysis. This moderating variable was statistically significant for
F (Q = 7.97, p < .01), Fb (Q = 24.41, p < .001), Fp (Q = 15.01, p < .001), and Ds2
(Q = 36.17, p < .001). These results suggest that the sample of studies completed
since the 2006 meta-analysis produced meaningfully different results on these scales
(i.e., larger effect sizes) relative to those that were previously included.

Figure 2 MMPI-2 Validity Scale effect sizes. Error bars reflect 95% confidence intervals.
To examine differential effect sizes of FBS and the F-family in forensic
neuropsychology, two moderator variables were compared across the scales: known
insufficient cognitive effort and traumatic brain injury as the condition associated
with presentation. Figure 3 shows the results of these analyses, which had not been
undertaken in the previous meta-analysis. FBS demonstrated greater effect size
differences between over-reporting and comparison groups when effort was known
to be insufficient (d = 1.16) and in the evaluation of purported traumatic brain
injury (d = 1.28) than any of the F-family scales.

Figure 3 Symptom Validity Scale (FBS) and F-family effect sizes in Effort and TBI groups. k = number of effect sizes. k for the FBS and F-family scales for the above moderators was as follows: for known insufficient effort, k was 33 for FBS, 31 for F, 28 for Fb, 30 for Fp, and 23 for F-K; for traumatic brain injury (TBI), k was 26 for FBS, 23 for F, 20 for Fb, 21 for Fp, and 17 for F-K. Error bars reflect 95% confidence intervals.

DISCUSSION
The present study was conducted to summarize the MMPI-2 FBS literature
from 1991 to the present; more than half of the FBS studies identified appeared
after the Nelson et al. (2006b) meta-analysis. The current FBS composite effect size
is large (d = 0.95) and stable relative to the previous finding in 2006 (d = 0.96). The
cumulative FBS literature suggests that the scale continues to differentiate groups as
well as, and under certain conditions better than, other MMPI-2 validity scales
(including all of the F-family scales). In particular, two factors that are especially
relevant to practicing neuropsychologists, effort status and TBI, substantially
moderated FBS effect size magnitudes. Although scales within the F family at times
also showed moderate to large effect size differences related to effort and TBI, these
were invariably lower than those of FBS.
The revised Dissimulation Scale (Dsr2) and the Obvious-Subtle items (O-S)
were the only other validity scales examined to demonstrate large effect sizes across
studies, and effect sizes for Dsr2 and O-S were slightly larger than FBS. Based on
the limited number of new effect size comparisons and inspection of 95% confidence
intervals associated with these effect sizes (see Figure 2), it is plausible that mean
effect sizes for these scales are less stable than estimates derived for other scales
examined. Only two studies not previously examined (Bianchini et al., 2008;
Blanchard et al., 2003) contributed Dsr2 and O-S effect sizes to the updated
results. Nevertheless, there may be something unique about these scales that might
warrant additional investigation. For example, recent exploratory factor analysis
(Nelson et al., 2007) found that Dsr2 loaded comparably on distinct dimensions that
reflected ‘‘psychotic’’ and ‘‘non-psychotic’’ symptoms. The scale was most strongly
correlated with F (.82), but also showed a moderate correlation with FBS (.49)
and a large correlation with the Response Bias Scale (RBS) (.67) (Gervais,
Ben-Porath, Wygant, & Green, 2007a), a more recently developed measure of
exaggerated cognitive symptoms. The authors concluded that, ‘‘Dsr2 is likely a
more general indicator of response validity, whereas the other psychological validity
scales [i.e., F-family, FBS] appear to reflect more specific validity dimensions’’
(Nelson et al., 2007, p. 447). In other words, it is possible that Dsr2 is uniquely
sensitive to exaggeration of a blend of cognitive, somatic, and psychotic symptoms.
To better understand why O-S might have demonstrated an overall effect size
comparable to FBS, the item content of each scale was inspected. Of the 43 FBS
items, 21 (48.8%) overlap with O-S. In contrast, only 12 of the 60 F items
(20%) overlap with O-S. This suggests that FBS and O-S may evaluate some similar
dimensions of symptom over-reporting relative to the F-family. However, whether
item overlap alone accounts for similar FBS and O-S effect sizes remains unclear
and may warrant follow-up study.
A unique characteristic of the current study relative to the 2006 study is that
moderator analyses were conducted not only for FBS, but also for other validity
scales that demonstrated significant within-scale heterogeneity of variance (F, F-K,
Fb, Fp, and Ds2). These analyses permit greater understanding regarding
comparability of validity scales in specific applications. For example, relative to
the F-family scales, FBS demonstrated greater effect sizes when effort was known to
be insufficient (d = 1.16) and when traumatic brain injury was the condition
associated with over-reporting of symptoms (d = 1.28). These findings demonstrate
that FBS shows relatively greater utility than the F-family as a response validity
measure within the context of common forensic neuropsychology applications.
In general, F, F-K, Fb, Fp, and Ds2 were affected by a greater number of
moderators than FBS. A meaningful difference was observed with regard to the type
of over-reporting sample examined (litigant, simulator, or combined litigant/
simulator). This variable was not a meaningful moderator for FBS, but was
significant for each of the F-family scales and Ds2. Effect sizes were uniformly
larger for the F-family and Ds2 scales when studies included a simulating sample.
Comparison of item content between FBS (e.g., greater proportion of non-psychotic
symptoms) and the F-family (e.g., greater proportion of items reflecting psychotic
symptoms) may account for this. Individuals (often college students) instructed to
feign psychotic symptoms may have a more difficult time doing so in a sophisticated
or realistic fashion than individuals evaluated in the ‘‘real world’’. Alternatively,
findings could suggest that somatic and other non-psychotic symptoms represented
on FBS are more common and familiar to simulators, and less likely to be
recognized as relevant to exaggeration relative to the psychotic items represented on
the F-family scales. This finding also suggests that FBS findings derived from
simulation studies may be more generalizable to the litigation context.
The finding that time of publication (pre- and post-2006) was a significant
moderating variable for the F-family scales and Dsr2, but not for FBS, warrants
elaboration. It suggests that recent studies of a subset of the scales are in some way
fundamentally different from those previously examined. Perhaps among the updated literature are
studies conducted to identify exaggerated psychotic symptoms, which are more
likely to be identified on the F-family and Dsr2 scales, rather than FBS.
For example, Blanchard et al. (2003), in which simulators were instructed to feign
psychiatric symptoms, resulted in disproportionately large F-family effect sizes
relative to FBS. Similarly, a greater effect size for Fp relative to FBS was reported
by Wygant et al. (2007) in a study that included a criminal forensic sample with
sufficient versus insufficient effort (the civil litigant group was not examined in the
current study because of redundant data with another study).
Comparisons of FBS with the F family related to post-traumatic stress (PTS)
and post-traumatic stress disorder (PTSD) are tentative because of the small
number of available studies. As shown in Table 4, only seven effect size comparisons
were available for PTS and PTSD. Based only on available data, although FBS
shows a large effect size (.86), the F-family scales produced even larger effect sizes
(ranging from .91 to 1.23). Arbisi, Ben-Porath, and McNulty (2006) and Efendov
et al. (2008) were the only two new studies to include FBS in detection of
exaggerated post-traumatic stress. The context in which the criterion A event
transpired may determine the type of post-traumatic stress symptoms exaggerated.
For example, in the context of forensic neuropsychology, post-traumatic stress may
be attributed to a motor vehicle accident that also resulted in traumatic brain injury.
In this scenario, somatic and cognitive symptoms may be disproportionately
exaggerated relative to psychotic symptoms. In military and veteran settings,
combat-related events not related to blast exposure may not include the same degree
of somatic and cognitive symptoms, and elevated F-family scales may be of greater
relevance than FBS. Additional data are needed to clarify this possible distinction.

Limited data also restrict understanding of whether FBS may be differentially
effective in detecting exaggeration of chronic pain (Bianchini et al., 2008); this
question warrants additional study. On an item-content level, it seems intuitive that the
somatic symptoms represented on FBS would be of greater relevance to what has
been termed ‘‘somatic malingering’’ (Larrabee, 1998) compared with the ‘‘psy-
chotic’’ symptoms represented on the F-family scales. However, based on the two
studies that contributed to the chronic pain moderator effect sizes (Bianchini et al.,
2008; Meyers, Millis, & Volkert, 2002), the F-family effect sizes were relatively wide
ranging (.45 to 1.44) compared to FBS (.85).
One noteworthy difference between the Nelson et al. (2006b) study and the
current study is that gender was found in the current, but not the previous, analysis
to be a significant moderator of FBS differences between over-reporting and comparison groups.
This finding suggests that differential cut-scores may be indicated for men versus
women, as has been suggested by previous authors (Greiffenstein, Fox, &
Lees-Haley, 2007; Lees-Haley, 1992). Differential cut-scores for men and women may
also be appropriate given the clear gender differences apparent within the normative
FBS data. For example, a raw FBS cut-off score of 25 corresponds to a T-score of 85
for men but only 77 for women. An FBS raw score of 30, which has been identified as
almost certainly suggestive of ''promotion of suffering'' (Greiffenstein et al., 2007,
p. 229), corresponds to a T-score of 98 for men and 89 for
women. Butcher et al. (2008) have criticized FBS on the basis of gender bias.
However, evidence of a gender difference is not evidence of gender bias.
For example, differences observed between African American and Caucasian
groups in MMPI-2 mean score elevations are not due to either significant slope or
intercept predictive bias, but rather reflect meaningful differences between the groups
(Arbisi, Ben-Porath, & McNulty, 2002). Additionally, for decades manually scored
MMPI and MMPI-2 raw scores have been plotted on separate gender-based normative
profiles because of the relevance of gender to some of the scales (though this will not
be relevant to the MMPI-2-RF, since it employs non-gender-based norms). Even within
the present meta-analytic results, gender differences are not unique to FBS. In the case
of F, gender was a significant moderator (p < .001): for predominantly male samples
(i.e., >50% male) the effect size for F was .54, whereas for predominantly female
samples (<50% male) it was 1.11. Fb is also noteworthy in that it yielded an effect size
of .57 in predominantly male samples and 1.18 in predominantly female samples. This
pattern is not exclusive to personality assessment; neuropsychologists are very familiar
with normative data that are stratified by gender for some of our commonly used tests.
The current study did not explicitly compare FBS with other newer generation
MMPI-2 validity scales that have been developed specifically to predict insufficient
effort on neuropsychological evaluation (Response Bias Scale, RBS; Gervais et al.,
2007a) and exaggeration of somatic or ‘‘pseudoneurological’’ symptoms
(Henry-Heilbronner Index, HHI; Henry et al., 2006, 2008). Unfortunately, only a
limited number of studies were available for consideration of inclusion in the
current analysis. However, preliminary studies of both of these scales have been
encouraging, and suggest the possibility that they may eventually be shown to have
equal or greater utility than FBS in the forensic neuropsychological context.

In summary, the present updated meta-analysis continues to suggest that FBS
is appropriate for use as a validity measure. FBS shows particular effectiveness
when effort is known to be insufficient and in the context of traumatic brain injury.
Practitioners who rely on FBS can be assured that the now extensive literature
strongly supports application of FBS in forensic neuropsychology practice.

AUTHOR NOTE
Supplementary data (Appendices 1 and 2) are published online alongside this
article at: www.psypress.com/tcn

REFERENCES

*Denotes reference included in Nelson et al. (2006b) and current study.
þDenotes reference included in current study only.

Arbisi, P. A., & Ben-Porath, Y. S. (1995). An MMPI-2 infrequent response scale for use
with psychopathological populations: The infrequency-psychopathology scale, F(p).
Psychological Assessment, 7, 424–431.
Arbisi, P. A., Ben-Porath, Y. S., & McNulty, J. (2002). A comparison of MMPI-2 validity in
African American and Caucasian psychiatric inpatients. Psychological Assessment, 14,
3–15.
þArbisi, P. A., Ben-Porath, Y. S., & McNulty, J. (2006). The ability of the MMPI-2 to detect
feigned PTSD within the context of compensation seeking. Psychological Services, 3,
249–261.
Arbisi, P. A., & Butcher, J. N. (2004). Failure of the FBS to predict malingering of somatic
symptoms: Response to critiques by Greve and Bianchini and Lees-Haley and Fox.
Archives of Clinical Neuropsychology, 19, 341–345.
Arbisi, P. A., & Butcher, J. N. (2004). Psychometric perspectives on detection of malingering
of pain: Use of the Minnesota Multiphasic Personality Inventory-2. Clinical Journal of
Pain, 20, 383–391.
Arnold, G., Boone, K. B., & Wen, J. (February, 2005). Prevalence, Fake Bad Scale Scores,
and noncredible performance in litigating and non-litigating patients with a 1-3/3-1 MMPI-
2 Code Type. Poster session presented at the 33rd annual meeting of the International
Neuropsychology Society, St. Louis, MO.
*Bagby, R. M., Nicholson, R. A., Buis, T., & Bacchiochi, J. R. (2000). Can the MMPI-2
validity scales detect depression feigned by experts? Assessment, 7, 55–62.
Ben-Porath, Y. S., Greve, K. W., Bianchini, K. J., & Kaufmann, P. M. (2009). The MMPI-2
symptom validity scale (FBS) is an empirically-validated measure of over-reporting in
personal injury litigants and claimants: Reply to Butcher et al. (2008). Psychological
Injury and the Law, 1, 62–85.
Ben-Porath, Y. S., & Tellegen, A. (2008). Minnesota Multiphasic Personality Inventory-2
Restructured Form: Manual for administration, scoring and interpretation. Minneapolis,
MN: University of Minnesota Press.
Berry, D. T. R., & Schipper, L. J. (2007). Detection of feigned psychiatric symptoms during
forensic neuropsychological examinations. In G. J. Larrabee (Ed.), Assessment of
malingered neuropsychological deficits. (pp. 226–263). NY: Oxford University Press.

Bianchini, K. J., Curtis, K. L., & Greve, K. W. (2006). Compensation and malingering in
traumatic brain injury: A dose-response relationship? The Clinical Neuropsychologist, 20,
831–847.
þBianchini, K. J., Etherton, J. L., Greve, K. W., Heinly, M. T., & Meyers, J. E. (2008).
Classification accuracy of MMPI-2 validity scales in the detection of pain-related
malingering: A known-groups approach. Assessment, 15, 435–449.
Bianchini, K. J., Greve, K. W., & Glynn, G. (2005). On the diagnosis of malingered pain-
related disability: Lessons from cognitive malingering research. Spine Journal, 5, 404–417.
Bianchini, K. J., Houston, R. J., Greve, K. W., Irvin, T. R., Black, F. W., Swift, D. A., et al.
(2003). Malingered neurocognitive dysfunction in neurotoxic exposure: An application
of the Slick Criteria. Journal of Occupational and Environmental Medicine, 45, 1087–1099.
þBinder, L. M., Storzbach, D., & Salinsky, M. C. (2006). MMPI-2 profiles of persons with
multiple chemical sensitivity. The Clinical Neuropsychologist, 20, 848–857.
þBlanchard, D. D., McGrath, R. E., Pogge, D. L., & Khadivi, A. (2003). A comparison of
the PAI and MMPI-2 as predictors of faking bad in college students. Journal of
Personality Assessment, 80, 197–205.
Boone, K. B., & Lu, P. H. (1999). Impact of somatoform symptomatology on credibility of
cognitive performance. The Clinical Neuropsychologist, 13, 414–419.
Burandt, C. (2006). Detecting incomplete effort on the MMPI-2: An examination of the fake
bad scale in electrical injury. Dissertation Abstracts International, 67, 2216.
Bury, A. S., & Bagby, R. M. (2002). The detection of feigned uncoached and coached
posttraumatic stress disorder with the MMPI-2 in a sample of workplace accident
victims. Psychological Assessment, 14, 472–484.
Butcher, J. N., Arbisi, P. A., Atlis, M. M., & McNulty, J. L. (2003). The construct validity of
the Lees-Haley Fake Bad Scale: Does this scale measure somatic malingering and feigned
emotional distress? Archives of Clinical Neuropsychology, 18, 473–485.
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Manual
for administration and scoring of the MMPI-2. Minneapolis: University of Minnesota Press.
Butcher, J. N., Gass, C. S., Cumella, E., Kally, Z., & Williams, C. L. (2008). Potential for bias
in MMPI-2 assessments using the Fake Bad Scale (FBS). Psychological Injury and the
Law, 1, 191–209.
Butcher, J. N., & Perry, J. N. (2008). Personality assessment in treatment planning: Use of the
MMPI-2 and BTPI. New York, NY: Oxford University Press.
*Charles, T. (1999). Usefulness of the Minnesota Multiphasic Personality Inventory-2 in
detection of deception in a personal injury type forensic population. Dissertation
Abstracts International, 60, 5221.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
*Cramer, K. M. (1995). The effects of description clarity and disorder type on MMPI-2 fake-
bad validity indices. Journal of Clinical Psychology, 51, 831–840.
Crawford, E. F. (2004). Multi-method assessment of malingered depression and physical
disability using the MMPI-2 and Rorschach inkblot method. Dissertation Abstracts
International, 65, 2090.
þCrawford, E. F., Greene, R. L., Dupart, T., Bongar, B., & Childs, H. (2006). MMPI-2
assessment of malingered emotional distress related to a workplace injury: A mixed
group validation. Journal of Personality Assessment, 86, 217–221.
Crespo, G. S., Gomez, F. J., Barragan, V. M., & Rueda, A. A. (2007). The contribution of the
fake bad scale (FBS) to the Spanish adaptation of the MMPI-2. Revista de Psicologia
General y Aplicada, 60, 299–313.
Curtiss, K. L., Thompson, L. K., Greve, K. W., & Bianchini, K. J. (2008). Verbal fluency
indicators of malingering in traumatic brain injury: Classification accuracy in known
groups. The Clinical Neuropsychologist, 22, 930–945.

*Dearth, C. S., Berry, D. T. R., Vickery, C. D., Vagnini, V. L., Baser, R. E., Orey, S. A., et al.
(2005). Detection of feigned head injury symptoms on the MMPI-2 in head injured
patients and community controls. Archives of Clinical Neuropsychology, 20, 95–110.
Dean, A. C., Boone, K. B., Kim, M. S., Curiel, A. R., Martin, D. J., Victor, T. L., et al.
(2008). Examination of the impact of ethnicity on the Minnesota Multiphasic
Personality Inventory–2 (MMPI-2) fake bad scale. The Clinical Neuropsychologist,
22, 1054–1060.
Demakis, G. J. (2003). A meta-analytic review of the sensitivity of the Wisconsin Card
Sorting Test to frontal and lateralized frontal brain damage. Neuropsychology, 17,
255–264.
Demakis, G. J., Gervais, R. O., & Rohling, M. L. (2008). The effect of failure on cognitive
and psychological symptom validity tests in litigants with symptoms of post traumatic
stress disorder. The Clinical Neuropsychologist, 22, 879–895.
Downing, S. K., Denney, R. L., Spray, B. J., Houston, C. M., & Halfaker, D. H. (2008).
Examining the relationship between the Restructured Scales and the Fake Bad Scale of
the MMPI-2. The Clinical Neuropsychologist, 22, 680–688.
þDukarm, P. D. (2006). Detecting simulated cognitive impairment with MMPI-2
neurocorrection scales. Dissertation Abstracts International, 67, 3447.
þEfendov, A. A., Sellbom, M., & Bagby, R. M. (2008). The utility and comparative
incremental validity of the MMPI-2 and Trauma Symptom Inventory validity scales in
the detection of feigned PTSD. Psychological Assessment, 20, 317–326.
Elhai, J. D., Gold, P. B., Frueh, B. C., & Gold, S. N. (2000). Cross-validation of the MMPI-2
in detecting malingered posttraumatic stress disorder. Journal of Personality Assessment,
75, 449–463.
*Elhai, J. D., Gold, S. N., Sellers, A. H., & Dorfman, W. I. (2001). The detection of
malingered posttraumatic stress disorder with MMPI-2 Fake Bad Indices. Assessment, 8,
221–236.
Eyler, V. A., Diehl, K. W., & Kirkhart, M. (2000). Validation of the Lees-Haley Fake Bad
Scale for the MMPI-2 to detect somatic malingering among personal injury litigants
[Abstract]. Archives of Clinical Neuropsychology, 15, 834–835.
Fox, D. D., Gerson, A., & Lees-Haley, P. R. (1995). Interrelationship of MMPI-2 validity
scales in personal injury claims. Journal of Clinical Psychology, 51, 42–47.
Frueh, B. C., Gold, P., & de Arellano, M. A. (1997). Symptom overreporting in combat
veterans evaluated for PTSD: Differentiation on the basis of compensation status.
Journal of Personality Assessment, 68, 369–384.
þGervais, R. O., Ben-Porath, Y. S., Wygant, D. B., & Green, P. (2007a). Development and
validation of a response bias scale (RBS) for the MMPI-2. Assessment, 14, 196–208.
Gervais, R. O., Ben-Porath, Y. S., Wygant, D. B., & Green, P. (2008). Differential sensitivity
of the response bias scale (RBS) and MMPI-2 validity scales to memory complaints. The
Clinical Neuropsychologist, 22, 1061–1079.
Gervais, R. O., Lees-Haley, P. R., & Ben-Porath, Y. S. (2007b). Predicting SVT performance
with the MMPI-2-RF FBS-r, RBS, and Fs scales. Poster presented at the 27th Annual
Conference of the National Academy of Neuropsychology, Scottsdale, AZ. [Abstract
published in the Archives of Clinical Neuropsychology, 22, 873.]
Gervais, R., Wygant, D. B., & Ben-Porath, Y. S. (2005, October). Word Memory Test
(WMT) performance and MMPI-2 validity scales and indices. Presented at the Annual
Meeting of the National Academy of Neuropsychology, Tampa Bay, FL.
Gough, H. G. (1950). The F minus K dissimulation index for the MMPI. Journal of
Consulting Psychology, 14, 408–413.
Gough, H. G. (1954). Some common misconceptions about neuroticism. Journal of
Consulting Psychology, 18, 287–292.
Gough, H. G. (1957). California Psychological Inventory manual. Palo Alto, CA: Consulting
Psychologists Press.
Greene, R. L. (1991). The MMPI-2/MMPI: An interpretive manual. Boston: Allyn and
Bacon.
Greiffenstein, M. F., & Baker, W. J. (2001). Comparison of premorbid and postinjury
MMPI-2 profiles in late postconcussion claimants. The Clinical Neuropsychologist, 15,
162–170.
Greiffenstein, M. F., & Baker, W. J. (2008). Validity testing in dually diagnosed post-traumatic
stress disorder and mild closed head injury. The Clinical Neuropsychologist, 22, 565–582.
*Greiffenstein, M. F., Baker, W. J., Axelrod, B., Peck, T. A., & Gervais, R. (2004). The Fake
Bad Scale and MMPI-2 F-family in detection of implausible psychological trauma
claims. The Clinical Neuropsychologist, 18, 573–590.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia
measures with a large clinical sample. Psychological Assessment, 6, 218–224.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1996). What kind of faking does the Fake Bad
Scale measure? American Psychology-Law Society Newsletter [APA Convention Issue].
*Greiffenstein, M. F., Baker, W. J., Gola, T., Donders, J., & Miller, L. J. (2002). The FBS
in atypical and severe closed head injury litigants. Journal of Clinical Psychology, 58,
1591–1600.
Greiffenstein, M. F., Fox, D., & Lees-Haley, P. R. (2007). The MMPI-2 Fake Bad Scale in
detection of noncredible brain injury claims. In K. Boone (Ed.), Assessment of feigned
cognitive impairment: A neuropsychological perspective (pp. 210–235). NY: Guilford
Publications.
Greiffenstein, M. F., Gola, T., & Baker, W. J. (1995). The MMPI-2 validity scales versus
domain specific measures in the detection of factitious brain injury. The Clinical
Neuropsychologist, 9, 230–240.
Greve, K. W., & Bianchini, K. J. (2004). Response to Butcher et al., The construct validity of
the Lees-Haley Fake Bad Scale. Archives of Clinical Neuropsychology, 19, 337–339.
*Greve, K. W., Bianchini, K. J., Love, J. M., Brennan, A., & Heinley, M. T. (2006).
Sensitivity and specificity of MMPI-2 validity scales to malingered neurocognitive
dysfunction in traumatic brain injury. The Clinical Neuropsychologist, 20, 491–512.
Greve, K. W., Ord, J., Curtis, K. L., Bianchini, K. J., & Brennan, A. (2008). Detecting
malingering in traumatic brain injury and chronic pain: A comparison of three forced-
choice symptom validity tests. The Clinical Neuropsychologist, 22, 896–918.
Grillo, J., Brown, R. S., Hilsabeck, R., Price, J. R., & Lees-Haley, P. R. (1994). Raising
doubts about claims of malingering: Implications of relationships between MCMI-II and
MMPI-2 performances. Journal of Clinical Psychology, 50, 651–655.
*Guez, M., Brannstrom, R., Nyberg, L., Toolanen, G., & Hildingsson, C. (2005).
Neuropsychological functioning and MMPI-2 profiles in chronic neck pain: A
comparison of whiplash and non-traumatic groups. Journal of Clinical and
Experimental Neuropsychology, 27, 151–163.
Hedges, L. V. (1981). Distribution theory for Glass’s estimator of effect size and related
estimators. Journal of Educational Statistics, 6, 107–128.
Henry, G. K., Heilbronner, R. L., Mittenberg, W., & Enders, C. (2006). The Henry-
Heilbronner index: A 15-item empirically derived MMPI-2 subscale for identifying
probable malingering in personal injury litigants and disability claimants. The Clinical
Neuropsychologist, 20, 786–797.
Henry, G. K., Heilbronner, R. L., Mittenberg, W., Enders, C., & Stanczal, S. R. (2008).
Comparison of the Lees-Haley Fake Bad Scale, Henry-Heilbronner Index, and
Restructured Clinical Scale 1 in identifying noncredible symptom reporting. The
Clinical Neuropsychologist, 22, 919–929.
Hinojosa, L. (1993). The MMPI-2 and malingering: A study aimed at refining the detection of
deception. Dissertation Abstracts International, 54, 2203.
Horwitz, J. E., Fisher, J. M., & McCaffrey, R. J. (2006, October). Do validity scales of
the MMPI predict performance on symptom validity tests? Preliminary findings.
Poster presented at the 26th Annual Meeting of the National Academy of
Neuropsychology, San Antonio, TX. [Abstract published in Archives of Clinical
Neuropsychology, 21, 551.]
Iverson, G. L., & Binder, L. M. (2000). Detecting exaggeration and malingering in
neuropsychological assessment. Journal of Head Trauma Rehabilitation, 15, 829–858.
*Iverson, G. L., Franzen, M. D., & Hammond, J. A. (1995). Examination of inmates’ ability
to malinger on the MMPI-2. Psychological Assessment, 7, 118–121.
*Iverson, G. L., Henrichs, T. F., Barton, E. A., & Allen, S. (2002). Specificity of the MMPI-2
Fake Bad Scale as a marker for personal injury malingering. Psychological Reports, 90,
131–136.
Iverson, G. L., & Lange, R. T. (2006). Detecting exaggeration and malingering
in psychological injury claims. In W. J. Koch, K. S. Douglas, T. L. Nichols, &
M. L. O’Neill (Eds.), Psychological injuries: Forensic assessment, treatment, and law. NY:
Oxford University Press.
Lamberty, G. J. (2008). Understanding somatization in the practice of clinical neuropsychology.
New York: Oxford University Press.
Lanyon, R. I., & Almer, E. R. (2002). Characteristics of compensable disability patients who
choose to litigate. Journal of the American Academy of Psychiatry and the Law, 30,
400–404.
Larrabee, G. J. (1997). Neuropsychological outcome, post concussion symptoms, and
forensic considerations in mild closed head trauma. Seminars in Clinical
Neuropsychiatry, 2, 196–206.
Larrabee, G. J. (1998). Somatic malingering on the MMPI and MMPI-2 in personal injury
litigants. The Clinical Neuropsychologist, 12, 179–188.
*Larrabee, G. J. (2003a). Detection of symptom exaggeration with the MMPI-2 in litigants
with malingered neurocognitive dysfunction. The Clinical Neuropsychologist, 17, 54–68.
Larrabee, G. J. (2003b). Exaggerated MMPI-2 symptom report in personal injury litigants
with malingered neurocognitive deficit. Archives of Clinical Neuropsychology, 18,
673–686.
Larrabee, G. J. (2003c). Exaggerated pain report in litigants with malingered neurocognitive
dysfunction. The Clinical Neuropsychologist, 17, 395–401.
Larrabee, G. J. (2003d). Detection of malingering using atypical performance patterns on
standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.
Larrabee, G. J. (2007). Evaluation of exaggerated health and injury symptomatology. In G.
J. Larrabee (Ed.), Assessment of malingered neuropsychological deficits. (pp. 264–286).
New York: Oxford University Press.
Larrabee, G. J. (2008). Aggregation across multiple indicators improves the detection
of malingering: Relationship to likelihood ratios. The Clinical Neuropsychologist, 22,
666–679.
*Lees-Haley, P. R. (1992). Efficacy of MMPI-2 validity scales and MCMI-II modifier scales
for detecting spurious PTSD claims: F, F-K, Fake Bad Scale, ego strength, subtle-
obvious subscales, DIS, and DEB. Journal of Clinical Psychology, 48, 681–689.
Lees-Haley, P. R. (1997). MMPI-2 base rates for 492 personal injury plaintiffs: Implications
and challenges for forensic assessment. Journal of Clinical Psychology, 53, 745–755.
Lees-Haley, P. R., & Dunn, J. T. (1994). The ability of naïve subjects to report symptoms of
mild brain injury, post-traumatic stress disorder, major depression, and generalized
anxiety disorder. Journal of Clinical Psychology, 50, 252–256.
*Lees-Haley, P. R., English, L. T., & Glenn, W. J. (1991). A Fake Bad Scale on the MMPI-2 for
personal injury claimants. Psychological Reports, 68, 203–210.
Lees-Haley, P. R., & Fox, D. D. (2004). Commentary on Butcher, Arbisi, Atlis, and McNulty
(2003) on the Fake Bad Scale. Archives of Clinical Neuropsychology, 19, 333–336.
Lees-Haley, P. R., Iverson, G. L., Lange, R. T., Fox, D. D., & Allen III, L. M. (2002).
Malingering in forensic neuropsychology: Daubert and the MMPI-2. Journal of Forensic
Neuropsychology, 3, 167–203.
Lipsey, M. W., & Wilson, D. B. (1993). The efficacy of psychological, educational,
and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48,
1181–1209.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Martens, M., Donders, J., & Millis, S. R. (2001). Evaluation of invalid response sets after
traumatic head injury. Journal of Forensic Neuropsychology, 2, 1–18.
þMartinez, G., Mittenberg, W., Gass, C. S., & Quintar, B. (2005, October). Validation of the
MMPI-2 FBS scale in clinical malingerers and nonlitigating patients. Poster presented at
the 25th Annual Meeting of the National Academy of Neuropsychology, Tampa, FL.
McCarthy, J. R. (2004). A comparison of MMPI-2 symptom validity scales with mild
traumatic brain injury litigants versus migraine and chronic back pain patients.
Dissertation Abstracts International, 65, 3171.
McCarthy, J., & Heilbronner, R. (February, 2005). A comparison of MMPI-2 symptom
validity scales with mild traumatic brain injury litigants versus migraine and chronic back
pain patients. Poster session presented at the 33rd annual meeting of the International
Neuropsychology Society, St. Louis, MO.
*Meyers, J. E., Millis, S. R., & Volkert, K. (2002). A validity index for the MMPI-2. Archives
of Clinical Neuropsychology, 17, 157–169.
*Miller, L. J., & Donders, J. (2001). Subjective symptomatology after traumatic head injury.
Brain Injury, 15, 297–304.
Millis, S. R., Putnam, S. H., & Adams, K. M. (1995, March). Neuropsychological malingering
and the MMPI-2: Old and new indicators. Paper presented at the 30th Annual
Symposium on Recent Developments in the Use of the MMPI, MMPI-2 and MMPI-A,
St. Petersburg Beach, FL.
Mitchell, C. A. (2008). Relationship between the performance on the MMPI-2 fake bad scale
and the word memory subtests in the detection of malingering during forensic brain
trauma evaluations. Dissertation Abstracts International, 68, 6975.
Nelson, N. W., Parsons, T. D., Grote, C. L., Smith, C. A., & Sisung, J. R. (2006a). The
MMPI-2 Fake Bad Scale: Concordance and specificity of true and estimated scores.
Journal of Clinical and Experimental Neuropsychology, 28, 1–12.
Nelson, N. W., Sweet, J. J., Berry, D. T. R., Bryant, F. B., & Granacher, R. P. (2007a).
Response validity in forensic neuropsychology: Exploratory factor analytic evidence of
distinct cognitive and psychological constructs. Journal of the International
Neuropsychological Society, 13, 440–449.
Nelson, N. W., Sweet, J. J., & Demakis, G. J. (2006b). Meta-Analysis of the MMPI-2 Fake
Bad Scale: Utility in forensic practice. The Clinical Neuropsychologist, 20, 39–58.
*Nelson, N. W., Sweet, J. J., & Heilbronner, R. L. (2005, June). Examination of the
new MMPI-2 Response Bias Scale (Gervais): Relationship with standard and
supplementary MMPI-2 validity scales. Poster session presented at the American
Academy of Clinical Neuropsychology, Minneapolis, MN. [Abstract published in
The Clinical Neuropsychologist, 19, 154.]
Nelson, N. W., Sweet, J. J., & Heilbronner, R. L. (2007b). Examination of the new MMPI-2
Response Bias Scale (Gervais): Relationship with MMPI-2 validity scales. Journal of
Clinical and Experimental Neuropsychology, 29, 67–72.
Posthuma, A. B., & Harper, J. F. (1998). Comparison of MMPI-2 responses of child custody
and personal injury litigants. Professional Psychology: Research and Practice, 29,
437–443.
Rawls, K. R., Rohling, M. L., & Langhinrichsen-Rohling, J. (2008, October). Use of the FBS
scale of the MMPI-2 in PTSD referrals seeking compensation or not: Assessment of
MMPI-2 symptom validity scales. Poster presented at the 28th Annual Conference of the
National Academy of Neuropsychology, New York, NY. [Abstract published in
Archives of Clinical Neuropsychology, 23, 749.]
Rogers, R., Sewell, K. W., Martin, M. A., & Vitacco, M. J. (2003). Detection of feigned mental
disorders: A meta-analysis of the MMPI-2 and malingering. Assessment, 10, 160–177.
Rogers, R., Sewell, K. W., & Salekin, R. T. (1994). A meta-analysis of malingering on the
MMPI-2. Assessment, 1, 227–237.
Rogers, R., Sewell, K. W., & Ustad, K. L. (1995). Feigning among chronic outpatients on the
MMPI-2: A systematic examination of fake-bad indicators. Assessment, 2, 81–89.
Rosenthal, R. (1979). The ‘file drawer problem’ and tolerance for null results. Psychological
Bulletin, 86, 638–641.
Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.),
The handbook of research synthesis (pp. 231–244). New York: Sage.
*Ross, S. R., Millis, S. R., Krukowski, R. A., Putnam, S. H., & Adams, K. M. (2004).
Detecting probable malingering on the MMPI-2: An examination of the Fake Bad
Scale in mild head injury. Journal of Clinical and Experimental Neuropsychology, 26,
115–124.
Schmidt, F. L., & Hunter, J. E. (2003). Meta-analysis. In I. Weiner (Ed. in Chief), J. Graham,
& J. Naglieri (Eds.), Handbook of Psychology: Volume 12, Research methods in
psychology. New York: John Wiley & Sons.
þSellers, S., Byrne, M. K., & Golus, P. (2006). The detection of malingered psychopathology
and cognitive deficits: Employing the fake bad scale and the Raven’s standard
progressive matrices. Psychiatry, Psychology, & Law, 13, 91–99.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists’ beliefs and practices
with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22,
213–223.
Shea, C. (2006). A comparison of performance on malingering assessments between
neuropsychological patients involved in litigation and non-litigious neuropsychological
patients. Dissertation Abstracts International, 67, 560.
Sisung, J. R. J. (2006). Predictive utility of the MMPI-2 Fake Bad Scale on Wisconsin card
sorting test performance during forensic traumatic brain evaluations. Dissertation
Abstracts International, 66, 6330.
Slick, D. J., Hopp, G., Strauss, E., & Spellacy, F. J. (1996). Victoria Symptom Validity Test:
Efficiency for detecting feigned memory impairment and relationship to neuropsycho-
logical tests and MMPI-2 validity scales. Journal of Clinical and Experimental
Neuropsychology, 18, 911–922.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered
neurocognitive dysfunction: Proposed standards for clinical practice and research.
The Clinical Neuropsychologist, 13, 545–561.
Soetaert, D. K., Baade, L. E., Heinrichs, R. J., & Morgan, C. (2008). Examination of the
MMPI-2 Fake Bad Scale in two inpatient samples. Poster presented at the 28th Annual
Conference of the National Academy of Neuropsychology, New York, NY. [Abstract
published in Archives of Clinical Neuropsychology, 23, 711.]
Staudenmayer, H., & Phillips, S. (2007). MMPI-2 validity, clinical, and content scales and the
Fake Bad Scale for personal injury litigants claiming idiopathic environmental
intolerance. Journal of Psychosomatic Research, 62, 61–72.
Strauss, E., Sherman, M. S., & Spreen, O. (2006). A compendium of neuropsychological tests:
Administration, norms, and commentary. NY: Oxford University Press.
Sweet, J. J., Malina, A., & Ecklund-Johnson, E. (2006a). Application of the new MMPI-2
malingered depression scale to individuals undergoing neuropsychological evaluation:
Relative lack of relationship to secondary gain and failure on validity indices.
The Clinical Neuropsychologist, 20, 541–551.
Sweet, J. J., Nelson, N. W., Berry, D. T. R., Granacher, R., & Heilbronner, R. L. (2006b,
June). Impact on MMPI-2 of Cognitive Effort on the Victoria Symptom Validity
Test, Letter Memory Test, and Test of Memory Malingering. Poster session at the 4th
Annual Conference of the American Association of Clinical Neuropsychology. [Abstract
published in The Clinical Neuropsychologist, 20, 206.]
þThomas, M. L., & Youngjohn, J. R. (2009). Let’s not get hysterical: Comparing the MMPI-
2 validity, clinical, and RC scales in TBI litigants tested for effort. The Clinical
Neuropsychologist, 23, 1067–1084.
*Tsushima, W. T., & Tsushima, V. G. (2001). Comparison of the Fake Bad Scale and other
MMPI-2 validity scales with personal injury litigants. Assessment, 8(2), 205–212.
Vanderslice-Barr, J., Lynch, J. K., & McCaffrey, R. J. (2008, October). Assessment of MMPI-
2 symptom validity scales. Poster presented at the 28th Annual Conference of the
National Academy of Neuropsychology, New York, NY. [Abstract published in
Archives of Clinical Neuropsychology, 23, 706.]
Van Gaasbeek, J. K., Denney, R. L., & Harmon, J. (2001). Another look at an MMPI-2
neurocorrective factor in forensic cases: Utility of the Fake Bad Scale. [Abstract published
in the Archives of Clinical Neuropsychology, 16, 813.]
*Wegman, T. J., Clark, J. A., Schipper, L. J., & Berry, D. T. R. (February, 2005). Possible
contributions of MMPI-2 validity indicators to the detection of malingered neurocognitive
dysfunction. Poster session presented at the 33rd annual meeting of the International
Neuropsychology Society, St. Louis, MO.
þWhitney, K. A., Davis, J. J., Shepard, P. H., & Herman, S. M. (2008). Utility of the
response bias scale (RBS) and other MMPI-2 validity scales in predicting TOMM
performance. Archives of Clinical Neuropsychology, 23, 777–786.
Wiener, D. N. (1948). Subtle and obvious keys for the MMPI. Journal of Consulting
Psychology, 12, 164–170.
Wilkinson, L., & the Task Force on Statistical Inference (1999). Statistical methods in
psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Williams, C. L., Butcher, J. N., Gass, C. S., Cumella, E., & Kally, Z. (2009). Inaccuracies
about the MMPI-2 Fake Bad Scale in the reply by Ben-Porath, Greve, Bianchini, and
Kaufman (2009). Psychological Injury & the Law, 2, 182–197.
Wygant, D. B. (2008). Validation of the MMPI-2 infrequent somatic complaints (Fs) scale.
Dissertation Abstracts International, 68, 6975.
þWygant, D. B., Sellbom, M., Ben-Porath, Y. S., Stafford, K. P., Freeman, D. B., &
Heilbronner, R. L. (2007). The relation between symptom validity testing and MMPI-2
scores as a function of forensic evaluation context. Archives of Clinical Neuropsychology,
22, 488–489.