Interpreting The Meaning of Multiple Symptom Validity Test Failure
Tara L. Victor , Kyle B. Boone , J. Greg Serpa , Jody Buehler & Elizabeth A.
Ziegler
To cite this article: Tara L. Victor , Kyle B. Boone , J. Greg Serpa , Jody Buehler & Elizabeth A.
Ziegler (2009) Interpreting the Meaning of Multiple Symptom Validity Test Failure, The Clinical
Neuropsychologist, 23:2, 297-313, DOI: 10.1080/13854040802232682
The Clinical Neuropsychologist, 23: 297–313, 2009
http://www.psypress.com/tcn
ISSN: 1385-4046 print/1744-4144 online
DOI: 10.1080/13854040802232682
INTRODUCTION
Assessing symptom validity or negative response bias in the context of forensic
and neuropsychological assessment is becoming increasingly important as civil,
criminal, and disability-related proceedings continue to rely heavily on the results of
such evaluations, and the stakes are often quite high (Taylor, 1999). However,
reliance on single measures of effort is problematic because individual
effort tests have imperfect sensitivity and specificity. As a result, many investigators
have recommended use of multiple symptom validity indicators interspersed
throughout a test battery (Boone, 2007; Iverson & Franzen, 1996; Larrabee,
2003a; Orey, Cragar, & Berry, 2000; Vickery et al., 2004) and discriminant functions
have been developed based on multiple symptom validity tests (SVTs; e.g., Martin,
Hayes, & Gouvier, 1996). Larrabee, Greiffenstein, Greve, and Bianchini (2007)
recently addressed this area of concern using analysis with likelihood ratios (i.e., the
Address correspondence to: Tara L. Victor, Department of Psychology, CSUDH, 1000 E Victoria
Street, Carson, CA 90747, USA. E-mail: tvictor@csudh.edu
Accepted for publication: May 26, 2008. First published online: September 25, 2008.
© 2008 Psychology Press, an imprint of the Taylor & Francis group, an Informa business
FREESTANDING SVTS
With regard to the use of multiple freestanding SVTs, there are three studies
from David Berry’s laboratory attempting to quantify the extent to which using
such tests in combination affects predictive accuracy. Inman and Berry (2002)
and Orey et al. (2000) used college samples reporting a history of head injury in the
context of an analog design to investigate the combined utility of various
freestanding effort indicators, and concluded that failure on any one test (in a
group of highly sensitive freestanding indicators) should raise questions of
inadequate effort. Requiring failure on two or more tests was inadequately sensitive.
In the third study from this laboratory, Vickery et al. (2004) used a more
ecologically valid sample of four comparison groups (23 moderately to severely
head-injured controls, 23 moderately to severely head-injured simulators, 23
community volunteer controls, and 23 community volunteer simulators). Again,
the authors found that the best model was one that used a criterion of failure on one
or more of these freestanding SVTs (sensitivity = 89.1, specificity = 93.5, overall
hit rate = 91.5), as compared to models requiring two or more failures
(sensitivity = 65.2, specificity = 97.8, overall hit rate = 79.9) or three or more
failures (sensitivity = 32.6, specificity = 100.0, overall hit rate = 63.6). Taken
together, their findings suggest that a criterion of failure on one or more freestanding
SVTs produced the most accurate classification rates. However, all of these
investigations were carried out with use of simulation (i.e., analog) designs and
relatively small samples. Further, two of these studies employed college samples
self-reporting a history of mild head injury whose impairment was likely minimal,
possibly leading to overestimation of specificity and further limiting the
generalizability to real-world clinical samples. In addition, the use of simulators may not
accurately capture test sensitivity in real-world applications (see Boone et al., 2002a;
Boone, Lu, & Wen, 2005). In this vein, it is relevant to note that Orey and colleagues
(2000) documented only a 4% sensitivity for the Rey 15-item, whereas studies
examining ‘‘real world’’ non-credible subjects have found much higher sensitivity
rates (e.g., 47%; Boone et al., 2002b). In fact, studies using clinical samples have
shown that it is fairly common for a credible patient to fail one SVT upon
neuropsychological exam (e.g., Dean, Victor, Boone, & Arnold, 2008; Meyers &
Volbrecht, 2003). As such, it is important to obtain converging evidence from both
experimental and clinical research designs (Rogers, 1997b).
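The accuracy indices reported in these studies (sensitivity, specificity, and overall hit rate) all derive from the four cells of a 2 × 2 classification table. A minimal Python sketch, with illustrative function and variable names not taken from the paper:

```python
def classification_rates(flagged, is_noncredible):
    """flagged[i]: True if the SVT criterion flagged subject i;
    is_noncredible[i]: True if subject i is in the non-credible group."""
    pairs = list(zip(flagged, is_noncredible))
    tp = sum(1 for f, n in pairs if f and n)          # non-credible, flagged
    tn = sum(1 for f, n in pairs if not f and not n)  # credible, passed
    fp = sum(1 for f, n in pairs if f and not n)      # credible, flagged
    fn = sum(1 for f, n in pairs if not f and n)      # non-credible, missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    hit_rate = (tp + tn) / len(pairs)
    return sensitivity, specificity, hit_rate
```

With equal group sizes, as in the Vickery et al. (2004) design, the overall hit rate is simply the mean of sensitivity and specificity.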
EMBEDDED SVTS
The field of clinical neuropsychology also focuses on the use of SVTs
embedded into existing batteries as opposed to dedicated SVTs. Tests with
embedded symptom validity or response indicators include the Rey-Osterreith
Complex Figure Test (ROCFT; Lu, Boone, Cozolino, & Mitchell, 2003), the Rey
Auditory Verbal Learning Test (RAVLT; Boone et al., 2005; Sherman,
Brauer-Boone, Lu, & Razani, 2002), Finger Tapping, (Arnold et al., 2005), and
the Wechsler Adult Intelligence Scale – Third Edition (WAIS-III; e.g., Reliable
Digit Span; Babikian, Boone, Lu, & Arnold, 2006; Greiffenstein, Gola,
& Baker, 1994).
To our knowledge, there are also only three prior published attempts to
quantify the extent to which the use of multiple embedded SVTs affects our
predictive accuracy. First, Iverson and Franzen (1996), using both freestanding
and embedded indicators, investigated the predictive accuracy of 10 different
indices derived from five tests in their three comparison groups (20 under-
graduates, 20 psychiatric inpatients, and 20 memory-impaired patients; note that
the first two groups were tested under two conditions, including once when they
were asked to do their best and once when they were asked to simulate a
malingerer). When a criterion of any 1 failure out of a possible 10 was used,
sensitivity was 92.5%, and specificity was 100% (overall hit rate = 97%). However,
extrapolation of this finding is limited given that the study employed SVTs that are
less commonly used (as compared to others) in actual clinical practice, had small
sample sizes, and employed simulators rather than ‘‘real world’’ non-credible
participants.
study: overall hit rate = 82.4%). As explained by the author, this demonstration
is important because, given that clinical use of a logistic regression equation
requires the administration of all tests used in the equation (and test batteries vary
from clinician to clinician), it suggests that the more parsimonious and user-friendly
pairwise failure model might be a viable rule of thumb that does not compromise
predictive accuracy. Based on this work, the recommendation of using two or
more failures as a guideline for detecting symptom invalidity was made by
Larrabee et al. (2007) in a recently published book chapter, but this requires
cross-validation.
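The decision rules under discussion, failure on one or more versus two or more SVTs, reduce to a simple threshold on a subject's failure count. A hedged sketch in Python (names are illustrative, not from the paper):

```python
def flags_noncredible(failures, k):
    """failures: iterable of booleans (True = failed that SVT);
    k: minimum number of failures required to flag the subject."""
    return sum(failures) >= k

# A credible patient who failed a single SVT, a fairly common event in
# clinical samples (Dean et al., 2008; Meyers & Volbrecht, 2003):
results = [True, False, False, False]
print(flags_noncredible(results, k=1))  # one-or-more rule: True (flagged)
print(flags_noncredible(results, k=2))  # pairwise rule: False (not flagged)
```

Raising k trades sensitivity for specificity, which is exactly the pattern in the classification rates reported above.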
The focus on embedded SVTs is important because they are derived from
standard cognitive tests and therefore have the advantage of serving ‘‘double duty’’
(i.e., measurement of both response bias and specific cognitive abilities) without
adding to test battery length. In addition, they are less likely to be targeted for
coaching because they have well-documented functions/purposes unrelated to
measurement of response bias and because they do not typically rely on a forced
choice format that is often recognized by sophisticated malingerers as a malingering
assessment technique (Suhr & Gunstad, 2000). Further, while freestanding SVTs are
thought to have high face validity and therefore relatively lower associated rates of
false positive error, they may be more susceptible to attorney coaching and can
considerably extend the length of a typical testing battery (Inman & Berry, 2002).
For these reasons, SVTs that are derived from standard neuropsychological
assessment procedures are increasing in popularity.
It is the purpose of this study to investigate the classification rates associated
with using multiple embedded SVTs in an actual clinical population by comparing
individual test sensitivities/specificities to those associated with use of pairwise
failure and to further evaluate the unique contributions of various tests used in
combination. The specific aims of this study are: (1) to cross-validate the pairwise
failure model of embedded SVTs, using a modified set of indices, and (2) to compare
these results to those that emerge from logistic regression analyses using continuous
predictors to assess the independent contributions of included SVTs (i.e., the extent
to which they demonstrate incremental predictive value), and finally (3) to address
limitations of generalizability and provide suggestions to stimulate and guide future
work in this area of research.
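Aim (2) contrasts cutoff-based failure counts with logistic regression on continuous scores. As an illustration of the latter, here is a toy gradient-descent fit on made-up data; this is a sketch of the general technique, not the authors' actual analysis:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient ascent on the logistic log-likelihood.
    X: list of feature lists; y: list of 0/1 labels.
    Returns weights with the intercept first."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            z = max(-30.0, min(30.0, z))      # guard against math.exp overflow
            p = 1.0 / (1.0 + math.exp(-z))    # predicted P(non-credible)
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z > 0 else 0

# Made-up scores on a single continuous SVT indicator: lower scores (worse
# performance) belong to the non-credible group (label 1).
X = [[9.0], [8.5], [8.0], [3.0], [2.5], [2.0]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
print([predict(w, xi) for xi in X])  # → [0, 0, 0, 1, 1, 1]
```

Unlike a pass/fail cutoff, the fitted weights use the full range of each score, which is why continuous-predictor models can, in principle, outperform failure-count rules.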
METHOD
Subjects/group assignment
An archival database of patients referred to a tertiary care outpatient
neuropsychological assessment service at a large county hospital was accessed
with institutional review board approval. All subjects underwent standard
administrations of a comprehensive neuropsychological test battery and consented
to have their results included in research analyses.
Subjects were identified as non-credible if they had motive to feign (i.e., they
were either in litigation at the time of testing or attempting to secure disability)
and failed at least two of the five freestanding SVTs. Subjects were included in the
credible group if they had no identified motive to feign or exaggerate their
with some special education classes, and diagnosed with depression and panic
disorder with a rule out for schizotypal personality disorder. These subjects were
not included in the analyses presented below, rendering a final sample size of 103.
Table 1 provides the frequency of primary diagnoses for both the credible and
non-credible groups in this study.
Measures
The neuropsychological test battery included nine indicators of symptom
validity or response bias (five freestanding and four embedded) with standard
cutoffs (see Table 2), although not every subject was administered all nine
SVTs for various reasons (e.g., time constraints). Only subjects who had data on at
least three out of the five freestanding SVTs and at least three out of four embedded
SVTs were included in the study.
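The inclusion rule described above amounts to a completeness check on each subject's SVT data. A sketch in Python; the test names and data layout below are illustrative guesses, not the study's actual variables:

```python
def meets_inclusion(freestanding, embedded):
    """Each argument maps SVT name -> score, with None meaning the test
    was not administered (e.g., due to time constraints)."""
    n_free = sum(v is not None for v in freestanding.values())
    n_emb = sum(v is not None for v in embedded.values())
    # At least three of five freestanding and three of four embedded SVTs.
    return n_free >= 3 and n_emb >= 3

# Hypothetical subject record (SVT names and scores are made up):
subject = {
    "freestanding": {"TOMM": 48, "b Test": 12, "Dot Counting": 15,
                     "Rey 15-item": None, "Word Recognition": None},
    "embedded": {"RDS": 7, "R-O": 45, "RAVLT": 10, "FTT": None},
}
print(meets_inclusion(subject["freestanding"], subject["embedded"]))  # → True
```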
RESULTS
Sample demographics for both credible (n = 66) and non-credible (n = 37)
groups are presented in Table 3. Using independent-samples t-tests (for continuous
variables) and chi-square analyses (for categorical variables), groups were compared
on demographic, IQ, and embedded SVT performance. Groups did not differ with
respect to age, number of years of education, or gender, and overall the samples
were fairly comparable with respect to ethnic diversity, although the chi-square
statistic approached significance (p = .078). The non-credible group had a
significantly lower WAIS-III FSIQ, likely secondary to their corresponding
higher levels of response bias given that the groups did not differ in level of
education. As expected, the groups differed in their performance on the four
embedded symptom validity indicator predictor variables, with the non-credible
group performing significantly worse on each. Correlational analysis (Table 4)
revealed significant modest to moderate correlations among all the predictor
variables (ranging from .23 to .63).
Table 5 displays the sensitivities and specificities of the embedded SVTs at
published cut-scores (see Table 2). Individual test sensitivities ranged from 43.2
Table 1 Frequency of primary diagnoses by group

Diagnosis                                  Credible (n = 66)   Non-credible (n = 37)
Alcohol abuse                                      6                    2
Attention deficit disorder                         7                    2
Anxiety disorders                                  9                    3
Bacterial meningitis                               1                    0
Bipolar disorder                                   6                    0
Brain aneurysm                                     2                    0
Brain tumor                                        3                    0
Cognitive disorder NOS                             4                    0
Dissociative disorder                              1                    0
Epilepsy/seizures                                  7                    2
Head injury – severity unknown                     1                    1
HIV/AIDS                                           4                    0
Klinefelter's syndrome                             2                    0
Learning disability                               20                    6
Mild cognitive impairment                          1                    0
Mild head injury                                   3                   10
Moderate head injury                               2                    4
Mood disorder (not bipolar)                       28                   13
Multiple sclerosis                                 1                    0
Personality disorder NOS                           2                    3
Rule out anxiety                                   2                    1
Rule out attention deficit disorder                0                    1
Rule out bipolar disorder                          1                    0
Rule out cognitive disorder NOS                    1                    0
Rule out dementia                                  1                    1
Rule out mood disorder (not bipolar)               4                    6
Rule out dissociative disorder                     1                    0
Rule out learning disorder                         1                    1
Rule out psychotic disorder NOS                    1                    1
Rule out somatoform disorder                       5                    3
Schizophrenia/psychosis                            9                    3
Severe head injury                                 6                    4
Somatoform disorder                               11                    6
Stroke                                             4                    2
Substance abuse/dependence                        17                    7
Toxic exposure                                     0                    4

Some patients had more than one diagnosis and were therefore included in
the count more than once. NOS = Not otherwise specified.
Table 3 Sample characteristics and performance on predictors by group (i.e., credible vs non-credible). [Table body not preserved in this copy.]

Table 4 Correlations among the embedded symptom validity predictors

       RDS        R-O       RAVLT     FTT
RDS    1.00
R-O    .58**      1.00
       (n = 80)
RAVLT  .45**      .63**     1.00
       (n = 101)  (n = 78)
FTT    .38**      .29*      .23*      1.00
       (n = 98)   (n = 75)  (n = 96)

**p < .01; *p < .05. RDS = WAIS-III Reliable Digit Span; R-O = Rey Osterreith Complex
Figure Effort Equation; RAVLT = Rey Auditory Verbal Learning Test effort equation; FTT = Finger
Tapping Test.
Table 5 Classification rates for individual indicators, and those requiring failure of one, two, or three
symptom validity tests for the detection of non-credible performance
Symptom validity indicator   Hit rate   Sensitivity   Specificity   PPP   NPP   PPP   NPP   PPP   NPP
[Cell values not preserved in this copy.]

Hit rate = percentage of both groups correctly classified; Sensitivity = percentage of malingering group
falling below cutoff; Specificity = percentage of credible group falling above cutoff; PPP = Positive
Predictive Power, percentage of those with a positive test sign who were malingering; NPP = Negative
Predictive Power, percentage of those with a negative test sign who were not malingering; RDS = WAIS-III
Reliable Digit Span; R-O = Rey Osterreith Complex Figure Effort Equation; RAVLT = Rey Auditory Verbal
Learning Test effort equation; FTT = Finger Tapping Test.
Model   Predictors   Overall accuracy   Sensitivity   Specificity   χ²   df   p   Wald   df   p   Odds ratio
[Cell values not preserved in this copy.]

Odds ratio = the increase in odds of being a malingerer with any unit decrease on the symptom validity
indicator; RDS = WAIS-III Reliable Digit Span; R-O = Rey Osterreith Complex Figure Effort Equation;
RAVLT = Rey Auditory Verbal Learning Test effort equation; FTT = Finger Tapping Test.
Analysis of false positive errors obtained when using the pairwise model
revealed that the four misclassifications included: (1) a 30-year-old Hispanic
female with 8 years of education and borderline intellectual functioning
(WAIS-III FSIQ ¼ 77) who reported speaking only 10% English when she was
Number of SVTs failed   Credible   Non-credible
0                         53%           5%
1                         41%          11%
2                          5%          32%
3                          1%          41%
4                          0%          11%
SVT = symptom validity test.
years of education and borderline intellectual functioning (FSIQ ¼ 72) who had
suffered a severe head injury with post-traumatic seizures, and had a history of
alcohol dependence and learning disability; (3) a 53-year-old Hispanic male with
one year of education in Mexico and borderline intelligence (FSIQ ¼ 79) for
whom English was his second language with multiple Axis I diagnoses including
schizophrenia NOS, panic disorder and major depressive disorder; and finally
(4) a 50-year-old Caucasian female with 14 years of education (FSIQ not
obtained) diagnosed with bipolar disorder and a rule out for somatoform
disorder. Notably, the first three subjects were already on disability, had no
other identifiable external incentive to feign, and all failed the same two
embedded indicators, Reliable Digit Span and Finger Tapping. The fourth false
positive was not on disability, failed three embedded indicators, but passed three
freestanding indicators.
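The false positive rate of each failure-count criterion follows from the failure-count distribution tabled above, by summing the credible-group percentages at or above the threshold. A quick arithmetic check in Python (reading the first percentage column as the credible group, as the surrounding text implies):

```python
# Credible-group percentages by number of embedded SVTs failed,
# taken from the distribution table above.
credible = {0: 53, 1: 41, 2: 5, 3: 1, 4: 0}

def pct_flagged(dist, k):
    """Percentage of the group failing k or more SVTs."""
    return sum(p for n, p in dist.items() if n >= k)

print(pct_flagged(credible, 1))  # → 47 (one-or-more rule flags 47% of credible cases)
print(pct_flagged(credible, 2))  # → 6  (pairwise rule flags only 6%)
```

The 6% figure for the pairwise rule is consistent with the four false positives among 66 credible patients described above.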
DISCUSSION
The current study replicates the findings of Larrabee (2003a) in terms of
demonstrating good sensitivity and specificity for discriminating between credible
and non-credible patients on the basis of any pairwise failure combination of
embedded SVTs. While Larrabee demonstrated this with use of Benton Visual Form
Discrimination, WCST Failure to Maintain Set, the Lees-Haley Fake Bad Scale,
Reliable Digit Span, and Finger Tapping (total for both hands) for a combined hit
rate of 91.6% (n = 95), the present investigation found similar results with use of
Reliable Digit Span, the Rey-Osterreith Effort Equation, the Rey Auditory Verbal
Learning Test Effort Equation, and Finger Tapping (dominant hand average) for a
combined hit rate of 90.3% (n = 103). Further, as in Larrabee’s sample, the pairwise
failure model produced hit rates comparable to those obtained when the indicators
were entered as continuous predictors in logistic regression.
Of interest, current findings are highly similar to those of Larrabee (2003a)
despite some differences in methodology between the two studies. In addition to the
fact that at least half of the Larrabee (2003a) embedded indicators differed from those
in the current study, differing cut-scores were also employed for the tests in common
between the two investigations (i.e., Finger Tapping and Reliable Digit Span).
Further, the non-credible group in Larrabee’s (2003a) initial study was assigned using
different criteria from the current study (i.e., his sample included ‘‘definite’’
malingerers who displayed worse than chance performance on a forced choice
measure), although his cross-validation sample essentially matched that of the
current sample (i.e., ‘‘probable’’ malingerers who failed two or more dedicated SVTs
and had incentive to feign). In addition, Larrabee’s (2003a) first sample
included mostly individuals with mild, moderate, or severe head injury, although
his cross-validation sample was a mixed neurologic and psychiatric group more
comparable to the mixed clinical sample of multiple varying diagnoses used in the
current study.
Examining the results of logistic regression suggests that using all four
indicators yields the highest levels of predictive accuracy, although only three
indicators appeared to make significant (or almost significant) independent
contributions to the overall predictive accuracy (i.e., the R-O Effort Equation,
Taken together, the results from Larrabee (2003a) and the current study show
that successful identification of symptom invalidity or response bias in ‘‘real-world’’
clinical samples, through pairwise SVT failure, is a robust finding replicable across a
wide range of embedded symptom validity indices, and is therefore appropriate for
use in clinical practice. Future research examining replication of this model with
freestanding SVTs in ‘‘real-world’’ samples is needed, particularly given that many
dedicated SVTs incorporate the same test format (i.e., forced choice), raising
questions regarding their redundancy and incremental validity.
ACKNOWLEDGEMENTS
Portions of this paper were presented at the 34th Annual Meeting of the
International Neuropsychological Society, Boston, Massachusetts, February 2006.
REFERENCES
Arnold, G., Boone, K., Dean, A., Wen, J., Nitch, S., Lu, P., et al. (2005). Sensitivity and
specificity of finger tapping test scores for the detection of suspect effort. The Clinical
Neuropsychologist, 19, 105–120.
Babikian, T., Boone, K. B., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various
digit span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20,
145–159.
Belanger, H. G., Curtiss, G., & Demery, J. A. (2005). Factors moderating neuropsychological
outcomes following mild traumatic brain injury: A meta-analysis. Journal of the
International Neuropsychological Society, 11, 215–227.
Belanger, H. G., & Vanderploeg, R. D. (2005). The neuropsychological impact of sports-
related concussion: A meta-analysis. Journal of the International Neuropsychological Society,
11, 345–357.
Binder, L. M., & Willis, S. C. (1991). Assessment of motivation after financially compensable
minor head trauma. Psychological Assessment, 3, 175–181.
Boone, K. B. (2007). A reconsideration of the Slick et al. (1999) criteria for
malingered neurocognitive dysfunction. In K. B. Boone (Ed.), Assessment of feigned
cognitive impairment: A neuropsychological perspective. New York: Guilford
Publications, Inc.
Boone, K. B., Lu, P., Back, C., King, C., Lee, A., Philpott, L., et al. (2002a). Sensitivity and
specificity of the Rey Dot Counting Test in patients with suspect effort and various
clinical samples. Archives of Clinical Neuropsychology, 17, 1–19.
Boone, K. B., Lu, P., Sherman, D., Palmer, B., Back, C., Shamieh, E., et al. (2000).
Validation of a new technique to detect malingering of cognitive symptoms: The b Test.
Archives of Clinical Neuropsychology, 15, 227–241.
Boone, K. B., Lu, P., & Wen, J. (2005). Comparison of various RAVLT scores in the
detection of non-credible memory performance. Archives of Clinical Neuropsychology,
20, 301–319.
Boone, K. B., Salazar, X., Lu, P., Warner-Chacon, K., & Razani, J. (2002b). The Rey 15-item
Recognition Trial: A technique to enhance sensitivity of the Rey 15-Item Memorization
Test. Journal of Clinical and Experimental Neuropsychology, 24, 561–573.
Carroll, L. J., Cassidy, J. D., Holm, L., Kraus, J., & Coronado, V. G. (2004). Methodological
issues and research recommendation for mild traumatic brain injury: The WHO
Larrabee, G. J., Greiffenstein, M. F., Greve, K. W., & Bianchini, K. J. (2007). Refining
diagnostic criteria for malingering. In G. J. Larrabee (Ed.), Assessment of malingered
neuropsychological deficits (pp. 334–372). New York: Oxford University Press.
Lu, P. H., Boone, K. B., Cozolino, L., & Mitchell, C. (2003). Effectiveness of the Rey
Osterreith Complex Figure Test and the Meyers and Meyers Recognition Trial in the
detection of suspect effort. The Clinical Neuropsychologist, 17, 426–440.
Martin, R. C., Hayes, J. S., & Gouvier, W. D. (1996). Differential vulnerability
between postconcussion self-report and objective malingering tests in identifying
simulated mild head injury. Journal of Clinical and Experimental Neuropsychology,
18, 265–275.
Meyers, J. E., & Volbrecht, M. E. (2003). A validation of multiple malingering detection
methods in a large clinical sample. Archives of Clinical Neuropsychology, 18, 261–276.
Millis, S. R. (2006). Introduction to logistic regression. Presentation at the XXth
annual meeting of the American Academy of Clinical Neuropsychology (AACN),
Philadelphia, PA.
Nelson, N. W., Boone, K., Dueck, A., Wagener, L., Lu, P., & Grills, C. (2003). Relationships
between eight measures of suspect effort. The Clinical Neuropsychologist, 17, 263–272.
Nitch, S., Boone, K. B., Wen, J., Arnold, G., & Alfano, K. (2006). The Utility of the Rey
Word Recognition Test in the detection of suspect effort. The Clinical Neuropsychologist,
20, 873–887.
Orey, S., Cragar, D. E., & Berry, D. T. R. (2000). The effects of two motivational
manipulations on the neuropsychological performance of mildly head-injured college
students. Archives of Clinical Neuropsychology, 15, 335–348.
Ponton, M. O., Gonzalez, J. J., Hernandez, I., Herrera, L., & Higareda, I. (2000). Factor
analysis of the Neuropsychological Screening Battery for Hispanics (NeSBHIS). Applied
Neuropsychology, 7, 32–39.
Rogers, R. (1997a). Introduction. In R. Rogers (Ed.), Clinical assessment of malingering and
deception (2nd ed., pp. 1–19). New York: Guilford Press.
Rogers, R. (1997b). Researching dissimulation. In R. Rogers (Ed.), Clinical assessment of
malingering and deception, (2nd ed., pp. 398–426). New York: Guilford Press.
Rosenfeld, B., Sands, S. A., & Van Gorp, W. G. (2000). Have we forgotten the base rate
problem?: Methodological issues in the detection of distortion. Archives of Clinical
Neuropsychology, 15, 349–359.
Salazar, X. F., Lu, P. H., Wen, J., & Boone, K. B. (2007). The use of effort tests in ethnic
minorities and in non-English speaking and English as a second language populations.
In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological
perspective (pp. 405–427). New York: Guilford Press.
Schretlen, D. J., & Shapiro, A. M. (2003). A quantitative review of the effects of
traumatic brain injury on cognitive functioning. International Review of Psychiatry,
15, 341–349.
Sherman, D. S., Brauer-Boone, K., Lu, P., & Razani, J. (2002). Re-examination of the Rey
Auditory Verbal Learning Test/Rey Complex Figure discriminant function to detect
suspect effort. The Clinical Neuropsychologist, 16, 242–250.
Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingered
neurocognitive dysfunction: Proposed standards for clinical practice and research. The
Clinical Neuropsychologist, 13, 545–561.
Suhr, J. A., & Gunstad, J. (2000). The effects of coaching on the sensitivity and specificity of
malingering measures. Archives of Clinical Neuropsychology, 15, 415–424.
Taylor, J. S. (1999). The legal environment pertaining to clinical neuropsychology.
In J. Sweet (Ed.), Forensic neuropsychology: Fundamentals and practice (pp. 421–442).
Exton, PA: Swets & Zeitlinger.
Tombaugh, T. N. (1997). The test of memory malingering (TOMM): Normative data from
cognitively intact and cognitively impaired individuals. Psychological Assessment, 9,
260–268.
Vanderploeg, R. D., Curtiss, G., & Belanger, H. G. (2005). Long-term neuropsychological
outcomes following mild traumatic brain injury. Journal of the International
Neuropsychological Society, 11, 228–236.
Vickery, C. D., Berry, D. T. R., Dearth, C. S., Vagnini, V. L., Baser, R. E., Cragar, D. E.,
et al. (2004). Head injury and the ability to feign neuropsychological deficits. Archives of
Clinical Neuropsychology, 19, 37–48.
Victor, T., & Boone, K. B. (2007). Assessing effort in a mentally retarded population.
In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological
perspective (pp. 310–345). New York: Guilford Publications, Inc.