
The Clinical Neuropsychologist
https://doi.org/10.1080/13854046.2023.2249171

Classification accuracy and resistance to coaching of the Spanish version of the Inventory of Problems-29 and the Inventory of Problems-Memory: a simulation study with mTBI patients

Esteban Puente-Lópeza, David Pinab, Paula Rambaud-Quiñonesb, José Antonio Ruiz-Hernándezb, Maria Dolores Nieto-Cañaverasc, Robert D. Shurad, Andrés Alcazar-Crevilléne and Begoña Martinez-Jarretae,f

aDepartment of Psychology, Universidad de Valladolid, Valladolid, Spain; bApplied Psychology Service, Universidad de Murcia, Murcia, Spain; cDepartment of Psychology, Universidad Antonio de Nebrija, Madrid, Spain; dMid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Salisbury VA Medical Center, Salisbury, NC, USA; eMutua MAZ, Zaragoza, Spain; fDepartment of Pathological Anatomy, Forensic and Legal Medicine and Toxicology, Universidad de Zaragoza, Zaragoza, Spain

ABSTRACT
Objective: The present study aims to evaluate the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) with a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology, and Rivermead Post Concussion Symptoms Questionnaire. Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with excellent classification accuracy for the recommended cutoff score (FDS ≥ .50; sensitivity = 89.09% for the uncoached group and 87.10% for the coached group; specificity = 95.12%). The IOP-M also showed excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for the uncoached group and 93.55% for the coached group; specificity = 92.68%). Both instruments proved to be resistant to symptom information coaching and performance warnings. Conclusions: The results confirm that both IOP measures offer a similarly valid but different perspective compared to the SIMS when assessing the credibility of symptoms of mTBI. The encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.

ARTICLE HISTORY
Received 16 February 2023; Accepted 12 August 2023; Published online 24 August 2023

KEYWORDS
Feigning & malingering; IOP-29 & IOP-M; SVT & PVT; symptom validity; experimental malingering

Clinical and forensic experts often rely on standardized psychological tests to better
assess patients’ symptoms and psychological problems. These instruments are part of
a methodologically sound and cost-effective approach to gathering valuable clinical


information. However, when relying on test-related data, the information collected
about a patient’s symptoms, complaints, and impairments is valid only if respondents
are able and willing to accurately portray their problems. When patients exaggerate
their psychological problems, their test data no longer reflect true symptoms or
abilities (Dandachi-FitzGerald et al., 2022; Giromini et al., 2022; Merten & Merckelbach,
2020), and this in turn can lead to negative consequences, such as misdiagnosis and
harmful therapeutic interventions, which may result in high financial costs (e.g. Roor
et al., 2016; van der Heide et al., 2020).
To prevent these negative outcomes, the potential presence of feigning must
always be considered and assessed when interpreting psychological test results;
however, clinical judgment alone is not accurate enough to distinguish between a
test profile that is indicative of genuine problems and one that is indicative of
symptom exaggeration (Dandachi-FitzGerald et al., 2022; Sweet et al., 2021). Thus,
a number of specialized instruments are available to facilitate this distinction.
Specifically, symptom validity tests (SVTs) are tests that assess the credibility (or
validity) of self-reported symptoms, and performance validity tests (PVTs) are tests
that assess the credibility (or validity) of observed performance on cognitive tests.
The joint use of both types of instruments is beneficial because they assess
semi-independent but not mutually exclusive constructs, and allow a greater number
of feigning strategies to be covered (Holcomb et al., 2023; Šömen et al., 2021).
Internationally, a wide variety of SVTs and PVTs are available, like the Structured
Inventory of Malingered Symptomatology (SIMS; Smith & Burger, 1997), the validity scales
of the Minnesota Multiphasic Personality Inventory-3 (MMPI-3; Ben-Porath & Tellegen,
2020), the validity scales of the Personality Assessment Inventory (PAI; Morey, 1991, 2007),
the Self-Report Symptom Inventory (SRSI; Merten et al., 2019), and the Test of Memory
Malingering (TOMM; Tombaugh, 1996). In some countries, such as the United States of
America and Canada, professional organizations strongly recommend the use of such
tools to assess the credibility of the symptoms, and their integration into assessment
batteries is common (Martin et al., 2015). In other places where the use of SVTs and PVTs
is less frequent, such as throughout Europe, there have been significant advances in the
last 10 years, yet this development has been very heterogeneous and recent scientific
contributions have not reached every part of the continent (Merten et al., 2022).
Spain is one European country that has made significant progress in this area in
recent years. As Merten et al. (2022) point out, Spanish adaptations of the most widely
used instruments, such as the PAI, the MMPI, the SIMS, and the TOMM, are available,
and scientific research has been developed on multiple topics, such as the impact of
coaching on the SIMS (Puente-López et al., 2022a), the performance of different SVTs
in victims of intimate partner violence (Marín-Torices et al., 2018), the classification
accuracy of the TOMM in patients with substance abuse (Vilar-López et al., 2021), and
the performance of different validity tests in patients who have suffered a traffic
accident (Pina et al., 2022; Puente-López et al., 2021). In addition, Spanish-specific
instruments have been developed with specific scales to analyze symptom credibility,
such as the Global Evaluation System (SEG; Arce & Fariña, 2007), the Trauma Impact
Questionnaire (CIT; Crespo et al., 2020), and PVTs like the extended version of the
Coin-in-the-Hand Test (Daugherty et al., 2021).

Despite the efforts and the substantial progress made in the country, as of today
there seem to be "more open questions than answers" in the field of symptom validity
(Merten et al., 2022, p. 119). A recent study by Puente-López et al. (2022b) asked a
Spanish sample of 40 psychologists specialized in forensic psychology and 48 physicians
specialized in forensic/legal medicine about the system used to assess the credibility
of symptoms and their perception of the available methods. Only 9% of the profes-
sionals considered that they had sufficient tools to assess symptom credibility with an
acceptable degree of certainty, and the vast majority (90%) considered that it would
be beneficial to develop new tools and systems to assess symptom validity. These
results highlight two issues: first, the importance of conducting more basic and applied
research on this topic in the country (Merten et al., 2022), and second, the importance
of validating empirically sound measures of symptom validity for use in Spain. To this
end, the IOP duo, which combines an SVT, the Inventory of Problems-29 (IOP-29; Viglione et al., 2017; Viglione & Giromini, 2020), with a companion PVT, the Inventory of Problems-Memory (IOP-M; Giromini et al., 2020), seems particularly prom-
ising as more than 20 scientific papers have been published on these instruments in
recent years, and preliminary analyses indicate that they have high classification accuracy
(see Giromini & Viglione, 2022; Puente-López et al., 2023). The use of the two IOP tests
may be particularly beneficial to add to an assessment battery when assessing the
credibility of presenting symptoms (Holcomb et al., 2023; Young et al., 2020), given the
good classification accuracy of both tests, their brief nature, and their focused design
to provide incremental validity when used in conjunction with SVTs that employ rare
or quasi-rare symptom endorsement screening strategies, such as the MMPI F scales
(Burchett & Bagby, 2022) or SIMS (Smith & Burger, 1997).
Although the IOP combination has been validated in ten different countries
(Australia, Brazil, Canada, England, France, Italy, Lithuania, Portugal, Slovenia, and USA;
Giromini & Viglione, 2022), to date, no validation study has been completed in Spain.
To address this gap in the literature, the present study aims to develop a cross-cultural
validation of the Spanish version of the IOP-29 and the IOP-M with a sample of
patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants
with feigning instructions. Considering that a new instrument must not only provide valuable information per se but also improve classification accuracy when added to a given assessment battery, the SIMS was administered in addition to the IOP combination to assess incremental and comparative validity.
Furthermore, given that SVTs and PVTs may be vulnerable to coaching (strategies to
increase the credibility of individuals’ performance in the forensic assessment; Boskovic
et al., 2022; Gorny & Merten, 2006; Puente-López et al., 2022a), the secondary objec-
tive of the present study is to evaluate the effect of different forms of coaching on
the classification accuracy of the IOP-29, IOP-M, and SIMS.

Method
A simulation design was used with a clinical reference sample as a control group.
The methodology followed was adapted from the studies by Gorny and Merten (2006)
and Puente-López et al. (2022a). Pre- and post-experimental checklists and feigning
instructions can be found in Appendices 1, 2, and 3.

Participants
Following the recommendations of Pignolo et al. (2023), we aimed to reach a minimum of 50 to 60 cases per group, anticipating that about 20% to 30% could yield invalid
data. Taking into account the possibility of discarding some participants due to the
application of the selection criteria, a total of 250 participants were recruited. Of
those, 37 were outpatients with a history of mTBI (clinical control group) and 213
were non-clinical adults. Non-clinical adults were randomly divided into three exper-
imental simulation groups using simple random sampling via the SPSS version 25
random number function: 1) honest respondents (honest control group); 2) experi-
mental feigners with basic scenario (uncoached group); and 3) experimental feigners
who obtained symptom information and a warning (coached group).
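
For readers who wish to reproduce this kind of allocation, the following R sketch (our own illustration; the study itself used the SPSS version 25 random number function, and the group labels below are ours) performs an equivalent simple random assignment:

```r
# Illustrative sketch of the random allocation described above:
# each of the 213 non-clinical adults is independently assigned
# to one of the three experimental conditions.
set.seed(2022)  # arbitrary seed so the sketch is reproducible
conditions <- c("honest", "uncoached", "coached")
assignment <- sample(conditions, size = 213, replace = TRUE)
table(assignment)  # resulting group sizes
```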
For the clinical control group, the following selection criteria were used: i) of legal
age (≥ 18); ii) able and willing to sign the informed consent; iii) having a diagnosis of
mTBI; and iv) not being involved in a financial compensation or litigation process due
to the accident. For the experimental feigner groups, the following selection criteria
were used: i) being of legal age (≥ 18); ii) able and willing to sign the informed consent;
iii) having passed the pre-manipulation check (all questions answered correctly; n = 11
did not meet); iv) passed the post-manipulation check (all questions answered correctly;
n = 8 did not meet); v) provided complete answers on all administered instruments (n = 9
did not meet); and vi) not being familiar with mTBI, either through having suffered an mTBI themselves or through having someone in their environment with a history of mTBI (n = 3 did not meet).
After applying the selection criteria, 31 experimental feigners were screened out.
The final sample consisted of 219 participants: 37 clinical controls (43.2% women;
Mage = 37.05, SD = 11.92); 65 honest controls (89.2% women; Mage = 20.54, SD = 4.92);
55 uncoached feigners (89.1% women; Mage = 20.22, SD = 4.71); and 62 coached feigners
(77.4% women; Mage = 20.63, SD = 6.38). No significant differences were found con-
cerning sex (p = .107), but the age of the clinical group differed significantly from
that of the experimental feigner groups [F(3,219) = 59.98, p < .01]. Also, no significant
age differences were found between the experimental groups (p = .912).
Regarding the mTBI patients, 59% had a university education, 22% had a high school
diploma, 16% had a primary education (first to tenth grade), and the remaining 3%
had no education. The vast majority (87%) had a job at the time of evaluation. Of the
remaining 13%, 6% were university students, and the rest were unemployed. None of
the participants were on sick leave. All participants had sustained mTBI as a result of
a traffic accident, the vast majority (92%) in a car, and the average time since the
accident was 73 days (SD = 10.31). Medical records indicated a mean Glasgow Coma
Scale score of 13.9 (SD = 0.4) at the emergency department. Only 22% had loss of
consciousness, and none had posttraumatic amnesia for longer than 24 h from the trauma.

Procedure
The experimental feigners were recruited among the students of a Spanish University in
March 2022. Participants were enrolled by e-mail and randomly assigned to the prepared
conditions. In two different classrooms set up for the study, all participants completed
the informed consent process and the study measures in paper-and-pencil format.

Each of the three conditions had specific feigning instructions adapted from Gorny
and Merten (2006) and Puente-López et al. (2022a). Participants assigned to the
uncoached condition were given a basic scenario designed to put them in the role
of a person who had suffered mTBI following a motor vehicle accident. They were
asked to imagine that because the legal proceedings to obtain compensation had
been delayed for a long time, their symptoms had been broadly mitigated, but that
they felt entitled to compensation for the discomfort and severe limitations suffered.
Thus, they decided to pretend that the symptoms persisted and remained intense.
The coached group also received this scenario, but in addition they received infor-
mation about the usual clinical presentation of mTBI (symptom information), as well as a caution indicating that they should not exaggerate their presentation or it might be detected by an expert evaluator (warning). The honest control group
received instructions requesting them to respond with complete honesty.
To ensure understanding of the instructions and correct execution of the role, two
manipulation checks were administered, one before the experiment with questions
about the instructions, and one after the experiment with questions about the exe-
cution of the role. As a positive incentive, all groups were offered extra credit of 0.25 points (out of 10) toward the final course grade; as a penalty for failure, only those who completed the scales according to the assigned role received the bonus.
For the clinical control group, a sample of mTBI outpatients was recruited at a
multidisciplinary medical center in Spain between January and June 2022. Patients
presented to the medical center for evaluation following their accidents. According to
the center’s protocol, they were evaluated on three occasions: the initial evaluation
after the accident and after having passed through the hospital emergency service, a
second time for follow-up evaluation after completing rehabilitation, and a third time
for the analysis of possible ongoing sequelae and closure of the case. Patients who
agreed to participate signed the consent form and were evaluated by EPL. Special
emphasis was placed on the anonymous nature of the study, and it was indicated that under no circumstances would the information provided within the framework of the investigation be disclosed to third parties, especially participants’ physicians. The pos-
sible presence of external incentives was assessed with an ad hoc semi-structured
interview in which information about socioeconomic status, social, financial, and family
context factors, work history, current conflicts and legal issues was analyzed and sub-
sequently verified with the information available to the physician evaluating the patient.
This study was approved by the Research Ethics Committee of the University of
Zaragoza and followed the ethical considerations proposed by the American
Psychological Association (2017).

Measures
Rivermead post concussion symptoms questionnaire (RPQ)
The RPQ (King et al., 1995) is a 16-item self-report inventory that measures cognitive,
somatic, and emotional symptoms compared with the pre-injury status of the respon-
dent. Each item is rated on a 5-point scale (0 = Not experienced at all before or after
the accident to 4 = A severe problem for me now). The total score is the sum of all
the item responses, with ratings of 1 recoded as 0 because they do not represent a problem at present, and ranges from 0 to 64. The Spanish version of the scale, translated by the
CENTER-TBI Consortium (Steinbuechel et al., 2021) was used.
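
To make the scoring rule concrete, the following R sketch (our own illustration, not code from the study or the CENTER-TBI Consortium) computes the RPQ total as described above:

```r
# Illustrative RPQ total score: 16 items rated 0-4; ratings of 1 are
# recoded to 0 because they do not represent a current problem,
# so the total ranges from 0 to 64.
score_rpq <- function(ratings) {
  stopifnot(length(ratings) == 16, all(ratings %in% 0:4))
  ratings[ratings == 1] <- 0  # "no more of a problem" counts as 0
  sum(ratings)
}

score_rpq(c(0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0))  # returns 27
```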

Structured inventory of malingered symptomatology (SIMS)


The SIMS (Widows & Smith, 2005) is a 75-item true/false self-report SVT used to assess
possible symptom overreporting. The primary score is the SIMS total score, with higher
scores indicating invalidity. The SIMS contains 5 subscales of 15 items each: Psychosis
(P), Neurological Impairment (NI), Amnestic Disorders (AM), Low Intelligence (LI), and
Affective Disorders (AF), which may be used to further understand the types of
symptoms driving an elevated total score. The Spanish version of the scale, adapted
by González-Ordi and Santamaría Fernández (2009), was used. Several total score
cutoffs have been reported in the extant literature on the SIMS (Shura et al., 2022;
van Impelen et al., 2014), thus three cutoff scores were evaluated: > 14 (cutoff per
manual), > 16 (Spanish manual cutoff), and > 21 (as a more conservative cutoff).

Inventory of problems – 29 (IOP-29)


The IOP-29 (Viglione & Giromini, 2020) is a 29-item self-report SVT that takes approximately 5 to 10 min to complete. It assesses the credibility
of clinical presentations related to posttraumatic stress disorder, depression/anxiety,
psychosis, cognitive impairment, and combinations thereof. Most of the IOP-29 items
offer three alternative response options, i.e. “true,” “false,” and “doesn’t make sense.” A
small number of items are instead logical or mathematical tasks that require an
open-ended response. The main outcome of the IOP-29 is named the False Disorder
Probability Score (FDS). The formula used to generate the FDS compares the responses
of the test-taker with those of two reference groups: a sample of individuals with
psychiatric diagnoses who completed the IOP-29 with no known motivation to exag-
gerate their mental problems (patient reference group) and a sample of healthy
volunteers who completed the IOP-29 with instruction to pretend to have a mental
illness (feigner reference group). The FDS is a probability value that reflects the likelihood of obtaining a given IOP-29 protocol within the feigner rather than the patient reference group. The
higher the FDS value, the lower the credibility of the symptom report. For this study
we used the cutoff scores recommended in the manual: FDS ≥ .30, .50, and .65 (Viglione & Giromini, 2020).
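
The exact FDS formula is proprietary and is not reproduced here. Purely to illustrate the underlying logic of comparing a respondent's answers against two reference groups, the following R sketch computes a naive Bayes-style probability from invented per-item endorsement rates; every object and rate in it is hypothetical:

```r
# Illustrative only: NOT the proprietary IOP-29 scoring formula.
# A "false disorder probability" in the spirit of the FDS: the posterior
# probability that a 0/1 response vector comes from the feigner rather
# than the patient reference group, naively assuming independent items.
fds_like_score <- function(responses, p_feigner, p_patient, prior = 0.5) {
  lik_f <- prod(ifelse(responses == 1, p_feigner, 1 - p_feigner))
  lik_p <- prod(ifelse(responses == 1, p_patient, 1 - p_patient))
  (prior * lik_f) / (prior * lik_f + (1 - prior) * lik_p)
}

set.seed(1)
p_feigner <- runif(29, 0.40, 0.90)      # invented rates among feigners
p_patient <- runif(29, 0.05, 0.45)      # invented rates among patients
responses <- rbinom(29, 1, p_feigner)   # a simulated feigner's answers
fds_like_score(responses, p_feigner, p_patient)  # near 1 = less credible
```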

Inventory of problems–memory (IOP-M)


The IOP-M (Giromini et al., 2020) is a PVT composed of 34 forced-choice recognition
items with two response alternatives. Each item presents two words or short state-
ments: one that was part of the content of the IOP-29 (target) and one that was not
(foil; Giromini et al., 2020). The IOP-M is administered immediately after the IOP-29
without any warning of an upcoming recognition trial, which breaks the traditional
forced-choice recognition model used by most PVTs “and emphasizes the importance
of thoroughly encoding target stimuli” (Holcomb et al., 2023, p. 2). The development
study by Giromini et al. (2020) indicated that the IOP-M had high classification accu-
racy and added incremental validity to the results of the IOP-29. These results have
been confirmed by other studies (Banovic et al., 2022; Carvalho et al., 2021; Gegner
et al., 2022; Holcomb et al., 2023), and suggest that the combined use of both tests
provides better classification accuracy in identifying invalid presentations than using
the IOP-29 alone. The IOP-M takes approximately 5 to 10 min to complete. If the total
number of IOP-M items answered correctly is less than 30, the performance is con-
sidered invalid. Conversely, a total score of ≥ 30 is interpreted as a credible result.
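
The decision rule can be expressed in a single line; a trivial R sketch of our own:

```r
# IOP-M decision rule as described above: at least 30 of the 34
# recognition items correct is interpreted as a credible performance.
iopm_credible <- function(n_correct) n_correct >= 30
iopm_credible(c(34, 29, 30))  # TRUE FALSE TRUE
```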

Data analysis
Differences between groups were analyzed using a one-factor analysis of variance (ANOVA)
with Bonferroni post hoc contrasts. Cohen’s d was used for effect size, with the range of
d proposed by Rogers et al. (2003): Moderate ≥ 0.75; Large ≥ 1.25; and Very large ≥ 1.50.
The classification accuracy of SIMS, IOP-29, and IOP-M was assessed by calculating sen-
sitivity, specificity, positive and negative likelihood ratio, and positive and negative pre-
dictive values (assuming a 30% prevalence; Frederick, 2018) using standard formulas (see, for example,
Shreffler & Huecker, 2022). Finally, the incremental validity of the measures was analyzed
using a series of hierarchical logistic regressions. Given that SIMS is a widely used and
consolidated SVT, we wanted to evaluate the incremental validity provided by the use
of the IOP combination. Therefore, in the first model, the SIMS was entered first and then
either the IOP-29 or the IOP-M was entered individually as an additional predictor in each
subsequent step. Since the RPQ does not have up-to-date cutoff scores prepared for the
target population, its scores were analyzed descriptively only. Analyses were performed
with IBM Statistics SPSS version 25 and R environment (4.2.0, R Core Team, 2022), and
were completed independently by EPL and MDNC as a check that analyses were con-
ducted correctly and results reported accurately.
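
As an illustration of these computations, the following base-R sketch (our own; the counts in the example are hypothetical) derives the reported metrics from a 2 x 2 classification table, with exact binomial 95% confidence intervals and predictive values at the 30% base rate mentioned above, plus a standard pooled-SD implementation of Cohen's d (the study does not spell out its exact d formula):

```r
# Illustrative computation of the classification accuracy statistics
# reported in this study from 2x2 counts (base R only).
classification_metrics <- function(tp, fn, tn, fp, prevalence = 0.30) {
  sens <- tp / (tp + fn)   # feigners correctly flagged
  spec <- tn / (tn + fp)   # patients correctly cleared
  list(
    sensitivity    = sens,
    sensitivity_ci = binom.test(tp, tp + fn)$conf.int,  # exact 95% CI
    specificity    = spec,
    specificity_ci = binom.test(tn, tn + fp)$conf.int,
    lr_positive    = sens / (1 - spec),
    lr_negative    = (1 - sens) / spec,
    # predictive values at an assumed 30% base rate of feigning
    ppv = (sens * prevalence) /
          (sens * prevalence + (1 - spec) * (1 - prevalence)),
    npv = (spec * (1 - prevalence)) /
          (spec * (1 - prevalence) + (1 - sens) * prevalence)
  )
}

# Hypothetical example: 49 of 55 feigners flagged, 39 of 41 patients cleared.
classification_metrics(tp = 49, fn = 6, tn = 39, fp = 2)

# Cohen's d for two independent groups (pooled SD):
cohens_d <- function(x, y) {
  sp <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
             (length(x) + length(y) - 2))
  (mean(x) - mean(y)) / sp
}
```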

Results
Tables 1 and 2 show the means, standard deviations, ANOVA results, and Cohen’s d
effect sizes for each of the groups. Significant differences were observed in all com-
parisons for all tests [F(3,219) ranges from 6.22 to 431.92]. As expected, the highest
mean test scores occurred in the uncoached group.
Specifically, the Bonferroni-corrected pairwise comparisons for the SIMS showed
significant differences between all groups, except for the comparison between honest
controls and patient controls (p = .18). The two groups of controls scored significantly
lower than the instructed feigners groups (p < .001), with very large effect sizes (d
ranges from −1.96 to −3.24). Regarding the IOP-29-FDS, the two groups of controls
also obtained significantly lower scores than the groups of instructed feigners (p <
.001), with very large effect sizes (d ranges from −3.72 to −4.35). Similarly, no signif-
icant differences were observed between the patients and the honest controls (p =
.98), but unlike SIMS, no significant differences were identified between the coached
and uncoached feigners (p = .11).
In contrast, but as expected, the control groups scored higher on the IOP-M com-
pared with the instructed feigner groups (p < .001), also with very large effect sizes.
As with IOP-29-FDS, no significant differences were found between the two control
groups (p = .64) or between the two groups of instructed feigners (p = .24).

Table 1. Validity scores, omnibus ANOVA results, and post-hoc tests across groups (N = 219).

Test Scores                   Honest M (SD)   Uncoached M (SD)   Coached M (SD)   Patients M (SD)   F(3,219)   Post-hoc tests – Bonferroni correction
SIMS Total                    8.43 (5.30)     34.24 (9.91)       24.65 (8.97)     10.49 (4.81)      140.11     H-U**, H-C**, U-C**, U-P**, C-P**
SIMS Psychosis                0.78 (0.97)     1.84 (2.81)        0.73 (1.32)      0.51 (0.71)       6.22       H-U**, U-C**, U-P**
SIMS Neurological Impairment  0.77 (1.33)     6.38 (3.53)        4.23 (2.83)      3.34 (1.83)       50.37      H-U**, H-C**, H-P**, U-C**, U-P**
SIMS Affective Disorders      4.11 (2.90)     6.96 (3.56)        5.50 (2.87)      4.10 (2.49)       11.21      H-U**, U-P**
SIMS Amnestic Disorders       0.54 (0.93)     11.73 (2.27)       9.31 (3.00)      0.76 (1.11)       431.92     H-U**, H-C**, U-C**, U-P**, C-P**
SIMS Low Intelligence         2.23 (1.04)     7.33 (3.37)        4.89 (2.97)      1.78 (1.52)       58.67      H-U**, H-C**, U-C**, U-P**, C-P**
IOP-29-FDS                    0.10 (0.09)     0.78 (0.21)        0.72 (0.18)      0.10 (0.15)       281.01     H-U**, H-C**, U-P**, C-P**
IOP-M                         33.02 (1.69)    18.07 (7.86)       19.23 (6.34)     32.54 (1.89)      133.24     H-U**, H-C**, U-P**, C-P**
RPQ3                          1.51 (2.12)     7.05 (3.34)        6.39 (2.10)      7.20 (1.72)       75.40      H-U**, H-C**, H-P**
RPQ13                         7.51 (9.11)     30.33 (10.39)      24.71 (8.39)     15.05 (4.41)      82.23      H-U**, H-C**, H-P**, U-C**, U-P**, C-P**

Note. SIMS = Structured Inventory of Malingered Symptomatology; IOP-29-FDS = Inventory of Problems-29 False Disorder Probability Score; IOP-M = Inventory of Problems Memory module; RPQ3 & RPQ13 = Rivermead Post Concussion Symptoms Questionnaire score for the first three items and for the next 13 items, respectively; ** = p < .01; H = Honest group; U = Uncoached group; C = Coached group; P = Patients group. Abbreviations in the post-hoc column denote the specific comparisons that were significant.

Table 2. Cohen’s d effect sizes in the comparison of groups (N = 154).

              Uncoached   Coached
SIMS Total
  Honest        −3.24      −2.20
  Uncoached       *         1.01
  Patients      −3.04      −1.96
IOP-29-FDS
  Honest        −4.20      −4.35
  Uncoached       *         0.30
  Patients      −3.72      −3.74
IOP-M
  Honest        −2.62      −2.97
  Uncoached       *         0.26
  Patients      −2.53      −2.84
RPQ3
  Honest        −1.98      −2.31
  Uncoached       *         0.23
  Patients       0.05       0.42
RPQ13
  Honest        −2.33      −1.96
  Uncoached       *         0.59
  Patients      −1.91      −1.44

Note. SIMS = Structured Inventory of Malingered Symptomatology; IOP-29-FDS = Inventory of Problems-29 False Disorder Probability Score; IOP-M = Inventory of Problems Memory module; RPQ3 & RPQ13 = Rivermead Post Concussion Symptoms Questionnaire score for the first three items and for the next 13 items, respectively.

Classification accuracy
Table 3 shows the classification accuracy of the tests used to compare clinical patients
and the two groups of experimental feigners. Regarding the SIMS, for the cutoff score
Table 3. Classification accuracy of tests in patients vs instructed feigners (N = 154).
Uncoached Coached
Scale (CoS) SPEC 95% CI SEN 95% CI L+ 95% CI L− 95% CI SEN 95% CI L+ 95% CI L− 95% CI
SIMS >14 75.61 59.70–87.64 100 93.51–100 4.10 2.39–7.03 * * 93.55 84.30–98.21 3.84 2.23–6.60 0.09 0.03–0.22
SIMS >16 87.80 73.80–95.92 96.36 87.47–99.56 7.90 3.47–17.99 0.04 0.01–0.16 87.10 76.15–94.26 7.14 3.12–16.33 0.15 0.08–0.28
SIMS >21 100 91.40–100 90.91 80.05–96.98 * * 0.09 0.04–0.21 62.90 49.69–74.84 * * 0.37 0.27–0.51
IOP-29-FDS ≥ .30 92.68 80.08–98.46 94.55 84.88–98.86 12.92 4.34–38.48 0.06 0.02–0.18 96.77 88.83–99.61 13.23 4.45–39.35 0.03 0.01–0.14
IOP-29-FDS ≥ .50 95.12 83.47–99.40 89.09 77.75–95.89 18.26 4.71–70.79 0.11 0.05–0.24 87.10 76.15–94.26 17.85 4.61–69.23 0.14 0.07–0.26
IOP-29-FDS ≥ .65 97.56 87.14–99.94 80.00 67.03–89.57 32.80 4.71–228.35 0.20 0.12–0.35 66.13 52.99–77.67 27.11 3.88–189.45 0.35 0.24–0.49
IOP-M ≤ 27 97.56 87.14–99.94 87.27 75.52–94.73 35.78 5.15–284.64 0.13 0.07–0.26 90.32 80.12–96.37 37.03 5.33–257.10 0.10 0.05–0.21
IOP-M ≤ 28 92.68 80.08–98.46 87.27 75.52–94.73 11.93 3.99–35.62 0.14 0.07–0.28 91.94 82.17–97.33 12.56 4.22–37.44 0.09 0.04–0.20
IOP-M ≤ 29 92.68 80.08–98.46 87.27 75.52–94.73 11.93 3.99–35.62 0.14 0.07–0.28 93.55 84.30–98.21 12.78 4.29–38.08 0.07 0.03–0.18
IOP-M ≤ 30 87.80 73.80–95.92 89.09 77.75–95.89 7.31 3.20–16.70 0.12 0.06–0.27 96.77 88.83–99.61 7.94 3.49–18.06 0.04 0.04–0.14
IOP-M ≤ 31 78.05 62.39–89.44 90.91 80.05–96.98 4.14 2.31–7.42 0.12 0.05–0.27 98.39 91.34–99.96 4.48 2.51–7.99 0.02 0.00–0.15
Note. SPEC = specificity; 95% CI = 95% confidence interval; SEN = sensitivity; L + and L− = positive and negative likelihood ratio. SIMS = Structured Inventory of Malingered
Symptomatology; IOP-29-FDS = Inventory of Problems-29 False Disorder Probability Score; IOP-M = Inventory of Problems Memory module; * = The calculation yields zero or is undefined (division by zero).

>16, the specificity was 87.80%, and the sensitivity was 96.36% for the uncoached
group and 87.10% for the coached group. Using a more liberal cutoff score of >14,
sensitivity rose to 100% and 93.55% for the uncoached and coached groups, respec-
tively, but specificity decreased to 75.61%. A more conservative cutoff score of >21
increased specificity to 100%, but decreased sensitivity to 90.91% for the uncoached
group and 62.90% for the coached group.
For the IOP-29, for the cutoff score of FDS ≥ .50 the specificity was 95.12%, and the
sensitivity was 89.09% for the uncoached group and 87.10% for the coached group. With
a more conservative cutoff score (FDS ≥ .65), specificity increased to 97.56%, while sen-
sitivity decreased in the uncoached and coached groups to 80.0% and 66.13%, respectively.
In contrast, a more liberal cutoff score (FDS ≥ .30) decreased specificity to 92.68%, and
increased sensitivity to 94.55% for the uncoached group and 96.77% for the coached group.
As for IOP-M, the ≤ 29 cutoff score recommended by Giromini et al. (2020) had a
specificity of 92.68%, and a sensitivity of 87.27% for the uncoached group and 93.55%
for the coached group. The cutoff score ≤ 28 had the same specificity and sensitivity
in the uncoached group as ≤ 29, but in the coached group it decreased to 91.94%,
while the cutoff score of ≤ 27 increased specificity to 97.56%, maintaining sensitivity
at 87.27% in the uncoached group and decreasing to 90.32% in the coached group.

Incremental validity
Table 4 shows the results of a series of hierarchical logistic regressions in which group
membership (patient vs uncoached/coached feigner) was set as the outcome variable.
In the first step, the SIMS total score was entered as the sole predictor, in the second
step the IOP-29 FDS was included, and in the third step the IOP-M score was entered.

Table 4. Hierarchical logistic regression models predicting group membership.


Patient vs Uncoached Patient vs Coached Patient vs All Feigners
Step 1
X2 Model 108.76** 75.78** 113.44**
BSIMS (SE) 0.57 (0.17)** 0.37 (0.07)** 0.39 (0.07)**
Wald 10.89 24.77 28.35
% Class Patient 92.7 80.5 75.6
% Class Feigner 94.5 90.3 96.6
% Class Total 93.8 86.4 91.1
Step 2
X2 Model 114.89** 111.06** 148.21**
BIOP-29 (SE) 6.25 (2.70)** 9.51 (2.30)** 8.87 (2.02)**
Wald 5.34 16.97 19.31
% Class Patient 95.1 92.7 92.7
% Class Feigner 94.5 96.8 97.4
% Class Total 94.8 95.1 96.2
Step 3
X2 Model 115.68** 126.13** 160.96**
BIOP-M (SE) −0.13 (0.17) −0.45 (0.20)* −0.34 (0.13)*
Wald 0.65 5.11 6.58
% Class Patient 95.1 95.1 95.1
% Class Feigner 94.5 98.4 97.4
% Class Total 94.8 97.1 96.8
Note. Patients N = 37; Uncoached N = 55; Coached N = 62; All Feigners N = 107; SIMS = Structured Inventory of
Malingered Symptomatology; IOP-29 = Inventory of Problems – 29; IOP-M = Inventory of Problems Memory module;
(SE) = Standard error.

In all three comparisons, the inclusion of the IOP-29 (second step) significantly improved the overall classification accuracy of the first step. Classification accuracy increased
by 8.7% (from 86.4% to 95.1%) when comparing patients and coached, and by 5.1%
(from 91.1% to 96.2%) in that of patients vs both feigners groups. On the other hand,
the inclusion of IOP-M slightly improved the overall classification accuracy of the
model only in the comparison between patients and coached (2%, from 95.1% to
97.1%) and in the comparison between patients vs both groups of feigners (0.6%,
from 96.2% to 96.8%).
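
To illustrate how such stepwise comparisons can be run, the sketch below (our own, not the authors' code; the data frame dat and its column names are hypothetical) fits the three nested logistic models in R and tests each step with a likelihood ratio test:

```r
# Illustrative sketch of the hierarchical logistic regressions:
# group membership (0 = patient, 1 = feigner) regressed on the SIMS
# total, then the IOP-29 FDS, then the IOP-M total. 'dat' is hypothetical.
m1 <- glm(group ~ sims_total, data = dat, family = binomial)
m2 <- update(m1, . ~ . + iop29_fds)   # step 2: add the IOP-29
m3 <- update(m2, . ~ . + iopm_total)  # step 3: add the IOP-M
anova(m1, m2, m3, test = "Chisq")     # does each step improve model fit?

# Overall classification accuracy of the full model at a .50 threshold:
mean((predict(m3, type = "response") >= .5) == dat$group)
```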

Discussion
The present study evaluated the classification accuracy and resistance to different
forms of coaching of the Spanish version of the IOP-29 and the IOP-M in a sample
of experimental feigners instructed to feign persisting symptoms after mTBI, honest
controls, and patients with a history of mTBI. This is the first study to date to evaluate
the performance of the IOP-29 and IOP-M in a Spanish population, and one of the
few to include patients with a genuine history of mTBI. The scores of both tests
produced very large effect sizes in the comparisons between controls and feigners,
in line with results from previous studies with mTBI patients in the United Kingdom
(Bosi et al., 2022) and in Australia (Gegner et al., 2022). Instructed feigners, especially
groups without specific preparation (uncoached or unprepared), produced the highest
scores. This is in line with both previous publications on the IOP-29/M (e.g. Banovic
et al., 2022; Bosi et al., 2022; Gegner et al., 2022), as well as publications on other
SVTs, such as the SIMS (Boskovic et al., 2022; Puente-López et al., 2022a), the SRSI
(Boskovic & Akca, 2022), and the MMPI-2 (Aparcero et al., 2023).
On the other hand, IOP-29 and IOP-M scores did not differ between nonclinical
and clinical controls. For both tests, nonclinical controls produced scores similar to
those of previous studies such as Bosi et al. (2022), Gegner et al. (2022), and Šömen
et al. (2021). However, for the IOP-29, clinical controls produced lower scores than
other studies, such as those of Giromini et al. (2018, 2019) and Ilgunaite et al. (2022),
where the mean FDS value ranged between .25 and .30. One possible explanation
for these results is that the items of the applied SVTs reflect genuine complaints or
symptoms. Although there are multiple reasons why a patient may generate a positive
score on an SVT, such as inattentive response style or symptom misinformation
(Merckelbach et al., 2019), this effect has been observed in other SVTs such as the
SIMS. As Puente-López et al. (2021) and Shura et al. (2022) point out, although the
SIMS theoretically only presents implausible, rare or extreme symptoms, some items
have been identified as presenting common symptomatology in the clinical popula-
tion, such as those related to sleep problems. It is possible that the IOP-29 includes
some items that reflect actual symptomatology and are to some degree recognizable
for patients with psychopathology, but not for patients with mTBI.
Another possible explanation is that the present study excluded actively litigating
patients and evaluated the presence of possible sources of incentives. It is possible
that the patients in other studies, like Ilgunaite et al. (2022), exaggerated their symp-
tomatology to some extent. Although this is a relatively unlikely situation due to the
type of patients evaluated, this issue should not be taken for granted, and any research
involving clinical controls should comprehensively and objectively analyze the presence
and influence of possible incentives, as well as patients’ expectations about the pos-
sible advantages and disadvantages of psychological assessment (Dandachi-FitzGerald
et al., 2016).
Regarding the effects of coaching, the scores on both IOP-29 and IOP-M were
somewhat less extreme in the coached group than in the non-coached group, but
not significantly so. In contrast, mean SIMS scores decreased significantly in the coached group,
in accordance with the findings of Puente-López et al. (2022a). This suggests that the
three SVTs are resistant to the different forms of coaching studied, but the IOP-29
and IOP-M show more resistance than the SIMS, which is consistent with previous
similar studies (Boskovic et al., 2022; Gegner et al., 2022). IOP-29 and IOP-M use
innovative detection strategies: the former focuses on how one copes with their
problems, rather than asking whether or not they experience a list of rare symptoms;
the latter focuses on incidental recognition. Arguably, this approach might make it
more difficult for the feigner to understand how to feign in a more credible manner,
compared to classic rare-symptom-based tests.
With respect to the classification accuracy of the instruments, the performance of
the IOP-29 was slightly superior to that of the SIMS and the IOP-M. In the former,
for FDS ≥ .30, the sensitivity was similar to that reported by previous studies such
as Gegner et al. (2022), Šömen et al. (2021), and Boskovic et al. (2022). However, the
specificity was much higher than other studies with clinical patients, such as Giromini
et al. (2018, 2019), Ilgunaite et al. (2022), and Roma et al. (2020), where the specificity
ranged between 60% and 70%. For FDS ≥ .50, the recommended cutoff for general
use of the IOP-29 (Viglione & Giromini, 2020), the specificity (95.1%) and sensitivity
(87.1% and 89.0%) values were in line with what had been observed in previous
studies such as Gegner et al. (2022), Ilgunaite et al. (2022), Šömen et al. (2021), and
Winters et al. (2021). Although the specificity was 5% lower than that obtained at
the equivalent SIMS cutoff score (> 21), the sensitivity in the coached group remained
above 85%, whereas in SIMS it decreased to 62.9%. For FDS ≥ .65, specificity increased
very slightly (from 95.1% to 97.5%), in exchange for a very significant decrease in
sensitivity, especially in the coached group (66.1%), which is in line with previous
studies such as Giromini et al. (2019), Viglione et al. (2017), and Banovic et al. (2022).
Overall, the Spanish IOP-29 findings are similar to both the original version of the
instrument (Viglione & Giromini, 2020) and other validation studies in different
countries (e.g. Carvalho et al., 2021; Šömen et al., 2021). The cutoff FDS ≥ .30 is
suitable for use in screening assessments where it is necessary to prioritize sensitivity
over specificity, whereas the cutoff FDS ≥ .50 is recommended for the general use
of the instrument. Regarding the cutoff FDS ≥ .65, as Holcomb et al. (2023) point
out, although the IOP-29 manual (Viglione & Giromini, 2020) recommends using it
to minimize the possibility of false positives in high-stakes settings, our results show
a very slight increase in specificity, in exchange for a very significant decrease in
sensitivity. For this reason, the cutoff FDS ≥ .50 appears to be sufficiently specific
(95%) for routine use in Spain in both clinical and forensic settings.
Regarding the classification accuracy of the IOP-M, the cutoff of ≤ 29 yielded a
specificity of 92.7%, together with a sensitivity in the uncoached and coached groups
of 87.2% and 93.5%, respectively. At the cutoff of ≤ 28, the specificity and sensitivity
of the uncoached group did not change, but the sensitivity of the coached group
decreased to 92.0%, while at ≤ 27 the specificity increased to 97.5% in exchange
for decreasing sensitivity only in the coached group (90.3%). On the other hand,
cutoffs of ≤ 30 and ≤ 31 decreased specificity to 87.8% and 78.0%, respectively, in
exchange for increasing sensitivity in the uncoached group to 89.0% and 90.9%,
respectively, and in the coached group to 96.7% and 98.3%, respectively. These
findings are in line with Bosi et al. (2022), Giromini et al. (2020), and Šömen et al.
(2021), where the cutoff of ≤ 29 generated similar scores and was recommended
for general use. The cutoff of ≤ 27 resulted in very high specificity in exchange for a slight decrease in sensitivity, making it appropriate for evaluations in high-stakes
contexts where the minimization of false positives should be prioritized. Similarly, a
cutoff of ≤ 30 can be used for situations where it is necessary to prioritize the
detection of true positives, such as screening tasks. The cutoff of ≤ 31 is not rec-
ommended for this task as specificity decreased by 10% in exchange for a very slight
increase in sensitivity.
The combination of both the IOP-29 and the IOP-M further added incremental
validity when used together with the SIMS, which is consistent with previous similar
studies (Giromini et al., 2018; Bosi et al., 2022) and supports their joint use. It should
be noted that the two versions of the IOP improved the overall accuracy of the
models when included separately after the SIMS. However, when the IOP-M was added
to a model that had previously included the SIMS and the IOP-29, the increase was
still statistically significant, but with a very small effect size. Even so, given that the
IOP-M is a short, easy-to-use and efficient instrument, we believe that its application
together with IOP-29 is beneficial, and the joint use of the two IOP measures is a
useful addition to any test battery aimed at assessing symptom validity in multiple
contexts (Holcomb et al., 2023). Nevertheless, it would be beneficial if additional
research could be conducted to further evaluate the joint performance of the two
IOP measures.

Research and clinical implications


This study provided promising results supporting the efficacy of IOP-29 and IOP-M
as SVT and PVT, respectively, and the use of the IOP combination in the Spanish
population. In this study, both IOP tests appeared to be resistant to the effects of
different forms of coached feigning, which is known to be a threat to the classification
accuracy of SVTs and PVTs (Holcomb et al., 2023; Kosky et al., 2022). Nevertheless,
our results, although promising, offer only preliminary support for the use of the IOP combination and should be interpreted with caution, especially in high-stakes contexts such as forensic settings. For future research, it would be of particular interest to replicate
our findings with a criterion group paradigm design, using different symptomatology
and diagnoses, and in other contexts.
One of the strengths of the IOP-29 is its incremental validity when used together
with other SVTs, such as the SIMS, or the MMPI-2 (Giromini et al., 2018, 2019). Given
that in the assessment of symptom validity the use of multiple SVTs is recommended
(Giromini et al., 2022), the incorporation of the IOP-29 in the assessment protocol may
be beneficial to maximize the classification accuracy. The IOP-29’s brief nature is also
an advantage in this regard, given that its short application time (5 to 10 min) allows
it to be used in conjunction with more extensive instruments, such as the MMPI or
the PAI. Similarly, the incorporation of the IOP-29 into an assessment battery can
also be beneficial for research. For example, studies such as Curtis et al. (2019) use
multidimensional malingering criteria (Sherman et al., 2020) with a comprehensive
neuropsychological battery, but this design may be inaccessible to some researchers
because of the high cost and time involved. The brief nature and high classification
accuracy of the IOP-29 make it a valuable inclusion in a group classification system
for the criterion group paradigm, and we believe that further research regarding this
issue will be of high interest.
Finally, with regard to the IOP-M, our findings indicate that it is a promising PVT
with high classification accuracy, which is consistent with emerging evidence. As
Erdodi et al. (2023) and Holcomb et al. (2023) indicate, the properties of the instrument
are comparable to those of other older and more established PVTs, and with further
evidence it may become a very interesting addition to the assessment of performance
in neuropsychological assessment.

Limitations
Despite the promising results, this study is not without limitations. First, the simulation
design tends to produce substantially higher classification accuracy and effect sizes
than criterion group paradigms, and also limits external validity (Rogers & Gillard,
2011). It would be advisable for future research to replicate the results obtained in
our study with a criterion group paradigm, where results are more clinically relevant
(Rogers & Gillard, 2011). Nevertheless, the use of the simulation design is frequent
in the early stages of validation of SVTs due to its high internal validity (Rogers &
Cruise, 1998), so we consider that this study represents an appropriate first step to
evaluate the performance of the IOP combination in a Spanish population. Second,
we have relied on a sample of patients diagnosed with mTBI, and the results obtained
may not be generalizable to other populations or conditions. More research is needed
with different contexts, and using additional conditions before recommending the
application of the test more broadly. Third, although we have tried to objectively rule
out possible feigners in the group of clinical controls, the classification criteria used
cannot fully guarantee that all clinical controls are genuine. Given that limited eval-
uation time was available with the participants, the inclusion of any criteria-based
classification system such as those of Bianchini et al. (2005) was not possible. As
noted by Pina et al. (2022), with the resources currently available this issue remains
a major challenge for researchers. Last, the age of the experimental feigners differed significantly from that of the clinical control group, and this may have influenced the results. This limitation has already been noted in previous studies by the authors in Spain, such as Pina et al. (2022) and Puente-López et al. (2022a, 2022b).
Despite these limitations, this study is the first to independently replicate the prom-
ising findings of Giromini et al. (2020) regarding the classification accuracy of the
IOP-29 and IOP-M with mTBI patients, and the first to contribute to the study of the
effectiveness of the aforementioned tests within a Spanish sample. Although it would
still be premature to recommend their application in clinical or forensic contexts in this
country, the encouraging results indicate that both tests are a valuable addition to the
symptom validity assessment systems of professionals in Spain. We believe that it is of
particular interest to further develop research on the IOP combination, with the primary
aim of evaluating its effectiveness in multiple contexts and with various conditions.

Acknowledgments
We would like to thank Luciano Giromini for providing us with all the information and materials
necessary to successfully execute this study.

Disclosure statement
The authors declare that they have no known competing financial interests or personal rela-
tionships that could have appeared to influence the work reported in this paper.

Funding
This study was funded by the Cathedra MAZ from the Universidad de Zaragoza. The authors
declare that the research was conducted in the absence of any commercial or financial relation-
ships that could be construed as a potential conflict of interest. This research was supported in
part by the Salisbury VA Health Care System and Mid-Atlantic (VISN 6) Mental Illness Research,
Education, and Clinical Center (MIRECC). The views, opinions, and/or findings contained in this
article are those of the authors and should not be construed as an official Veterans Affairs posi-
tion, policy, or decision, unless so designated by other official documentation.

ORCID
Esteban Puente-López http://orcid.org/0000-0001-6367-457X
David Pina http://orcid.org/0000-0001-5944-4683
José Antonio Ruiz-Hernández http://orcid.org/0000-0002-8319-2129
Maria Dolores Nieto-Cañaveras http://orcid.org/0000-0001-9560-0313
Robert D. Shura http://orcid.org/0000-0002-9505-0080
Begoña Martinez-Jarreta http://orcid.org/0000-0001-6469-9189

Data availability statement


Since the database contains sensitive information on symptom validity tests, whose dissemination could significantly undermine their effectiveness, an adapted version has been created for sharing. This database can be requested from the corresponding author
(david.pina@um.es).

References
American Psychological Association. (2017). Ethical principles for psychologists and code of conduct.
https://www.apa.org/ethics/code
Aparcero, M., Picard, E. H., Nijdam-Jones, A., & Rosenfeld, B. (2023). Comparing the ability of
MMPI-2 and MMPI-2-RF validity scales to detect feigning: A meta-analysis. Assessment, 30(3),
744–760. https://doi.org/10.1177/10731911211067535
Arce, R., & Fariña, F. (2007). Cómo evaluar el daño moral consecuencia de accidentes de tráfico: validación de un protocolo de medida [How to assess moral damages resulting from traffic accidents: Validation of a measurement protocol]. Papeles del Psicólogo, 28, 205–210.
Banovic, I., Filippi, F., Viglione, D. J., Scrima, F., Zennaro, A., Zappalà, A., & Giromini, L. (2022).
Detecting coached feigning of schizophrenia with the Inventory of Problems – 29 (IOP-29)
and its memory module (IOP-M): A simulation study on a French community sample.
International Journal of Forensic Mental Health, 21(1), 37–53. https://doi.org/10.1080/14999013.2021.1906798
Ben-Porath, Y. S., & Tellegen, A. (2020). Minnesota multiphasic personality inventory-3 (MMPI-3):
Manual for administration, scoring, and interpretation. University of Minnesota Press.
Bianchini, K. J., Greve, K. W., & Glynn, G. (2005). On the diagnosis of malingered pain-related
disability: Lessons from cognitive malingering research. The Spine Journal: Official Journal
of the North American Spine Society, 5(4), 404–417. https://doi.org/10.1016/j.spinee.2004.11.016
Boskovic, I., & Akca, A. Y. E. (2022). Presenting the consequences of feigning: Does it diminish
symptom overendorsement? An analog study. Applied Neuropsychology: Adult, 1–10. Advance
online publication. https://doi.org/10.1080/23279095.2022.2044329
Boskovic, I., Akca, A. Y. E., & Giromini, L. (2022). Symptom coaching and symptom validity tests:
An analog study using the structured inventory of malingered symptomatology, Self-Report
Symptom Inventory, and Inventory of Problems-29. Applied Neuropsychology. Adult, 1–13.
Advance online publication. https://doi.org/10.1080/23279095.2022.2057856
Bosi, J., Minassian, L., Ales, F., Akca, A., Winters, C., Viglione, D. J., Zennaro, A., & Giromini, L. (2022).
The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An on-
line simulation study in a community sample from the United Kingdom. Applied Neuropsychology.
Adult, 1–13. Advance online publication. https://doi.org/10.1080/23279095.2022.2115910
Burchett, D., & Bagby, R. M. (2022). Assessing negative response bias: A review of the noncred-
ible overreporting scales of the MMPI‑2‑RF and MMPI-3. Psychological Injury and Law, 15(1),
22–36. https://doi.org/10.1007/s12207-021-09435-9
Carvalho, L., Reis, A., Colombarolli, M. S., Pasian, S. R., Miguel, F. K., Erdodi, L. A., Viglione, D.
J., & Giromini, L. (2021). Discriminating feigned from credible PTSD symptoms: A validation
of a Brazilian version of the Inventory of Problems-29 (IOP-29). Psychological Injury and Law,
14(1), 58–70. https://doi.org/10.1007/s12207-021-09403-3
Crespo, M., González-Ordi, H., Gómez-Gutiérrez, M., & Santamaría, P. (2020). CIT, Cuestionario de
Impacto del Trauma [Trauma Impact Questionnaire]. TEA Ediciones.
Curtis, K. L., Aguerrevere, L. E., Bianchini, K. J., Greve, K. W., & Nicks, R. C. (2019). Detecting malingered pain-related disability with the Pain Catastrophizing Scale: A criterion groups validation study. The Clinical Neuropsychologist, 33(8), 1485–1500. https://doi.org/10.1080/13854046.2019.1575470
Dandachi-FitzGerald, B., Merckelbach, H., & Merten, T. (2022). Cry for help as a root cause of
poor symptom validity: A critical note. Applied Neuropsychology. Adult, 1–6. Advance online
publication. https://doi.org/10.1080/23279095.2022.2040025
Dandachi-FitzGerald, B., van Twillert, B., van de Sande, P., van Os, Y., & Ponds, R. W. (2016). Poor
symptom and performance validity in regularly referred Hospital outpatients: Link with
standard clinical measures, and role of incentives. Psychiatry Research, 239, 47–53. https://
doi.org/10.1016/j.psychres.2016.02.061
Daugherty, J. C., Querido, L., Quiroz, N., Wang, D., Hidalgo-Ruzzante, N., Fernandes, S., Pérez-García,
M., De Los Reyes-Aragon, C. J., Pires, R., & Valera, E. (2021). The coin in hand–extended
version: Development and validation of a multicultural performance validity test. Assessment,
28(1), 186–198. https://doi.org/10.1177/1073191119864652
Erdodi, L., Calamia, M., Holcomb, M., Robinson, A., Rasmussen, L., & Bianchini, K. (2023). M is for
performance validity: The IOP-M provides a cost-effective measure of the credibility of mem-
ory deficits during neuropsychological evaluations. Journal of Forensic Psychology Research and
Practice, 1–17. Advance online publication. https://doi.org/10.1080/24732850.2023.2168581
Frederick, R. (2018). Feigned amnesia and memory problems. In R. Rogers & S. D. Bender (Eds.),
Clinical assessment of malingering and deception (4th ed.). Guilford Press.
Gegner, J., Erdodi, L. A., Giromini, L., Viglione, D. J., Bosi, J., & Brusadelli, E. (2022). An Australian
study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module
(IOP-M), and the Rey Fifteen Item Test (FIT). Applied Neuropsychology. Adult, 29(5), 1221–1230.
https://doi.org/10.1080/23279095.2020.1864375
Giromini, L., Lettieri, S. C., Zizolfi, S., Zizolfi, D., Viglione, D. J., Brusadelli, E., Perfetti, B., di Carlo,
D. A., & Zennaro, A. (2019). Beyond rare-symptoms endorsement: A clinical comparison
simulation study using the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) with the
Inventory of Problems-29 (IOP-29). Psychological Injury and Law, 12(3-4), 212–224. https://doi.org/10.1007/s12207-019-09357-7
Giromini, L., & Viglione, D. J. (2022). Assessing negative response bias with the Inventory of
Problems‑29 (IOP‑29): A quantitative literature review. Psychological Injury and Law, 15(1),
79–93. https://doi.org/10.1007/s12207-021-09437-7
Giromini, L., Viglione, D. J., Pignolo, C., & Zennaro, A. (2018). A clinical comparison, simulation
study testing the validity of SIMS and IOP-29 with an Italian sample. Psychological Injury and
Law, 11(4), 340–350. https://doi.org/10.1007/s12207-018-9314-1
Giromini, L., Viglione, D. J., Zennaro, A., Maffei, A., & Erdodi, L. A. (2020). SVT meets PVT:
Development and initial validation of the Inventory of Problems – Memory (IOP-M).
Psychological Injury and Law, 13(3), 261–274. https://doi.org/10.1007/s12207-020-09385-8
Giromini, L., Young, G., & Sellbom, M. (2022). Assessing negative response bias using self-report
measures: New articles, new issues. Psychological Injury and Law, 15(1), 1–21. https://doi.org/10.1007/s12207-022-09444-2
González-Ordi, H., & Santamaría Fernández, P. (2009). Adaptación española del Inventario
Estructurado de Simulación de Síntomas. – SIMS [Spanish adaptation of the Structured Inventory
of Malingered Symptomatology – SIMS]. TEA Ediciones.
Gorny, I., & Merten, T. (2006). Symptom information – warning – coaching. How do they affect
successful feigning in neuropsychological assessment? Journal of Forensic Neuropsychology,
4(4), 71–97. https://doi.org/10.1300/J151v04n04_05
Holcomb, M., Pyne, S., Cutler, L., Oikle, D. A., & Erdodi, L. A. (2023). Take their word for it: The
inventory of problems provides valuable information on both symptom and performance
validity. Journal of Personality Assessment, 105(4), 520–530. https://doi.org/10.1080/00223891
.2022.2114358
Ilgunaite, G., Giromini, L., Bosi, J., Viglione, D. J., & Zennaro, A. (2022). A clinical comparison
simulation study using the Inventory of Problems-29 (IOP-29) with the Center for Epidemiologic
Studies Depression Scale (CES-D) in Lithuania. Applied Neuropsychology. Adult, 29(2), 155–162.
https://doi.org/10.1080/23279095.2020.1725518
King, N. S., Crawford, S., Wenden, F. J., Moss, N. E., & Wade, D. T. (1995). The Rivermead Post
Concussion Symptoms Questionnaire: A measure of symptoms commonly experienced after
head injury and its reliability. Journal of Neurology, 242(9), 587–592. https://doi.org/10.1007/
BF00868811
Kosky, K. M., Lace, J. W., Austin, T. A., Seitz, D. J., & Clark, B. (2022). The utility of the Wisconsin card
sorting test, 64-card version to detect noncredible attention-deficit/hyperactivity disorder. Applied
Neuropsychology. Adult, 29(5), 1231–1241. https://doi.org/10.1080/23279095.2020.1864633
Marín-Torices, M. I., Hidalgo-Ruzzante, N., Daugherty, J. C., Jiménez- González, P., & Perez Garcia,
M. (2018). Validation of neuropsychological consequences in victims of intimate partner vi-
olence in a Spanish population using specific effort tests. The Journal of Forensic Psychiatry
& Psychology, 29(1), 86–98. https://doi.org/10.1080/14789949.2017.1339106
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs
and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6),
741–776. https://doi.org/10.1080/13854046.2015.1087597
Merckelbach, H., Dandachi-FitzGerald, B., van Helvoort, D., Jelicic, M., & Otgaar, H. (2019). When
patients overreport symptoms: More than just malingering. Current Directions in Psychological
Science, 28(3), 321–326. https://doi.org/10.1177/0963721419837681
18 E. PUENTE-LÓPEZ ET AL.

Merten, T., Dandachi-FitzGerald, B., Hall, V., Bodner, T., Giromini, L., Lehrner, J., González-Ordi,
H., Santamaría, P., Schmand, B., & Di Stefano, G. (2022). Symptom and performance validity
assessment in European Countries: An update. Psychological Injury and Law, 15(2), 116–127.
https://doi.org/10.1007/s12207-021-09436-8
Merten, T., Giger, P., Merckelbach, H., & Stevens, A. (2019). Self-report Symptom Inventory (SRSI) –
deutsche Version. Manual [German version of the Self-Report Symptom Inventory. Manual]. Hogrefe.
Merten, T., & Merckelbach, H. (2020). Factious disorders and malingering. In J. R. Geddes, N. C.
Andreasen, & G. M. Goodwin (Eds.), New Oxford textbook of psychiatry (3rd ed., pp. 1342–1349).
Oxford University Press.
Morey, L. C. (1991). Personality Assessment Inventory: Professional manual. Psychological Assessment
Resources.
Morey, L. C. (2007). Personality Assessment Inventory: Professional manual. (2nd ed.). Psychological
Assessment Resources.
Pignolo, C., Giromini, L., Ales, F., & Zennaro, A. (2023). Detection of feigning of different symp-
tom presentations with the PAI and IOP-29. Assessment, 30(3), 565–579. Advance online
publication https://doi.org/10.1177/10731911211061282
Pina, D., Puente-López, E., Ruiz-Hernández, J. A., Llor-Esteban, B., & Aguerrevere, L. E. (2022). Self-report
measures for symptom validity assessment in whiplash-associated disorders. The European Journal
of Psychology Applied to Legal Context, 14(2), 73–81. https://doi.org/10.5093/ejpalc2022a7
Puente-López, E., Pina, D., Ruiz-Hernández, J. A., & Llor-Esteban, B. (2021). Classification accu-
racy of the structured inventory of malingered symptomatology (SIMS) in motor vehicle
accident patients. The Journal of Forensic Psychiatry & Psychology, 32(1), 131–154. https://doi.
org/10.1080/14789949.2020.1833073
Puente-López, E., Pina, D., López-López, R., Ordi, H. G., Bošković, I., & Merten, T. (2022b).
Prevalence estimates of symptom feigning and malingering in Spain. Psychological Injury and
Law, 16(1), 1–17. Advance online publication https://doi.org/10.1007/s12207-022-09458-w
Puente-López, E., Pina, D., López-Nicolás, R., Iguacel, I., & Arce, R. (2023). The Inventory of
Problems-29 (IOP-29): A systematic review and bivariate diagnostic test accuracy meta-analysis.
Psychological Assessment, 35(4), 339–352. https://doi.org/10.1037/pas0001209
Puente-López, E., Pina, D., Shura, R., Boskovic, I., Martínez-Jarreta, B., & Merten, T. (2022a). The
impact of different forms of coaching on the structured inventory of malingered symtomatol-
ogy (SIMS). Psicothema, 34(4), 528–536. https://doi.org/10.7334/psicothema2022.129
Rogers, R., & Cruise, K. R. (1998). Assessment of malingering with simulation design:
Threats to external validity. Law and Human Behavior, 22(3), 273–285. https://doi.
org/10.1023/A:1025702405865
Rogers, R., & Gillard, N. D. (2011). Research methods for the assessment of malingering. In B.
Rosenfeld & S. D. Penrod (Eds.), Research methods in forensic psychology (pp. 174–188). John
Wiley & Sons, Inc.
Rogers, R., Sewell, K. W., Martin, M. A., & Vitacco, M. J. (2003). Detection of feigned mental
disorders: A meta-analysis of the MMPI-2 and malingering. Assessment, 10(2), 160–177. https://
doi.org/10.1177/1073191103010002007
Roma, P., Giromini, L., Burla, F., Ferracuti, S., Viglione, D. J., & Mazza, C. (2020). Ecological validity of
the Inventory of Problems-29 (IOP-29): An Italian study of court-ordered, psychological injury
evaluations using the Structured Inventory of Malingered Symptomatology (SIMS) as criterion
variable. Psychological Injury and Law, 13(1), 57–65. https://doi.org/10.1007/s12207-019-09368-4
Roor, J. J., Dandachi-FitzGerald, B., & Ponds, R. W. (2016). A case of misdiagnosis of mild cognitive
impairment: The utility of symptom validity testing in an outpatient memory clinic. Applied
Neuropsychology. Adult, 23(3), 172–178. https://doi.org/10.1080/23279095.2016.1030018
Sherman, E. M. S., Slick, D. J., & Iverson, G. L. (2020). Multidimensional malingering criteria for
neuropsychological assessment: A 20-year update of the malingered neuropsychological
dysfunction criteria. Archives of Clinical Neuropsychology: The Official Journal of the National
Academy of Neuropsychologists, 35(6), 735–764. https://doi.org/10.1093/arclin/acaa019
Shreffler, J., & Huecker, M. R. (2022). Type I and type II errors and statistical power. In StatPearls.
StatPearls Publishing.
The Clinical Neuropsychologist 19

Shura, R. D., Ord, A. S., & Worthen, M. D. (2022). Structured Inventory of Malingered
Symptomatology: A psychometric review. Psychological Injury and Law, 15(1), 64–78. https://
doi.org/10.1007/s12207-021-09432-y
Smith, G. P., & Burger, G. K. (1997). Detection of malingering: Validation of the structured in-
ventory of malingered symptomatology (SIMS). Journal of the American Academy on Psychiatry
and Law, 25, 180–183.
Šömen, M. M., Lesjak, S., Majaron, T., Lavopa, L., Giromini, L., Viglione, D. J., & Podlesek, A.
(2021). Using the Inventory of Problems-29 (IOP-29) with the Inventory of Problems Memory
(IOP-M) in malingering-related assessments: A study with a Slovenian sample of experimen-
tal feigners. Psychological Injury and Law, 14(2), 104–113. https://doi.org/10.1007/
s12207-021-09412-2
Steinbuechel, N. V., Rauen, K., Bockhop, F., Covic, A., Krenz, U., Plass, A. M., Cunitz, K., Polinder,
S., Wilson, L., Steyerberg, E. W., Maas, A. I. R., Menon, D., Wu, Y. J., & Zeldovich, M. (2021).
Psychometric characteristics of the patient-reported outcome measures applied in the
CENTER-TBI study. Journal of Clinical Medicine, 10(11), 2396. https://doi.org/10.3390/jcm10112396
Sweet, J. J., Heilbronner, R. L., Morgan, J. E., Larrabee, G. J., Rohling, M. L., Boone, K. B., Kirkwood,
M. W., Schroeder, R. W., & Suhr, J. A. (2021). American Academy of Clinical Neuropsychology
(AACN) 2021 consensus statement on validity assessment: Update of the 2009 AACN con-
sensus conference statement on neuropsychological assessment of effort, response bias, and
malingering. The Clinical Neuropsychologist, 35(6), 1053–1106. https://doi.org/10.1080/13854
046.2021.1896036
Tombaugh, T. N. (1996). Test of memory malingering (TOMM). Multi-Health Systems.
van der Heide, D., Boskovic, I., van Harten, P., & Merckelbach, H. (2020). Overlooking feigning
behavior may result in potential harmful treatment interventions: Two cases of undetected
malingering. Journal of Forensic Sciences, 65(4), 1371–1375. https://doi.org/10.1111/1556-4029.14320
van Impelen, A., Merckelbach, H., Jelicic, M., & Merten, T. (2014). The Structured Inventory of
Malingered Symptomatology (SIMS): A systematic review and meta-analysis. The Clinical
Neuropsychologist, 28(8), 1336–1365. https://doi.org/10.1080/13854046.2014.984763
Viglione, D. J., & Giromini, L. (2020). Inventory of Problems–29: Professional Manual. IOP-Test.
LLC.
Viglione, D. J., Giromini, L., & Landis, P. (2017). The development of the Inventory of Problems-29:
A brief self-administered measure for discriminating bona fide from feigned psychiatric and
cognitive complaints. Journal of Personality Assessment, 99(5), 534–544. https://doi.org/10.10
80/00223891.2016.1233882
Vilar-López, R., Daugherty, J. C., Pérez-García, M., & Piñón-Blanco, A. (2021). A pilot study on
the adequacy of the TOMM in detecting invalid performance in patients with substance use
disorders. Journal of Clinical and Experimental Neuropsychology, 43(3), 255–263. https://doi.or
g/10.1080/13803395.2021.1912298
Widows, M. R., & Smith, G. P. (2005). SIMS: Structured Inventory of Malingered Symptomatology.
Professional manual. Psychological Assessment Resources.
Winters, C. L., Giromini, L., Crawford, T. J., Ales, F., Viglione, D. J., & Warmelink, L. (2021). An
Inventory of Problems-29 (IOP-29) study investigating feigned schizophrenia and random
responding in a British community sample. Psychiatry, Psychology, and Law: An Interdisciplinary
Journal of the Australian and New Zealand Association of Psychiatry, Psychology and Law, 28(2),
235–254. https://doi.org/10.1080/13218719.2020.1767720
Young, G., Foote, W. E., Kerig, P. K., Mailis, A., Brovko, J., Kohutis, E. A., McCall, S., Hapidou, E.
G., Fokas, K. F., & Goodman-Delahunty, J. (2020). Introducing psychological injury and law.
Psychological Injury and Law, 13(4), 452–463. https://doi.org/10.1007/s12207-020-09396-5
20 E. PUENTE-LÓPEZ ET AL.

Appendices
Appendix 1. Pre-experimental checklist for the three groups

Honest controls
Please check the correct answer to the following questions:

1. How have you been asked to answer the questionnaires?

(a) Honestly, reflecting my current situation.
(b) By inventing problems from memory.
(c) Exaggerating problems I may be experiencing.

2. Should you make up any symptoms?

(a) Yes, if the questionnaire asks me to do so, I should make it up.
(b) Yes, I have to make up a memory problem.
(c) No, I must answer all the questionnaires reflecting my current situation.

Uncoached group

1. What type of difficulties did you experience after the accident?

(a) Paralysis.
(b) Memory and concentration problems.
(c) Epileptic seizures.

2. Did the accident affect your professional and private life?

(a) Yes, I needed a lot of help from friends and family after the accident to be able to lead a normal life.
(b) Yes, for some time after the accident I could not work and I also had problems
in my social and private life.
(c) No, everything continued as if nothing had happened.

3. What is your goal during this next evaluation?

(a) I want to show as much memory and concentration impairment as possible so that they will believe me.
(b) I want to strive to present my impairments as they occurred after the accident.
(c) I want to come across as well as possible in order to appear credible.
4. Do you think it is fair to want to convince the evaluator that you have memory problems?

(a) Yes, because I still have serious memory and concentration problems to this
day.
(b) Yes, because I really suffered from these symptoms and it is not my fault that
the judicial process took so long.
(c) No, and that is why I am going to take the exam to the best of my ability.

Coached group

1. What type of difficulties did you experience after the accident?

(a) Paralysis.
(b) Memory problems.
(c) Epileptic seizures.

2. What kind of problems did you NOT experience?

(a) Inability to remember details of your past life.
(b) Inability to correctly remember which items to buy.
(c) Headaches and lack of energy.

3. Did the accident affect your professional and private life?

(a) Yes, I needed a lot of help from friends and family after the accident in order
to lead a normal life.
(b) Yes, for some time after the accident I could not work and I also had problems
in my social and private life.
(c) No, everything continued as if nothing had happened.

4. How was your ability to concentrate after the accident?

(a) Completely normal.
(b) I found it hard to concentrate on even the simplest things.
(c) I could not concentrate on things as well as I did in the past.

5. What is your goal during this next evaluation?

(a) I want to show as much memory and concentration impairment as possible so that they will believe me.
(b) I want to strive to present my real impairments (what I really experienced) after the accident.
(c) I want to come across as well as possible in order to appear credible.
6. What might happen if you exaggerate your symptoms to a greater degree than you
originally experienced?

(a) The examiner would not believe me and I would not receive compensation.
(b) The award would be more than I think I deserve.
(c) There would be no consequences.

7. If I wanted to be credible during this examination, I should:

(a) Insist that I will never be as bad as I am now.
(b) Tell the examiner that I am so bad that I don’t think I will ever recover.
(c) Explain that, since the incident, the severity of my memory problems has been
fluctuating.

8. Do you think a qualified examiner will be able to distinguish between real and exag-
gerated memory deficits?

(a) No, I don’t think so.
(b) Maybe, but I am very unlikely to be found out.
(c) Yes, I should be careful.

9. Do you think it is fair to want to convince the evaluator that you have memory
problems?

(a) Yes, because I still have serious memory and concentration problems.
(b) Yes, because I really suffered from these symptoms and it is not my fault that
the court process took so long.
(c) No, and that is why I am going to take the exam to the best of my ability.
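
Checklists like the ones above only serve their purpose if failed items are acted on before analysis. As a purely illustrative aid (it is not part of the study's materials), the following Python sketch shows one way such a checklist could be scored against an answer key so that participants who fail can be flagged; the key, the response coding, and the all-items-correct pass rule are assumptions made for demonstration.

```python
# Purely illustrative sketch, not taken from the study: score the
# pre-experimental checklist against a hypothetical answer key and
# flag participants who miss any item.

# Hypothetical key for the honest-control items shown above
# (item number -> option judged correct).
HONEST_CONTROL_KEY = {1: "a", 2: "c"}


def passes_checklist(responses, key):
    """Return True only if every checklist item matches the answer key."""
    return all(responses.get(item) == correct for item, correct in key.items())


# Example: a participant who chose (b) on item 2 would be flagged for
# review (or re-instruction/exclusion, depending on the protocol).
participant = {1: "a", 2: "b"}
print(passes_checklist(participant, HONEST_CONTROL_KEY))  # prints: False
```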

Appendix 2. Post-experimental manipulation check for the three groups


Next, we will ask you 10 short questions about the last instrument you completed.

A. How difficult did you find the questions in the questionnaire? (Choose one answer).

1. Very easy.
2. Easy.
3. Normal.
4. Difficult.
5. Very difficult.
B. How understandable did you find the questions in the questionnaire? (Choose one
answer).

1. Very understandable.
2. Understandable.
3. Neutral.
4. Not very understandable.
5. Very poorly understandable.
C. How difficult was it to exaggerate or make up the symptoms? (Choose one answer).

1. Very easy.
2. Easy.
3. Normal.
4. Difficult.
5. Very difficult.

D. How convincing do you think you were in exaggerating or inventing the symptoms? (Choose one answer).

1. Very convincing.
2. Convincing.
3. Neutral.
4. Not very convincing.
5. Not convincing at all.

E. How skillful do you think you were in exaggerating or inventing the symptoms? (Choose one answer).

1. Very skillful.
2. Skillful.
3. Neutral.
4. Not very skillful.
5. Not skillful at all.

F. Did you try to follow the instructions and behave like the person in the scenario?

1. Yes
2. No
3. Partially.

G. Did you feign memory problems during the test?

1. Yes
2. No
3. Partially.

H. Did you feign concentration problems during the test?

1. Yes
2. No
3. Partially.

I. Did you do worse on all the tests than you could have done by answering honestly?

1. Yes
2. No
3. Partially.
J. Do you think any of the tests you took were too easy, so that it would have been
obvious if you had done poorly?

1. Yes
2. No

If yes, please indicate which test(s) (each test has its number in a square on the first page): __________________
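
To make concrete how responses to a post-experimental check like the one above could feed into group-level screening, here is a minimal Python sketch (again, not the study's code) that flags instructed feigners who report not having played their role; the choice of items, the Yes/No/Partially coding, and the exclusion rule are assumptions for demonstration only.

```python
# Purely illustrative sketch, not the study's code: screen instructed
# feigners using hypothetical role-adherence items. The item letters,
# the 1/2/3 coding (Yes/No/Partially), and the exclusion rule are all
# assumptions made for demonstration.

YES, NO, PARTIALLY = 1, 2, 3
ROLE_ADHERENCE_ITEMS = ("F", "G", "H")  # role followed; memory/concentration feigned


def adhered_to_role(responses):
    """Treat a 'No' on any role-adherence item as a failed manipulation check."""
    return all(responses.get(item) != NO for item in ROLE_ADHERENCE_ITEMS)


# Example: 'Partially' on item H would not, by itself, trigger exclusion here.
feigner = {"F": YES, "G": YES, "H": PARTIALLY}
print(adhered_to_role(feigner))  # prints: True
```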

Appendix 3. Experimental feigners’ instructions & scenario


Below you will find instructions and a series of psychometric questionnaires (4 tests in total).
Read the instructions and the scenario carefully and put yourself in the situation they illustrate.
You will have to fill in the psychometric instruments by following the scenario exactly. You will
also find a series of items before and after each assessment instrument. They are not psycho-
logical tests, but please fill them out honestly.
We need you to take your role very seriously and complete the tests as instructed. Remember
that you will not receive the course credit bonus if you complete the task incorrectly. Don’t worry,
it is not difficult; you just need to pay a little attention and get involved in the task. Good luck,
and thank you very much for participating!

Honest controls
We need you to fill in all the questionnaires included in the battery honestly, without making
anything up and reflecting your current situation at all times. Remember that there are no right
or wrong answers; simply reflect what you really feel or experience.

Uncoached & coached groups


Note. The underlined portion was only provided to the coached group.
In this study, you will be asked to complete a series of tests that are designed to measure a
variety of changes that people experience after a head injury. For the purposes of this research,
we would like you to pretend that you have had a head injury and are experiencing symptoms
as a result of brain damage or impairment. We would like you to complete these tests as if you
have suffered some type of accident in which you have hit your head and are experiencing the
symptoms of a head injury. Try to perform the tests as you would imagine a brain-damaged
person would. Try to convince the examiner that you are actually brain damaged or impaired.
Throughout the tests, try to remain in the injured role as much as possible. Here is a scenario
to help you put yourself in the situation:
About two years ago, you were involved in an automobile accident for which you were in no
way responsible. During the accident, your head was propelled against the windshield and you
suffered a mild traumatic brain injury. Following this event, you experienced considerable
impairment of your memory functions, expressed as difficulties in remembering new
information, such as people’s names, appointments, or recent events.
In order to be able to shop properly, you started to take notes before going out. There were
days when you did not have to look at your shopping list, but there were also days when you
were unable to do the shopping without the reminders. Sometimes you asked the same question
two or even three times and felt embarrassed when the person to whom you were directing the
question pointed out that you had been repeating yourself. There were times when you could not
remember what was in the newspaper article you had just read, so you had to reread it.
In addition to these memory problems, you often suffered from headaches and lack of ener-
gy, and could not concentrate on things as well as you had in the past. It took you more time
and effort to do tasks that had come easily to you before the accident. You tired very quickly,
so for a while after the accident you were unable to work and also had problems in your social
and private life. However, you were able to remember normally things that were of great impor-
tance to you, such as details of your past life.
Since the person who caused your accident did not have insurance, the process slowed down
significantly and you did not receive any information from him for months. Your doctor and
lawyer advised you to go to court, so you decided to file a lawsuit for compensation. However,
the lawsuit dragged on. During the two years since the accident occurred, your cognitive prob-
lems mostly diminished. Finally, the court hired a professional to evaluate the actual extent of
your traumatic brain injury. You believe that it was not your fault that the court case has
dragged on and you feel that you deserve to receive a compensation payment for the suffering
you have actually endured. Therefore, you decide to try to present your injury to the examiner
as it occurred immediately after the accident.
Whether or not you receive compensation for the accident depends on the results of your
evaluation. This means that you will have to convince the expert that you still have a noticeable
impairment of memory function. Keep in mind that if you present your condition in an extremely
dramatic way, your performance may not be credible, and the examiner may conclude that you
are not suffering from a mild traumatic brain injury but are only faking it. Therefore, try not
to “overreact.”
