Disability and Rehabilitation

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/idre20

A comparison of test-retest reliability of four cognitive screening tools in people with dementia

Ya-Chen Lee, Yi-Te Lin & En-Chi Chiu

To cite this article: Ya-Chen Lee, Yi-Te Lin & En-Chi Chiu (2021): A comparison of test-retest
reliability of four cognitive screening tools in people with dementia, Disability and Rehabilitation,
DOI: 10.1080/09638288.2021.1891466

To link to this article: https://doi.org/10.1080/09638288.2021.1891466

Published online: 09 Mar 2021.

DISABILITY AND REHABILITATION
https://doi.org/10.1080/09638288.2021.1891466

ORIGINAL ARTICLE

A comparison of test-retest reliability of four cognitive screening tools in people with dementia
Ya-Chen Lee (a), Yi-Te Lin (b) and En-Chi Chiu (c)

(a) Department of Occupational Therapy, College of Medical and Health Science, Asia University, Taichung, Taiwan; (b) Department of Neurology, Cardinal Tien Hospital, New Taipei City, Taiwan; (c) Department of Long-Term Care, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan

CONTACT: En-Chi Chiu, enchichiu@ntunhs.edu.tw, No. 83-1, Nei-Chiang Street, Wan-Hwa District, Taipei 10845, Taiwan. Ya-Chen Lee and Yi-Te Lin contributed equally to this work.

© 2021 Informa UK Limited, trading as Taylor & Francis Group

ABSTRACT

Purpose: This study aimed to compare the test-retest reliability and minimal detectable change (MDC) of the Mini-Mental State Examination (MMSE), the Short Portable Mental Status Questionnaire (SPMSQ), the Montreal Cognitive Assessment (MoCA), and the Saint Louis University Mental Status Examination (SLUMS) in a single sample of people with dementia.

Methods: Sixty people with dementia were assessed twice, two weeks apart, and test-retest reliability was examined for the four screening tools using the intraclass correlation coefficient (ICC). The MDC95 value was calculated based on the standard error of measurement to estimate the random measurement error.

Results: The ICC values for the screening tools were 0.86–0.90. The MDC95 values (MDC95%) were 5.0 (17.2%), 2.74 (27.0%), 4.71 (20.0%), and 6.26 (24.0%) for the MMSE, SPMSQ, MoCA, and SLUMS, respectively.

Conclusions: Overall, the four screening tools were similar in test-retest reliability, which implies that the MMSE, MoCA, SPMSQ, and SLUMS are reliable for monitoring cognitive function in people with dementia. The results of the direct comparison of the test-retest reliability of the four screening tools provide useful information for both clinicians and researchers when selecting an appropriate cognitive screening tool.

ARTICLE HISTORY
Received 20 August 2020
Revised 11 February 2021
Accepted 13 February 2021

KEYWORDS
Cognition; screening tools; reliability; minimal detectable change; people with dementia

IMPLICATIONS FOR REHABILITATION

- The MMSE, MoCA, SPMSQ, and SLUMS are equally reliable, and thus they could be used to monitor cognitive function in people with dementia.
- The MDC values are useful in determining whether a real change has occurred between repeated assessments for people with dementia.

Introduction

With the rapid growth of the aging population, the number of people aged over 60 years at risk of developing dementia is rapidly increasing. People with dementia can have problems with memory, language, thinking, and judgement, which affect their ability to perform activities of daily living and may impact the quality of life of both themselves and those who care for them [1]. Although no cures currently exist, early detection is crucial for ensuring the best targeted treatment effect or outcome [2].

Four commonly used cognitive screening tools have been used for the early detection of dementia: the Mini-Mental State Examination (MMSE) [3], the Short Portable Mental Status Questionnaire (SPMSQ) [4], the Montreal Cognitive Assessment (MoCA) [5], and the Saint Louis University Mental Status Examination (SLUMS) [2]. Among these four screening tools, the MMSE has been the most extensively used in clinical and research settings due to its practicality [6]. The MMSE is easy to administer and requires no specialized equipment or training [3,7]. However, the MMSE has been reported to be less sensitive to mild cognitive impairment [2,8]. The SPMSQ is a brief cognitive screening tool that has only 10 items and has been reported to have high specificity but also lower sensitivity [9]. The MoCA measures an important component of dementia that is not measured by the MMSE, namely executive function [10,11]; however, it has its own weakness, which is the time required for administration [12]. Compared with the other three screening tools, which can be finished within around 3–10 min, the MoCA takes approximately 15 min to complete, which could be considered rather lengthy. The SLUMS takes only about seven minutes to administer [2,13]; however, its psychometric properties (such as minimal detectable change and responsiveness) have been less researched than those of the other three screening tools [14]. Summarizing the abovementioned aspects, each tool offers its own set of pros and cons, and it is important to determine the appropriate screening tool based on the patient's cognitive status.

Some supportive evidence of psychometric properties has been found for the MMSE, SPMSQ, MoCA, and SLUMS in people with dementia [8,10,15,16]. However, to our knowledge, no studies have simultaneously examined all four screening tools to determine their psychometric properties (particularly test-retest reliability and random measurement error) in people with dementia. As these studies employed different methodologies in the examination of psychometric properties, it is difficult for users to decide which screening tools to select for people with dementia. Thus, a direct comparison of these screening tools in a single sample is needed, as the results could assist clinicians and researchers in selecting a proper cognitive screening tool.

As there are many situations where cognitive screening tools must be administered on more than two occasions [17], the ability of a screening tool to reflect the extent of agreement [18], namely test-retest reliability [19], between repeated assessments has become extremely important. Clinicians must have confidence that any changes in scores on repeated assessments represent a true change in the cognitive function of people with dementia and are not due to measurement error [19]. However, no previous studies have examined whether these four cognitive screening tools have similar test-retest reliability in people with dementia. While clinicians and researchers may be faced with a greater range of choices, there is limited information regarding which to select. Therefore, the purpose of this study was to compare the test-retest reliability of the MMSE, SPMSQ, MoCA, and SLUMS in a single sample of people with dementia.

Methods

Participants

A convenience sample of people with dementia was recruited from geriatric outpatient clinics of one hospital in northern Taiwan. All patients attending the clinics between March and December 2019 were approached by a research assistant to participate in the study. The inclusion criteria were: (1) a diagnosis of dementia based on the Diagnostic and Statistical Manual of Mental Disorders, fifth edition; (2) age ≥65 years; and (3) a stable condition with a stable dose of medication within the past month. Participants were excluded if they (1) had a history of severe brain injury; (2) were diagnosed with an intellectual disability; or (3) had different scores on the Clinical Dementia Rating (CDR) over the two repeated tests (any change in CDR score was considered to indicate an unstable cognitive condition).

Allowing for a 10% drop-out rate, a sample size of at least 55 participants was estimated for the reliability analysis with a power of 0.8 at a significance level of 0.05 [20,21]. The study was approved by the Institutional Review Board of a teaching hospital. Both the participants and their caregivers were required to sign informed consent to confirm their willingness to participate in this study.

Procedure

A trained rater administered all screening tools, including the MMSE, SPMSQ, MoCA, and SLUMS, to the participants twice, two weeks apart. The four screening tools were administered in random order to avoid possible order effects. The participants were assessed in a quiet room to avoid distractions that might affect their performance. Between tests, the participants were allowed to rest in order to minimize fatigue. The CDR was administered in both the test and retest sessions to confirm that the participants had a stable cognitive condition. In addition, the participants' demographic data, such as age and years of education, were collected from chart review.

Measures

The MMSE was developed to test the cognitive functions of the elderly [3]. It tests an individual's orientation, attention, memory, language, and visual-spatial skills. The total scores range from 0 to 30, where higher scores indicate better cognitive functioning [3]. This study used the education-adjusted cut-off scores of the MMSE: for participants whose attained education level is 8th grade or some high school, an MMSE score of 24 or below is taken to imply cognitive impairment, while for those with some college or higher, an MMSE score of 26 or below may imply cognitive impairment [22]. The administration time of the MMSE has been reported to be about 10 min [23].

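To make the scoring rule concrete, the sketch below (not from the article; the function name and the handling of education levels other than the two quoted above are assumptions) shows how the education-adjusted MMSE cut-offs could be applied.

```python
# Illustrative sketch (not from the article): applying the education-adjusted
# MMSE cut-offs quoted above. Education levels other than the two named in the
# text are an assumption here and simply return None.
def mmse_below_cutoff(score, education):
    cutoffs = {
        "8th grade or some high school": 24,   # score <= 24 may imply impairment
        "some college or higher": 26,          # score <= 26 may imply impairment
    }
    cutoff = cutoffs.get(education)
    return None if cutoff is None else score <= cutoff

print(mmse_below_cutoff(23, "8th grade or some high school"))  # True
```
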
The SPMSQ has been widely used to screen for cognitive dysfunction [4,9] and consists of 10 items, testing orientation to time and place, memory, current event information, and calculation. The total number of errors is computed, giving a total score range of 0–10: 0–2 errors imply normal mental function, 3–4 errors imply mild cognitive impairment, and 5–7 errors imply moderate cognitive impairment [4]. The administration time of the SPMSQ has been reported to be about 3 min [24].

The MoCA is a cognitive screening test designed for detecting cognitive impairment [25]. It consists of 30 items assessing multiple cognitive domains, including orientation, memory, visuospatial skills, executive functioning, language, and attention [25]. The total scores range between 0 and 30, and higher scores indicate better cognitive functioning [11]. In this study, the total score, with the addition of one point for examinees with 12 or fewer years of education, was used for analysis. The administration time of the MoCA has been reported to be about 15 min [26].

The SLUMS was developed as a screening tool for Alzheimer's disease and other kinds of dementia [2]. The SLUMS consists of 11 items that assess various aspects of cognition, including orientation, short-term memory, calculations, the naming of animals, the clock drawing test, and recognition of geometric figures [13]. The scores range from 0 to 30; a score of 27 to 30 is considered normal for a person with a high school education, and a score of 0–20 indicates dementia [2]. The administration time of the SLUMS has been reported to be about 7 min [14].

The CDR was developed to stage dementia severity. The CDR consists of six domains: memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care [27]. A global score can be obtained from the six domains to quantify the severity of dementia on a 5-point scale: 0 (healthy), 0.5 (questionable dementia), 1 (mild dementia), 2 (moderate dementia), and 3 (severe dementia) [27]. We used the CDR to examine whether the symptom severity of the participants remained stable during the study period [28].

Data analysis

The intraclass correlation coefficient (ICC2,1) was calculated on the basis of a two-way random-effects model with absolute agreement. ICC values ≥0.80 indicate high agreement, 0.60–0.79 indicate moderate agreement, and <0.59 indicates poor agreement [1,29].

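As an illustration of this analysis, a minimal Python sketch of ICC(2,1) — two-way random effects, absolute agreement, single measurement — computed from the ANOVA mean squares is shown below; it is not the authors' code, and the example scores are hypothetical.

```python
# Minimal sketch of ICC(2,1): two-way random effects, absolute agreement,
# single measurement (Shrout & Fleiss form), computed from ANOVA mean squares.
import numpy as np

def icc_2_1(scores):
    """scores: array of shape (n_subjects, k_sessions), e.g. test and retest columns."""
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical test/retest scores for five people:
print(round(icc_2_1([[24, 25], [10, 12], [17, 16], [8, 9], [20, 19]]), 2))
```
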
In addition, the minimal detectable change (MDC95) was calculated to estimate the random measurement error of all four screening tools. The MDC95 is the smallest detectable threshold of a change score and can be used to determine whether a real change (either improvement or deterioration) has occurred between two assessments or whether the change is due to measurement error [30]. This study calculated the MDC95 value based on the standard error of measurement (SEM) using the following formulas [31]:

SEM = SD(all testing scores) × √(1 − ICC)

MDC95 = z-score(level of confidence) × √2 × SEM

In these formulas, SD(all testing scores) denotes the standard deviation of all scores of the two assessments, and the z-score corresponds to the chosen confidence level of a standard normal distribution (i.e., 1.96 for the 95% confidence level in this study). This study also calculated the MDC95% by dividing the MDC95 by the maximum score and then multiplying by 100, to express the relative amount of random measurement error. An MDC95% of less than 30% was considered an acceptable random measurement error [1,28].
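The following sketch (not the authors' code) applies these formulas; with a standard deviation of roughly 5.7 and an ICC of 0.90, it reproduces the SEM and MDC95 of the MMSE row of Table 2 approximately.

```python
# Minimal sketch of the SEM / MDC95 / MDC95% calculations described above
# (not the authors' code).
import math

def mdc95(sd_all_scores, icc, max_score):
    sem = sd_all_scores * math.sqrt(1.0 - icc)   # SEM = SD x sqrt(1 - ICC)
    mdc = 1.96 * math.sqrt(2.0) * sem            # MDC95 = 1.96 x sqrt(2) x SEM
    mdc_pct = mdc / max_score * 100.0            # MDC95% = MDC95 / maximum score x 100
    return sem, mdc, mdc_pct

# With SD of all MMSE scores ~5.7 and ICC = 0.90, this gives SEM ~1.80 and
# MDC95 ~5.0, close to the MMSE values reported in Table 2.
sem, mdc, mdc_pct = mdc95(sd_all_scores=5.7, icc=0.90, max_score=30)
print(f"SEM={sem:.2f}, MDC95={mdc:.2f}, MDC95%={mdc_pct:.1f}%")
```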

In addition, this study plotted Bland-Altman plots with 95% limits of agreement (LOA) to visualize the agreement between the test and retest assessments. In these plots, the difference of each pair of scores is plotted against the mean score of the two assessments [32,33]. The LOA provides insight into the amount of variation between assessments and was estimated as the mean difference ± 1.96 × SD, where SD represents the standard deviation of the differences [32]. The plots were also used to examine the possibility of heteroscedasticity. If heteroscedasticity is detected, there is a possibility that a systematic trend exists (e.g., the higher the score, the larger the difference) [33]. If heteroscedasticity exists, the MDC95 value should not be applied across different levels (i.e., levels of impairment) of patients [30]. Heteroscedasticity was examined using Pearson's r between the mean values and the absolute differences of the two assessments. The data were considered to display heteroscedasticity when r > 0.30 [19]. In addition, the relative limits of agreement (LOA%) were used to describe the absolute difference of the screening tools measured at pre-test vs. post-test as a percentage of the group mean of the screening tools [34].

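A minimal sketch (an assumed implementation, not the authors' code) of these Bland-Altman quantities is given below; it computes the 95% limits of agreement and the Pearson r used for the heteroscedasticity check, but does not reproduce the LOA% formula of [34].

```python
# Minimal sketch of the Bland-Altman summary described above (not the
# authors' code): 95% limits of agreement and the Pearson r between the
# means and the absolute differences used to check for heteroscedasticity.
import numpy as np

def bland_altman_summary(test1, test2):
    t1, t2 = np.asarray(test1, float), np.asarray(test2, float)
    diff = t1 - t2
    mean = (t1 + t2) / 2.0
    md, sd = diff.mean(), diff.std(ddof=1)
    loa = (md - 1.96 * sd, md + 1.96 * sd)            # 95% limits of agreement
    r_hetero = np.corrcoef(mean, np.abs(diff))[0, 1]  # r > 0.30 suggests heteroscedasticity
    return loa, r_hetero

# Hypothetical test/retest data for 60 participants:
rng = np.random.default_rng(0)
t1 = rng.integers(0, 31, size=60).astype(float)
t2 = t1 + rng.normal(0.0, 2.5, size=60)
print(bland_altman_summary(t1, t2))
```
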
Furthermore, two approaches were used to check whether a practice effect exists. First, a paired t-test (two-tailed, α = 0.05) was performed to determine whether a statistically significant difference existed between the two assessments. Second, the effect size (Cohen's d) was calculated to estimate the size of the practice effect. A Cohen's d ≥ 0.80 is considered a large effect size, 0.50–0.79 moderate, and 0.20–0.49 small [35].

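These practice-effect checks can be sketched as follows (not the authors' code); Cohen's d is computed here as the mean difference divided by the standard deviation of the differences, a convention that reproduces the d values in Table 2.

```python
# Minimal sketch of the practice-effect checks described above (not the
# authors' code): a two-tailed paired t-test and Cohen's d computed as the
# mean difference divided by the SD of the paired differences.
import numpy as np
from scipy import stats

def practice_effect(test1, test2):
    diff = np.asarray(test1, float) - np.asarray(test2, float)
    res = stats.ttest_rel(test1, test2)        # two-tailed paired t-test
    d = abs(diff.mean()) / diff.std(ddof=1)    # effect size of the practice effect
    return res.statistic, res.pvalue, d

# Example: t, p, d = practice_effect(scores_time1, scores_time2)
```
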
Results

A total of 60 people with dementia participated in this study; the mean age was 81.5 years, and 55% of the participants were female. Fifty-eight percent of the participants' educational attainment was elementary school and below. The demographic and clinical characteristics of the participants are shown in Table 1.

Table 1. Demographic and clinical characteristics of the participants (n = 60).

Characteristic                            Value
Gender, n (%)
  Male                                    27 (45.0)
  Female                                  33 (55.0)
Age, years, mean (SD)                     81.5 (7.8)
Education, n (%)
  Elementary school and below             35 (58.3)
  Junior high school                      8 (13.3)
  Senior high school                      10 (16.7)
  College and above                       7 (11.7)
MMSE, mean of two assessments (SD)        14.0 (5.7)
SPMSQ, mean of two assessments (SD)       4.8 (3.0)
MoCA, mean of two assessments (SD)        8.3 (4.9)
SLUMS, mean of two assessments (SD)       8.3 (6.1)

SD: standard deviation; MMSE: Mini-Mental State Examination; SPMSQ: Short Portable Mental Status Questionnaire; MoCA: Montreal Cognitive Assessment; SLUMS: Saint Louis University Mental Status Examination.

The ICC values for the MMSE, SPMSQ, MoCA, and SLUMS were 0.90, 0.89, 0.88, and 0.86, respectively. The MDC95 values (MDC95%) were 5.0 (17.2%), 2.74 (27.0%), 4.71 (20.0%), and 6.26 (24.0%) for the MMSE, SPMSQ, MoCA, and SLUMS, respectively (Table 2).

The Bland-Altman plots for the MMSE, SPMSQ, MoCA, and SLUMS are shown in Figure 1. The LOAs of the four screening tools ranged from -5.63 to 4.33 for the MMSE, -2.73 to 2.83 for the SPMSQ, -5.21 to 4.11 for the MoCA, and -6.80 to 6.17 for the SLUMS. Correlations between the mean and the difference of the two assessments for the four screening tools were r = 0.05–0.26. The LOA%s were 13.89%, 22.14%, 21.62%, and 30.20% for the MMSE, SPMSQ, MoCA, and SLUMS, respectively, indicating that the MMSE tended to be the most reliable screening tool and the SLUMS the least reliable one.

In addition, no significant difference (p = 0.052–0.786) was found for any of the four screening tools. The practice effects of the four screening tools were trivial to small (d = 0.04–0.26) (Table 2).

Table 2. Results of reliability of the four cognitive screening assessments.

Measure   Mean1 (SD1)   Mean2 (SD2)   Difference mean (SD)   ICC (95% CI)        SEM    MDC95 (MDC95%)   Pearson's r   t-test (p-value)   Cohen's d
MMSE      13.7 (5.8)    14.4 (5.7)    -0.65 (2.54)           0.90 (0.83, 0.94)   1.80   5.0 (17.2%)      0.05          -1.99 (0.052)      0.26
SPMSQ     4.8 (3.0)     4.78 (3.0)    0.05 (1.42)            0.89 (0.82, 0.93)   0.99   2.74 (27.0%)     0.18          0.27 (0.786)       0.04
MoCA      8.1 (4.6)     8.6 (5.3)     -0.55 (2.38)           0.88 (0.81, 0.93)   1.70   4.71 (20.0%)     0.25          -1.79 (0.078)      0.23
SLUMS     8.2 (5.8)     8.5 (6.5)     -0.32 (3.31)           0.86 (0.77, 0.91)   2.26   6.26 (24.0%)     0.26          -0.74 (0.462)      0.10

MMSE: Mini-Mental State Examination; SPMSQ: Short Portable Mental Status Questionnaire; MoCA: Montreal Cognitive Assessment; SLUMS: Saint Louis University Mental Status Examination; SD: standard deviation; ICC: intra-class correlation coefficient; CI: confidence interval; SEM: standard error of measurement; MDC95: minimal detectable change at the 95% confidence level.

Figure 1. Bland-Altman plots. (A) Mini-Mental State Examination; (B) Short Portable Mental Status Questionnaire; (C) Montreal Cognitive Assessment; (D) Saint Louis
University Mental Status Examination. The bold line is the mean difference. The two dotted lines are the 95% limits of agreement.

Discussion

This study was the first to simultaneously compare the test-retest reliability and minimal detectable change of the MMSE, SPMSQ, MoCA, and SLUMS in a single sample of people with dementia. The results of this direct comparison could support an informed choice regarding the screening tool most appropriate for clinical practice and research.

The ICC represents the extent of agreement between two assessments [19]. This study found that the ICC values for the test-retest reliability of the four screening tools were high (ICC ≥ 0.86). Our findings indicate that all four screening tools are equally reliable when administered to people with dementia. The high test-retest reliability of these four screening tools implies that they are all useful for monitoring change in the cognitive function of people with dementia over time.

Regarding the estimation of the random measurement error between repeated assessments, the MDC95 is used to examine the variability of scores in individual responses [30]. Our findings suggest that a change of 5.0 points in the total score of the MMSE (highest possible score 30), 3 points on the SPMSQ (10), 5 points on the MoCA (30), or 7 points on the SLUMS (30) in a person with dementia can be interpreted as a real change (either improvement or deterioration) with 95% confidence, and that such a change is not due to measurement error. The abovementioned MDC95 values provide a threshold for clinicians to determine whether a real change has occurred between repeated assessments for people with dementia.

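As a purely illustrative example of how such a threshold might be applied (a hypothetical helper, not part of the study), an observed change can be compared against the MDC95 values reported in Table 2:

```python
# Illustrative only (hypothetical helper, not part of the study): comparing an
# observed change against the MDC95 thresholds reported in Table 2.
MDC95 = {"MMSE": 5.0, "SPMSQ": 2.74, "MoCA": 4.71, "SLUMS": 6.26}

def exceeds_mdc95(tool, score_time1, score_time2):
    """True if the observed change is larger than the tool's MDC95."""
    return abs(score_time2 - score_time1) > MDC95[tool]

print(exceeds_mdc95("MMSE", 20, 14))  # True: a 6-point drop exceeds the MMSE MDC95 of 5.0
print(exceeds_mdc95("MoCA", 12, 9))   # False: a 3-point change is within measurement error
```
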
In terms of heteroscedasticity, the Bland-Altman plots show no obvious systematic trend for any of the four screening tools. In addition, the Pearson's r values between the absolute difference scores and the mean scores of the four screening tools were all less than 0.30, implying that no heteroscedasticity exists. Our findings suggest that both clinicians and researchers could use a fixed MDC95 value as the threshold to judge whether a difference between repeated measurements is a real change [30]. These results further support the reliability of the MMSE, SPMSQ, MoCA, and SLUMS in monitoring changes in the cognitive function of people with dementia.

The MDC95% values of all four screening tools were <30%, indicating an acceptable random measurement error. The MDC95% is independent of the unit of measurement and can be used to compare the amounts of random measurement error between screening tools [30]. Compared to the other three screening tools, the SPMSQ had the highest MDC95% value (27.0%), which implies that a relatively high random measurement error exists. Although the SPMSQ has the fewest items among the four screening tools, this high random measurement error might hinder its ability to detect real changes in the cognitive function of people in clinical settings.

Interestingly, compared to the other three screening tools, the SPMSQ had the lowest practice effect (d = 0.04). The most likely explanation is that the participants in our study rarely remembered their own phone number (about 62.0% of participants), as most of them use a communication app, such as LINE, to make free voice calls. Thus, people have little need to exchange phone numbers, as they can simply scan a QR code to obtain contact information. It was observed that our participants were less motivated to respond to the SPMSQ question "What is your phone number?", which might have reduced the practice-related phenomenon of the SPMSQ in repeated testing. In contrast, for both the MMSE and MoCA, the practice effects observed here (Cohen's d = 0.26 and 0.23, respectively) were higher than those of the SPMSQ and SLUMS. It has been reported that a high practice effect might obscure true changes in cognitive function [36,37]. In order to reduce the practice effects of the MMSE and MoCA, it has been suggested that at least two assessments should be conducted to stabilize performance [36,38].

To be clinically useful, a screening tool must be scientifically sound in terms of test-retest reliability in order to demonstrate specific treatment effects and to ensure that such effects are not due to measurement inconsistency. Overall, the four screening tools were similar in test-retest reliability. In particular, the results showed that the MMSE had high test-retest reliability with an acceptable random measurement error and a small practice effect. Although the MMSE has its own weakness, namely lower sensitivity in detecting people with mild cognitive impairment [2,8], a new version of the MMSE, the Mini-Mental State Examination, 2nd edition (MMSE-2), has been developed to address these issues. The MMSE-2, as suggested by Folstein et al. [39], is able to measure cognitive function in more detail, and thus it can measure a greater variety of cognitive functions than the MMSE [8]. Future research examining the test-retest reliability of the MMSE-2, SPMSQ, MoCA, and SLUMS simultaneously in people with dementia might be needed.

Two limitations need to be addressed in this study. First, participants were recruited from only one hospital, and the limited variability in educational attainment (58.3% of participants had an elementary school education or below) may limit the generalizability of our results. Second, participants were recruited using convenience sampling, and thus a potential selection bias might have occurred. In addition, no information was available for those who did not consent to participate. Therefore, we cannot compare the differences between participants who took part in the study and those who did not.

Conclusions

Overall, the four screening tools were similar in test-retest reliability, which implies that the MMSE, MoCA, SPMSQ, and SLUMS are reliable for monitoring cognitive function in people with dementia. The results of the direct comparison of the test-retest reliability of the four screening tools can provide useful information for both clinicians and researchers to select an appropriate cognitive screening tool.

Acknowledgments

We would like to thank the participants for their work during data collection.

Disclosure statement

The authors declare no conflict of interest.

Funding

This study was supported by Cardinal Tien Hospital [Grant no. CTH108A-2N02].

ORCID

Ya-Chen Lee: http://orcid.org/0000-0003-0917-3859
En-Chi Chiu: http://orcid.org/0000-0002-6052-0871

References

[1] Chiu EC, Yip PK, Woo P, et al. Test-retest reliability and minimal detectable change of the Cognitive Abilities Screening Instrument in patients with dementia. PLoS One. 2019;14(5):e0216450.
[2] Howland M, Tatsuoka C, Smyth KA, et al. Detecting change over time: a comparison of the SLUMS examination and the MMSE in older adults at risk for cognitive decline. CNS Neurosci Ther. 2016;22(5):413–419.
[3] Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189–198.
[4] Pfeiffer EA. A short portable mental status questionnaire for the assessment of organic brain deficit in elderly patients. J Am Geriatr Soc. 1975;23(10):433–441.
[5] Rademeyer M, Joubert P. A comparison between the Mini-Mental State Examination and the Montreal Cognitive Assessment Test in schizophrenia. S Afr J Psychiatr. 2016;22(1):890.
[6] Werner P, Heinik J, Mendel A, et al. Examining the reliability and validity of the Hebrew version of the Mini Mental State Examination. Aging. 1999;11(5):329–334.
[7] Cao L, Hai S, Lin X, et al. Comparison of the Saint Louis University Mental Status Examination, the Mini-Mental State Examination, and the Montreal Cognitive Assessment in detection of cognitive impairment in Chinese elderly from the geriatric department. J Am Med Dir Assoc. 2012;13(7):626–629.
[8] Baek MJ, Kim K, Park YH, et al. The validity and reliability of the Mini-Mental State Examination-2 for detecting mild cognitive impairment and Alzheimer's disease in a Korean population. PLoS One. 2016;11(9):e0163792.
[9] Malhotra C, Chan A, Matchar D, et al. Diagnostic performance of the Short Portable Mental Status Questionnaire for screening dementia among patients attending cognitive assessment clinics in Singapore. Ann Acad Med Singapore. 2013;42:315–319.
[10] Bruijnen CJWH, Dijkstra BAG, Walvoort SJW, et al. Psychometric properties of the Montreal Cognitive Assessment (MoCA) in healthy participants aged 18–70. Int J Psychiatry Clin Pract. 2020;4:1–8.
[11] Wong A, Yiu S, Nasreddine Z, et al. Validity and reliability of two alternate versions of the Montreal Cognitive Assessment (Hong Kong version) for screening of mild neurocognitive disorder. PLoS One. 2018;13(5):e0196344.
[12] Chiti G, Pantoni L. Use of Montreal Cognitive Assessment in patients with stroke. Stroke. 2014;45(10):3135–3140.
[13] Szczesniak D, Rymaszewska J. The usefulness of the SLUMS test for diagnosis of mild cognitive impairment and dementia. Psychiatr Pol. 2016;50(2):457–472.
[14] Kaya D, Isik A, Usarel C, et al. The Saint Louis University Mental Status Examination is better than the Mini-Mental State Examination to determine the cognitive impairment in Turkish elderly people. J Am Med Dir Assoc. 2016;17:370.e11–370.e15.
[15] Roccaforte WH, Burke WJ, Bayer BL, et al. Reliability and validity of the Short Portable Mental Status Questionnaire administered by telephone. J Geriatr Psychiatry Neurol. 1994;7(1):33–38.
[16] Zhang S, Wu YH, Zhang Y, et al. Preliminary study of the validity and reliability of the Chinese version of the Saint Louis University Mental Status Examination (SLUMS) in detecting cognitive impairment in patients with traumatic brain injury. Appl Neuropsychol Adult. 2019;24:1–8.
[17] Falleti MG, Maruff P, Collie A, et al. Practice effects associated with the repeated assessment of cognitive function using the CogState battery at 10-minute, one week and one month test-retest intervals. J Clin Exp Neuropsychol. 2006;28(7):1095–1112.
[18] Hobart JC, Lamping DL, Thompson AJ. Evaluating neurological outcome measures: the bare essentials. J Neurol Neurosurg Psychiatry. 1996;60(2):127–130.
[19] Atkinson G, Nevill AM. Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine. Sports Med. 1998;26(4):217–238.
[20] Walter SD, Eliasziw M, Donner A. Sample size and optimal designs for reliability studies. Stat Med. 1998;17(1):101–110.
[21] Bonett DG. Sample size requirements for estimating intraclass correlations with desired precision. Stat Med. 2002;21(9):1331–1335.
[22] Tombaugh TN, McIntyre NJ. The Mini-Mental State Examination: a comprehensive review. J Am Geriatr Soc. 1992;40(9):922–935.
[23] Steenland NK, Auman CM, Patel PM, et al. Development of a rapid screening instrument for mild cognitive impairment and undiagnosed dementia. J Alzheimers Dis. 2008;15(3):419–427.

[24] Villarejo A, Puertas-Martin V. Usefulness of short tests in dementia screening. Neurologia. 2011;26:425–433.
[25] Nasreddine ZS, Phillips NA, Bedirian V, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695–699.
[26] Roalf DR, Moore TM, Wolk DA, et al. Defining and validating a short form Montreal Cognitive Assessment (s-MoCA) for use in neurodegenerative disease. J Neurol Neurosurg Psychiatry. 2016;87(12):1303–1310.
[27] O'Bryant SE, Waring SC, Cullum CM, et al. Staging dementia using Clinical Dementia Rating Scale Sum of Boxes scores: a Texas Alzheimer's research consortium study. Arch Neurol. 2008;65(8):1091–1095.
[28] Hughes CP, Berg L, Danziger WL, et al. A new clinical scale for the staging of dementia. Br J Psychiatry. 1982;140:566–572.
[29] Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–174.
[30] Huang SL, Hsieh CL, Wu RM, et al. Minimal detectable change of the timed "up & go" test and the dynamic gait index in people with Parkinson disease. Phys Ther. 2011;91(1):114–121.
[31] Haley SM, Fragala-Pinkham MA. Interpreting change scores of tests and measures used in physical therapy. Phys Ther. 2006;86(5):735–743.
[32] Lu WS, Wang CH, Lin JH, et al. The minimal detectable change of the simplified stroke rehabilitation assessment of movement measure. J Rehabil Med. 2008;40(8):615–619.
[33] Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307–310.
[34] Almarwani M, Perera S, VanSwearingen JM, et al. The test-retest reliability and minimal detectable change of spatial and temporal gait variability during usual over-ground walking for younger and older adults. Gait Posture. 2016;44:94–99.
[35] Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale (NJ): Lawrence Erlbaum Associates; 1988.
[36] Goldberg TE, Harvey PD, Wesnes KA, et al. Practice effects due to serial cognitive assessment: implications for preclinical Alzheimer's disease randomized controlled trials. Alzheimers Dement. 2015;1(1):103–111.
[37] Collie A, Maruff P, Darby DG, et al. The effects of practice on the cognitive test performance of neurologically normal individuals assessed at brief test-retest intervals. J Int Neuropsychol Soc. 2003;9(3):419–428.
[38] Cooley SA, Heaps JM, Bolzenius JD, et al. Longitudinal change in performance on the Montreal Cognitive Assessment in older adults. Clin Neuropsychol. 2015;29(6):824–835.
[39] Folstein MF, Folstein SE, White T, et al. Mini-Mental State Examination. 2nd ed. Lutz (FL): Psychological Assessment Resources; 2010.
