Figure 1.  Summary of the Literature Search

3MS indicates modified Mini-Mental State Examination (MMSE); ACE-R, Addenbrooke’s Cognitive Examination–Revised; AMT, Abbreviated Mental Test; CDT, Clock Drawing Test; GPCOG, General Practitioner Assessment of Cognition; IQCODE, Informant Questionnaire on Cognitive Decline in the Elderly; MIS, Memory Impairment Screen; and MoCA, Montreal Cognitive Assessment.

Figure 2.  Forest Plots for the Pooled Sensitivity and Specificity

Data are provided for the Mini-Cog test,10 Addenbrooke’s Cognitive Examination–Revised (ACE-R),9 and Montreal Cognitive Assessment (MoCA).31 MCI indicates mild cognitive impairment.

Table 1.  Characteristics of the 11 Screening Tests for Dementia
Table 2.  Characteristics of Included Studies
Table 3.  Meta-analyses for Diagnostic Accuracy on Dementia
1. Ashford JW, Borson S, O’Hara R, et al. Should older adults be screened for dementia? Alzheimers Dement. 2006;2(2):76-85.
2. Freund B. Office-based evaluation of the older driver. J Am Geriatr Soc. 2006;54(12):1943-1944.
3. Lin JS, O’Connor E, Rossom RC, Perdue LA, Eckstrom E. Screening for cognitive impairment in older adults: a systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2013;159(9):601-612.
4. Iliffe S, Manthorpe J, Eden A. Sooner or later? issues in the early diagnosis of dementia in general practice: a qualitative study. Fam Pract. 2003;20(4):376-381.
5. Valcour VG, Masaki KH, Curb JD, Blanchette PL. The detection of dementia in the primary care setting. Arch Intern Med. 2000;160(19):2964-2968.
6. Mitchell AJ. The clinical significance of subjective memory complaints in the diagnosis of mild cognitive impairment and dementia: a meta-analysis. Int J Geriatr Psychiatry. 2008;23(11):1191-1202.
7. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189-198.
8. Powsner S, Powsner D. Cognition, copyright, and the classroom. Am J Psychiatry. 2005;162(3):627-628.
9. Mioshi E, Dawson K, Mitchell J, Arnold R, Hodges JR. The Addenbrooke’s Cognitive Examination–Revised (ACE-R): a brief cognitive test battery for dementia screening. Int J Geriatr Psychiatry. 2006;21(11):1078-1085.
10. Borson S, Scanlan J, Brush M, Vitaliano P, Dokmak A. The Mini-Cog: a cognitive “vital signs” measure for dementia screening in multi-lingual elderly. Int J Geriatr Psychiatry. 2000;15(11):1021-1027.
11. Brodaty H, Low LF, Gibson L, Burns K. What is the best dementia screening instrument for general practitioners to use? Am J Geriatr Psychiatry. 2006;14(5):391-400.
12. Jorm AF, Jacomb PA. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): socio-demographic correlates, reliability, validity and some norms. Psychol Med. 1989;19(4):1015-1022.
13. Jorm AF. A short form of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): development and cross-validation. Psychol Med. 1994;24(1):145-153.
14. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264-269, W64.
15. Leeflang MM, Deeks JJ, Gatsonis C, Bossuyt PM; Cochrane Diagnostic Test Accuracy Working Group. Systematic reviews of diagnostic test accuracy. Ann Intern Med. 2008;149(12):889-897.
16. Macaskill P, Gatsonis C, Deeks JJ, Harbord RM, Takwoingi Y. Analysing and presenting results. In: Deeks JJ, Bossuyt PM, Gatsonis C, eds. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy Version 1.0. Oxford, England: Cochrane Collaboration; 2010. http://srdta.cochrane.org/. Accessed December 12, 2014.
17. Cullen B, O’Neill B, Evans JJ, Coen RF, Lawlor BA. A review of screening tests for cognitive impairment. J Neurol Neurosurg Psychiatry. 2007;78(8):790-799.
18. Lonie JA, Tierney KM, Ebmeier KP. Screening for mild cognitive impairment: a systematic review. Int J Geriatr Psychiatry. 2009;24(9):902-915.
19. Whiting PF, Rutjes AW, Westwood ME, et al; QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529-536.
20. Bossuyt PM, Reitsma JB, Bruns DE, et al; Standards for Reporting of Diagnostic Accuracy. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med. 2003;138(1):W1-W12.
21. Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol. 2005;58(10):982-990.
22. Glas AS, Lijmer JG, Prins MH, Bonsel GJ, Bossuyt PM. The diagnostic odds ratio: a single indicator of test performance. J Clin Epidemiol. 2003;56(11):1129-1135.
23. Rutter CM, Gatsonis CA. A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Stat Med. 2001;20(19):2865-2884.
24. Swets JA. Measuring the accuracy of diagnostic systems. Science. 1988;240(4857):1285-1293.
25. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7(3):177-188.
26. Rosman AS, Korsten MA. Application of summary receiver operating characteristics (sROC) analysis to diagnostic clinical testing. Adv Med Sci. 2007;52:76-82.
27. Hodkinson HM. Evaluation of a mental test score for assessment of mental impairment in the elderly. Age Ageing. 1972;1(4):233-238.
28. Sunderland T, Hill JL, Mellow AM, et al. Clock drawing in Alzheimer’s disease: a novel measure of dementia severity. J Am Geriatr Soc. 1989;37(8):725-729.
29. Shulman KI, Shedletsky R, Silver IL. The challenge of time: clock-drawing and cognitive function in the elderly. Int J Geriatr Psychiatry. 1986;1:135-140.
30. Buschke H, Kuslansky G, Katz M, et al. Screening for dementia with the memory impairment screen. Neurology. 1999;52(2):231-238.
31. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695-699.
32. Teng EL, Chui HC. The Modified Mini-Mental State (3MS) examination. J Clin Psychiatry. 1987;48(8):314-318.
33. Sager MA, Hermann BP, La Rue A, Woodard JL. Screening for dementia in community-based memory clinics. WMJ. 2006;105(7):25-29.
34. Borson S, Scanlan JM, Chen P, Ganguli M. The Mini-Cog as a screen for dementia: validation in a population-based sample. J Am Geriatr Soc. 2003;51(10):1451-1454.
35. Borson S, Scanlan JM, Watanabe J, Tu S-P, Lessig M. Simplifying detection of cognitive impairment: comparison of the Mini-Cog and Mini-Mental State Examination in a multiethnic sample. J Am Geriatr Soc. 2005;53(5):871-874.
36. Borson S, Scanlan JM, Watanabe J, Tu SP, Lessig M. Improving identification of cognitive impairment in primary care. Int J Geriatr Psychiatry. 2006;21(4):349-355.
37. Carnero-Pardo C, Cruz-Orduña I, Espejo-Martínez B, Martos-Aparicio C, López-Alcalde S, Olazarán J. Utility of the Mini-Cog for detection of cognitive impairment in primary care: data from two Spanish studies. Int J Alzheimers Dis. 2013;2013:285462.
38. Fuchs A, Wiese B, Altiner A, Wollny A, Pentzek M. Cued recall and other cognitive tasks to facilitate dementia recognition in primary care. J Am Geriatr Soc. 2012;60(1):130-135.
39. Holsinger T, Plassman BL, Stechuchak KM, Burke JR, Coffman CJ, Williams JW Jr. Screening for cognitive impairment: comparing the performance of four instruments in primary care. J Am Geriatr Soc. 2012;60(6):1027-1036.
40. Kaufer DI, Williams CS, Braaten AJ, Gill K, Zimmerman S, Sloane PD. Cognitive screening for dementia and mild cognitive impairment in assisted living: comparison of 3 tests. J Am Med Dir Assoc. 2008;9(8):586-593.
41. Milian M, Leiherr AM, Straten G, Müller S, Leyhe T, Eschweiler GW. The Mini-Cog versus the Mini-Mental State Examination and the Clock Drawing Test in daily clinical practice: screening value in a German memory clinic. Int Psychogeriatr. 2012;24(5):766-774.
42. Alexopoulos P, Ebert A, Richter-Schmidinger T, et al. Validation of the German revised Addenbrooke’s Cognitive Examination for detecting mild cognitive impairment, mild dementia in Alzheimer’s disease and frontotemporal lobar degeneration. Dement Geriatr Cogn Disord. 2010;29(5):448-456.
43. Bastide L, De Breucker S, Van den Berge M, Fery P, Pepersack T, Bier JC. The Addenbrooke’s Cognitive Examination Revised is as effective as the original to detect dementia in a French-speaking population. Dement Geriatr Cogn Disord. 2012;34(5-6):337-343.
44. Carvalho VA, Barbosa MT, Caramelli P. Brazilian version of the Addenbrooke Cognitive Examination–Revised in the diagnosis of mild Alzheimer disease. Cogn Behav Neurol. 2010;23(1):8-13.
45. dos Santos Kawata KH, Hashimoto R, Nishio Y, et al. A validation study of the Japanese version of the Addenbrooke’s Cognitive Examination–Revised. Dement Geriatr Cogn Dis Extra. 2012;2(1):29-37.
46. Fang R, Wang G, Huang Y, et al. Validation of the Chinese version of Addenbrooke’s Cognitive Examination–Revised for screening mild Alzheimer’s disease and mild cognitive impairment. Dement Geriatr Cogn Disord. 2014;37(3-4):223-231.
47. Konstantinopoulou E, Kosmidis MH, Ioannidis P, Kiosseoglou G, Karacostas D, Taskos N. Adaptation of Addenbrooke’s Cognitive Examination–Revised for the Greek population. Eur J Neurol. 2011;18(3):442-447.
48. Kwak YT, Yang Y, Kim GW. Korean Addenbrooke’s Cognitive Examination Revised (K-ACER) for differential diagnosis of Alzheimer’s disease and subcortical ischemic vascular dementia. Geriatr Gerontol Int. 2010;10(4):295-301.
49. Pigliautile M, Ricci M, Mioshi E, et al. Validation study of the Italian Addenbrooke’s Cognitive Examination Revised in a young-old and old-old population. Dement Geriatr Cogn Disord. 2011;32(5):301-307.
50. Terpening Z, Cordato NJ, Hepner IJ, Lucas SK, Lindley RI. Utility of the Addenbrooke’s Cognitive Examination–Revised for the diagnosis of dementia syndromes. Australas J Ageing. 2011;30(3):113-118.
51. Torralva T, Roca M, Gleichgerrcht E, Bonifacio A, Raimondi C, Manes F. Validation of the Spanish version of the Addenbrooke’s Cognitive Examination–Revised (ACE-R). Neurologia. 2011;26(6):351-356.
52. Wong L, Chan C, Leung J, et al. A validation study of the Chinese-Cantonese Addenbrooke’s Cognitive Examination Revised (C-ACER). Neuropsychiatr Dis Treat. 2013;9:731-737.
53. Dalrymple-Alford JC, MacAskill MR, Nakas CT, et al. The MoCA: well-suited screen for cognitive impairment in Parkinson disease. Neurology. 2010;75(19):1717-1725.
54. Dong Y, Lee WY, Basri NA, et al. The Montreal Cognitive Assessment is superior to the Mini-Mental State Examination in detecting patients at higher risk of dementia. Int Psychogeriatr. 2012;24(11):1749-1755.
55. Hu JB, Zhou WH, Hu SH, et al. Cross-cultural difference and validation of the Chinese version of Montreal Cognitive Assessment in older adults residing in Eastern China: preliminary findings. Arch Gerontol Geriatr. 2013;56(1):38-43.
56. Larner AJ. Screening utility of the Montreal Cognitive Assessment (MoCA): in place of—or as well as—the MMSE? Int Psychogeriatr. 2012;24(3):391-396.
57. Cummings-Vaughn LA, Chavakula NN, Malmstrom TK, Tumosa N, Morley JE, Cruz-Oliver DM. Veterans Affairs Saint Louis University Mental Status Examination compared with the Montreal Cognitive Assessment and the Short Test of Mental Status. J Am Geriatr Soc. 2014;62(7):1341-1346.
58. Luis CA, Keegan AP, Mullan M. Cross validation of the Montreal Cognitive Assessment in community dwelling older adults residing in the Southeastern US. Int J Geriatr Psychiatry. 2009;24(2):197-201.
59. Martinelli JE, Cecato JF, Bartholomeu D, Montiel JM. Comparison of the diagnostic accuracy of neuropsychological tests in differentiating Alzheimer’s disease from mild cognitive impairment: can the Montreal Cognitive Assessment be better than the Cambridge Cognitive Examination? Dement Geriatr Cogn Dis Extra. 2014;4(2):113-121.
60. Smith T, Gildeh N, Holmes C. The Montreal Cognitive Assessment: validity and utility in a memory clinic setting. Can J Psychiatry. 2007;52(5):329-332.
61. Mitchell AJ. A meta-analysis of the accuracy of the Mini-Mental State Examination in the detection of dementia and mild cognitive impairment. J Psychiatr Res. 2009;43(4):411-431.
62. Mitchell AJ, Malladi S. Screening and case finding tools for the detection of dementia, part I: evidence-based meta-analysis of multidomain tests. Am J Geriatr Psychiatry. 2010;18(9):759-782.
63. Mitchell AJ, Malladi S. Screening and case-finding tools for the detection of dementia, part II: evidence-based meta-analysis of single-domain tests. Am J Geriatr Psychiatry. 2010;18(9):783-800.
64. Diamond GA, Forrester JS, Hirsch M, et al. Application of conditional probability analysis to the clinical diagnosis of coronary artery disease. J Clin Invest. 1980;65(5):1210-1221.
Comments
Query regarding the interpretation of best alternative screening tests [corrected]
Adam Bentvelzen, Katrin Seeher, and Henry Brodaty | Dementia Collaborative Research Centre, University of New South Wales, Sydney, Australia
Dear Drs Tsoi, Chan, Hirai, Wong, and Kwok,

Several recent reviews [1-3] of the relative utility of cognitive screening instruments have come to different conclusions. While two sets of reviews recommended the very brief Mini-Cog, GPCOG and MIS for use in general practice [1,2], another review recommended more comprehensive screens (3MS, CASI, SASSI, and ACE-R) [3]. A recently published meta-analysis of the effectiveness of different cognitive screens in detecting dementia stated that the Mini-Cog and the ACE-R are the best alternative screening tests to the MMSE due to their high pooled sensitivity and specificity compared to other alternative screens [4]. However, the data are at odds with this conclusion.

The Mini-Cog is reported to have pooled sensitivity and specificity values of 0.91 (0.80-0.96) and 0.86 (0.74-0.93), respectively. However, the reported values for the GPCOG are numerically higher with respective values of 0.92 (0.81-0.97) and 0.87 (0.83-0.90). In addition, while the confidence intervals are equally wide between the tests for sensitivity, for specificity they are narrower for the GPCOG than the Mini-Cog.

In contrast to these reported measures, in the Methods the authors state that a diagnostic odds ratio (DOR) was used as a single indicator of test performance, to account for the trade-off between sensitivity and specificity in the context of different thresholds used across studies [5]. While we agree with this approach, this measure was not reported in the Abstract, Results, or online Supplementary Material, so it is unclear to the reader how, or if, the DOR was actually used to support their conclusions independently of pooled sensitivity and specificity. Taking the reported pooled sensitivity (Se) and specificity (Sp) alone, where DOR = (Se × Sp)/((1 − Se) × (1 − Sp)) [5], the GPCOG (76.96) performs better than the Mini-Cog (62.11). Furthermore, the reported prevalence of dementia in the Mini-Cog studies (1182/4178; 28.3%) was slightly higher than that of the GPCOG (292/1082; 27.0%). Since a higher prevalence of dementia positively biases the chances of detecting dementia, accounting for prevalence may have further increased the sensitivity and specificity of the GPCOG relative to the Mini-Cog.
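The DOR values quoted above can be reproduced directly from the pooled estimates; a minimal sketch (the function name is ours, not from the article):

```python
# Recompute the diagnostic odds ratios quoted above from the pooled
# sensitivity (Se) and specificity (Sp), per the formula of Glas et al [5].

def diagnostic_odds_ratio(se: float, sp: float) -> float:
    """DOR = (Se * Sp) / ((1 - Se) * (1 - Sp))."""
    return (se * sp) / ((1.0 - se) * (1.0 - sp))

# Pooled values reported in the meta-analysis [4]
print(f"GPCOG DOR:    {diagnostic_odds_ratio(0.92, 0.87):.2f}")  # 76.96
print(f"Mini-Cog DOR: {diagnostic_odds_ratio(0.91, 0.86):.2f}")  # 62.11
```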

Unless the authors can demonstrate how their conclusions are supported by their data, we believe that the conclusion should have been revised to reflect that the Mini-Cog and GPCOG have at least equivalent diagnostic efficiency for detecting dementia.

REFERENCES
1. Lorentz WJ, Scanlan JM, Borson S. Brief screening tests for dementia. Can J Psychiatry. 2002;47(8):723-33.
2. Milne A, Culverwell A, Guss R, Tuppen J, Whelton R. Screening for dementia in primary care: a review of the use, efficacy and quality of measures. Int Psychogeriatr. 2008;20(5):911-26.
3. Cullen B, O'Neill B, Evans JJ, Coen RF, Lawlor BA. A review of screening tests for cognitive impairment. J Neurol Neurosurg Psychiatry. 2007;78(8):790.
4. Tsoi KKF, Chan JYC, Hirai HW, Wong SYS, Kwok TCY. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Int Med. 2015;175(9):1450-8.
5. Glas AS, Lijmer JG, Prins MH, Bonsel GJ, Bossuyt PM. The diagnostic odds ratio: a single indicator of test performance. J Clin Epidemiol. 2003;56(11):1129-1135.
CONFLICT OF INTEREST: Henry Brodaty is one of the authors of The General Practitioner Assessment of Cognition (Brodaty H, Pond D, Kemp NM, Luscombe G, Harding L, Berman K, et al. The GPCOG: a new screening test for dementia designed for general practice. J Am Geriatr Soc. 2002;50(3):530-4), which is mentioned in the article and in this comment.
Original Investigation
September 2015

Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis

Author Affiliations
  • 1School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin
  • 2Stanley Ho Big Data Decision Analytics Research Centre, The Chinese University of Hong Kong, Shatin
  • 3Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Shatin
JAMA Intern Med. 2015;175(9):1450-1458. doi:10.1001/jamainternmed.2015.2152
Abstract

Importance  Dementia is a global public health problem. The Mini-Mental State Examination (MMSE) is a proprietary instrument for detecting dementia, but many other tests are also available.

Objective  To evaluate the diagnostic performance of all cognitive tests for the detection of dementia.

Data Sources  Literature searches were performed on the list of dementia screening tests in MEDLINE, EMBASE, and PsycINFO from the earliest available dates stated in the individual databases until September 1, 2014. Because Google Scholar searches literature with a combined ranking algorithm on citation counts and keywords in each article, our literature search was extended to Google Scholar with individual test names and dementia screening as a supplementary search.

Study Selection  Studies were eligible if participants were interviewed face to face with respective screening tests, and findings were compared with criterion standard diagnostic criteria for dementia. Bivariate random-effects models were used, and the area under the summary receiver-operating characteristic curve was used to present the overall performance.

Main Outcomes and Measures  Sensitivity, specificity, and positive and negative likelihood ratios were the main outcomes.

Results  Eleven screening tests were identified among 149 studies with more than 49 000 participants. Most studies used the MMSE (n = 102) and included 10 263 patients with dementia. The combined sensitivity and specificity for detection of dementia were 0.81 (95% CI, 0.78-0.84) and 0.89 (95% CI, 0.87-0.91), respectively. Among the other 10 tests, the Mini-Cog test and Addenbrooke’s Cognitive Examination–Revised (ACE-R) had the best diagnostic performances, which were comparable to that of the MMSE (Mini-Cog, 0.91 sensitivity and 0.86 specificity; ACE-R, 0.92 sensitivity and 0.89 specificity). Subgroup analysis revealed that only the Montreal Cognitive Assessment had comparable performance to the MMSE on detection of mild cognitive impairment with 0.89 sensitivity and 0.75 specificity.

Conclusions and Relevance  Besides the MMSE, there are many other tests with comparable diagnostic performance for detecting dementia. The Mini-Cog test and the ACE-R are the best alternative screening tests for dementia, and the Montreal Cognitive Assessment is the best alternative for mild cognitive impairment.

Introduction

Early diagnosis of dementia can identify people at risk for complications.1 Previous studies2,3 have found that health care professionals commonly miss the diagnosis of cognitive impairment or dementia; the prevalence of missed diagnosis ranges from 25% to 90%. Primary care physicians may not recognize cognitive impairment3 until the moderate to severe stage.4-6 Screening tests are quick and useful tools to assess the cognitive condition of patients.1

The Mini-Mental State Examination (MMSE)7 is the most widely applied test for dementia screening. Since the intellectual property rights of the MMSE were transferred to Psychological Assessment Resources in 2001, it has become less accessible and useful.7,8 However, there are more than 40 other tests available for dementia screening in health care settings, many of which are freely available, such as Addenbrooke’s Cognitive Examination–Revised (ACE-R),9 the Mini-Cog test,10 the General Practitioner Assessment of Cognition (GPCOG),11 and the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE).12,13 The diagnostic performances of these tests have not been systematically evaluated and synthesized for relative comparison, which is particularly salient because the MMSE, as a proprietary instrument, incurs a cost, whereas others do not. Thus, it is worth identifying the best alternative among the long list of screening tests. Therefore, this systematic review aimed to quantitatively analyze the diagnostic accuracy of various dementia screening tests and compare their performance to that of the MMSE.

Methods

This systematic review followed standard guidelines for conducting and reporting systematic reviews of diagnostic studies, including Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA)14 and guidelines from the Cochrane Diagnostic Test Accuracy Working Group.15,16

Search Strategy

A list of screening tests was identified in previous systematic reviews.3,11,17,18 Literature searches were performed on the list of dementia screening tests in MEDLINE, EMBASE, and PsycINFO from the earliest available dates stated in the individual databases until September 1, 2014. Each screening test was separately searched with general keywords of dementia, including Alzheimer, Parkinson, vascular, stroke, cognitive impairment, and dementia. Diagnostic studies comparing accuracy of screening tests for detection of dementia were manually identified from the title or abstract preview of all search records. The selection was limited to peer-reviewed articles with abstracts published in English. Because Google Scholar searches literature with a combined ranking algorithm on citation counts and keywords in each article, our literature search was extended to Google Scholar with individual test names and dementia screening as a supplementary search. The first 10 pages of all search records were scanned. Manual searches were extended to the bibliographies of review articles and included research studies. Screening tests were classified into categories according to administration time: 5 minutes or less, 10 minutes or less, and 20 minutes or less.

Inclusion and Exclusion Criteria

Cross-sectional studies were included if they met the following inclusion criteria: (1) involved participants studied for the detection of dementia associated with Alzheimer disease, vascular dementia, or Parkinson disease in any clinical or community setting; (2) screened patients or caregivers with a face-to-face interview; (3) used standard diagnostic criteria as the criterion standard for defining dementia, including the international diagnostic guidelines (eg, Diagnostic and Statistical Manual of Mental Disorders; International Classification of Diseases; National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer Disease and Related Disorders Association criteria; National Institute of Neurological Disorders and Stroke and the Association Internationale pour la Recherche et l’Enseignement en Neurosciences criteria; or clinical judgment after a full assessment series); and (4) reported the number of participants with dementia and evaluated the accuracy of the screening tests, including sensitivity, specificity, or data that could be used to derive those values. Studies were excluded if they were not written in English or only included a screening test that (1) requires an administration time longer than 20 minutes, (2) was identified in fewer than 4 studies in the literature search, or (3) was administered to participants with visual impairment.

Data Extraction

Two investigators (J.Y.C.C., H.W.H.) independently assessed the relevance of the search results and abstracted the data into a data extraction form. This form was used to record the demographic details of individual articles, including year of publication, study location, number of participants included, mean age of participants, percentage of male participants, type of dementia, recruitment site, number of participants with dementia or mild cognitive impairment (MCI), diagnostic criteria, cutoff values, sensitivity, specificity, and true-positive, false-positive, true-negative, and false-negative counts. When a study reported sensitivity and specificity across different cutoff values of a screening test, only the results at the cutoff recommended by the authors of the article were selected. If the study did not make such a recommendation, the cutoff used to summarize sensitivity and specificity in the abstract was chosen. When discrepancies were found regarding inclusion of studies or data extraction, the third investigator (K.K.F.T.) made the definitive decision on study eligibility and data extraction.
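The accuracy measures recorded on the extraction form follow directly from a study's 2 × 2 counts; a minimal sketch (the helper name and counts are illustrative, not taken from any included study):

```python
# Derive sensitivity, specificity, and likelihood ratios from the
# true/false positive/negative counts of a single diagnostic study.

def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    se = tp / (tp + fn)  # proportion of dementia cases screened positive
    sp = tn / (tn + fp)  # proportion of non-cases screened negative
    return {
        "sensitivity": se,
        "specificity": sp,
        "positive_lr": se / (1 - sp),  # LR+ = Se / (1 - Sp)
        "negative_lr": (1 - se) / sp,  # LR- = (1 - Se) / Sp
    }

# Hypothetical study: 100 participants with dementia, 100 without
m = accuracy_measures(tp=91, fp=14, fn=9, tn=86)
```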

Risk of Bias and Study Quality

Potential risks of bias in each screening test were evaluated with the Quality Assessment of Diagnostic Accuracy Studies 2 instrument,19 which assesses patient selection, execution of the index test and the reference standard, and flow of patients. Ratings of high risk of bias were tallied in an Excel worksheet (Microsoft Inc) and presented as a percentage for each screening test. Study quality was also assessed according to the methods section of the Standards for Reporting of Diagnostic Accuracy statement.20 An 8-point scale was designed for the evaluation of study quality, covering description of the following: (1) study population, (2) participant recruitment, (3) sampling of participant selection, (4) data collection plan, (5) reference standard and its rationale, (6) technical specifications, (7) rationale for units and cutoffs, and (8) methods for calculating diagnostic accuracy with CIs. The quality score was presented as median and range across the screening tests.

Data Synthesis and Statistical Analysis

Statistical analyses were performed with the Metandi and Midas procedures in Stata statistical software, version 11 (StataCorp). The overall sensitivity and specificity of each diagnostic test were pooled using a bivariate random-effects model.21 Forest plots were used to graphically present the combined sensitivity and specificity. The accuracy of a screening test must allow for the trade-off between sensitivity and specificity that occurs when different threshold values are used to define positive and negative test results. Therefore, a diagnostic odds ratio was used as a single indicator of test performance.22 In addition, a hierarchical summary receiver-operating characteristic (HSROC) curve was generated to present the summary estimates of sensitivities and specificities along with 95% CIs and a prediction region.23 The area under the curve (AUC) for the HSROC was also calculated; an area between 0.9 and 1.0 indicated good diagnostic accuracy.24 When the Hessian matrix of the bivariate approach was unstable or asymmetric, a random-effects model following the approach of DerSimonian and Laird was applied to estimate the pooled sensitivity and specificity, and a summary receiver-operating characteristic (SROC) curve was generated to present the summary estimates of sensitivities and specificities, with the AUC for the SROC presented as a summary statistic.25,26 Statistical heterogeneity among the studies was assessed by I2, which describes the percentage of total variation across studies due to heterogeneity rather than chance alone. P < .10 was considered statistically significant heterogeneity. Because we used random-effects models to combine the results, heterogeneity among the studies was taken into account.
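As a rough illustration of the DerSimonian and Laird pooling applied when the bivariate model was unstable, the sketch below pools logit-transformed sensitivities and reports the I2 statistic. It is written in Python rather than the Stata procedures the authors used, and the function name, input counts, and delta-method variance approximation are assumptions for illustration only.

```python
from math import log, exp

def dl_pooled_sensitivity(studies):
    """DerSimonian-Laird random-effects pooling of logit sensitivities.

    `studies` is a list of (tp, fn) counts per study; the variance of each
    logit sensitivity is approximated by 1/tp + 1/fn (delta method).
    Returns (pooled sensitivity, I2 as a percentage).
    """
    y = [log(tp / fn) for tp, fn in studies]           # logit of each sensitivity
    v = [1 / tp + 1 / fn for tp, fn in studies]        # within-study variances
    w = [1 / vi for vi in v]                           # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(studies) - 1
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / denom)                  # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]               # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return exp(pooled_logit) / (1 + exp(pooled_logit)), i2
```

When all studies report the same sensitivity, Q is zero, the between-study variance collapses to zero, and the pooled estimate equals the common value; real data with large I2, as reported for the MMSE, instead inflate the random-effects weights toward equality.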

Subgroup analysis was conducted across the studies by geographic region, recruitment setting, and patients with MCI. Geographic regions were classified as the Americas, Asia, and Europe. Recruitment could be performed in the community, memory clinics, cognitive function clinics, or hospitals. The definitions of participants with MCI followed the cutoff values suggested in the individual studies.

Results
Literature Search and Study Selection

A total of 26 165 abstracts were identified from the databases, and 215 potential studies were further extracted from the bibliographies. All titles or abstracts were screened, and 346 articles were relevant to screening tools for dementia. One hundred ninety-seven were excluded for the following reasons: studies were systematic reviews (n = 30), studies did not fulfill inclusion criteria (n = 121), studies lacked data details for meta-analysis (n = 39), and studies reported results of screening tests without comparing to a criterion standard (n = 7) (Figure 1). The definitive analysis in this systematic review included 149 studies published from 1989 until September 1, 2014, for patients with dementia from the United States, the United Kingdom, Canada, and 30 other countries.

A total of 11 screening tests9,10,12,13,27-33 were identified in the 149 eligible studies, including 102 studies (eTable 1 in the Supplement) for the MMSE,7 12 studies for the ACE-R,9 13 studies for the Abbreviated Mental Test,27 9 studies for Sunderland’s version of the Clock Drawing Test,28 9 studies for Shulman’s version of the Clock Drawing Test,29 5 studies for the GPCOG,11 15 studies for the long-form IQCODE,12 7 studies for the short-form IQCODE,13 9 studies for the Mini-Cog test,10 6 studies for the Memory Impairment Screen,30 20 studies for the Montreal Cognitive Assessment (MoCA),31 6 studies for the modified MMSE,32 and 7 studies for the verbal fluency tests.33 Some other screening tests were excluded because of the limited number of studies reported, for example, the Free and Cued Selective Reminding Test, the Mental Scale Questionnaire, the Cognitive Assessment Screening Instrument, the Self-administered Gerocognitive Examination, and the Short Blessed Test. The components of each screening test, the administration time required, and the range of total score are presented in Table 1. High scores represented good cognitive function in most screening tests, except the IQCODE.

Study Characteristics

A total of 149 studies with more than 40 000 patients across the 11 screening tests were included (Table 2). One hundred ten eligible studies (73.8%) reported the diagnostic performance of at least 2 screening tests, including comparisons with the MMSE. Approximately 12 000 participants were confirmed as having dementia (Table 2). Most studies (68.5%) used the MMSE as the screening test for dementia, in 29 regions. The next most commonly studied screening test was the MoCA, which was used in 20 studies (13.4%) from 9 countries. Patients were mainly recruited from community or clinic settings (80.3%). One hundred ten (73.8%) of 149 studies had good study quality, with quality scores of 7 to 8. The quality scores were comparable across the 11 screening tests, with median scores of approximately 7 (range, 3-8). The original data of each study on the true-positive, false-positive, false-negative, and true-negative counts are presented in eTable 2 in the Supplement. Furthermore, few risks of bias were identified among these studies; only the studies of the GPCOG, MoCA, and modified MMSE revealed approximately 20% to 30% high risk of bias in execution of the index test and the reference standard (Table 2).

Diagnostic Accuracy of the MMSE

There were 10 263 cases of dementia identified from 36 080 participants in 108 cohorts studying the MMSE. The most common cutoff values to define participants with dementia were 23 and 24, used in 48 cohorts (44.4%). With different cutoff threshold values, we found considerable variation in the sensitivity and specificity estimates reported by individual studies. The sensitivities ranged from 0.25 to 1.00, and the specificities ranged from 0.54 to 1.00. The heterogeneity among studies was large, with I2 statistics for sensitivity and specificity of 92% and 94%, respectively. The diagnostic accuracy is summarized by meta-analysis (Table 3). The combined data in the bivariate random-effects model gave a summary point with 0.81 sensitivity (95% CI, 0.78-0.84) and 0.89 specificity (95% CI, 0.87-0.91). The HSROC curve was plotted with a diagnostic odds ratio of 35.4, and the AUC was 92% (95% CI, 90%-94%) (eFigure 1 in the Supplement).

Diagnostic Accuracy of Other Screening Tests

The performances of the other 10 screening tests were summarized by random-effects models (Table 3). All tests presented with AUCs of at least 85%, and most of the tests had comparable performance to that of the MMSE. The Mini-Cog test and the ACE-R were the best alternative tests. Among the studies with the Mini-Cog test,10,34-41 the pooled sensitivity was 0.91 (95% CI, 0.80-0.96), and the pooled specificity was 0.86 (95% CI, 0.74-0.93) (Figure 2A). The heterogeneity among studies was large, with I2 statistics for sensitivity and specificity of 89% and 97%, respectively. Among studies that used the ACE-R,9,42-52 the pooled sensitivity was 0.92 (95% CI, 0.90-0.94) and the pooled specificity was 0.89 (95% CI, 0.84-0.93) (Figure 2B). The confidence regions of the HSROC curves for sensitivity and specificity of the Mini-Cog test and the ACE-R were plotted with reference to the HSROC curve of the MMSE (eFigure 2 in the Supplement).

Subgroup Analyses
Studies of the MMSE

Only the MMSE had a sufficient number of studies to permit subgroup analysis. By geographic region, studies were conducted in Europe (44.4%), the Americas (31.5%), and Asia (23.1%). The diagnostic performance of the MMSE was comparable across these regions, with similar AUCs (eFigure 2 in the Supplement). By recruitment setting, participants were recruited in hospital (9.3%), clinic (32.4%), primary care (12.0%), community (38.9%), and other settings (7.4%). The diagnostic performance was comparable across the different recruitment settings (P > .05 for all) (eTable 3 in the Supplement).

Patients With MCI

Only 21 of 108 cohorts reported diagnostic performance of the MMSE for the detection of MCI. The combined data gave a summary point of 0.62 sensitivity (95% CI, 0.52-0.71) and 0.87 specificity (95% CI, 0.80-0.92). Nine of 20 studies reported diagnostic performance of the MoCA for the detection of MCI.31,53-60 The combined data gave a summary point of 0.89 sensitivity (95% CI, 0.84-0.92) and 0.75 specificity (95% CI, 0.62-0.85) (Figure 2C). The confidence regions of the HSROC curve for sensitivity and specificity of the MoCA were plotted with reference to the HSROC curve of the MMSE (eFigure 3 in the Supplement).

Discussion

This systematic review and meta-analysis included 149 studies that assessed the accuracy of the MMSE and 10 other screening tests for the detection of dementia. Compared with other screening tests, the Mini-Cog test and the ACE-R had better diagnostic performance for dementia, and the MoCA had better diagnostic performance for MCI. The Mini-Cog test is relatively simple and short compared with the MMSE.

In a previous meta-analysis, Mitchell61 combined 34 diagnostic studies to evaluate the accuracy of the MMSE, but he only combined the sensitivity and specificity without reporting the methodologic details. Mitchell and Malladi62,63 also published 2 meta-analyses that included 45 studies to compare the diagnostic performance of single-domain and multidomain tests. They found that 15 brief single-domain tests were less accurate than the MMSE in detecting dementia in community and primary care settings. These studies used an uncommon Bayesian curve-modeling approach64 rather than the ROC curve to evaluate the diagnostic performance of the tests. A systematic review3 reported a combined diagnostic accuracy of the MMSE and summarized the sensitivity and specificity ranges of 10 other screening tests, but the literature search was limited to studies from systematic reviews conducted in primary care settings, whereas in some other studies dementia screening was performed in secondary or tertiary care settings. Therefore, that review combined only 14 studies with 10 185 participants using the MMSE as the screening test. The lack of a precise estimate of sensitivity has caused confusion among health care professionals about applying the MMSE for dementia screening. In our meta-analysis, we tried to make our findings more comprehensive by using publications from all possible sources and included 102 studies with 36 080 participants to evaluate the diagnostic performance of the MMSE. The results showed a sensitivity of 0.81 and a specificity of 0.89 for the MMSE; its diagnostic performance is good, with an AUC of 92%.

Diagnostic sensitivity improves with lower cutoff values but with a corresponding decrease in specificity. High sensitivity corresponds to a high negative predictive value and is ideal for ruling out dementia. We found considerable variation in the definitions of cutoff thresholds among the individual studies. According to our selection criteria, the most common cutoff scores for the MMSE for dementia were 23 and 24 (44.4% of study cohorts), and approximately 20% of eligible cohorts used cutoff scores of 25 to 26 (range, 17-28). The range of scores for the Mini-Cog test is 0 to 5, and 7 cohorts (77.8%) used a score of less than 3 as the cutoff for dementia, indicating disagreement on the optimal cutoff score across different screening tests. Users of screening tests should strike a balance between sensitivity and specificity to rule dementia in or out according to the available resources.
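The cutoff trade-off can be sketched with a toy example. The scores below are hypothetical MMSE-style values (range 0-30), not data from any included study; a score at or below the cutoff is treated as a positive screen, so raising the cutoff buys sensitivity at the cost of specificity.

```python
def sens_spec_at_cutoff(case_scores, control_scores, cutoff):
    """Sensitivity and specificity when a score <= cutoff screens positive."""
    sens = sum(s <= cutoff for s in case_scores) / len(case_scores)
    spec = sum(s > cutoff for s in control_scores) / len(control_scores)
    return sens, spec

# Hypothetical scores for participants with and without dementia.
cases = [18, 20, 21, 22, 23, 24, 25, 26]
controls = [23, 25, 26, 27, 28, 29, 30, 30]

for cutoff in (23, 24, 25, 26):
    print(cutoff, sens_spec_at_cutoff(cases, controls, cutoff))
```

Sweeping the cutoff this way traces out the empirical ROC curve; the pooled HSROC and SROC analyses in this review summarize exactly this trade-off across studies that chose different thresholds.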

This study has several limitations. First, the screening tests were not directly compared in the same populations. Each study used a different population, and the inclusion criteria and prevalence of dementia varied. It would be preferable to compare screening tests directly in the same group of participants with similar educational levels. Second, only a few of the included studies reported head-to-head comparisons between screening tests, so test performance could not be compared directly. Third, the screening tests were translated into different languages, which may have had unknown effects on the results. We assumed that the tests were validated in the various languages used in the individual studies, although this was not guaranteed, and unidentified cultural effects on the use of the screening tests may still exist. Fourth, we only included studies that reported the diagnostic performance of screening tests for dementia. Although we used MCI as a secondary outcome, the definitions of MCI were heterogeneous across studies. Studies that reported results only for MCI or cognitive impairment without dementia (cognitive impairment, no dementia) were not included in this meta-analysis. Finally, some unpublished studies may not have been identified through the literature search in the OVID databases, and publication bias may exist.

Conclusions

This systematic review and meta-analysis found that the MMSE is the most frequently studied test for dementia screening. However, many other screening tests have comparable diagnostic performance. The Mini-Cog test and the ACE-R performed better than the other dementia screening tests, and the MoCA performed better than the other MCI screening tests. Although the MMSE is a proprietary instrument for dementia screening, the other screening tests are comparably effective, easier to perform, and freely available.

Article Information

Accepted for Publication: April 8, 2015.

Corresponding Author: Timothy C. Y. Kwok, MD, PhD, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, 9/F, Clinical Sciences Building, Prince of Wales Hospital, Ngan Shing Street, Shatin, Hong Kong (tkwok@cuhk.edu.hk).

Published Online: June 8, 2015. doi:10.1001/jamainternmed.2015.2152.

Author Contributions: Dr Tsoi had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Tsoi, Kwok.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Tsoi.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Tsoi, Hirai.

Administrative, technical, or material support: Chan, Hirai.

Study supervision: Tsoi, Wong, Kwok.

Conflict of Interest Disclosures: Dr Kwok reported lecturing in workshops sponsored by Lundbeck and Novartis and receiving donations from Novartis for a website for family caregivers of dementia. No other disclosures were reported.

References
1.
Ashford  JW, Borson  S, O’Hara  R,  et al.  Should older adults be screened for dementia?  Alzheimers Dement. 2006;2(2):76-85.PubMedGoogle ScholarCrossref
2.
Freund  B.  Office-based evaluation of the older driver.  J Am Geriatr Soc. 2006;54(12):1943-1944.PubMedGoogle ScholarCrossref
3.
Lin  JS, O’Connor  E, Rossom  RC, Perdue  LA, Eckstrom  E.  Screening for cognitive impairment in older adults: a systematic review for the U.S. Preventive Services Task Force.  Ann Intern Med. 2013;159(9):601-612.PubMedGoogle Scholar
4.
Iliffe  S, Manthorpe  J, Eden  A.  Sooner or later? issues in the early diagnosis of dementia in general practice: a qualitative study.  Fam Pract. 2003;20(4):376-381.PubMedGoogle ScholarCrossref
5.
Valcour  VG, Masaki  KH, Curb  JD, Blanchette  PL.  The detection of dementia in the primary care setting.  Arch Intern Med. 2000;160(19):2964-2968.PubMedGoogle ScholarCrossref
6.
Mitchell  AJ.  The clinical significance of subjective memory complaints in the diagnosis of mild cognitive impairment and dementia: a meta-analysis.  Int J Geriatr Psychiatry. 2008;23(11):1191-1202.PubMedGoogle ScholarCrossref
7.
Folstein  MF, Folstein  SE, McHugh  PR.  “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician.  J Psychiatr Res. 1975;12(3):189-198.PubMedGoogle ScholarCrossref
8.
Powsner  S, Powsner  D.  Cognition, copyright, and the classroom.  Am J Psychiatry. 2005;162(3):627-628.PubMedGoogle ScholarCrossref
9.
Mioshi  E, Dawson  K, Mitchell  J, Arnold  R, Hodges  JR.  The Addenbrooke’s Cognitive Examination–Revised (ACE-R): a brief cognitive test battery for dementia screening.  Int J Geriatr Psychiatry. 2006;21(11):1078-1085.PubMedGoogle ScholarCrossref
10.
Borson  S, Scanlan  J, Brush  M, Vitaliano  P, Dokmak  A.  The Mini-Cog: a cognitive “vital signs” measure for dementia screening in multi-lingual elderly.  Int J Geriatr Psychiatry. 2000;15(11):1021-1027.PubMedGoogle ScholarCrossref
11.
Brodaty  H, Low  LF, Gibson  L, Burns  K.  What is the best dementia screening instrument for general practitioners to use?  Am J Geriatr Psychiatry. 2006;14(5):391-400.PubMedGoogle ScholarCrossref
12.
Jorm  AF, Jacomb  PA.  The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): socio-demographic correlates, reliability, validity and some norms.  Psychol Med. 1989;19(4):1015-1022.PubMedGoogle ScholarCrossref
13.
Jorm  AF.  A short form of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): development and cross-validation.  Psychol Med. 1994;24(1):145-153.PubMedGoogle ScholarCrossref
14.
Moher  D, Liberati  A, Tetzlaff  J, Altman  DG; PRISMA Group.  Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement.  Ann Intern Med. 2009;151(4):264-269, W64.PubMedGoogle ScholarCrossref
15.
Leeflang  MM, Deeks  JJ, Gatsonis  C, Bossuyt  PM; Cochrane Diagnostic Test Accuracy Working Group.  Systematic reviews of diagnostic test accuracy.  Ann Intern Med. 2008;149(12):889-897.PubMedGoogle ScholarCrossref
16.
Macaskill  P, Gatsonis  C, Deeks  JJ, Harbord  RM, Takwoingi  Y. Analysing and presenting results. In: Deeks  JJ, Bossuyt  PM, Gatsonis  C, eds.  Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy Version 1.0. Oxford, England: Cochrane Collaboration; 2010, http://srdta.cochrane.org/. Accessed December 12, 2014.
17.
Cullen  B, O’Neill  B, Evans  JJ, Coen  RF, Lawlor  BA.  A review of screening tests for cognitive impairment.  J Neurol Neurosurg Psychiatry. 2007;78(8):790-799.PubMedGoogle ScholarCrossref
18.
Lonie  JA, Tierney  KM, Ebmeier  KP.  Screening for mild cognitive impairment: a systematic review.  Int J Geriatr Psychiatry. 2009;24(9):902-915.PubMedGoogle ScholarCrossref
19.
Whiting  PF, Rutjes  AW, Westwood  ME,  et al; QUADAS-2 Group.  QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.  Ann Intern Med. 2011;155(8):529-536.PubMedGoogle ScholarCrossref
20.
Bossuyt  PM, Reitsma  JB, Bruns  DE,  et al; Standards for Reporting of Diagnostic Accuracy.  The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration.  Ann Intern Med. 2003;138(1):W1-W12.PubMedGoogle ScholarCrossref
21.
Reitsma  JB, Glas  AS, Rutjes  AW, Scholten  RJ, Bossuyt  PM, Zwinderman  AH.  Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews.  J Clin Epidemiol. 2005;58(10):982-990.PubMedGoogle ScholarCrossref
22.
Glas  AS, Lijmer  JG, Prins  MH, Bonsel  GJ, Bossuyt  PM.  The diagnostic odds ratio: a single indicator of test performance.  J Clin Epidemiol. 2003;56(11):1129-1135.PubMedGoogle ScholarCrossref
23.
Rutter  CM, Gatsonis  CA.  A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations.  Stat Med. 2001;20(19):2865-2884.PubMedGoogle ScholarCrossref
24.
Swets  JA.  Measuring the accuracy of diagnostic systems.  Science. 1988;240(4857):1285-1293.PubMedGoogle ScholarCrossref
25.
DerSimonian  R, Laird  N.  Meta-analysis in clinical trials.  Control Clin Trials. 1986;7(3):177-188.PubMedGoogle ScholarCrossref
26.
Rosman  AS, Korsten  MA.  Application of summary receiver operating characteristics (sROC) analysis to diagnostic clinical testing.  Adv Med Sci. 2007;52:76-82.PubMedGoogle Scholar
27.
Hodkinson  HM.  Evaluation of a mental test score for assessment of mental impairment in the elderly.  Age Ageing. 1972;1(4):233-238.PubMedGoogle ScholarCrossref
28.
Sunderland  T, Hill  JL, Mellow  AM,  et al.  Clock drawing in Alzheimer’s disease: a novel measure of dementia severity.  J Am Geriatr Soc. 1989;37(8):725-729.PubMedGoogle ScholarCrossref
29.
Shulman  KI, Shedletsky  R, Silver  IL.  The challenge of time: clock-drawing and cognitive function in the elderly.  Int J Geriatr Psychiatry. 1986;1:135-140.Google ScholarCrossref
30.
Buschke  H, Kuslansky  G, Katz  M,  et al.  Screening for dementia with the memory impairment screen.  Neurology. 1999;52(2):231-238.PubMedGoogle ScholarCrossref
31.
Nasreddine  ZS, Phillips  NA, Bédirian  V,  et al.  The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment.  J Am Geriatr Soc. 2005;53(4):695-699.PubMedGoogle ScholarCrossref
32.
Teng  EL, Chui  HC.  The Modified Mini-Mental State (3MS) examination.  J Clin Psychiatry. 1987;48(8):314-318.PubMedGoogle Scholar
33.
Sager  MA, Hermann  BP, La Rue  A, Woodard  JL.  Screening for dementia in community-based memory clinics.  WMJ. 2006;105(7):25-29.PubMedGoogle Scholar
34.
Borson  S, Scanlan  JM, Chen  P, Ganguli  M.  The Mini-Cog as a screen for dementia: validation in a population-based sample.  J Am Geriatr Soc. 2003;51(10):1451-1454.PubMedGoogle ScholarCrossref
35.
Borson  S, Scanlan  JM, Watanabe  J, Tu  S-P, Lessig  M.  Simplifying detection of cognitive impairment: comparison of the Mini-Cog and Mini-Mental State Examination in a multiethnic sample.  J Am Geriatr Soc. 2005;53(5):871-874.PubMedGoogle ScholarCrossref
36.
Borson  S, Scanlan  JM, Watanabe  J, Tu  SP, Lessig  M.  Improving identification of cognitive impairment in primary care.  Int J Geriatr Psychiatry. 2006;21(4):349-355.PubMedGoogle ScholarCrossref
37.
Carnero-Pardo  C, Cruz-Orduña  I, Espejo-Martínez  B, Martos-Aparicio  C, López-Alcalde  S, Olazarán  J.  Utility of the Mini-Cog for detection of cognitive impairment in primary care: data from two spanish studies.  Int J Alzheimers Dis. 2013;2013:285462.PubMedGoogle Scholar
38.
Fuchs  A, Wiese  B, Altiner  A, Wollny  A, Pentzek  M.  Cued recall and other cognitive tasks to facilitate dementia recognition in primary care.  J Am Geriatr Soc. 2012;60(1):130-135.PubMedGoogle ScholarCrossref
39.
Holsinger  T, Plassman  BL, Stechuchak  KM, Burke  JR, Coffman  CJ, Williams  JW  Jr.  Screening for cognitive impairment: comparing the performance of four instruments in primary care.  J Am Geriatr Soc. 2012;60(6):1027-1036.PubMedGoogle ScholarCrossref
40.
Kaufer  DI, Williams  CS, Braaten  AJ, Gill  K, Zimmerman  S, Sloane  PD.  Cognitive screening for dementia and mild cognitive impairment in assisted living: comparison of 3 tests.  J Am Med Dir Assoc. 2008;9(8):586-593.PubMedGoogle ScholarCrossref
41.
Milian  M, Leiherr  AM, Straten  G, Müller  S, Leyhe  T, Eschweiler  GW.  The Mini-Cog versus the Mini-Mental State Examination and the Clock Drawing Test in daily clinical practice: screening value in a German memory clinic.  Int Psychogeriatr. 2012;24(5):766-774.PubMedGoogle ScholarCrossref
42.
Alexopoulos  P, Ebert  A, Richter-Schmidinger  T,  et al.  Validation of the German revised Addenbrooke’s Cognitive Examination for detecting mild cognitive impairment, mild dementia in alzheimer’s disease and frontotemporal lobar degeneration.  Dement Geriatr Cogn Disord. 2010;29(5):448-456.PubMedGoogle ScholarCrossref
43.
Bastide  L, De Breucker  S, Van den Berge  M, Fery  P, Pepersack  T, Bier  JC.  The Addenbrooke’s Cognitive Examination Revised is as effective as the original to detect dementia in a French-speaking population.  Dement Geriatr Cogn Disord. 2012;34(5-6):337-343.PubMedGoogle ScholarCrossref
44.
Carvalho  VA, Barbosa  MT, Caramelli  P.  Brazilian version of the Addenbrooke Cognitive Examination–Revised in the diagnosis of mild Alzheimer disease.  Cogn Behav Neurol. 2010;23(1):8-13.PubMedGoogle ScholarCrossref
45.
dos Santos Kawata  KH, Hashimoto  R, Nishio  Y,  et al.  A validation study of the Japanese version of the Addenbrooke’s Cognitive Examination–Revised.  Dement Geriatr Cogn Dis Extra. 2012;2(1):29-37.PubMedGoogle ScholarCrossref
46.
Fang  R, Wang  G, Huang  Y,  et al.  Validation of the Chinese version of Addenbrooke’s Cognitive Examination–Revised for screening mild Alzheimer’s disease and mild cognitive impairment.  Dement Geriatr Cogn Disord. 2014;37(3-4):223-231.PubMedGoogle ScholarCrossref
47.
Konstantinopoulou  E, Kosmidis  MH, Ioannidis  P, Kiosseoglou  G, Karacostas  D, Taskos  N.  Adaptation of Addenbrooke’s Cognitive Examination–Revised for the Greek population.  Eur J Neurol. 2011;18(3):442-447.PubMedGoogle ScholarCrossref
48.
Kwak  YT, Yang  Y, Kim  GW.  Korean Addenbrooke’s Cognitive Examination Revised (K-ACER) for differential diagnosis of Alzheimer’s disease and subcortical ischemic vascular dementia.  Geriatr Gerontol Int. 2010;10(4):295-301.PubMedGoogle ScholarCrossref
49.
Pigliautile  M, Ricci  M, Mioshi  E,  et al.  Validation study of the Italian Addenbrooke’s Cognitive Examination Revised in a young-old and old-old population.  Dement Geriatr Cogn Disord. 2011;32(5):301-307.PubMedGoogle ScholarCrossref
50.
Terpening  Z, Cordato  NJ, Hepner  IJ, Lucas  SK, Lindley  RI.  Utility of the Addenbrooke’s Cognitive Examination–Revised for the diagnosis of dementia syndromes.  Australas J Ageing. 2011;30(3):113-118.PubMedGoogle ScholarCrossref
51.
Torralva  T, Roca  M, Gleichgerrcht  E, Bonifacio  A, Raimondi  C, Manes  F.  Validation of the Spanish version of the Addenbrooke’s Cognitive Examination–Revised (ACE-R).  Neurologia. 2011;26(6):351-356.PubMedGoogle ScholarCrossref
52.
Wong  L, Chan  C, Leung  J,  et al.  A validation study of the Chinese-Cantonese Addenbrooke’s Cognitive Examination Revised (C-ACER).  Neuropsychiatr Dis Treat. 2013;9:731-737.PubMedGoogle Scholar
53.
Dalrymple-Alford  JC, MacAskill  MR, Nakas  CT,  et al.  The MoCA: well-suited screen for cognitive impairment in Parkinson disease.  Neurology. 2010;75(19):1717-1725.PubMedGoogle ScholarCrossref
54.
Dong  Y, Lee  WY, Basri  NA,  et al.  The Montreal Cognitive Assessment is superior to the Mini-Mental State Examination in detecting patients at higher risk of dementia.  Int Psychogeriatr. 2012;24(11):1749-1755.PubMedGoogle ScholarCrossref
55.
Hu  JB, Zhou  WH, Hu  SH,  et al.  Cross-cultural difference and validation of the Chinese version of Montreal Cognitive Assessment in older adults residing in Eastern China: preliminary findings.  Arch Gerontol Geriatr. 2013;56(1):38-43.PubMedGoogle ScholarCrossref
56.
Larner  AJ.  Screening utility of the Montreal Cognitive Assessment (MoCA): in place of—or as well as—the MMSE?  Int Psychogeriatr. 2012;24(3):391-396.PubMedGoogle ScholarCrossref
57.
Cummings-Vaughn  LA, Chavakula  NN, Malmstrom  TK, Tumosa  N, Morley  JE, Cruz-Oliver  DM.  Veterans Affairs Saint Louis University Mental Status Examination compared with the Montreal Cognitive Assessment and the Short Test of Mental Status.  J Am Geriatr Soc. 2014;62(7):1341-1346.PubMedGoogle ScholarCrossref
58.
Luis  CA, Keegan  AP, Mullan  M.  Cross validation of the Montreal Cognitive Assessment in community dwelling older adults residing in the Southeastern US.  Int J Geriatr Psychiatry. 2009;24(2):197-201.PubMedGoogle ScholarCrossref
59.
Martinelli  JE, Cecato  JF, Bartholomeu  D, Montiel  JM.  Comparison of the diagnostic accuracy of neuropsychological tests in differentiating Alzheimer’s disease from mild cognitive impairment: can the Montreal Cognitive Assessment be better than the Cambridge Cognitive Examination?  Dement Geriatr Cogn Dis Extra. 2014;4(2):113-121.PubMedGoogle ScholarCrossref
60.
Smith  T, Gildeh  N, Holmes  C.  The Montreal Cognitive Assessment: validity and utility in a memory clinic setting.  Can J Psychiatry. 2007;52(5):329-332.PubMedGoogle Scholar
61.
Mitchell  AJ.  A meta-analysis of the accuracy of the Mini-Mental State Examination in the detection of dementia and mild cognitive impairment.  J Psychiatr Res. 2009;43(4):411-431.PubMedGoogle ScholarCrossref
62.
Mitchell  AJ, Malladi  S.  Screening and case finding tools for the detection of dementia, part I: evidence-based meta-analysis of multidomain tests.  Am J Geriatr Psychiatry. 2010;18(9):759-782.PubMedGoogle ScholarCrossref
63.
Mitchell  AJ, Malladi  S.  Screening and case-finding tools for the detection of dementia, part II: evidence-based meta-analysis of single-domain tests.  Am J Geriatr Psychiatry. 2010;18(9):783-800.PubMedGoogle ScholarCrossref
64.
Diamond  GA, Forrester  JS, Hirsch  M,  et al.  Application of conditional probability analysis to the clinical diagnosis of coronary artery disease.  J Clin Invest. 1980;65(5):1210-1221.PubMedGoogle ScholarCrossref