Special Communication
January 23/30, 2018

Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement

Author Affiliations
  • 1Department of Radiology, University of Ottawa, Ottawa, Ontario, Canada
  • 2Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  • 3Lady Davis Institute of the Jewish General Hospital, Montreal, Quebec, Canada
  • 4Department of Psychiatry, McGill University, Montreal, Quebec, Canada
  • 5University of Ottawa Department of Radiology, Ottawa, Ontario, Canada
  • 6Department of Clinical Epidemiology, Biostatistics and Bioinformatics, University of Amsterdam, Academic Medical Center, Amsterdam, the Netherlands
JAMA. 2018;319(4):388-396. doi:10.1001/jama.2017.19163
Key Points

Question  What items should be reported to allow readers to evaluate the validity and applicability and to enhance the replicability of systematic reviews of diagnostic test accuracy studies?

Findings  This diagnostic test accuracy guideline is an extension of the original Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Two PRISMA items have been omitted, 2 were added, and 17 were modified to reflect specific or optimal contemporary systematic review methods of diagnostic test accuracy studies.

Meaning  The guideline checklist can facilitate transparent reporting of reviews of diagnostic test accuracy studies, may assist evaluations of validity and applicability, may enhance replicability of reviews, and may make the results more useful for clinicians, journal editors, reviewers, guideline authors, and funders.

Abstract

Importance  Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy.

Objective  To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews.

Design  Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group.

Findings  The systematic review produced 64 items, and the Delphi process provided feedback on 7 additional proposed items (1 item was later split into 2 items), for a total of 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted.
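The item counts reported in the Findings can be checked with a short script; all figures below are taken directly from the text above:

```python
# Candidate items considered for the PRISMA-DTA checklist.
from_systematic_review = 64   # items identified by the systematic review
proposed_in_delphi = 7        # additional proposed items (1 later split into 2)
candidates = from_systematic_review + proposed_in_delphi
assert candidates == 71       # "71 potentially relevant items for consideration"

# Composition of the final checklist relative to the original
# 27-item PRISMA checklist.
unchanged, modified, added, omitted = 8, 17, 2, 2
assert unchanged + modified + added == 27    # final 27-item PRISMA-DTA checklist
assert unchanged + modified + omitted == 27  # original 27-item PRISMA checklist
```

The two assertions confirm that the reported breakdown is internally consistent: the 2 omitted items are replaced by 2 added items, keeping the checklist at 27 items.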

Conclusions and Relevance  The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.

×