Original Investigation
March 17, 2015

Diagnostic Concordance Among Pathologists Interpreting Breast Biopsy Specimens

Author Affiliations
  • 1Department of Medicine, University of Washington School of Medicine, Seattle
  • 2Program in Biostatistics and Biomathematics, Fred Hutchinson Cancer Research Center, Seattle, Washington
  • 3Department of Family Medicine, Oregon Health and Science University, Portland
  • 4Department of Family Medicine, University of Vermont, Vineyard Haven, Massachusetts
  • 5Department of Community and Family Medicine, The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Norris Cotton Cancer Center, Lebanon, New Hampshire
  • 6Department of Medicine, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
  • 7Providence Cancer Center, Providence Health and Services Oregon, Portland
  • 8Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland
  • 9Department of Clinical Epidemiology and Medicine, Oregon Health and Science University, Portland
  • 10Department of Pathology, Stanford University School of Medicine, Stanford, California
  • 11Department of Pathology, Beth Israel Deaconess Medical Center, Boston, Massachusetts
  • 12Harvard Medical School, Boston, Massachusetts
  • 13Department of Laboratory Medicine and the Keenan Research Centre of the Li Ka Shing Knowledge Institute, Toronto, Ontario, Canada
  • 14St Michael’s Hospital and the University of Toronto, Ontario, Canada
  • 15Department of Pathology and University of Vermont Cancer Center, University of Vermont, Burlington
JAMA. 2015;313(11):1122-1132. doi:10.1001/jama.2015.1405
Abstract

Importance  A breast pathology diagnosis provides the basis for clinical treatment and management decisions; however, its accuracy is inadequately understood.

Objectives  To quantify the magnitude of diagnostic disagreement among pathologists compared with a consensus panel reference diagnosis and to evaluate associated patient and pathologist characteristics.

Design, Setting, and Participants  Study of pathologists who interpret breast biopsies in clinical practices in 8 US states.

Exposures  Between November 2011 and May 2014, each participant independently interpreted a test set of 60 breast biopsy slides drawn from 240 total cases (1 slide per case); the 240 cases comprised 23 invasive breast cancers, 73 ductal carcinomas in situ (DCIS), 72 with atypical hyperplasia (atypia), and 72 benign without atypia. Participants were blinded to the interpretations of other study pathologists and consensus panel members. Among the 3 consensus panel members, unanimous agreement of their independent diagnoses was 75%, and concordance with the consensus-derived reference diagnoses was 90.3%.
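
As a quick arithmetic check on the reported composition (an illustrative sketch, not the study's code; the dictionary below is a hypothetical structure):

```python
# Illustrative sanity check on the reported case composition (not study code).
cases = {"invasive carcinoma": 23, "DCIS": 73, "atypia": 72, "benign without atypia": 72}
assert sum(cases.values()) == 240  # 240 total cases, as stated
assert 240 % 60 == 0               # divides evenly into test sets of 60 slides
```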

Main Outcomes and Measures  The proportions of diagnoses overinterpreted and underinterpreted relative to the consensus-derived reference diagnoses were assessed.
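
As a minimal sketch of this outcome definition (not the study's code), the four diagnostic categories can be treated as an ordinal severity scale, with a participant's diagnosis above the reference counted as overinterpretation and below it as underinterpretation; the function and category labels below are illustrative:

```python
# Hypothetical sketch: classify one interpretation against the consensus
# reference, assuming the 4 categories form an ordinal severity scale.
SEVERITY = {
    "benign without atypia": 0,
    "atypia": 1,
    "DCIS": 2,
    "invasive carcinoma": 3,
}

def classify(participant: str, reference: str) -> str:
    """Return 'concordant', 'overinterpreted', or 'underinterpreted'."""
    diff = SEVERITY[participant] - SEVERITY[reference]
    if diff == 0:
        return "concordant"
    return "overinterpreted" if diff > 0 else "underinterpreted"

# Example: calling atypia on a reference-benign case is overinterpretation.
assert classify("atypia", "benign without atypia") == "overinterpreted"
```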

Results  Sixty-five percent of invited, responding pathologists were eligible and consented to participate. Of these, 91% (N = 115) completed the study, providing 6900 individual case diagnoses. Compared with the consensus-derived reference diagnosis, the overall concordance rate of diagnostic interpretations of participating pathologists was 75.3% (95% CI, 73.4%-77.0%; 5194 of 6900 interpretations). Among invasive carcinoma cases (663 interpretations), 96% (95% CI, 94%-97%) were concordant, and 4% (95% CI, 3%-6%) were underinterpreted; among DCIS cases (2097 interpretations), 84% (95% CI, 82%-86%) were concordant, 3% (95% CI, 2%-4%) were overinterpreted, and 13% (95% CI, 12%-15%) were underinterpreted; among atypia cases (2070 interpretations), 48% (95% CI, 44%-52%) were concordant, 17% (95% CI, 15%-21%) were overinterpreted, and 35% (95% CI, 31%-39%) were underinterpreted; and among benign cases without atypia (2070 interpretations), 87% (95% CI, 85%-89%) were concordant and 13% (95% CI, 11%-15%) were overinterpreted. Disagreement with the reference diagnosis was statistically significantly higher among biopsies from women with higher (n = 122) vs lower (n = 118) breast density on prior mammograms (overall concordance rate, 73% [95% CI, 71%-75%] for higher vs 77% [95% CI, 75%-80%] for lower, P < .001), and among pathologists who interpreted lower weekly case volumes (P < .001) or worked in smaller practices (P = .034) or nonacademic settings (P = .007).
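
For orientation, the headline figure can be recomputed from the reported counts (5194 of 6900). The sketch below is illustrative only; it uses a naive Wald binomial interval, which comes out narrower than the published 73.4%-77.0%, consistent with an analysis that accounts for correlation among interpretations by the same pathologist (the abstract does not specify the method):

```python
from math import sqrt

# Overall concordance from the reported counts (5194 of 6900 interpretations).
concordant, total = 5194, 6900
p = concordant / total
print(f"overall concordance: {p:.1%}")  # 75.3%

# Naive Wald 95% interval, for orientation only; the published interval
# (73.4%-77.0%) is wider than this unadjusted one.
se = sqrt(p * (1 - p) / total)
print(f"naive 95% CI: {p - 1.96 * se:.1%} to {p + 1.96 * se:.1%}")  # ~74.3% to 76.3%
```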

Conclusions and Relevance  In this study of pathologists, in which diagnostic interpretation was based on a single breast biopsy slide, overall agreement between the individual pathologists’ interpretations and the expert consensus–derived reference diagnoses was 75.3%, with the highest level of concordance for invasive carcinoma and lower levels of concordance for DCIS and atypia. Further research is needed to understand the relationship of these findings with patient management.
