Figure.  Distribution of Practitioner Assessments of Probability of Disease Before Testing and After Positive or Negative Test Results for 4 Testing Questions Representing Scenarios Commonly Encountered in Primary Care

A, Scenario: a previously healthy 35-year-old woman who smokes tobacco presents with 5 days of fatigue, productive cough, worsening shortness of breath, temperatures to 38.9°C, and decreased breath sounds in the lower right field. She has a heart rate of 105 beats/min, but vital signs are otherwise normal. B, Scenario: a 45-year-old woman comes in for an annual visit. She has no specific risk factors or symptoms for breast cancer. C, Scenario: a 43-year-old premenopausal woman presents with atypical chest pain and normal ECG results. She has no risk factors and has normal vital signs and examination findings. D, Scenario: a 65-year-old man is seen for osteoarthritis. He has noted foul-smelling urine and no pain or difficulty with urination. A urine dipstick shows trace blood. ECG indicates electrocardiography.

Table 1.  Survey Responses
Table 2.  Variables Associated With Practice Among Enrolled Practitioners
Table 3.  Estimates of Probability of Disease Before Testing and After Positive or Negative Test Results for 5 Testing Questions
Table 4.  Imputed Positive and Negative Likelihood Ratios Calculated for Each Practitioner Based on Their Pretest and Posttest Positive or Negative Responses

3 Comments for this article
    More evidence of the need for more evidence-based practice
    William Phillips, MD, MPH | University of Washington
Morgan (1) and team document the inaccuracy of physician estimates of the probability of common diseases, both before and after diagnostic testing. Overestimates were common and often large. Manrai (2) discusses the importance of these errors for clinical decision-making and points to the need to improve clinician knowledge of base rates and skills in using pre-test and post-test probabilities.

Similar deficiencies in physician estimates of the rates of complications of common diagnostic and therapeutic clinical procedures were documented decades ago (3). Primary care physicians and general surgeons made reasonable estimates only about a quarter of the time, while making overestimates, underestimates, and admissions of ignorance at about the same rates. As with the Morgan study, errors were common and large. Many were several orders of magnitude above or below the rates documented by the best available evidence. Physicians were often unable to make reasonable estimates of the rates of complications of procedures they commonly ordered or performed.

Innumeracy is a big enough problem among patients and policymakers; we do not need clinicians compounding it. Patient care, informed consent, and shared decision-making all require knowledge of the evidence and skills in evidence-based practice.

    1. Morgan DJ, Pineles L, Owczarzak J, et al. Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing. JAMA Intern Med. Published online April 05, 2021. doi:10.1001/jamainternmed.2021.0269

    2. Manrai AK. Physicians, Probabilities, and Populations—Estimating the Likelihood of Disease for Common Clinical Scenarios. JAMA Intern Med. Published online April 05, 2021. doi:10.1001/jamainternmed.2021.0240

    3. Kronlund SF, Phillips WR. Physician knowledge of risks of surgical and invasive diagnostic procedures. Western Journal of Medicine 1985;142:565-569.
    CONFLICT OF INTEREST: None Reported
    Accuracy of Biochemical Testing
    Eleftherios Diamandis, MD, PhD | Mount Sinai Hospital
    In my commentary I will use the following abbreviations: TP, true positive; TN, true negative; FP, false positive; FN, false negative; PPV, positive predictive value; NPV, negative predictive value.
The finding of Morgan et al (1) that practitioners overestimate the probability of diagnosis before and after testing is interesting but not surprising. In my 30 years of teaching biochemists and medical students the principles of laboratory testing to identify or exclude disease, I have observed that students experience difficulties with some concepts. For example, they easily understand the concepts of sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP) and consider values of >90% excellent for these parameters. But they get confused when you teach them that when testing to rule in a disease, the critical parameter is the PPV = TP/(TP+FP), and when testing to rule out a disease, the critical parameter is the NPV = TN/(TN+FN). PPV is the chance that somebody has the disease if the test is positive, and NPV is the chance that somebody does not have the disease if the test is negative. Students are usually surprised when you teach them that PPV and NPV depend on disease prevalence. For example, they are astonished to hear that the PPV of a test may be very low (e.g., <2%) if the disease is rare (e.g., <0.1% prevalence, such as in cancer screening), even if the test has excellent sensitivity and specificity (e.g., >90%). I found that the best way to illustrate the point is to present them with tables of PPV and NPV at given disease prevalences, sensitivities, and specificities. We prepared one such table that applies to COVID-19 testing (2). In one example with a disease prevalence of 1%, a test with 90%/95% sensitivity/specificity has a PPV of only 15%, mostly due to the high number of FP.
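A minimal Python sketch of such a table, assuming the 90%/95% sensitivity/specificity example above, reproduces the 15% figure:

```python
def ppv_npv(prevalence: float, sens: float = 0.90, spec: float = 0.95):
    """PPV and NPV at a given disease prevalence (all values as fractions)."""
    tp, fn = prevalence * sens, prevalence * (1 - sens)
    fp, tn = (1 - prevalence) * (1 - spec), (1 - prevalence) * spec
    return tp / (tp + fp), tn / (tn + fn)

# At 1% prevalence the PPV is only about 15%, despite 90%/95%
# sensitivity/specificity; the NPV stays high because the disease is rare.
for prev in (0.001, 0.01, 0.10, 0.30):
    ppv, npv = ppv_npv(prev)
    print(f"prevalence {prev:6.1%}: PPV {ppv:5.1%}, NPV {npv:6.2%}")
```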
Students are also perplexed when we teach them that tests with a PPV of, let's say, 2% may have clinical utility. An example is maternal screening for fetal abnormalities using biochemical markers. The prevalence of Down syndrome (DS) in middle-aged pregnant women is approximately 1:200. A biochemical test that operates at high sensitivity, to catch >99% of abnormal fetuses (you do not want to miss any of those), may have a PPV of only 2%. In such a case, the chance of DS in the test-positive population is 1:50, thus potentially saving 150 amniocenteses (an invasive test) per 200 women screened.
We should continue teaching medical students the principles of laboratory testing with varied examples, to avoid testing abuse and misinterpretation, to promote optimal test utilization, and to save resources. Students should realize that laboratory tests with seemingly poor performance characteristics (sensitivity, specificity) can have appropriate clinical utility, while tests with spectacular characteristics may fail because of low disease prevalence (rare diseases).


    References


    1. Morgan DJ, Pineles L, Owczarzak J, et al. Accuracy of practitioner estimates of probability of diagnosis before and after testing. JAMA Intern Med. Published online April 05, 2021. doi:10.1001/jamainternmed.2021.0269
    2. Prassas I, Fiala C, Diamandis EP. Assay requirements for COVID-19 testing: serology vs. rapid antigen tests. Clin Chem Lab Med. 2021 Mar 18. doi: 10.1515/cclm-2021-0234. Epub ahead of print. PMID: 33730769.
    CONFLICT OF INTEREST: None Reported
    Atypical vs non-anginal chest pain
    Gary Martin, M.D. | Northwestern University
Morgan and coauthors' work is near and dear to my heart. As someone who reviews related medical decision-making principles monthly with students, I applaud their efforts and agree with their overall findings. One clinically significant point I would flag, though, is their cardiac ischemia case of a 43-year-old woman with "atypical chest pain". A useful nuance is the distinction between atypical chest pain, a term meant to be used when a patient has about half of the features of classic/typical angina, and non-anginal chest pain, which has essentially none of those features. From my read of Diamond and Forrester as well as the European CAD consortium the authors refer to, their assigned estimates of pretest probability are more in line with non-anginal chest pain than with atypical chest pain. I am assuming this is for recurrent outpatient chest pain and not a one-time emergency department presentation, which should use different data sources. Again, I appreciate their wonderful work supporting the need for more education in this field.
    CONFLICT OF INTEREST: None Reported
    Original Investigation
    Less Is More
    April 5, 2021

    Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing

    Author Affiliations
    • 1Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore
    • 2Veterans Affairs (VA) Maryland Healthcare System, Baltimore
    • 3Department of Health, Behavior, and Society, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
    • 4Adult and Child Consortium of Health Outcomes Research and Delivery Science, University of Colorado School of Medicine, Aurora
    • 5Division of Cardiology, University of Colorado School of Medicine, Aurora
    • 6Center of Innovation for Veteran-Centered and Value-Driven Care, VA Denver, Denver, Colorado
    • 7Division of General Internal Medicine & Geriatrics, Department of Medicine, Oregon Health & Science University, Portland
    • 8Department of Medicine, Dell Medical School, the University of Texas at Austin, Austin
    • 9Department of Medicine, South Texas Veterans Health Care System, San Antonio
    • 10Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison
    • 11Department of Medicine, Penn State College of Medicine, Hershey, Pennsylvania
    • 12Department of Public Health Sciences, Penn State College of Medicine, Hershey, Pennsylvania
    • 13Department of Medicine, University of Maryland School of Medicine, Baltimore
    • 14Department of Informatics, Genomic Medicine Institute, Geisinger, Danville, Pennsylvania
    • 15Division of Infectious Diseases, New York University Grossman School of Medicine, New York
    • 16Division of General Internal Medicine, Memorial Sloan Kettering Cancer Center, New York, New York
    JAMA Intern Med. Published online April 5, 2021. doi:10.1001/jamainternmed.2021.0269
    Key Points

    Question  Do practitioners understand the probability of common clinical diagnoses?

    Findings  In this survey study of 553 practitioners performing primary care, respondents overestimated the probability of diagnosis before and after testing. This posttest overestimation was associated with consistent overestimates of pretest probability and overestimates of disease after specific diagnostic test results.

    Meaning  These findings suggest that many practitioners are unaccustomed to using probability in diagnosis and clinical practice. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.

    Abstract

    Importance  Accurate diagnosis is essential to proper patient care.

    Objective  To explore practitioner understanding of diagnostic reasoning.

    Design, Setting, and Participants  In this survey study, 723 practitioners at outpatient clinics in 8 US states were asked to estimate the probability of disease for 4 scenarios common in primary care (pneumonia, cardiac ischemia, breast cancer screening, and urinary tract infection) and the association of positive and negative test results with disease probability from June 1, 2018, to November 26, 2019. Of these practitioners, 585 responded to the survey, and 553 answered all of the questions. An expert panel developed the survey and determined correct responses based on literature review.

Results  A total of 553 (290 resident physicians, 202 attending physicians, and 61 nurse practitioners and physician assistants) of 723 practitioners (76.5%) fully completed the survey (median age, 32 years; interquartile range, 29-44 years; 293 female [53.0%]; 296 [53.5%] White). Pretest probability was overestimated in all scenarios. Probabilities of disease after positive results were overestimated as follows: pneumonia after positive radiology results, 95% (evidence range, 46%-65%; comparison P < .001); breast cancer after positive mammography results, 50% (evidence range, 3%-9%; P < .001); cardiac ischemia after positive stress test result, 70% (evidence range, 2%-11%; P < .001); and urinary tract infection after positive urine culture result, 80% (evidence range, 0%-8.3%; P < .001). Overestimates of probability of disease with negative results were also observed as follows: pneumonia after negative radiology results, 50% (evidence range, 10%-19%; P < .001); breast cancer after negative mammography results, 5% (evidence range, <0.05%; P < .001); cardiac ischemia after negative stress test result, 5% (evidence range, 0.43%-2.5%; P < .001); and urinary tract infection after negative urine culture result, 5% (evidence range, 0%-0.11%; P < .001). Probability adjustments in response to test results varied from accurate to overestimates of risk by type of test (imputed median positive and negative likelihood ratios [LRs] for practitioners for chest radiography for pneumonia: positive LR, 4.8; evidence, 2.6; negative LR, 0.3; evidence, 0.3; mammography for breast cancer: positive LR, 44.3; evidence range, 13.0-33.0; negative LR, 1.0; evidence range, 0.05-0.24; exercise stress test for cardiac ischemia: positive LR, 21.0; evidence range, 2.0-2.7; negative LR, 0.6; evidence range, 0.5-0.6; urine culture for urinary tract infection: positive LR, 9.0; evidence, 9.0; negative LR, 0.1; evidence, 0.1).

    Conclusions and Relevance  This survey study suggests that for common diseases and tests, practitioners overestimate the probability of disease before and after testing. Pretest probability was overestimated in all scenarios, whereas adjustment in probability after a positive or negative result varied by test. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.

    Introduction

    Diagnosis of disease is complex and taught using estimated probabilities based on the patient’s history, physical examination findings, and diagnostic test results.1-3 Correct ordering and interpretation of tests are increasingly important given the increase in the number and complexity of tests, with more than 14 billion tests performed yearly in the US alone.4 Although practitioners are taught to estimate pretest probability and to apply the sensitivity and specificity of a test to interpret a positive or negative result, data suggest that historically most practitioners perform poorly on assessments of these skills and do not use these approaches in day-to-day practice.5-11

Test ordering and interpretation are taught briefly in medical schools,12 with curricular evaluation often limited to self-assessment of skills.13 The impact of such education on clinical practice is unclear. Estimating the probability of disease and deciding to test may be influenced by training, experience, and personality.8,14 Medical decisions, like other human decisions, may not be rational and are prone to errors associated with poor knowledge of the base rate of disease and other errors in the use of probability.14 Test performance and interpretation have increasingly become a point of discussion in medicine and for the general public during the COVID-19 pandemic.15 Erroneous estimates of disease probability likely affect practitioner treatment decisions.3,16 Lack of accurate diagnostic reasoning may lead to overdiagnosis and overtreatment.17

    Few studies have systematically examined how practitioners interpret diagnostic test results within the context of actual clinical scenarios. We performed a multicenter survey of practitioners in primary care practice to explore practitioner understanding of the probability of disease before and after test results for common clinical scenarios.

    Methods
    Survey

    We developed a survey to assess practitioner test understanding and the process of making a diagnosis using probability as well as actions taken by practitioners in similar scenarios in their practice. The survey also included items regarding basic demographic characteristics, educational background, and practice setting. Institutional review board approval was obtained at each of the 3 coordinating sites (Baltimore, Maryland; San Antonio, Texas; and Portland, Oregon). Verbal informed consent with a waiver of documentation was approved at all sites. The study followed the American Association for Public Opinion Research (AAPOR) reporting guideline.

    A draft survey was developed by primary investigators (D.J.M., L.L., D.K., D.F., L.S., J.P.B., A.F., S.W., C.P., J.O., and L.P.) based in part on previous surveys of risk understanding.5,8-11,18 This survey was reviewed by an expert panel of practitioners with different areas of expertise, practicing in community and academic settings (D.J.M., L.L., D.F., A.F., S.W., and D.K.), a qualitative research expert (J.O.), an epidemiologist (J.P.B.) and a psychologist (L.S.) with expertise in survey design, and a senior biostatistician (L.M.). The survey was further revised by the expert panel during an in-person meeting and 2 conference calls. A pilot test of the survey was conducted with 10 practitioners for comprehension and interpretation of questions, and minor language adjustments were made.

    Practitioner Risk Understanding

    The survey assessed risk understanding for common testing clinical decisions encountered by primary care practitioners in routine scenarios similar to previous small surveys.8-11,18 Individual testing questions pertained to mammograms for breast cancer, stress testing for cardiac ischemia, chest radiography for pneumonia, and urine cultures for urinary tract infection (UTI) (eAppendix 1 in the Supplement).

Practitioners were presented with a clinical scenario and asked to estimate the pretest probability of disease and the posttest probabilities after both positive and negative test results. Each scenario was created for a general situation but included the essential details needed to calculate true risk for patients (eg, age and absence of any risk factors for breast cancer in the mammogram screening questions). The primary outcome of the testing questions was accurate identification of the probability that a patient had disease after positive or negative results. Questions were designed to assess whether errors in test interpretation were associated with poor pretest estimates or with inaccurate updating of probability after testing. Additional questions provided the sensitivity and specificity of a theoretical test and asked participants to calculate positive and negative predictive values at particular levels of disease prevalence.

    To assess the accuracy of participant responses, we used a hierarchical method to identify the scientific evidence for pretest probability, sensitivity, and specificity from the literature, which was completed after survey finalization. We first reviewed high-quality recent systematic reviews and meta-analyses. If only older systematic reviews and meta-analyses were available, with newer high-impact studies after publication, we considered data from both (attempting to understand the most accurate numbers for current technology and practice). If no systematic reviews or meta-analyses were available, we used data from studies commonly cited in recent guidelines, creating weighted means by consensus. The expert panel of physicians overseeing the study was presented with the best evidence identified, had a comment and question period, and determined consensus evidence-based answers presented in the Results section (eAppendix 2 in the Supplement).

    Enrollment Procedure

    People in leadership positions for group practices or residency programs were contacted and informed of the study. Investigators sought permission to give a short presentation or email introduction that described the study during a group practice meeting. Individual practitioners were then approached by a coordinator and/or physician investigator to request participation. The survey was offered to 723 primary care physicians, nurse practitioners, and physician assistants practicing in Delaware, Maryland, Oregon, Pennsylvania, Texas, Virginia, Washington, and the District of Columbia (Table 1). The survey was administered in paper format. The coordinator generally remained at the clinic, office, or meeting location until the practitioner had completed the survey. If practitioners requested to complete the survey at a later date, they were provided with an addressed, stamped envelope and could return the survey by mail, email, or clinic drop-off. Respondents were provided with a US $50 gift card for completion, if permitted by their employer.

    Practitioners who initially agreed to participate but did not return the survey within 2 weeks were contacted by study staff via email and/or in person up to 5 times during 3 months. Practitioners who did not complete the survey after these subsequent contacts were considered nonparticipants. Practitioners who declined to participate at initial enrollment or after reminders were asked to provide a reason for not participating from a standardized list to assess for selection bias. Of the contacted practitioners, 585 responded to the survey, and 553 answered all the questions.

    Imputed Likelihood Ratios

    To understand the adjustment in probability of disease after a positive or negative test result, we calculated an imputed likelihood ratio.19 By comparing estimated probability of disease before and after testing, we could impute the likelihood ratio that was consciously or unconsciously applied to modify probabilities. The imputed likelihood ratio was calculated by dividing posttest odds by pretest odds, where odds were calculated as probability divided by 1 minus probability.19 Responses of 0% or 100% were modified to 0.1% and 99.9% to allow for calculation of a likelihood ratio. Likelihood ratios were estimated from the literature as described above by the expert panel of physicians (eAppendix 2 in the Supplement).
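For illustration, the imputation can be expressed in a few lines of Python. This is a minimal sketch, not the study's analysis code (which used SAS and R); the example inputs are the study's median cardiac ischemia estimates.

```python
def imputed_likelihood_ratio(pretest: float, posttest: float) -> float:
    """Impute the likelihood ratio implied by a practitioner's pretest and
    posttest probability estimates (entered as fractions between 0 and 1)."""
    # Responses of 0% or 100% are modified to 0.1% and 99.9%, as in the
    # study, so that the odds remain finite.
    clamp = lambda p: min(max(p, 0.001), 0.999)
    odds = lambda p: p / (1 - p)
    return odds(clamp(posttest)) / odds(clamp(pretest))

# Median cardiac ischemia estimates: pretest 10%, 70% after a positive
# stress test result; this reproduces the imputed positive likelihood
# ratio of 21.0 reported in Table 4.
print(round(imputed_likelihood_ratio(0.10, 0.70), 1))  # 21.0
```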

    Statistical Analysis

Survey responses were entered into a REDCap (Research Electronic Data Capture) database with double data entry. A sample size of 500 was planned based on the desire for generalizable results across enrollment sites; the target was surpassed while outstanding surveys were being collected. Comparison of those who completed all key survey questions with those who did not was performed with the χ2 test. To assess the statistical significance of differences between respondent estimates of diagnostic probabilities and the probabilities determined from scientific evidence, we used Wilcoxon signed-rank tests. To display the range of estimates of probability, we used density plots created with R software (ggplot2 package; R Foundation for Statistical Computing). SAS statistical software, version 9.4 (SAS Institute Inc), was used for descriptive statistics and all other analyses. A 2-sided P < .05 was considered statistically significant.
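For readers who wish to reproduce the core comparison, a minimal sketch follows, in Python with SciPy rather than the SAS used in the study; the estimates and evidence value below are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical pretest estimates (%) from 10 respondents for one scenario.
estimates = np.array([80, 75, 90, 85, 60, 95, 70, 80, 85, 90], dtype=float)
evidence = 33.5  # hypothetical evidence-based probability (%)

# One-sample Wilcoxon signed-rank test of whether respondent estimates
# are centered on the evidence-based value.
stat, p_value = wilcoxon(estimates - evidence)
print(f"W = {stat:.1f}, P = {p_value:.4f}")
```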

    Results
    Participant Demographics

A total of 553 of 723 practitioners (76.5%) fully completed the survey (median age, 32 years; interquartile range [IQR], 29-44 years; 293 female [53.0%]; 296 [53.5%] White) from June 1, 2018, to November 26, 2019 (Table 2). A total of 492 of the 553 respondents (89.0%) had MD or DO degrees, and 290 (52.4%) were in residency. The survey required a median of 20 minutes to complete (IQR, 15-25 minutes).

We compared the 32 respondents who did not complete all necessary questions with the final cohort of 553 practitioners with complete responses. We found that those not completing the survey were more likely to be female (26 [81.3%] noncompleters vs 293 [53.0%] final cohort, P < .001), to have been in practice more than 10 years (15 [46.9%] noncompleters vs 145 [26.2%] final cohort, P = .01), to be nonresidents (27 [84.4%] noncompleters vs 263 [47.6%] final cohort, P < .001), or to be nurse practitioners or physician assistants (13 [40.6%] noncompleters vs 61 [11.0%] final cohort, P < .001).

    Estimates of Disease Probability

    Estimates of probability of disease were consistently higher than scientific evidence (Figure). We also broke down answers by type of practitioner (resident physician, attending physician, and nurse practitioner or physician assistant) (Table 3). All types of practitioners overestimated probability of disease before and after testing.

    For pneumonia, the median clinical scenario–based estimate of pretest probability by participants was 80% (IQR, 75%-90%; evidence range, 25%-42%; P < .001). Median estimated probability of pneumonia was 95% (IQR, 90%-100%; evidence range, 46%-65%; P < .001) after a positive radiology result and 50% (IQR, 30%-80%; evidence range, 10%-19%; P < .001) after a negative radiology result. After a positive radiology result, 551 practitioners (99.6%) would treat with antibiotics, whereas 401 (72.5%) would treat with antibiotics after a negative radiology result.

For breast cancer, the median clinical scenario–based estimate of pretest probability by participants was 5% (IQR, 1%-10%; evidence range, 0.2%-0.3%; P < .001). Median estimated probability of breast cancer was 50% (IQR, 30%-80%; evidence range, 3%-9%; P < .001) after a positive mammography result and 5% (IQR, 1%-10%; evidence range, <0.05%; P < .001) after a negative mammography result.

    For cardiac ischemia, the median clinical scenario–based estimate of pretest probability by participants was 10% (IQR, 5%-20%; evidence range, 1%-4.4%; P < .001). The median estimated probability of cardiac ischemia was 70% (IQR, 50%-90%; evidence range, 2%-11%; P < .001) after a positive exercise stress test result and 5% (IQR, 1%-10%; evidence range, 0.43%-2.5%; P < .001) after a negative exercise stress test result. After a positive test result, 432 (78.1%) would treat for cardiac ischemia.

    For UTI, the description was of asymptomatic bacteriuria. The median clinical scenario–based estimate of pretest probability by participants was 20% (IQR, 10%-50%; evidence range, 0%-1%; P < .001). The median estimated probability of a UTI was 80% (IQR, 30%-95%; evidence range, 0%-8.3%; P < .001) after a positive urine culture result and 5% (IQR, 0%-10%; evidence range, 0%-0.11%; P < .001) after a negative urine culture result. After a positive test result, 393 (71.1%) would treat with antibiotics. After a negative test result, 43 practitioners (7.8%) would treat with antibiotics.

    Scenarios requesting identical test interpretation based on hypothetical numbers revealed similar tendencies. For the question, “A test to detect a disease for which prevalence is 1 out of 1000 has a sensitivity of 100% and specificity of 95%. What is the chance that a person found to have a positive result actually has the disease?” the median answer was 95% (IQR, 95%-100%), whereas the correct answer was 2%. For the related question, “What is the chance that a person found to have a negative result actually has the disease?” the median answer was 5% (IQR, 0%-5%), whereas the correct answer was 0%.
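The correct answers follow directly from Bayes' theorem; the short sketch below makes the arithmetic explicit.

```python
prevalence, sensitivity, specificity = 0.001, 1.00, 0.95

# P(disease | positive result): true positives over all positives.
tp = prevalence * sensitivity
fp = (1 - prevalence) * (1 - specificity)
print(f"P(disease | positive) = {tp / (tp + fp):.1%}")  # 2.0%

# P(disease | negative result): with 100% sensitivity there are no
# false negatives, so a negative result rules out the disease.
fn = prevalence * (1 - sensitivity)
tn = (1 - prevalence) * specificity
print(f"P(disease | negative) = {fn / (fn + tn):.1%}")  # 0.0%
```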

    Imputed Likelihood Ratios

Imputed likelihood ratios were of variable accuracy across clinical scenarios. The most accurate were those for the impact of chest radiography for the diagnosis of pneumonia and urine culture for the diagnosis of UTI; the least accurate were those for negative mammography results for breast cancer and positive exercise stress test results for cardiac ischemia (imputed median positive and negative likelihood ratios for practitioners for chest radiography for pneumonia: positive likelihood ratio, 4.8; evidence, 2.6; negative likelihood ratio, 0.3; evidence, 0.3; those for mammography for breast cancer: positive likelihood ratio, 44.3; evidence range, 13.0-33.0; negative likelihood ratio, 1.0; evidence range, 0.05-0.24; those for exercise stress test for cardiac ischemia: positive likelihood ratio, 21.0; evidence range, 2.0-2.7; negative likelihood ratio, 0.6; evidence range, 0.5-0.6; those for urine culture for UTI: positive likelihood ratio, 9.0; evidence, 9.0; negative likelihood ratio, 0.1; evidence, 0.1) (Table 4). Estimates of probability and imputed likelihood ratios were similar between residents and primary care practitioners (Table 4).

    Discussion

In this survey study, in scenarios commonly encountered in primary care practice, practitioners overestimated the probability of disease by 2 to 10 times compared with the scientific evidence, both before and after testing. This result was mostly associated with overestimates of pretest probability, which were observed across all scenarios. Adjustments to probability in response to test results varied from accurate to overestimates of risk by type of test. Variation in accuracy across practitioner types was small compared with the magnitude of the difference between practitioners and the scientific evidence. Many practitioners reported that they would treat patients for diseases whose likelihood had been overestimated.

The most striking finding from this study was that practitioners consistently and substantially overestimated the likelihood of disease. Small studies with limited generalizability have had similar findings, often asking practitioners to perform one isolated aspect of diagnosis, such as interpreting a test result. However, past studies8-11 have not explored a range of questions or clarified estimates at different steps in the diagnostic pathway. The reasons for inaccurate estimates of probability are not clear, although anecdotes reported during the current study imply that practitioners often do not think in terms of probability. One participant stated that estimating probability of disease “isn’t how you do medicine.” This attitude is consistent with a previous study20 of diagnostic strategies that described an initial pattern-recognition phase of care, with only 10% of practitioners engaging in a secondary phase of probabilistic reasoning.

This study found that probability estimates were consistently biased toward overestimation, as has been seen in other contexts, such as expectations of high stock returns among investors.21 This overestimation is consistent with cognitive biases, including base rate neglect, anchoring bias, and confirmation bias.14 These biases drive overestimation because true base rates are usually lower than expected and anchoring tends to reflect memorable experiences, such as improbable events or missed diagnoses. Such cognitive biases have been associated with diagnostic errors that may arise from errors in estimating risk.5,22,23 Notably, practitioners in this survey were often residents or academic physicians, who typically practice with populations with a higher prevalence of disease. This experience may also have contributed to higher estimates of disease.

    Pretest probabilities were consistently overestimated for all questions, but overestimates were particularly apparent for the pneumonia and UTI scenarios. Estimates of pretest probability generally reflect clinical knowledge. Reasons for overestimates for these infectious diseases may relate to the fact that antibiotics are often appropriately given even when the likelihood of infection is moderate. In the UTI scenario, estimates of high pretest probability may reflect the evolution of the definition of asymptomatic bacteriuria as a separate entity from UTI.24

In contrast to past literature,8-10,19 practitioners accurately adjusted estimates of disease based on the results of some tests, as demonstrated by the imputed likelihood ratios. This adjustment could be artifactual because of an inability to adjust probability for tests that had high pretest estimates (ie, pneumonia and UTI). In other cases, practitioners markedly overestimated the probability of disease after testing, specifically after a positive or negative mammography result or a positive exercise stress test result. Practitioners are known to overestimate the chance of disease when asked for a theoretical estimate of the likelihood of disease after a positive test result when pretest probability is 1 in 1000.9,10 The current study included the identical question and found an identical response, with participants estimating the likelihood of disease at 95% when the correct answer was 2%.5,8-10,19 The findings regarding real-life examples are consistent with evidence from limited past studies,8-11 for example, physician interpretation of a positive mammography result in a typical woman as conveying an 81% probability of breast cancer.8

The assessment of test results in this study was simplified to positive or negative. This dichotomization reflects the literature on the sensitivity and specificity of testing.5,6 In clinical medicine, however, these tests present a range of results, from mildly positive, such as a well-circumscribed density on a mammogram, to strongly positive, such as inducible ischemia on a stress test or a spiculated mass on a mammogram. A more strongly positive or abnormal result would be less sensitive but more specific for disease. This study did not evaluate interpretation of more complex test results.

There are important implications of the finding of a gap between practitioner estimates and scientific estimates of the probability of disease. Practitioners who overestimate the probability of disease would be expected to use that overestimation when deciding whether to initiate therapy, which could lead to overuse of medications and procedures with associated patient harms. Practitioners in the study reported that they would initiate treatment based on estimates of disease, including 78.1% who would treat cardiac ischemia and 71.1% who would treat a UTI when a positive test result would place their patient at an 11% or lower chance of disease. These errors would similarly corrupt shared decision-making with patients, which relies on practitioner understanding and communication of the likelihood of various outcomes.25-27 Training in shared decision-making has focused on communication skills,28 not on understanding the probability of disease,29 but these findings suggest another important educational target.

More focus on diagnostic reasoning in medical education is important. The primary problem identified here, inaccurate pretest probability estimates, may be more amenable to intervention than the more commonly discussed bayesian adjustment of probability in response to test results.30 Pretest probability is commonly discussed in medical education, but a standard method for estimating it has not been described.30 Ideally, such estimates incorporate knowledge of disease prevalence and the predictive value of components of the history and physical examination, but for many conditions this information is difficult to find. The fact that estimates are so far from the scientific evidence identifies a pressing need for improvement. There are a limited number of well-characterized diseases with pretest probability calculators, notably cardiac ischemia.31,32 Although respondents in this study had no access to external aids while completing the survey, pretest estimates of cardiac ischemia were more accurate than those for other clinical scenarios, implying that access to these calculators may improve knowledge and affect clinical reasoning. There is also a need to improve bayesian adjustment of probability from test results, which requires readily accessible references for clinical sensitivity and specificity. Computer visual decision aids that guide estimates of probability may also have a role.5,33 Alternative approaches, such as natural frequencies and naturalistic decision-making or use of heuristics, may improve decisions.34

    Limitations

This study has limitations. One is that the small fraction of respondents who did not complete the survey were more likely to be female, to be nurse practitioners or physician assistants, or to have been in practice for more than 10 years. However, the overall response rate was high. The format of the survey questions required participants to estimate pretest probability before interpreting positive or negative test results, which may not reflect their natural practice. Finally, although validity was extensively assessed via a multidisciplinary expert panel, the reliability of our novel survey was not assessed.

    Conclusions

    In this study, large overestimates of the probability of disease before and after diagnostic testing were observed. Probability adjustments in response to test results varied from accurate to overestimates of risk by type of test. This significant overestimation of disease likely limits the ability of practitioners to engage in precise and evidence-based medical practice or shared decision-making.

    Article Information

    Accepted for Publication: November 21, 2020.

    Published Online: April 5, 2021. doi:10.1001/jamainternmed.2021.0269

    Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Morgan DJ et al. JAMA Internal Medicine.

    Corresponding Author: Daniel J. Morgan, MD, MS, Department of Epidemiology and Public Health, University of Maryland School of Medicine, 10 S Pine St, Medical Student Teaching Facility Room 334, Baltimore, MD 21201 (dmorgan@som.umaryland.edu).

    Author Contributions: Dr Morgan had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

    Concept and design: Morgan, Pineles, Owczarzak, Magder, Scherer, Brown, Terndrup, Feldstein, Foy, Stevens, Koch, Weisenberg, Korenstein.

    Acquisition, analysis, or interpretation of data: Morgan, Pineles, Magder, Scherer, Pfeiffer, Leykum, Stevens, Koch, Masnick.

    Drafting of the manuscript: Morgan, Pineles, Magder, Stevens.

    Critical revision of the manuscript for important intellectual content: Morgan, Pineles, Owczarzak, Magder, Scherer, Brown, Pfeiffer, Terndrup, Leykum, Feldstein, Foy, Koch, Masnick, Weisenberg, Korenstein.

    Statistical analysis: Morgan, Magder.

    Obtained funding: Morgan, Pineles.

    Administrative, technical, or material support: Morgan, Pineles, Owczarzak, Scherer, Brown, Pfeiffer, Terndrup, Leykum, Stevens.

    Supervision: Morgan, Terndrup.

Conflict of Interest Disclosures: Dr Morgan reported receiving grants from the National Institutes of Health (NIH) during the conduct of the study and grants from the US Department of Veterans Affairs, the Agency for Healthcare Research and Quality, and the Centers for Disease Control and Prevention outside the submitted work. Ms Pineles reported receiving grants from the NIH to the University of Maryland School of Medicine during the conduct of the study. Dr Scherer reported receiving grants from the NIH during the conduct of the study. Dr Brown reported receiving grants from the NIH during the conduct of the study. Dr Pfeiffer reported receiving grants from Pfizer to serve as site investigator for a Clostridium difficile vaccine trial (protocol B5091007) since July 2020 under a Cooperative Research and Development Agreement with VA Portland outside the submitted work. Dr Korenstein reported receiving grants from the NIH and grants from the National Cancer Institute to Memorial Sloan Kettering Cancer Center during the conduct of the study and that her spouse serves on the scientific advisory board and as a consultant for Vedanta Biosciences, serves as a consultant for Takeda, and serves on the scientific advisory board and as a consultant for Opentrons. No other disclosures were reported.

    Funding/Support: This project was funded by grant NLM DP2LM012890 (New Innovator Award) from the NIH (Dr Morgan, principal investigator). Dr Korenstein’s work on this project was supported in part by Cancer Center Support Grant P30 CA008748 from the National Cancer Institute to Memorial Sloan Kettering Cancer Center.

    Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

    References
1. Hunter DJ. Uncertainty in the era of precision medicine. N Engl J Med. 2016;375(8):711-713. doi:10.1056/NEJMp1608282
2. Schünemann HJ, Mustafa RA, Brozek J, et al; GRADE Working Group. GRADE guidelines: 22. the GRADE approach for tests and strategies—from test accuracy to patient-important outcomes and recommendations. J Clin Epidemiol. 2019;111:69-82. doi:10.1016/j.jclinepi.2019.02.003
3. Armstrong KA, Metlay JP. Annals clinical decision making: using a diagnostic test. Ann Intern Med. 2020;172(9):604-609. doi:10.7326/M19-1940
4. Centers for Disease Control and Prevention. About DLS. Published March 26, 2020. Accessed May 20, 2020. https://www.cdc.gov/csels/dls/about-us.html
5. Whiting PF, Davenport C, Jameson C, et al. How well do health professionals interpret diagnostic information? a systematic review. BMJ Open. 2015;5(7):e008155. doi:10.1136/bmjopen-2015-008155
6. Reid MC, Lane DA, Feinstein AR. Academic calculations versus clinical judgments: practicing physicians’ use of quantitative measures of test accuracy. Am J Med. 1998;104(4):374-380. doi:10.1016/S0002-9343(98)00054-0
7. Berwick DM, Fineberg HV, Weinstein MC. When doctors meet numbers. Am J Med. 1981;71(6):991-998. doi:10.1016/0002-9343(81)90325-9
8. Gigerenzer G. Reckoning With Risk: Learning to Live With Uncertainty. Penguin Books; 2003.
9. Manrai AK, Bhatia G, Strymish J, Kohane IS, Jain SH. Medicine’s uncomfortable relationship with math: calculating positive predictive value. JAMA Intern Med. 2014;174(6):991-993. doi:10.1001/jamainternmed.2014.1059
10. Casscells W, Schoenberger A, Graboys TB. Interpretation by physicians of clinical laboratory results. N Engl J Med. 1978;299(18):999-1001. doi:10.1056/NEJM197811022991808
11. Krouss M, Croft L, Morgan DJ. Physician understanding and ability to communicate harms and benefits of common medical treatments. JAMA Intern Med. 2016;176(10):1565-1567. doi:10.1001/jamainternmed.2016.5027
12. Korenstein D. Charting the route to high value care: the role of medical education. JAMA. 2015;314(22):2359-2361. doi:10.1001/jama.2015.15406
13. Gagliardi JP, Stinnett SS, Schardt C. Innovation in evidence-based medicine education and assessment: an interactive class for third- and fourth-year medical students. J Med Libr Assoc. 2012;100(4):306-309. doi:10.3163/1536-5050.100.4.014
14. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124-1131. doi:10.1126/science.185.4157.1124
15. CNNPolitics. Top doctor says White House Coronavirus Task Force still missing 50% of testing data. Accessed May 6, 2020. https://www.cnn.com/2020/04/02/politics/birx-task-force-coronavirus-testing/index.html
16. Morgan DJ, Scherer LD, Korenstein D. Improving physician communication about treatment decisions: reconsideration of “risks vs benefits”. JAMA. 2020;324(10):937-938. doi:10.1001/jama.2020.0354
17. Morgan DJ, Brownlee S, Leppin AL, et al. Setting a research agenda for medical overuse. BMJ. 2015;351:h4534. doi:10.1136/bmj.h4534
18. Fagerlin A, Sepucha KR, Couper MP, Levin CA, Singer E, Zikmund-Fisher BJ. Patients’ knowledge about 9 common health conditions: the DECISIONS survey. Med Decis Making. 2010;30(5)(suppl):35S-52S. doi:10.1177/0272989X10378700
19. Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G. Communicating accuracy of tests to general practitioners: a controlled study. BMJ. 2002;324(7341):824-826. doi:10.1136/bmj.324.7341.824
20. Heneghan C, Glasziou P, Thompson M, et al. Diagnostic strategies used in primary care. BMJ. 2009;338:b946. doi:10.1136/bmj.b946
21. Bernstein PL. Against the Gods: The Remarkable Story of Risk. John Wiley & Sons; 1996.
22. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16(1):138. doi:10.1186/s12911-016-0377-1
23. Crowley RS, Legowski E, Medvedeva O, et al. Automated detection of heuristics and biases among pathologists in a computer-based system. Adv Health Sci Educ Theory Pract. 2013;18(3):343-363. doi:10.1007/s10459-012-9374-z
24. Nicolle LE, Gupta K, Bradley SF, et al. Clinical practice guideline for the management of asymptomatic bacteriuria: 2019 update by the Infectious Diseases Society of America. Clin Infect Dis. 2019;68(10):1611-1615. doi:10.1093/cid/ciz021
25. Wells’ Criteria for Pulmonary Embolism. MDCalc. Accessed April 27, 2020. https://www.mdcalc.com/wells-criteria-pulmonary-embolism
26. Sheridan SL, Donahue KE, Brenner AT. Beginning with high value care in mind: a scoping review and toolkit to support the content, delivery, measurement, and sustainment of high value care. Patient Educ Couns. 2019;102(2):238-252. doi:10.1016/j.pec.2018.05.014
27. Morgan DJ, Leppin AL, Smith CD, Korenstein D. A practical framework for understanding and reducing medical overuse: conceptualizing overuse through the patient-clinician interaction. J Hosp Med. 2017;12(5):346-351. doi:10.12788/jhm.2738
28. Back AL, Fromme EK, Meier DE. Training clinicians with communication skills needed to match medical treatments to patient values. J Am Geriatr Soc. 2019;67(S2):S435-S441. doi:10.1111/jgs.15709
29. Armstrong KA, Metlay JP. Annals clinical decision making: communicating risk and engaging patients in shared decision making. Ann Intern Med. 2020;172(10):688-692. doi:10.7326/M19-3495
30. Richardson WS. Five uneasy pieces about pre-test probability. J Gen Intern Med. 2002;17(11):882-883. doi:10.1046/j.1525-1497.2002.20916.x
31. Genders TSS, Steyerberg EW, Hunink MGM, et al. Prediction model to estimate presence of coronary artery disease: retrospective pooled analysis of existing cohorts. BMJ. 2012;344:e3485. doi:10.1136/bmj.e3485
32. Fihn SD, Gardin JM, Abrams J, et al; American College of Cardiology Foundation; American Heart Association Task Force on Practice Guidelines; American College of Physicians; American Association for Thoracic Surgery; Preventive Cardiovascular Nurses Association; Society for Cardiovascular Angiography and Interventions; Society of Thoracic Surgeons. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol. 2012;60(24):e44-e164. doi:10.1016/j.jacc.2012.07.013
33. Testing Wisely. Accessed February 25, 2021. https://calculator.testingwisely.com/
34. Klein G. A naturalistic decision making perspective on studying intuitive decision making. J Appl Res Memory Cognit. 2015;4(3):164-168. doi:10.1016/j.jarmac.2015.07.001
    