Viewpoint
September 17, 2021

Diagnostic Errors, Health Disparities, and Artificial Intelligence: A Combination for Health or Harm?

Author Affiliations
  • 1Division of Healthcare Delivery Science and Innovation, Weill Cornell Clinical and Translational Science Center, Weill Cornell Medicine, New York, New York
  • 2Department of Anesthesiology and Critical Care Medicine, Case Western Reserve University School of Medicine, Cleveland, Ohio
JAMA Health Forum. 2021;2(9):e212430. doi:10.1001/jamahealthforum.2021.2430

In health care, diagnostic errors remain a substantial source of morbidity, mortality, and costs. The National Academy of Medicine recognizes diagnostic errors, defined as “the failure to establish an accurate and timely explanation of a patient’s health problem(s) or to communicate that explanation to the patient,”1 as a major health care issue. Diagnostic errors include delayed diagnoses, wrong diagnoses, and missed diagnoses. Despite the personal and societal consequences of diagnostic errors, they are often unrecognized and unaddressed in patient safety and in health care quality metrics and guidelines.

The association between diagnostic errors and health care disparities has received even less attention to date. Disparities in health care access and outcomes by race and ethnicity, sex, gender, geographic location, and socioeconomic status are well documented and persistent. The ongoing COVID-19 pandemic has further exposed these deep-seated disparities in health and health care in the US and has been associated with widening them.2 Even before the pandemic, in 2011, US health care disparities were estimated to cost $309 billion annually.3

Few studies have examined the association between diagnostic errors and health care disparities. One study suggested the potential for misdiagnosis based on the results of genetic testing when using diagnostic tools that were developed with less diverse patient populations.4 Another study indicated that Black patients are more likely than White patients to be underdiagnosed with depression during their first primary care visit for mental health needs.5 Similarly, Black children are less likely to be diagnosed with otitis media compared with White children with similar presentations.6 This limited evidence for diagnostic disparities most likely reflects a dearth of research rather than an absence of disparities.

Diagnostic errors and disparities, twin challenges in health care, can and should be addressed through research and quality improvement initiatives, policy and payment reform, and technology, and by educating clinicians about the importance of evidence-based use of diagnostic tools and the contributions of social determinants of health to disparities in care. The proliferation of big data and associated artificial intelligence (AI) methods provides yet another avenue to address these challenges. For example, mandatory computer decision support has been associated with reduced inequities in the prevention of deep vein thrombosis,7 likely by reducing the opportunity for implicit bias. The health care industry is emerging as a leader in the adoption of AI through sophisticated machine learning–assisted diagnostic tools. For clinical trials, AI has the potential to support every stage of the process, including finding a trial in which to enroll, addressing patient-centric enrollment issues, and assessing trial medication adherence with remote and digital monitoring. It is equally important to tap into the opportunities provided by big data and AI to detect and address diagnostic disparities in the populations at risk for them.
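
To make the mechanism concrete, the following is a minimal sketch of "mandatory" clinical decision support of the kind credited with closing the venous thromboembolism (VTE) prophylaxis gap.7 The premise is that the admission order set cannot be finalized until a standardized risk assessment has run for every patient, removing the discretionary step where implicit bias can enter. The risk factors, weights, and threshold below are simplified, hypothetical placeholders, not the published risk assessment tool.

```python
# Hypothetical, simplified mandatory VTE risk check. In a real system the
# electronic order set would refuse to finalize until this runs for every
# admission, so prophylaxis follows the same criteria for every patient.
from dataclasses import dataclass


@dataclass
class Admission:
    age: int
    major_surgery: bool
    active_cancer: bool
    prior_vte: bool
    immobile: bool


def vte_prophylaxis_indicated(a: Admission) -> bool:
    """Placeholder point score; weights and threshold are illustrative only."""
    score = (2 * a.prior_vte + 2 * a.active_cancer
             + a.major_surgery + a.immobile + (a.age >= 60))
    return score >= 2


print(vte_prophylaxis_indicated(
    Admission(age=72, major_surgery=True, active_cancer=False,
              prior_vte=False, immobile=True)))  # True
```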

In a recent study, machine learning was used to develop a new algorithm to measure the severity of knee osteoarthritis based on existing radiologic data.8 When the well-established Kellgren-Lawrence grading system was used to classify osteoarthritis, Black patients had more severe osteoarthritis than did White patients; however, adjustment for Kellgren-Lawrence grading scores did not reduce or eliminate racial differences in patient-reported knee pain. Applying the AI-derived severity measure markedly improved predictive performance and reduced the unexplained racial disparity in patient-reported pain.8 Unlike the standard severity measure, which was developed more than 20 years ago in predominantly Northern European patient populations,8 the algorithm's predictive power improved with the greater diversity of its training and development sample. The clinical significance of this AI-based diagnostic fine-tuning remains to be determined. However, one of the largest racial disparities in US health care today is in the use of knee replacement surgery for management of knee osteoarthritis, for which pain is a key clinical indication.
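
The study's pivotal design choice was its training label: rather than teaching a model to reproduce the radiologist-assigned Kellgren-Lawrence grade, the authors trained it to predict the patient's own reported pain from the radiograph.8 Below is a minimal sketch of that label choice under stated assumptions: the "radiograph embeddings," the simulated relationship in which the grade captures only part of what drives pain, and all quantities are synthetic stand-ins, not the study's data or model (the actual work used a deep convolutional network on raw images).

```python
# Synthetic sketch: a coarse external grade vs. the patient's own report
# as the training label. Here the "grade" reflects only 4 of the 16
# features that drive simulated pain, so it is a lossy proxy by design.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 64
X = rng.normal(size=(n, d))                          # stand-in image embeddings
kl_grade = (X[:, :4].sum(axis=1) > 0).astype(float)  # coarse external grade
pain = X[:, :16].sum(axis=1) + rng.normal(size=n)    # simulated reported pain

X_tr, X_te, kl_tr, _, pain_tr, pain_te = train_test_split(
    X, kl_grade, pain, test_size=0.3, random_state=0)

# Conventional route: model the grade, then explain pain with the grade.
grade_model = Ridge().fit(X_tr, kl_tr)
via_grade = Ridge().fit(grade_model.predict(X_tr).reshape(-1, 1), pain_tr)
r2_via_grade = r2_score(
    pain_te, via_grade.predict(grade_model.predict(X_te).reshape(-1, 1)))

# The study's route: model the patient's reported pain directly.
pain_model = Ridge().fit(X_tr, pain_tr)
r2_direct = r2_score(pain_te, pain_model.predict(X_te))

print(f"R^2 explaining pain via the coarse grade: {r2_via_grade:.2f}")
print(f"R^2 predicting reported pain directly:    {r2_direct:.2f}")
```

In this toy setup, training on patients' own reports recovers signal that the external grade discards, the same intuition the study applied to patients whose pain the standard grade left unexplained.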

Conversely, the use of AI for prediction and diagnosis has notable challenges, with increasing evidence that machine learning algorithms may cause unintended harm. One example is the inclusion of a race-adjusted estimated glomerular filtration rate: because the adjustment assigns Black individuals a higher estimated glomerular filtration rate at the same creatinine level, it can result in the underdiagnosis and delayed treatment of chronic kidney disease.9 Another example is the use of AI to target patients for high-risk care management programs, which rely on algorithmic predictions of health care costs as a proxy for health needs. Black patients tend to have fewer and different types of medical expenses, so cost prediction may introduce racially biased errors that neglect structural inequalities.10 Another important factor that might contribute to the risk of harm is missing or inaccurate diagnostic information in administrative data sets and electronic health records, because AI and machine learning algorithms are only as good as the data on which they are trained. A lack of data on social determinants of health and less-than-robust descriptions of patient symptoms may also contribute to the potential for unintended harm from AI and related methods.
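
To make the kidney function example concrete, here is the 4-variable MDRD equation with the race coefficient at issue.9 The patient values below are hypothetical, chosen only to show how the 1.212 multiplier can move a result from below the common chronic kidney disease threshold of 60 mL/min/1.73 m² to above it.

```python
# The 4-variable MDRD equation for estimated GFR (mL/min/1.73 m^2).
# The 1.212 race coefficient is the adjustment questioned in the text.
def egfr_mdrd(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # race adjustment: same labs, higher reported eGFR
    return egfr

# Hypothetical patient: identical creatinine, age, and sex; only the
# recorded race differs between the two calls.
print(f"{egfr_mdrd(1.4, 60, female=False, black=False):.0f}")  # ~52: CKD stage 3 range
print(f"{egfr_mdrd(1.4, 60, female=False, black=True):.0f}")   # ~63: above the 60 cutoff
```

The race-adjusted value clears the threshold of 60, so the same laboratory result yields no chronic kidney disease flag for the Black patient, which is the underdiagnosis pathway described above.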

An exciting data-centric initiative attracting the attention of health care organizations is the use of big data and associated analytic tools such as AI to identify high-need, high-cost patient populations. These initiatives have the potential to help tailor interventions to improve the quality and value of health care. However, the analytic tools may be double-edged swords. By profiling patients with data and computer models that are not entirely free of bias at baseline, the tools can be used to deny care to, or effectively redline, underserved populations, as demonstrated in a recent study.10 Artificial intelligence models for diagnosis should evaluate the effect of including race and ethnicity as a variable and seek to minimize structural bias in diagnostic data, as noted in a recent editorial on the potential harms of including race in equations to estimate kidney function.9
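
The mechanism demonstrated in that study10 is, at bottom, a label-choice problem: a model trained to predict cost as a proxy for need will under-select any group that incurs lower costs at the same level of need. The following synthetic sketch illustrates the effect; all data are simulated, and "group A" is a hypothetical stand-in for patients facing barriers to accessing care, not the study's population.

```python
# Synthetic sketch of label-choice bias: ranking patients for a high-risk
# care program by predicted cost vs. by true health need. Group A has the
# same need distribution but, by construction, generates ~30% lower costs.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group_a = rng.random(n) < 0.5                    # half the population
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true need, equal across groups
cost = need * np.where(group_a, 0.7, 1.0) + rng.normal(0.0, 0.1, n)

k = n // 10                                      # program takes the top 10%
by_cost = np.argsort(cost)[-k:]
by_need = np.argsort(need)[-k:]

print(f"Group A share when ranked by cost: {group_a[by_cost].mean():.1%}")
print(f"Group A share when ranked by need: {group_a[by_need].mean():.1%}")
```

Ranked by true need, group A fills roughly half the program slots; ranked by predicted cost, its share falls sharply even though the two groups are equally sick by construction. Auditing the label, not just the model, is therefore part of the bias evaluation called for above.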

On the verge of substantial technological transformation with AI in health care, these innovations have the potential to address diagnostic errors, health care disparities, and the association of these 2 crucial issues. However, we must ensure that diverse perspectives guide AI development, AI models are trained with data from diverse populations, and the unintended negative consequences of the models are identified and addressed. Most importantly, we must be transparent about who benefits and who may be harmed by including race and ethnicity in AI models and ensure that AI is used to decrease rather than increase inequities. The way forward is not to shun innovations such as AI but to embrace them and explicitly use them to improve care and advance equity.

Article Information

Published: September 17, 2021. doi:10.1001/jamahealthforum.2021.2430

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Ibrahim SA et al. JAMA Health Forum.

Corresponding Author: Said A. Ibrahim, MD, MPH, MBA, Division of Healthcare Delivery Science and Innovation, Weill Cornell Clinical and Translational Science Center, Weill Cornell Medicine, 402 E 67th St, Floor 2, Room LA 215, New York, NY 10065 (sai2009@med.cornell.edu).

Conflict of Interest Disclosures: None reported.

References
1. Institute of Medicine. Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. The National Academies Press; 2015.
2. Woolf SH, Masters RK, Aron LY. Effect of the COVID-19 pandemic in 2020 on life expectancy across populations in the USA and other high income countries: simulations of provisional mortality data. BMJ. 2021;373:n1343. doi:10.1136/bmj.n1343
3. LaVeist TA, Gaskin D, Richard P. Estimating the economic burden of racial health inequalities in the United States. Int J Health Serv. 2011;41(2):231-238. doi:10.2190/HS.41.2.c
4. Manrai AK, Funke BH, Rehm HL, et al. Genetic misdiagnoses and the potential for health disparities. N Engl J Med. 2016;375(7):655-665. doi:10.1056/NEJMsa1507092
5. Lukachko A, Olfson M. Race and the clinical diagnosis of depression in new primary care patients. Gen Hosp Psychiatry. 2012;34(1):98-100. doi:10.1016/j.genhosppsych.2011.09.008
6. Fleming-Dutra KE, Shapiro DJ, Hicks LA, Gerber JS, Hersh AL. Race, otitis media, and antibiotic selection. Pediatrics. 2014;134(6):1059-1066. doi:10.1542/peds.2014-1781
7. Lau BD, Haider AH, Streiff MB, et al. Eliminating health care disparities with mandatory clinical decision support: the venous thromboembolism (VTE) example. Med Care. 2015;53(1):18-24. doi:10.1097/MLR.0000000000000251
8. Pierson E, Cutler DM, Leskovec J, Mullainathan S, Obermeyer Z. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nat Med. 2021;27(1):136-140. doi:10.1038/s41591-020-01192-7
9. Eneanya ND, Yang W, Reese PP. Reconsidering the consequences of using race to estimate kidney function. JAMA. 2019;322(2):113-114. doi:10.1001/jama.2019.5774
10. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
1 Comment for this article
Age-Appropriate Diagnoses
Dianna Arens, MEPD
I am commenting as a 75-year-old female. In interactions with my primary care physician, I have found that the lab analyzing my blood tests flags certain results as "high" or "low." My physician then uses those flags in diagnosing my health.

The problem is two-fold:

1. The lab is obviously not flagging based upon age-appropriate ranges (when I have the tests taken & blood drawn, I am always asked my birth date, so they know my age).

2. The physician uses the results and the flags to diagnose me, which leads to age-inappropriate diagnoses. This then leads to prescriptions that can cause more problems. I know this firsthand.

I had to do research to determine what the age-appropriate ranges are for such tests as A1c, blood pressure, and glucose levels. The American Geriatrics Society has some excellent information on age-appropriate diagnoses. They have even distributed it widely but indicated that it often is not heeded.

Labs should report test results age appropriately and physicians should make diagnoses using age-appropriate range results.

Thank you for paying attention to missed and simply wrong diagnoses. A lot is made of diagnoses in pediatrics; I wish more were done in geriatrics. As a footnote, I'd love to have a geriatrician as my PCP; however, there is only one in our entire area, and that office is way too far for me to drive.

CONFLICT OF INTEREST: None Reported