In health care, diagnostic errors remain a substantial source of morbidity, mortality, and costs. The National Academy of Medicine recognizes diagnostic errors, defined as “the failure to establish an accurate and timely explanation of a patient’s health problem(s) or to communicate that explanation to the patient,”1 as a major health care issue. Diagnostic errors include delayed, wrong, and missed diagnoses. Despite their personal and societal consequences, diagnostic errors often go unrecognized and unaddressed in patient safety initiatives and in health care quality metrics and guidelines.
The association between diagnostic errors and health care disparities has received even less attention to date. Disparities in health care access and outcomes by race and ethnicity, sex, gender, geographic location, and socioeconomic status are well documented and persistent. The ongoing COVID-19 pandemic has further exposed these deep-seated disparities in health and health care in the US and has been associated with their widening.2 Even before the pandemic, in 2011, US health care disparities were estimated to cost $309 billion annually.3
Few studies have examined the association between diagnostic errors and health care disparities. One study suggested the potential for misdiagnosis when genetic test results are interpreted with diagnostic tools developed in less diverse patient populations.4 Another study indicated that Black patients are more likely than White patients to have depression underdiagnosed at their first primary care visit for mental health needs.5 Similarly, Black children are less likely than White children with similar presentations to be diagnosed with otitis media.6 This limited evidence for diagnostic disparities most likely reflects a dearth of research rather than an absence of disparities.
Diagnostic errors and disparities, twin challenges in health care, can and should be addressed through research and quality improvement initiatives, policy and payment reform, and technology, and by educating clinicians about the importance of evidence-based use of diagnostic tools and the contributions of social determinants of health to disparities in care. The proliferation of big data and associated artificial intelligence (AI) methods provides yet another avenue to address these challenges. For example, computer decision support has been associated with reduced inequities in the prevention of deep vein thrombosis,7 likely by reducing implicit bias. The health care industry is emerging as a leader in the adoption of AI through sophisticated machine learning–assisted diagnostic tools. In clinical trials, AI has the potential to support every stage of the process, including matching patients to trials, addressing patient-centered enrollment barriers, and assessing trial medication adherence with remote and digital monitoring. It is equally important to tap into the opportunities provided by big data and AI to detect and address diagnostic disparities in populations at risk for such disparities.
In a recent study, machine learning was used to develop a new algorithm to measure the severity of knee osteoarthritis based on existing radiologic data.8 With use of the well-established standard Kellgren-Lawrence grading system to classify osteoarthritis, Black patients had more severe osteoarthritis than did White patients. However, adjustment for Kellgren-Lawrence grading scores did not reduce or eliminate racial differences in patient-reported knee pain. Applying an AI-driven algorithmic measure markedly improved predictive performance and reduced the racial disparity in patient-reported pain.8 Unlike the standard severity measure, which was developed more than 20 years ago in predominantly Northern European patient populations,8 the algorithm gained predictive power from the greater diversity of its training and development sample. The clinical significance of this AI-based diagnostic fine-tuning remains to be determined. However, one of the largest racial disparities in US health care today is in the use of knee replacement surgery for the management of knee osteoarthritis, for which pain is a key clinical indication.
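The sketch below is a minimal, hypothetical illustration of the underlying idea, not the published model: rather than predicting a radiographic grade and treating that grade as the severity measure, a model is trained to predict patient-reported pain directly from the radiograph, so that a demographically diverse training sample shapes what the measure learns to call severe. The architecture and data are invented for illustration (PyTorch assumed).

```python
# Hypothetical sketch: regress patient-reported pain directly on the radiograph,
# instead of predicting a radiographic grade such as Kellgren-Lawrence.
import torch
from torch import nn

class PainFromRadiograph(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # regression target: a patient-reported pain score

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Stand-in data: random "radiographs" and pain scores. A real study would use a
# demographically diverse training set so the learned measure reflects pain across populations.
images = torch.randn(8, 1, 128, 128)
pain_scores = torch.rand(8, 1) * 100  # e.g., a 0-100 pain scale
model = PainFromRadiograph()
loss = nn.functional.mse_loss(model(images), pain_scores)
loss.backward()
print(f"training loss on a synthetic batch: {loss.item():.2f}")
```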
Conversely, the use of AI for prediction and diagnosis also has notable challenges, with increasing evidence that machine learning algorithms may cause unintended harm. One example is the race-adjusted estimation of glomerular filtration rate: the race coefficient assigns Black individuals a higher estimated glomerular filtration rate than White individuals with the same serum creatinine level, age, and sex, which can result in underdiagnosis and delayed treatment of chronic kidney disease.9 Another example is the use of AI to target patients for high-risk care management programs, which use predicted health care costs as a proxy for health needs. Because Black patients tend to incur fewer, and different types of, medical expenses, predicting costs rather than needs can introduce racially biased errors that neglect structural inequalities.10 Another important factor that may contribute to the risk of harm is missing or inaccurate diagnostic information in administrative data sets and electronic health records, because AI and machine learning algorithms are only as good as the data on which they are trained. A lack of data on social determinants of health and less-than-robust descriptions of patient symptoms may also contribute to the potential for unintended harm from AI and related methods.
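To make the estimated glomerular filtration rate example concrete, the sketch below implements the 2009 CKD-EPI creatinine equation, whose only race term is a multiplicative coefficient of 1.159 for Black patients. The function name and patient values are hypothetical and chosen only to show how that coefficient can move a borderline estimate above the commonly used chronic kidney disease threshold of 60 mL/min/1.73 m².

```python
# Simplified 2009 CKD-EPI creatinine equation; the race coefficient is 1.159 for Black patients.
# Patient values below are hypothetical and chosen only to illustrate the mechanism.

def ckd_epi_2009_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 per the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (
        141
        * min(scr_mg_dl / kappa, 1.0) ** alpha
        * max(scr_mg_dl / kappa, 1.0) ** -1.209
        * 0.993 ** age
    )
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient at issue
    return egfr

# Same serum creatinine, age, and sex; only the race coefficient differs.
without_coeff = ckd_epi_2009_egfr(scr_mg_dl=1.4, age=60, female=False, black=False)
with_coeff = ckd_epi_2009_egfr(scr_mg_dl=1.4, age=60, female=False, black=True)
print(f"without race coefficient: {without_coeff:.1f}")  # ~54: below the CKD threshold of 60
print(f"with race coefficient:    {with_coeff:.1f}")     # ~63: above the threshold, so CKD may be missed
```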
An exciting data-centric initiative attracting the attention of health care organizations is the use of big data and associated analytic tools, such as AI, to identify high-need, high-cost patient populations. These initiatives have the potential to help tailor interventions that improve the quality and value of health care. However, the analytic tools may be double-edged swords. By profiling patients with data and computer models that are not free of bias at baseline, the tools can be used to deny care to, or effectively redline, underserved populations, as demonstrated in a recent study.10 Developers of AI models for diagnosis should evaluate the effect of including race and ethnicity as variables and seek to minimize structural bias in diagnostic data, as noted in a recent editorial on the potential harms of including race in equations used to estimate kidney function.9
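As a purely synthetic illustration of the proxy-label problem described above (all numbers are invented for the example), the simulation below shows that when one group accrues lower costs at the same level of need, selecting the highest-cost patients for a care management program both under-enrolls that group and requires its members to be sicker before they qualify.

```python
# Synthetic illustration of the cost-as-proxy-for-need problem; all quantities are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Underlying health need is identically distributed in both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group_b = rng.random(n) < 0.5  # hypothetical group with reduced access to care

# Observed cost rises with need, but group B accrues ~30% lower cost at the same need level.
cost = need * np.where(group_b, 0.7, 1.0) + rng.normal(0, 0.2, size=n)

# A "perfect" cost predictor (here, cost itself) is used to select the top decile
# for a high-risk care management program, the design critiqued in the referenced study.
threshold = np.quantile(cost, 0.9)
selected = cost >= threshold

print(f"mean need among selected, group A: {need[selected & ~group_b].mean():.2f}")
print(f"mean need among selected, group B: {need[selected & group_b].mean():.2f}")
print(f"share of program slots going to group B: {selected[group_b].sum() / selected.sum():.2f}")
```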
Health care is on the verge of a substantial technological transformation driven by AI, and these innovations have the potential to address diagnostic errors, health care disparities, and the association between these 2 crucial issues. However, we must ensure that diverse perspectives guide AI development, that AI models are trained with data from diverse populations, and that the unintended negative consequences of these models are identified and addressed. Most importantly, we must be transparent about who benefits and who may be harmed by including race and ethnicity in AI models and ensure that AI is used to decrease rather than increase inequities. The way forward is not to shun innovations such as AI but to embrace them and explicitly use them to improve care and advance equity.
Published: September 17, 2021. doi:10.1001/jamahealthforum.2021.2430
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Ibrahim SA et al. JAMA Health Forum.
Corresponding Author: Said A. Ibrahim, MD, MPH, MBA, Division of Healthcare Delivery Science and Innovation, Weill Cornell Clinical and Translational Science Center, Weill Cornell Medicine, 402 E 67th St, Floor 2, Room LA 215, New York, NY 10065 (sai2009@med.cornell.edu).
Conflict of Interest Disclosures: None reported.
1. Institute of Medicine. Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. The National Academies Press; 2015.
2. Woolf SH, Masters RK, Aron LY. Effect of the COVID-19 pandemic in 2020 on life expectancy across populations in the USA and other high income countries: simulations of provisional mortality data. BMJ. 2021;373:n1343. doi:10.1136/bmj.n1343