February 24, 2021

Equity and Artificial Intelligence in Surgical Care

Author Affiliations
  • 1Department of Surgery, University of Florida Health, Gainesville
  • 2Department of Medicine, University of Florida Health, Gainesville
JAMA Surg. 2021;156(6):509-510. doi:10.1001/jamasurg.2020.7208

In 2003, the Institute of Medicine reported that Black US individuals receive fewer procedures and poorer-quality care than White individuals, independent of socioeconomic determinants of health. Seventeen years later, access to care and perioperative level of care assignments potentiate disparities in surgical care, particularly affecting Black patients.1 These disparities are partially attributable to implicit bias that is entrenched deeply in US culture and media and are exacerbated when patient and physician demographics are mismatched. It is difficult to identify modifiable mechanisms of implicit bias because its latent mental constructs cannot be directly observed, but the weight of evidence suggests that many well-intentioned clinicians have 2 conflicting cognitive processes: one that is governed by a conscious, explicit system of beliefs and values, and one subconscious, implicit process that adapts to repeated stimuli. The former process is typically fair and equitable; the latter may drive implicit bias. Efforts to overcome implicit bias and health care disparities by building awareness and enacting structural changes to credentialing agencies and training curricula have yielded modest progress; additional strategies are needed. This Viewpoint endeavors to impart understanding of mechanisms by which artificial intelligence can either propagate or counteract disparities and suggests methods to tilt the balance toward fairness and equity in surgical care.

    1 Comment for this article
    Artificial intelligence health applications in clinical practice
    Peter Goldschmidt, MD, DrPH, DMS | President, World Development Group, Inc, Bethesda MD
    The authors draw attention to several well-known issues concerning artificial intelligence (AI) health applications in the context of surgical care [1]. They note that AI applications 1) can foster existing and/or create new disparities in care and 2) have the potential to produce better outcomes for patients, thereby reducing disparities. They describe potential biases in existing data that might be used to train algorithms, but not those arising from data that do not exist yet are necessary to provide a fuller picture of reality. Such needed but missing data can result 1) at the service level, from failure to observe or to document pertinent patient care data, and 2) at the system level, from the inability of certain individuals or groups to access health care services.

    Clinical decision support systems (CDSS), whether or not AI-driven, have the potential to improve care processes and patient outcomes [2] and to produce other benefits [3]. No matter how well designed or well trained, AI health applications can only work well if they are integrated seamlessly into electronic health record (EHR) systems and are supplied with fit-for-purpose data. Regrettably, current EHR systems and data may not meet this mark [4]. Further, CDSS only provide recommendations to clinicians. An emerging standard requires tightly coupling a CDSS with a patient registry to learn which recommendations were followed, why others were not, and with what results, in order to create a learning loop that can drive continuous improvement.

    The authors note that standards for AI health applications are needed and would be valuable toward ensuring their safe and effective use. Various groups are working toward developing such standards [eg, 5]. Surgeons can play important roles: 1) by participating in standards development and 2) by supporting the use of such standards to evaluate AI health applications before they are deployed and, once deployed, to monitor their use in practice. Patients, providers, payors, and the public would all benefit.

    - 1. Johnson-Mann CN, Loftus TJ, Bihorac A. Equity and artificial intelligence in surgical care. JAMA Surgery 2021;156(6):509-510. doi:10.1001/jamasurg.2020.7208
    - 2. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision support systems: a systematic review. Annals of Internal Medicine 2012;157(1):29-43.
    - 3. Sutton RT, Pincock D, Baumgart DC, et al. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digital Medicine 2020;3:17.
    - 4. Berdahl CT, Moran GJ, McBride O, et al. Concordance between electronic clinical documentation and physicians' observed behavior. JAMA Network Open 2019;2(9):e1911390.
    - 5. Wiegand T, Krishnamurthy R, Kuglitsch M, et al. WHO and ITU establish benchmarking process for artificial intelligence in health. Lancet 2019;394(10192):9-11.

    The commenter is a participant in the ongoing WHO-ITU project (reference 5).