Figure. Conceptual Framework for Applying Guiding Principles to Mitigate and Prevent Bias Across an Algorithm’s Life Cycle

This conceptual framework builds on a National Academy of Medicine13 algorithm life cycle framework adapted by Roski et al.14

Table 1. Guiding Principles and Subprinciples for the Use of Algorithms in Health Care
Table 2. Considerations for Operationalizing Guiding Principles for Algorithm Use in Health Care
1. O’Neil C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown; 2016.
2. The White House. Executive Order on further advancing racial equity and support for underserved communities through the federal government. Updated February 16, 2023. Accessed August 31, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/02/16/executive-order-on-further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/
3. Obermeyer Z, Nissan R, Stern M, Eaneff S, Bembeneck EJ, Mullainathan S. Algorithmic Bias Playbook. Chicago Booth Center for Applied Artificial Intelligence; 2021. Accessed November 10, 2023. https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias/playbook
4. Vyas DA, Eisenstein LG, Jones DS. Hidden in plain sight—reconsidering the use of race correction in clinical algorithms. N Engl J Med. 2020;383(9):874-882. doi:10.1056/NEJMms2004740
5. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
6. Organization for Economic Cooperation and Development. Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. 2019. Accessed November 10, 2023. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#mainText
7. World Health Organization. Ethics and Governance of Artificial Intelligence for Health. World Health Organization; 2021. Accessed November 10, 2023. https://www.who.int/publications/i/item/9789240029200
8. Makhni S, Chin MH, Fahrenbach J, Rojas JC. Equity challenges for artificial intelligence algorithms in health care. Chest. 2022;161(5):1343-1346. doi:10.1016/j.chest.2022.01.009
9. Marchesini K, Smith J, Everson J. Increasing the transparency and trustworthiness of AI in health care. HealthITbuzz blog. April 13, 2023. Accessed August 31, 2023. https://www.healthit.gov/buzz-blog/health-innovation/transparent-and-trustworthy-ai-in-health-care
10. Office of the National Coordinator for Health Information Technology. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing (HTI-1) proposed rule. HealthIT.gov. June 22, 2023. Accessed August 31, 2023. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program
11. Jain A, Brooks JR, Alford CC, et al. Awareness of racial and ethnic bias and potential solutions to address bias with use of health care algorithms. JAMA Health Forum. 2023;4(6):e231197. doi:10.1001/jamahealthforum.2023.1197
12. Agency for Healthcare Research and Quality. Meetings examine impact of healthcare algorithms on racial and ethnic disparities in health and healthcare. Accessed August 31, 2023. https://effectivehealthcare.ahrq.gov/news/meetings
13. Matheny M, Thadaney IS, Ahmed M, Whicher D, eds. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. National Academy of Medicine; 2022. Accessed November 10, 2023. https://nam.edu/artificial-intelligence-special-publication/
14. Roski J, Maier EJ, Vigilante K, Kane EA, Matheny ME. Enhancing trust in AI through industry self-governance. J Am Med Inform Assoc. 2021;28(7):1582-1590. doi:10.1093/jamia/ocab065
15. Bailey ZD, Krieger N, Agénor M, Graves J, Linos N, Bassett MT. Structural racism and health inequities in the USA: evidence and interventions. Lancet. 2017;389(10077):1453-1463. doi:10.1016/S0140-6736(17)30569-X
16. Ng MY, Kapur S, Blizinsky KD, Hernandez-Boussard T. The AI life cycle: a holistic approach to creating ethical AI for health decisions. Nat Med. 2022;28(11):2247-2249. doi:10.1038/s41591-022-01993-y
17. World Health Organization. Health equity. Accessed August 31, 2023. https://www.who.int/health-topics/health-equity#tab=tab_1
18. Braveman P, Arkin E, Orleans T, Proctor D, Plough A. What is health equity? Robert Wood Johnson Foundation. May 1, 2017. Accessed November 10, 2023. https://www.rwjf.org/en/insights/our-research/2017/05/what-is-health-equity-.html
19. Health Care Payment Learning & Action Network. Advancing health equity through APMs: guidance for equity-centered design and implementation. 2021. Accessed November 10, 2023. http://hcp-lan.org/workproducts/APM-Guidance/Advancing-Health-Equity-Through-APMs.pdf
20. US Department of Health and Human Services. Trustworthy AI (TAI) Playbook. US Department of Health and Human Services; 2021. Accessed November 10, 2023. https://www.hhs.gov/sites/default/files/hhs-trustworthy-ai-playbook.pdf
21. Zuckerman BL, Karabin JM, Parker RA, Doane WEJ, Williams SR. Options and Opportunities to Address and Mitigate the Existing and Potential Risks, as well as Promote Benefits, Associated With AI and Other Advanced Analytic Methods. OPRE Report 2022-253. US Department of Health and Human Services Office of Planning, Research, and Evaluation, Administration for Children and Families; 2022. Accessed November 10, 2023. https://www.acf.hhs.gov/opre/report/options-opportunities-address-mitigate-existing-potential-risks-promote-benefits
22. Gonzalez R. The spectrum of community engagement to ownership. Movement Strategy Center. 2019. Accessed November 10, 2023. https://movementstrategy.org/resources/the-spectrum-of-community-engagement-to-ownership/
23. Loi M, Heitz C, Christen M. A comparative assessment and synthesis of twenty ethics codes on AI and big data. In: Proceedings of the 2020 7th Swiss Conference on Data Science (SDS). IEEE; 2020:41-46. doi:10.1109/SDS49233.2020.00015
24. Hunter DJ, Holmes C. Where medical statistics meets artificial intelligence. N Engl J Med. 2023;389(13):1211-1219. doi:10.1056/NEJMra2212850
25. Parikh RB, Obermeyer Z, Navathe AS. Regulation of predictive analytics in medicine. Science. 2019;363(6429):810-812. doi:10.1126/science.aaw0029
26. Vollmer S, Mateen BA, Bohner G, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ. 2020;368:l6927. doi:10.1136/bmj.l6927
27. Norgeot B, Quer G, Beaulieu-Jones BK, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med. 2020;26(9):1320-1324. doi:10.1038/s41591-020-1041-y
28. Parikh RB, Teeple S, Navathe AS. Addressing bias in artificial intelligence in health care. JAMA. 2019;322(24):2377-2378. doi:10.1001/jama.2019.18058
29. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical machine learning in healthcare. Annu Rev Biomed Data Sci. 2021;4:123-144. doi:10.1146/annurev-biodatasci-092820-114757
30. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: a review of machine learning interpretability methods. Entropy (Basel). 2020;23(1):18. doi:10.3390/e23010018
31. Phillips PJ, Hahn CA, Fontana PC, et al. Four Principles of Explainable Artificial Intelligence. NIST Interagency/Internal Report (NISTIR). National Institute of Standards and Technology; 2021.
32. Vasse’i RM, McCrosky J. AI Transparency in Practice. Mozilla; 2023. Accessed August 31, 2023. https://foundation.mozilla.org/en/research/library/ai-transparency-in-practice/ai-transparency-in-practice/
33. CONSORT-AI and SPIRIT-AI Steering Group. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nat Med. 2019;25(10):1467-1468. doi:10.1038/s41591-019-0603-3
34. Wawira Gichoya J, McCoy LG, Celi LA, Ghassemi M. Equity in essence: a call for operationalising fairness in machine learning for healthcare. BMJ Health Care Inform. 2021;28(1):e100289. doi:10.1136/bmjhci-2020-100289
35. Antequera A, Lawson DO, Noorduyn SG, et al. Improving social justice in COVID-19 health research: interim guidelines for reporting health equity in observational studies. Int J Environ Res Public Health. 2021;18(17):9357. doi:10.3390/ijerph18179357
36. Welch VA, Norheim OF, Jull J, Cookson R, Sommerfelt H, Tugwell P; CONSORT-Equity and Boston Equity Symposium. CONSORT-Equity 2017 extension and elaboration for better reporting of health equity in randomised trials. BMJ. 2017;359:j5085. doi:10.1136/bmj.j5085
37. Joosten YA, Israel TL, Williams NA, et al. Community engagement studios: a structured approach to obtaining meaningful input from stakeholders to inform research. Acad Med. 2015;90(12):1646-1650. doi:10.1097/ACM.0000000000000794
38. The White House. Fact sheet: Biden-Harris administration announces key actions to advance tech accountability and protect the rights of the American public. October 4, 2022. Accessed November 10, 2023. https://www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-actions-to-advance-tech-accountability-and-protect-the-rights-of-the-american-public/
39. National Congress of American Indians. Tribal nations & the United States: an introduction. Accessed November 27, 2023. https://archive.ncai.org/about-tribes
40. Drukker K, Chen W, Gichoya J, et al. Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment. J Med Imaging (Bellingham). 2023;10(6):061104. doi:10.1117/1.JMI.10.6.061104
41. Xu J, Xiao Y, Wang WH, et al. Algorithmic fairness in computational medicine. EBioMedicine. 2022;84:104250. doi:10.1016/j.ebiom.2022.104250
42. Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv. 2021;54(6):1-35. doi:10.1145/3457607
43. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169(12):866-872. doi:10.7326/M18-1990
44. Weinkauf D. When worlds collide—the possibilities and limits of algorithmic fairness (part 1). Privacy Tech-Know blog. April 5, 2023. Accessed August 31, 2023. https://www.priv.gc.ca/en/blog/20230405_01/
45. Pfeiffer J, Gutschow J, Haas C, et al. Algorithmic fairness in AI. Bus Inf Syst Eng. 2023;65:209-222. doi:10.1007/s12599-023-00787-x
46. Caton S, Haas C. Fairness in machine learning: a survey. arXiv. Preprint posted online October 4, 2020. doi:10.48550/arXiv.2010.04053
47. Cary MP Jr, Zink A, Wei S, et al. Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: a scoping review. Health Aff (Millwood). 2023;42(10):1359-1368. doi:10.1377/hlthaff.2023.00553
48. Jung S, Park T, Chun S, Moon T. Re-weighting based group fairness regularization via classwise robust optimization. arXiv. Preprint posted online March 1, 2023. doi:10.48550/arXiv.2303.00442
49. Daniels N. Justice, health, and healthcare. Am J Bioeth. 2001;1(2):2-16. doi:10.1162/152651601300168834
50. Rojas JC, Fahrenbach J, Makhni S, et al. Framework for integrating equity into machine learning models: a case study. Chest. 2022;161(6):1621-1627. doi:10.1016/j.chest.2022.02.001
51. Weinkauf D. When worlds collide—the possibilities and limits of algorithmic fairness (part 2). Privacy Tech-Know blog. Office of the Privacy Commissioner of Canada. April 5, 2023. Accessed August 31, 2023. https://www.priv.gc.ca/en/blog/20230405_02/
52. Bedoya AD, Economou-Zavlanos NJ, Goldstein BA, et al. A framework for the oversight and local deployment of safe and high-quality prediction models. J Am Med Inform Assoc. 2022;29(9):1631-1636. doi:10.1093/jamia/ocac078
53. Rojas JC, Rohweder G, Guptill J, Arora VM, Umscheid CA. Predictive analytics programs at large healthcare systems in the USA: a national survey. J Gen Intern Med. 2022;37(15):4015-4017. doi:10.1007/s11606-022-07517-1
54. Eggers W, Walsh S, Joergensen C, Kishnani P. Regulation that enables innovation. Deloitte Insights. March 23, 2023. Accessed November 10, 2023. https://www2.deloitte.com/us/en/insights/industry/public-sector/government-trends/2023/regulatory-agencies-and-innovation.html
55. McCradden MD, Joshi S, Anderson JA, Mazwi M, Goldenberg A, Zlotnik Shaul R. Patient safety and quality improvement: ethical principles for a regulatory approach to bias in healthcare machine learning. J Am Med Inform Assoc. 2020;27(12):2024-2027. doi:10.1093/jamia/ocaa085
2 Comments for this article
Fails to mention Medicare Advantage as driver of algorithm bias, other disparities
Linda Burke, PhD | Elmhurst University
This article fails to mention Medicare Advantage (MA) as a perpetrator of algorithm-based denials of care to vulnerable patients. (1)

For patients in MA plans, premature termination of inpatient skilled nursing care, against physicians’ advice, is so notorious that the federal government has agreed to regulate the practice beginning in 2024. (2, 3)

The trouble isn't limited to inpatient care. MA is not Medicare, but a private, for-profit insurance program. Profits are increased by diagnosis overcoding, extremely limited physician networks, prior authorizations, frequent denials of care compared to traditional Medicare, and limits on inpatient hospital stays and rehabilitation. (4, 5, 6) All of these practices disproportionately endanger minority and low-income patients and the financially vulnerable institutions that serve these populations. (3, 4, 6)

MA has increased, not lowered, the cost of senior healthcare. Compared to traditional Medicare, MA costs the government 6% more. (5)

“Guiding principles" to address the impact of algorithm bias in the context of healthcare disparities should never fail to address the role of MA as a major source of the problem. Mainstream organized medicine should follow the lead of physicians’ groups that are taking a stand against the very existence of Medicare Advantage and the privatization of Medicare. (6)

References
1. Chin MH, et al. Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Netw Open. Published online December 15, 2023. Accessed December 15, 2023. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2812958

2. Jaffe S. U.S. to rein in algorithms for Medicare Advantage coverage decisions. The Washington Post. Published online October 1, 2023. Accessed December 16, 2023. https://www.washingtonpost.com/health/2023/10/01/medicare-advantage-algorithm-changes/

3. Siddiqi Z. Humana is latest to face lawsuit for AI-based denials of Medicare Advantage claims in nursing homes. Skilled Nursing News. Published online December 13, 2023. Accessed December 16, 2023. https://skillednursingnews.com/2023/12/humana-is-latest-to-face-lawsuit-for-ai-based-denials-of-medicare-advantage-claims-in-nursing-homes/

4. Morgenson G. 'Deny, deny, deny': by rejecting claims, Medicare Advantage plans threaten rural hospitals and patients, CEOs say. NBC News. Published online October 31, 2023. Accessed November 28, 2023. www.nbcnews.com/health/rejecting-claims-medicare-advantage-rural-hospitals-rcna121012

5. Jacobson G, Blumenthal D. The predominance of Medicare Advantage. N Engl J Med. Published online December 14, 2023. Accessed December 16, 2023. https://www.nejm.org/doi/full/10.1056/NEJMhpr2302315

6. Physicians for a National Health Program. CMS should terminate the Medicare Advantage program. PNHP comments on CMS file code CMS-4203-NC: “Medicare Program; Request for Information on Medicare Advantage.” Published online August 22, 2023. Accessed August 28, 2023. https://pnhp.org/news/cms-should-terminate-the-medicare-advantage-program/

CONFLICT OF INTEREST: None Reported
Unveiling Challenges in AI for Healthcare and Safety Equity: Bridging Gaps for Inclusive Innovation
Ediriweera Desapriya, PhD, Crystal Ma BSc, Hasara Illuppalle, Dave Gunaratne, Ian Pike | Department of Pediatrics, Faculty of Medicine, BC Children's Hospital; University of British Columbia-Vancouver
The promotion of AI in healthcare to advance health care and safety equity is a commendable goal with the potential for transformative impact. However, as evidenced by recent studies and concerns highlighted in the healthcare and self-driving car contexts, the current models indeed face challenges and limitations (1, 2, 3). Acknowledging these challenges is crucial for ensuring that the integration of AI in healthcare aligns with the objectives of equity and safety:

Algorithmic Bias and Inefficiency:
Despite the push for AI to advance equity, algorithmic bias remains a persistent issue. The healthcare study emphasized biases in algorithms that can affect diagnosis, treatment eligibility, and resource allocation, potentially exacerbating existing health disparities (1).

In self-driving car studies, the failure of object-detection models to accurately identify individuals with darker skin tones raises concerns about the reliability and fairness of these systems (2, 3).

Underrepresentation of Vulnerable Groups:
Available studies highlight the challenge of underrepresented data, with the healthcare study noting the potential for bias when datasets lack diversity, and the self-driving car study revealing detection inaccuracies for specific demographic groups (1, 2, 3).

The underrepresentation of vulnerable populations poses a significant barrier to achieving equity through AI in healthcare and safety, as models may not adequately account for the diverse needs and characteristics of all individuals.

Transparency and Accountability:
The inefficiency of current AI models also lies in the lack of transparency and accountability. The healthcare study emphasizes the need for transparency in algorithm development, validation, and deployment phases, while the self-driving car study points out the non-availability of data for scrutiny.

Without transparency and accountability measures, it becomes challenging to identify and rectify biases, hindering the progress toward equitable healthcare and safety outcomes.

Empowerment and Inclusive Development:
While scholars and healthcare leaders advocate for AI to advance equity, the current limitations underscore the importance of empowering vulnerable populations and ensuring their active involvement in the development process.

Inclusive development practices, which include diverse perspectives and experiences, can contribute to more robust AI models that better serve the healthcare and safety needs of all individuals.

Call for Rigorous Testing and Innovation:
The identified inefficiencies call for a renewed commitment to rigorous testing, ongoing evaluation, and continuous innovation. Both healthcare and autonomous driving AI systems must undergo thorough assessments to identify and rectify biases and shortcomings.

Both the healthcare study and the self-driving car studies underscore the pervasive issue of algorithmic bias, emphasizing the role of biased data in perpetuating inequalities. The healthcare study emphasizes how biased algorithms, particularly those underrepresenting vulnerable groups, can result in discriminatory healthcare outcomes. Similarly, the self-driving car studies highlight the consequences of imbalanced datasets for the accuracy of object-detection models, particularly for darker-skinned individuals, including vulnerable child pedestrians. The common thread is the recognition that biased data contribute significantly to algorithmic discrimination.

While the vision of using AI to advance health care and safety equity is promising, it is essential to confront the current inefficiencies and challenges head-on. This requires a commitment to transparency, diversity in data representation, and active involvement of all stakeholders, including vulnerable populations, to ensure that AI in healthcare aligns with the principles of equity, safety, and inclusivity. The path forward involves iterative improvements, collaboration, and a dedication to the ethical and equitable deployment of AI technologies in critical sectors.

References
1. Chin MH, Afsar-Manesh N, Bierman AS, et al. Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Netw Open. 2023;6(12):e2345050. doi:10.1001/jamanetworkopen.2023.45050
2. Todd K. The problem of algorithmic bias in autonomous vehicles. Law and Mobility Program and the Journal of Law and Mobility, University of Michigan Law School. https://futurist.law.umich.edu/the-problem-of-algorithmic-bias-in-autonomous-vehicles/
3. King's College London. Driverless cars worse at detecting children and darker-skinned pedestrians, say scientists. https://www.kcl.ac.uk/news/driverless-cars-worse-at-detecting-children-and-darker-skinned-pedestrians-say-scientists
CONFLICT OF INTEREST: None Reported
Special Communication
Health Informatics
December 15, 2023

Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care

Author Affiliations
  • 1University of Chicago, Chicago, Illinois
  • 2Oracle Health, Austin, Texas
  • 3Agency for Healthcare Research and Quality, Rockville, Maryland
  • 4US Department of Health and Human Services Office of Minority Health, Rockville, Maryland
  • 5NORC at the University of Chicago, Bethesda, Maryland
  • 6National Institute on Minority Health and Health Disparities, Bethesda, Maryland
  • 7Association of American Medical Colleges, Washington, DC
  • 8Stanford University, Stanford, California
  • 9Equality AI, Park City, Utah
  • 10American Medical Association, Chicago, Illinois
  • 11Office of the National Coordinator for Health Information Technology, Washington, DC
  • 12Prudential Financial, Arlington, Virginia
  • 13NORC at the University of Chicago, Chicago, Illinois
  • 14Elevance Health, Indianapolis, Indiana
  • 15Yale School of Medicine, New Haven, Connecticut
JAMA Netw Open. 2023;6(12):e2345050. doi:10.1001/jamanetworkopen.2023.45050
Abstract

Importance  Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income.

Objective  To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity.

Evidence Review  The Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback.

Findings  The panel developed a conceptual framework to apply guiding principles across an algorithm’s life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms.

Conclusions and Relevance  Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.

Introduction

Health care algorithms, defined as mathematical models used to inform decision-making, are ubiquitous and may be used to improve health outcomes. However, algorithmic bias has harmed minoritized communities in housing, banking, and education, and health care is no different.1 Thus, addressing algorithmic bias is an urgent issue, as exemplified by a Biden Administration Executive Order stating that “agencies shall consider opportunities to prevent and remedy discrimination, including by protecting the public from algorithmic discrimination.”2

An unbiased algorithm is one that ensures patients who receive the same algorithm score or classification have the same basic needs.3 Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, triage, and resource allocation. A biased algorithm that used race to estimate kidney function resulted in higher estimates for Black patients compared with White patients, leading to delays in organ transplant referral for Black patients.4 A commercial algorithm that risk-stratified patients to determine eligibility for chronic disease management programs effectively required Black individuals to be sicker than White individuals to qualify for such services.5 Potentially biased algorithms have been developed for heart failure, cardiac surgery, kidney transplantation, vaginal birth after cesarean delivery, rectal cancer, and breast cancer, often affecting access to or eligibility for interventions or services, and resource allocation.4
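The chronic disease management example above reflects label-choice bias: the algorithm predicted health care cost, a proxy that understates the need of patients who face access barriers. The mechanism can be sketched with a minimal, hypothetical simulation (all numbers, the group labels, and the access-disparity factor are invented for illustration and are not taken from the cited study):

```python
import random

random.seed(0)

# Hypothetical simulation of label-choice bias: groups A and B have identical
# distributions of true health need, but group B accrues lower observed cost
# because of an assumed access disparity. Ranking by cost (the proxy) then
# under-selects group B for the program.
def simulate(n=10_000):
    patients = []
    for i in range(n):
        group = "A" if i < n // 2 else "B"
        need = random.gauss(50, 10)                # true health need
        access = 1.0 if group == "A" else 0.6      # assumed access disparity
        cost = need * access + random.gauss(0, 2)  # observed spending (proxy)
        patients.append((group, need, cost))
    return patients

patients = simulate()
k = len(patients) // 5  # enroll the top 20% "highest risk" patients

by_cost = sorted(patients, key=lambda p: -p[2])[:k]  # proxy-based selection
by_need = sorted(patients, key=lambda p: -p[1])[:k]  # need-based selection

share_b_cost = sum(p[0] == "B" for p in by_cost) / k
share_b_need = sum(p[0] == "B" for p in by_need) / k
print(f"Group B share when selecting by cost: {share_b_cost:.2f}")
print(f"Group B share when selecting by need: {share_b_need:.2f}")
```

Under these assumptions the cost-based selection nearly excludes group B even though both groups have the same underlying need, which is the qualitative pattern the study described.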

The Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) convened a panel to recommend core guiding principles for the development and use of clinical algorithms in health care, including data-driven, probability-based algorithms such as those using artificial intelligence and machine learning approaches. The panel’s core guiding principles also apply to rules-based approaches derived from data (eg, if acute myocardial infarction, give aspirin), since these rules may reflect the specific data sets and patient populations from which they were generated and the potential biases within.

The Council on Artificial Intelligence of the Organization for Economic Cooperation and Development defines an artificial intelligence system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems are designed to operate with varying levels of autonomy.”6 Machine learning is a subset of artificial intelligence that analyzes data using mathematical modeling to learn patterns that can make predictions or guide tasks.7 Traditional statistical regression techniques, often used in earlier risk prediction models, estimate relationships between predictors and outcomes. In contrast, machine learning models can “learn” by using mathematical techniques that infer relationships within large data sets to inform predictions.8
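The distinction between a hand-specified statistical relationship and a model that "learns" from data can be made concrete with a toy sketch (the data, threshold, and learning rate are invented for illustration and do not come from the article): a logistic model starts with no knowledge of the predictor-outcome relationship and infers it by repeatedly nudging its parameters to reduce prediction error.

```python
import math
import random

random.seed(1)

# Toy gradient-descent logistic regression: the model "learns" the
# relationship between one predictor and a binary outcome from examples.
def train_logistic(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            w += lr * (y - p) * x                 # gradient step on weight
            b += lr * (y - p)                     # gradient step on bias
    return w, b

# Synthetic data: the outcome becomes likely once the predictor exceeds ~2.0.
xs = [random.uniform(0, 4) for _ in range(500)]
data = [(x, 1 if x + random.gauss(0, 0.5) > 2.0 else 0) for x in xs]

w, b = train_logistic(data)
predict = lambda x: 1 / (1 + math.exp(-(w * x + b)))
print(f"P(outcome | x=0.5) ~ {predict(0.5):.2f}")
print(f"P(outcome | x=3.5) ~ {predict(3.5):.2f}")
```

The fitted model assigns low probability at small predictor values and high probability at large ones, a relationship it inferred entirely from the data set, which is also why any bias in that data set is inherited by the model.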

This article describes guiding principles for health care algorithms and key operational considerations. This work is not exhaustive because synergistic efforts, such as those of the Office of the National Coordinator for Health Information Technology (ONC), are ongoing.9,10 Algorithmic bias is neither inevitable nor merely a mechanical or technical issue. Conscious decisions by algorithm developers, algorithm users, health care industry leaders, and regulators can mitigate and prevent bias and proactively advance health equity.

Methods

The AHRQ received a congressional letter in fall 2020 inquiring about the contribution of clinical algorithms to racial and ethnic bias in health care. In response, the AHRQ published a request for information to elicit perspectives from public stakeholders on this topic and commissioned an evidence review to examine the impact of health care algorithms on health disparities and to identify potential solutions to mitigate biases.11 The subsequent evidence review underscored the limits of current knowledge and research about health care algorithms in the literature.

The AHRQ, the NIMHD, the US Department of Health and Human Services (HHS) Office of Minority Health, and the ONC collaboratively recruited 9 stakeholders with diverse backgrounds and expertise to serve on a panel to develop guiding principles to address racial and ethnic bias in health and health care resulting from algorithms. The panel heard from a group of national and international thought leaders involved in algorithm design, development, implementation, and oversight during a 2-day hybrid public meeting and received feedback on draft principles from patient and community representatives and the public during a subsequent virtual meeting.12 These perspectives were particularly important for the panel’s recommendations, given the limitations of the published literature. The panel’s work, including this article, was developed iteratively.

Results
Conceptual Framework for Mitigating and Preventing Bias in Health Care Algorithms

The conceptual framework to mitigate and prevent bias in health care algorithms (Figure) built on a National Academy of Medicine13 algorithm life cycle framework adapted by Roski et al.14 Within the context of structural racism and discrimination,15 the goal is to promote health and health care equity for patients and communities. An algorithm’s life cycle comprises 5 phases that typically occur sequentially.16 Problem formulation (phase 1) defines the problem that the algorithm is designed to address, relevant actors, and priority outcomes. Problem formulation is followed by selection and management of the data used by the algorithm (phase 2) and subsequent development, training, and validation of the algorithm (phase 3). The algorithm is deployed and integrated in its intended setting (phase 4). Mechanisms should monitor performance and outcomes and maintain, update, or deimplement the algorithm accordingly (phase 5).

Guiding principles apply at each phase to mitigate and prevent bias in an algorithm. Operationalization of principles takes place at 3 levels: individual (developers and users), institutional (organizational policies and procedures), and societal (legislation, regulation, and private policy).

Guiding Principles for Mitigating and Preventing Racial and Ethnic Bias in Health Care Algorithms

Tables 1 and 2 list the guiding principles and their operational considerations. Each principle is described below.

Guiding Principle 1: Promote Health and Health Care Equity During All Phases of the Health Care Algorithm Life Cycle

Advancing health equity should be a fundamental objective of any algorithm used in health care.7 The World Health Organization defines equity as the “absence of unfair, avoidable, or remediable differences among groups of people, whether those groups are defined socially, economically, demographically, or geographically or by other dimensions of inequality (e.g., sex, gender, ethnicity, disability, or sexual orientation).”17 Algorithms should be designed with goals of advancing health equity, promoting fairness, and reducing health disparities.

Formulating the problem appropriately is critical (phase 1), and improving health and health care equity for patients and communities should be central.3 During the data selection, assessment, and management phase of the algorithm life cycle (phase 2), data used for algorithm development should be assessed for biases, accuracy, fitness for the intended purpose, and representativeness of the intended population. Engagement of key diverse stakeholders—which includes communities—during problem formulation (phase 1) and data selection (phase 2) is critical to avoid knowledge gaps. Any issues identified should be documented, and corrective actions should be taken before moving to algorithm development, training, and validation (phase 3).

It is critical to use rigorous methods, wise human judgment, and checks and balances in algorithm development to mitigate and prevent bias and ensure that conclusions are accurate, robust, and reproducible.24 Compared with traditional statistical techniques, in which statisticians have more manual control over the analyses, artificial intelligence models can be more opaque and more difficult to interpret. They risk being overfitted to the data at hand, threatening generalizability. Artificial intelligence models sometimes lack common sense and are more difficult to audit. Thus, rigorous methods and processes are essential for algorithm development.25-29

Algorithms should be validated across populations to ensure fairness in performance. After an algorithm is deployed, continuous monitoring for performance and data drift is necessary. Monitoring should assess the fairness and equity of the algorithm output as well as the impact of the algorithm on patients, populations, and society, including data privacy and resource allocation. Measurement and comparison of outcomes between advantaged and historically marginalized populations such as racial and ethnic minoritized groups or individuals with lower income should be assessed routinely by health care systems, algorithm vendors, and the research community and supported by research sponsors (eg, funders, scientific journals). Algorithm end users should supplement model outputs with human judgment. Furthermore, access to information technology for all should be ensured.
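
To make the idea of continuous post-deployment monitoring concrete, a common drift check compares the score distribution a model produced on its training data with the distribution it produces in production. The sketch below is a minimal, hypothetical illustration using the population stability index; the bin count and the conventional reading of values above roughly 0.2 as substantial drift are illustrative assumptions, not recommendations from the panel.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Quantify drift between the score distribution at training time
    (expected) and at deployment (actual); values above roughly 0.2
    are conventionally read as substantial drift."""
    lo, hi = min(expected), max(expected)

    def proportions(values):
        counts = [0] * bins
        for v in values:
            # Clamp into the training-time range, then assign a bin.
            idx = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
            counts[idx] += 1
        # A small floor avoids log(0) when a bin is empty.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Such a check would be run routinely (eg, monthly) for each demographic group as well as overall, since drift can be concentrated in a single subpopulation.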

Guiding Principle 2: Ensure Health Care Algorithms and Their Use Are Transparent and Explainable

Algorithm developers, health care institutions, algorithm users, and regulators are responsible for ensuring that algorithms are transparent, easy to explain, and readily interpretable at all steps in the algorithm life cycle for diverse audiences.30,31 The HHS states that “all relevant individuals should understand how their data is being used and how AI systems make decisions; algorithms, attributes and correlations should be open to inspection.”20 Development of transparent and explainable algorithms requires algorithm developers and stewards to present evidence for impact on processes and outcomes and to provide understandable and accurate explanations to clinicians and patients to enable informed decision-making.32 In addition, an algorithm should only operate under the conditions for which it was designed, and outputs should only be used when there is confidence in the results.31

Transparency includes multiple domains, such as availability of technical information, algorithm oversight, and communication of impact to stakeholders.20,31,32 Algorithm developers should create profiles of the data used to train the algorithm, describing distribution of key aspects of the population in the data set (eg, race and ethnicity, gender, socioeconomic status, and age); they should also make data exploration analysis readily available for independent review. Algorithm developers should disclose types, sizes, and overall distributions in data sets used in their formulation, testing, and validation. Regulation should require algorithm information labels or model cards sufficient to assess design, validity, and the presence of bias.10,21 Implementers should disclose the purpose of algorithms and their impact. If biases have been identified in an algorithm, the developers, implementers, and users should disclose such biases. Any bias mitigation attempts should also be disclosed to all with a stake in the algorithm, including patients, caregivers, and communities. A structured reporting process could identify signals of emerging problems both locally and nationally and facilitate addressing such problems systematically.
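
As one concrete form such a data profile could take, the sketch below tallies the distribution of selected population attributes in a training set; the record layout and attribute names (eg, race_ethnicity) are hypothetical placeholders rather than a standard endorsed here.

```python
from collections import Counter

def data_profile(records, attributes):
    """Summarize how key population attributes are distributed in a
    training data set, as raw material for a model card or data label."""
    n = len(records)
    profile = {"n_records": n}
    for attr in attributes:
        counts = Counter(r.get(attr, "missing") for r in records)
        # Report proportions so under- and overrepresentation is visible.
        profile[attr] = {value: round(count / n, 3) for value, count in counts.items()}
    return profile
```

Explicitly counting a "missing" category matters, because absent demographic data can itself signal representativeness problems.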

Several reporting guidelines promote transparency of research examining algorithms.33 However, these guidelines do not include concrete ways to report on fairness, and they rarely make explicit mention of equity.34 Reporting guidelines for algorithms should therefore be updated with specific equity approaches as has been done for observational studies and randomized clinical trials.35,36

Guiding Principle 3: Authentically Engage Patients and Communities During All Phases of the Health Care Algorithm Life Cycle, and Earn Trustworthiness

Authentically engaging and partnering with patients and communities is essential to understand both a problem affecting them and its solutions.22 Moreover, it is an ethical imperative to engage with patients and communities around health care algorithms and earn their trust, as these tools can provide great benefit or harm. Patients and communities, including populations who have been historically marginalized, should be engaged authentically and ethically when identifying and assessing a problem that requires use of an algorithm as part of its solution and during algorithm data selection, development, deployment, and monitoring.

Early and intentional engagement can help identify priorities of patients and communities and any concerns they have regarding algorithm use.37 All patients and communities should be informed when an algorithm is used in their care, should be advised about the impact of the algorithm on their treatment, and should be provided alternatives if appropriate.38 They should know how the algorithm performs for their demographic group compared with other groups and be made aware of any opportunities to opt out of algorithms or to pursue alternatives to algorithm-driven decisions.

Algorithms should be bound by concepts of data sovereignty, the idea that data are subject to legal regulations of countries, nations, or states. Sovereignty is of particular importance to Indigenous nations.39 Health care organizations, vendors, and other model developers earn trustworthiness through authenticity, ethical and transparent practices, security and privacy of data, and timely disclosures of algorithm use.

Guiding Principle 4: Explicitly Identify Health Care Algorithmic Fairness Issues and Trade-offs

The panel recommends that advancing health and health care equity for patients and communities should be the goal of health care algorithms. Advancing health equity requires expertise in algorithmic fairness—the field of identifying, understanding, and mitigating bias.40-42 Health care algorithmic fairness issues arise from both ethical choices and technical decisions at different phases of the algorithm life cycle.16,43 For example, fundamental ethical choices can arise during problem formulation (phase 1; eg, Is the goal of the algorithm to improve and advance equitable outcomes or is the primary goal to maximize profit?). Additionally, if a particular algorithm use involves choosing a cutoff point for action during model development and implementation, should that cutoff be chosen to maximize sensitivity of the tool to identify someone who might benefit from an intervention, or should it be chosen to maximize specificity of the tool so that patients for whom the intervention is inappropriate are not exposed to unnecessary risk? Trade-offs among competing fairness metrics and values are common. Different technical definitions of algorithmic fairness, such as sufficiency, separation, and independence, are mathematically mutually incompatible in general, forcing trade-offs between maximizing an algorithm’s accuracy and minimizing differences among groups.44 It is critical to make health care algorithm fairness issues and trade-offs explicit, transparent, and explainable. Thus, solutions to advance health equity with health care algorithms require ethical, technical, and social approaches—there is no simple cookie-cutter technical solution.43,45
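
A small numeric sketch can make the incompatibility concrete. Independence compares selection rates, separation compares true- and false-positive rates, and sufficiency compares positive predictive value; when base rates differ between groups, equalizing one criterion generally forces inequality in another. The confusion-matrix counts below are hypothetical:

```python
def group_metrics(tp, fp, fn, tn):
    """Per-group quantities underlying three common fairness criteria:
    independence (selection rate), separation (TPR and FPR), and
    sufficiency (positive predictive value)."""
    total = tp + fp + fn + tn
    return {
        "selection_rate": (tp + fp) / total,
        "tpr": tp / (tp + fn),
        "fpr": fp / (fp + tn),
        "ppv": tp / (tp + fp),
    }

# Hypothetical counts: group A has a 50% base rate, group B a 20% base rate.
metrics_a = group_metrics(tp=40, fp=10, fn=10, tn=40)
metrics_b = group_metrics(tp=16, fp=16, fn=4, tn=64)
# Both groups have TPR 0.8 and FPR 0.2 (separation holds), yet PPV is
# 0.8 vs 0.5 and selection rate is 0.50 vs 0.32: sufficiency and
# independence fail, illustrating the trade-off.
```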

Technical methods for improving fairness in algorithms can be divided into stages of modeling: preprocessing (eg, repair a biased data set), in-processing (eg, use fairness metrics in the model optimization process to maximize accuracy and fairness), and postprocessing (eg, transform model output to improve prediction fairness).46,47 Key issues for fairness metrics include prioritization of fairness for groups or individuals, binary classification (eg, qualifies for a service or not) vs continuous classification (eg, regression output), and use of regularization methods (fairness metrics to balance accuracy and fairness), reweighting methods (weight samples from underrepresented groups more highly), or both.48 Of note, technical definitions and metrics of fairness often do not translate clearly or intuitively to ethical, legal, social, and economic conceptions of fairness.46,47 Thus, close collaboration and discussion are essential among stakeholders, including algorithm developers, algorithm users, and the communities to whom the algorithm will be applied.
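
As a concrete preprocessing-stage illustration, the reweighting idea can be sketched with the classic "reweighing" scheme of Kamiran and Calders, which weights each sample by P(group) × P(label) / P(group, label) so that group membership and the outcome label appear statistically independent to the learner. This is a minimal sketch, not a method prescribed by the panel:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Preprocessing fairness sketch: samples from (group, label)
    combinations that are overrepresented relative to independence are
    down-weighted, and underrepresented combinations are up-weighted."""
    n = len(groups)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]
```

The resulting weights can then be passed to most learners (eg, through a sample_weight argument) and preserve the effective sample size.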

We recommend considering fairness of algorithms through the lens of distributive justice, the socially just distribution of outcomes and allocation of resources across different populations.49 Distributive justice metrics include clinical outcomes, resource allocation, and performance measures of algorithms (eg, sensitivity, specificity, and positive predictive value).8,43,50 When unfairness is identified, bias should be mitigated using both social (eg, diverse teams and stakeholder co-development) and technical (eg, algorithmic fairness toolkits, fairness metrics, data set collection, and deimplementation) mitigation methods.51 Algorithms and accompanying policies and regulations should also be viewed through frames of equity of harms and risks and explicit identification of trade-offs among different competing values and options.41-43 Algorithms with a higher risk of substantial harm and injustice should have stricter internal oversight by organizations and more stringent external regulation.20
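
A routine distributive-justice screen could be as simple as comparing per-group performance measures (eg, sensitivity, specificity, positive predictive value) and flagging any gap beyond a tolerance. The function and the 0.05 tolerance below are hypothetical illustrations, not thresholds endorsed by the panel:

```python
def flag_disparities(metrics_by_group, tolerance=0.05):
    """Flag each performance metric whose gap between the best- and
    worst-served group exceeds the tolerance. Input maps group name to
    a dict of metric name -> value."""
    flags = []
    metric_names = next(iter(metrics_by_group.values())).keys()
    for name in metric_names:
        values = [group[name] for group in metrics_by_group.values()]
        gap = max(values) - min(values)
        if gap > tolerance:
            flags.append((name, round(gap, 3)))
    return flags
```

Flagged metrics are a trigger for the social and technical mitigation steps described above, not a verdict on their own.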

Guiding Principle 5: Establish Accountability for Equity and Fairness in Outcomes From Health Care Algorithms

Model developers and users, including vendors, health care organizations, researchers, and professional societies, should accept responsibility to achieve equity and fairness in outcomes from health care algorithms and be accountable for the performance of algorithms in different populations. Institutions such as vendors and health care provider organizations should establish processes at each phase of the algorithm life cycle to promote equity and fairness. Transparency in the types of training data, processes, and evaluations used is paramount. For example, an academic medical center recently published its framework for oversight and deployment of prediction models, which includes checkpoint gates and an oversight governance structure.52 Current evidence suggests that such governance infrastructure is rare.53

Organizations should maintain an inventory of their algorithms and conduct local, periodic evaluations and processes that screen for and mitigate bias. It is crucial for organizations to engage stakeholders throughout the entire algorithm life cycle to ensure fairness and promote trust. This means including model developers, end users, health care administrators, clinicians, patient advocates, and community representatives. Different organizations and experts have recommended various accountability metrics and oversight structures.3

Regulations and incentives should support equity and fairness while also promoting innovation.54 There should be redress for persons and communities who have been harmed by biased algorithms. An ethical, legal, social, and administrative framework and culture should be created that redresses harm while encouraging quality improvement, collaboration, and transparency, similar to what is recommended for patient safety.55

Conclusions

ChatGPT and other artificial intelligence language models have spurred widespread public interest in the potential value and dangers of algorithms. Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithm bias in health care.47 Dedicated resources and the support of leaders and the public are critical for successful reform. It is our obligation to avoid repeating the errors that tainted the use of algorithms in other fields.

Article Information

Accepted for Publication: October 5, 2023.

Published: December 15, 2023. doi:10.1001/jamanetworkopen.2023.45050

Open Access: This is an open access article distributed under the terms of the CC-BY-NC-ND License. © 2023 Chin MH et al. JAMA Network Open.

Corresponding Authors: Marshall H. Chin, MD, MPH, University of Chicago, 5841 S. Maryland Ave, MC2007, Chicago, IL 60637 (mchin@bsd.uchicago.edu); Lucila Ohno-Machado, MD, PhD, MBA, Yale School of Medicine, 333 Cedar St, New Haven, CT 06510 (lucila.ohno-machado@yale.edu).

Author Contributions: Dr Chin had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Chin, Afsar-Manesh, Bierman, Chang, Colón-Rodríguez, Duran, Fair, Hernandez-Boussard, Hightower, Jain, Jordan, Konya, R.H. Moore, Rodriguez, Shaheen, Srinivasan, Umscheid, Ohno-Machado.

Acquisition, analysis, or interpretation of data: Chin, Bierman, Chang, Dullabh, Duran, Hernandez-Boussard, Hightower, Jain, Jordan, Konya, R.H. Moore, T.T. Moore, Rodriguez, Snyder, Srinivasan, Umscheid, Ohno-Machado.

Drafting of the manuscript: Chin, Bierman, Dullabh, Hernandez-Boussard, Hightower, Jordan, T.T. Moore, Rodriguez, Shaheen, Snyder, Srinivasan, Ohno-Machado.

Critical review of the manuscript for important intellectual content: Chin, Afsar-Manesh, Bierman, Chang, Colón-Rodríguez, Duran, Fair, Hernandez-Boussard, Hightower, Jain, Jordan, Konya, R.H. Moore, T.T. Moore, Shaheen, Umscheid, Ohno-Machado.

Obtained funding: Bierman, Chang, Dullabh, Duran, Jain.

Administrative, technical, or material support: Chin, Bierman, Chang, Colón-Rodríguez, Duran, Fair, Jain, Jordan, Konya, R.H. Moore, T.T. Moore, Rodriguez, Shaheen, Snyder, Srinivasan, Umscheid, Ohno-Machado.

Supervision: Duran, Umscheid.

Conflict of Interest Disclosures: Dr Chin reported receiving grants or contracts from the Agency for Healthcare Research and Quality (AHRQ), the California Health Care Foundation, the Health Resources and Services Administration, Kaiser Foundation Health Plan Inc, the Merck Foundation, the Patient-Centered Outcomes Research Institute, and the Robert Wood Johnson Foundation; and receiving personal fees for advisory board service from the US Centers for Medicare & Medicaid Services, Bristol Myers Squibb Company, Blue Cross Blue Shield, the US Centers for Disease Control and Prevention, and the American College of Physicians outside the submitted work. Dr Chin also reported serving as a member of the Families USA Equity and Value Task Force Advisory Council, the Essential Hospitals Institute Innovation Committee, the Institute for Healthcare Improvement and American Medical Association National Initiative for Health Equity Steering Committee for Measurement, the National Committee for Quality Assurance Expert Work Group (on the role of social determinants of health data in health care quality measurement), and The Joint Commission Health Care Equity Certification Technical Advisory Panel outside the submitted work. Dr Chin also reported being a member of the National Advisory Council of the National Institute on Minority Health and Health Disparities (NIMHD), the Health Disparities and Health Equity Working Group of the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Academy of Medicine Council. Finally, Dr Chin reported receiving honoraria from the Oregon Health Authority and the Pittsburgh Regional Health Initiative and meeting and travel support from America’s Health Insurance Plans outside the submitted work. Dr Afsar-Manesh reported being employed by and holding equity in Oracle outside the submitted work, and their spouse is employed by and holds equity in Amgen. 
Dr Dullabh reported receiving grants from the AHRQ during the conduct of the study. Dr Hightower reported serving as cofounder and chief executive officer of Equality AI outside the submitted work. Dr Jordan reported engaging with this article as part of regular work duties as an employee of the American Medical Association. Dr Rodriguez reported receiving grants from the AHRQ during the conduct of the study. Dr Snyder reported receiving a federal contract from the US Department of Health and Human Services (to NORC at the University of Chicago) during the conduct of the study. Dr Srinivasan reported funding for this work under a contract (to NORC at the University of Chicago) from the AHRQ during the conduct of the study. No other disclosures were reported.

Funding/Support: This work was supported by funding from the AHRQ and the NIMHD. Dr Chin was supported, in part, by grant P30DK092949 from the National Institute of Diabetes and Digestive and Kidney Diseases to the Chicago Center for Diabetes Translation Research. Dr Hernandez-Boussard was supported, in part, by grant UL1TR003142 from the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH). Dr Ohno-Machado was supported, in part, by grants U54HG012510 and RM1HG011558 from the NIH.

Role of the Funder/Sponsor: Coauthors from the Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities participated in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The findings and conclusions in this document are those of the authors, who are responsible for its content, and do not necessarily represent the views of the Agency for Healthcare Research and Quality, the Office of the National Coordinator for Health Information Technology, the Office of Minority Health, the National Institute on Minority Health and Health Disparities, the National Institutes of Health, or the US Department of Health and Human Services. No statement in this report should be construed as an official position of the Agency for Healthcare Research and Quality, the Office of the National Coordinator for Health Information Technology, the Office of Minority Health, the National Institute on Minority Health and Health Disparities, the National Institutes of Health, or the US Department of Health and Human Services. The thoughts and ideas expressed in this article are those of the authors and do not necessarily represent official American Medical Association policy. The thoughts and ideas expressed in this article are those of the authors and do not necessarily represent the views or policies of their employers or other organizations associated with the authors.

Meeting Presentation: This work was presented, in part, at an Agency for Healthcare Research and Quality virtual meeting (Opportunity for Feedback: Principles to Address the Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare); May 15, 2023; and at a symposium (Reconsidering Race in Clinical Algorithms: Driving Equity Through New Models in Research and Implementation) organized by the Doris Duke Foundation in partnership with the Gordon and Betty Moore Foundation, the Council of Medical Specialty Societies, and the National Academy of Medicine; June 27, 2023; Washington, DC.

References
1.
O’Neil  C.  Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown; 2016.
2.
The White House. Executive Order on further advancing racial equity and support for underserved communities through the federal government. Updated February 16, 2023. Accessed August 31, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/02/16/executive-order-on-further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/
3.
Obermeyer  Z, Nissan  R, Stern  M, Eaneff  S, Bembeneck  EJ, Mullainathan  S.  Algorithmic Bias Playbook. Chicago Booth Center for Applied Artificial Intelligence; 2021. Accessed November 10, 2023. https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias/playbook
4.
Vyas  DA, Eisenstein  LG, Jones  DS.  Hidden in plain sight—reconsidering the use of race correction in clinical algorithms.   N Engl J Med. 2020;383(9):874-882. doi:10.1056/NEJMms2004740
5.
Obermeyer  Z, Powers  B, Vogeli  C, Mullainathan  S.  Dissecting racial bias in an algorithm used to manage the health of populations.   Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
6.
Organization for Economic Cooperation and Development. Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. 2019. Accessed November 10, 2023. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#mainText
7.
World Health Organization.  Ethics and Governance of Artificial Intelligence for Health. World Health Organization; 2021. Accessed November 10, 2023. https://www.who.int/publications/i/item/9789240029200
8.
Makhni  S, Chin  MH, Fahrenbach  J, Rojas  JC.  Equity challenges for artificial intelligence algorithms in health care.   Chest. 2022;161(5):1343-1346. doi:10.1016/j.chest.2022.01.009
9.
Marchesini  K, Smith  J, Everson  J. Increasing the transparency and trustworthiness of AI in health care. HealthITbuzz blog. April 13, 2023. Accessed August 31, 2023. https://www.healthit.gov/buzz-blog/health-innovation/transparent-and-trustworthy-ai-in-health-care
10.
Office of the National Coordinator for Health Information Technology. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing (HTI-1) proposed rule. HealthIT.gov. June 22, 2023. Accessed August 31, 2023. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program
11.
Jain  A, Brooks  JR, Alford  CC,  et al.  Awareness of racial and ethnic bias and potential solutions to address bias with use of health care algorithms.   JAMA Health Forum. 2023;4(6):e231197. doi:10.1001/jamahealthforum.2023.1197
12.
Agency for Healthcare Research and Quality. Meetings examine impact of healthcare algorithms on racial and ethnic disparities in health and healthcare. Accessed August 31, 2023. https://effectivehealthcare.ahrq.gov/news/meetings
13.
Matheny  M, Thadaney  IS, Ahmed  M, Whicher  D, eds.  Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. National Academy of Medicine; 2022. Accessed November 10, 2023. https://nam.edu/artificial-intelligence-special-publication/
14.
Roski  J, Maier  EJ, Vigilante  K, Kane  EA, Matheny  ME.  Enhancing trust in AI through industry self-governance.   J Am Med Inform Assoc. 2021;28(7):1582-1590. doi:10.1093/jamia/ocab065
15.
Bailey  ZD, Krieger  N, Agénor  M, Graves  J, Linos  N, Bassett  MT.  Structural racism and health inequities in the USA: evidence and interventions.   Lancet. 2017;389(10077):1453-1463. doi:10.1016/S0140-6736(17)30569-X
16.
Ng  MY, Kapur  S, Blizinsky  KD, Hernandez-Boussard  T.  The AI life cycle: a holistic approach to creating ethical AI for health decisions.   Nat Med. 2022;28(11):2247-2249. doi:10.1038/s41591-022-01993-y
17.
World Health Organization. Health equity. Accessed August 31, 2023. https://www.who.int/health-topics/health-equity#tab=tab_1
18.
Braveman  P, Arkin  E, Orleans  T, Proctor  D, Plough  A. What is health equity? Robert Wood Johnson Foundation. May 1, 2017. Accessed November 10, 2023. https://www.rwjf.org/en/insights/our-research/2017/05/what-is-health-equity-.html
19.
Health Care Payment Learning & Action Network. Advancing health equity through APMs: guidance for equity-centered design and implementation. 2021. Accessed November 10, 2023. http://hcp-lan.org/workproducts/APM-Guidance/Advancing-Health-Equity-Through-APMs.pdf
20.
US Department of Health and Human Services.  Trustworthy AI (TAI) Playbook. US Department of Health and Human Services; 2021. Accessed November 10, 2023. https://www.hhs.gov/sites/default/files/hhs-trustworthy-ai-playbook.pdf
21.
Zuckerman  BL, Karabin  JM, Parker  RA, Doane  WEJ, Williams  SR.  Options and Opportunities to Address and Mitigate the Existing and Potential Risks, as well as Promote Benefits, Associated With AI and Other Advanced Analytic Methods. OPRE Report 2022-253. US Department of Health and Human Services Office of Planning, Research, and Evaluation, Administration for Children and Families; 2022. Accessed November 10, 2023. https://www.acf.hhs.gov/opre/report/options-opportunities-address-mitigate-existing-potential-risks-promote-benefits
22.
Gonzalez  R. The spectrum of community engagement to ownership. Movement Strategy Center. 2019. Accessed November 10, 2023. https://movementstrategy.org/resources/the-spectrum-of-community-engagement-to-ownership/
23.
Loi  M, Heitz  C, Christen  M. A comparative assessment and synthesis of twenty ethics codes on AI and big data. In:  Proceedings of the 2020 7th Swiss Conference on Data Science (SDS). IEEE; 2020:41-46. doi:10.1109/SDS49233.2020.00015
24.
Hunter  DJ, Holmes  C.  Where medical statistics meets artificial intelligence.   N Engl J Med. 2023;389(13):1211-1219. doi:10.1056/NEJMra2212850
25.
Parikh  RB, Obermeyer  Z, Navathe  AS.  Regulation of predictive analytics in medicine.   Science. 2019;363(6429):810-812. doi:10.1126/science.aaw0029
26.
Vollmer  S, Mateen  BA, Bohner  G,  et al.  Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness.   BMJ. 2020;368:l6927. doi:10.1136/bmj.l6927
27.
Norgeot  B, Quer  G, Beaulieu-Jones  BK,  et al.  Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist.   Nat Med. 2020;26(9):1320-1324. doi:10.1038/s41591-020-1041-y
28.
Parikh  RB, Teeple  S, Navathe  AS.  Addressing bias in artificial intelligence in health care.   JAMA. 2019;322(24):2377-2378. doi:10.1001/jama.2019.18058
29.
Chen  IY, Pierson  E, Rose  S, Joshi  S, Ferryman  K, Ghassemi  M.  Ethical machine learning in healthcare.   Annu Rev Biomed Data Sci. 2021;4:123-144. doi:10.1146/annurev-biodatasci-092820-114757
30.
Linardatos  P, Papastefanopoulos  V, Kotsiantis  S.  Explainable AI: a review of machine learning interpretability methods.   Entropy (Basel). 2020;23(1):18. doi:10.3390/e23010018
31.
Phillips  PJ, Hahn  CA, Fontana  PC,  et al.  Four Principles of Explainable Artificial Intelligence. NIST Interagency/Internal Report (NISTIR). National Institute of Standards and Technology; 2021.
32.
Vasse’i  RM, McCrosky  J.  AI Transparency in Practice. Mozilla; 2023. Accessed August 31, 2023. https://foundation.mozilla.org/en/research/library/ai-transparency-in-practice/ai-transparency-in-practice/
33.
CONSORT-AI and SPIRIT-AI Steering Group.  Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed.   Nat Med. 2019;25(10):1467-1468. doi:10.1038/s41591-019-0603-3
34.
Wawira Gichoya  J, McCoy  LG, Celi  LA, Ghassemi  M.  Equity in essence: a call for operationalising fairness in machine learning for healthcare.   BMJ Health Care Inform. 2021;28(1):e100289. doi:10.1136/bmjhci-2020-100289
35.
Antequera  A, Lawson  DO, Noorduyn  SG,  et al.  Improving social justice in COVID-19 health research: interim guidelines for reporting health equity in observational studies.   Int J Environ Res Public Health. 2021;18(17):9357. doi:10.3390/ijerph18179357
36.
Welch  VA, Norheim  OF, Jull  J, Cookson  R, Sommerfelt  H, Tugwell  P; CONSORT-Equity and Boston Equity Symposium.  CONSORT-Equity 2017 extension and elaboration for better reporting of health equity in randomised trials.   BMJ. 2017;359:j5085. doi:10.1136/bmj.j5085
37.
Joosten  YA, Israel  TL, Williams  NA,  et al.  Community engagement studios: a structured approach to obtaining meaningful input from stakeholders to inform research.   Acad Med. 2015;90(12):1646-1650. doi:10.1097/ACM.0000000000000794
38.
The White House. Fact sheet: Biden-Harris administration announces key actions to advance tech accountability and protect the rights of the American public. October 4, 2022. Accessed November 10, 2023. https://www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-actions-to-advance-tech-accountability-and-protect-the-rights-of-the-american-public/
39.
National Congress of American Indians. Tribal nations & the United States: an introduction. Accessed November 27, 2023. https://archive.ncai.org/about-tribes
40.
Drukker  K, Chen  W, Gichoya  J,  et al.  Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment.   J Med Imaging (Bellingham). 2023;10(6):061104. doi:10.1117/1.JMI.10.6.061104
41.
Xu  J, Xiao  Y, Wang  WH,  et al.  Algorithmic fairness in computational medicine.   EBioMedicine. 2022;84:104250. doi:10.1016/j.ebiom.2022.104250
42.
Mehrabi  N, Morstatter  F, Saxena  N, Lerman  K, Galstyan  A.  A survey on bias and fairness in machine learning.   ACM Comput Surv. 2021;54(6):1-35. doi:10.1145/3457607
43.
Rajkomar  A, Hardt  M, Howell  MD, Corrado  G, Chin  MH.  Ensuring fairness in machine learning to advance health equity.   Ann Intern Med. 2018;169(12):866-872. doi:10.7326/M18-1990
44.
Weinkauf  D. When worlds collide—the possibilities and limits of algorithmic fairness (part 1). Privacy Tech-Know blog. April 5, 2023. Accessed August 31, 2023. https://www.priv.gc.ca/en/blog/20230405_01/
45.
Pfeiffer  J, Gutschow  J, Haas  C,  et al.  Algorithmic fairness in AI.   Bus Inf Syst Eng. 2023;65:209-222. doi:10.1007/s12599-023-00787-x Google ScholarCrossref
46.
Caton  S, Haas  C.  Fairness in machine learning: a survey.   arXiv. Preprint posted online October 4, 2020. doi:10.48550/arXiv.2010.04053Google Scholar
47.
Cary  MP  Jr, Zink  A, Wei  S,  et al.  Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: a scoping review.   Health Aff (Millwood). 2023;42(10):1359-1368. doi:10.1377/hlthaff.2023.00553 PubMedGoogle ScholarCrossref
48.
Jung  S, Park  T, Chun  S, Moon  T.  Re-weighting based group fairness regularization via classwise robust optimization.   arXiv. Preprint posted online March 1, 2023. doi:10.48550/arXiv.2303.00442Google Scholar
49.
Daniels  N.  Justice, health, and healthcare.   Am J Bioeth. 2001;1(2):2-16. doi:10.1162/152651601300168834 PubMedGoogle ScholarCrossref
50.
Rojas  JC, Fahrenbach  J, Makhni  S,  et al.  Framework for integrating equity into machine learning models: a case study.   Chest. 2022;161(6):1621-1627. doi:10.1016/j.chest.2022.02.001 PubMedGoogle ScholarCrossref
51.
Weinkauf  D. When worlds collide—the possibilities and limits of algorithmic fairness (part 2). Privacy Tech-Know blog. Office of the Privacy Commissioner of Canada. April 5, 2023. Accessed August 31, 2023. https://www.priv.gc.ca/en/blog/20230405_02/
52.
Bedoya  AD, Economou-Zavlanos  NJ, Goldstein  BA,  et al.  A framework for the oversight and local deployment of safe and high-quality prediction models.   J Am Med Inform Assoc. 2022;29(9):1631-1636. doi:10.1093/jamia/ocac078 PubMedGoogle ScholarCrossref
53.
Rojas  JC, Rohweder  G, Guptill  J, Arora  VM, Umscheid  CA.  Predictive analytics programs at large healthcare systems in the USA: a national survey.   J Gen Intern Med. 2022;37(15):4015-4017. doi:10.1007/s11606-022-07517-1 PubMedGoogle ScholarCrossref
54.
Eggers  W, Walsh  S, Joergensen  C, Kishnani  P. Regulation that enables innovation. Deloitte Insights. March 23, 2023. Accessed November 10, 2023. https://www2.deloitte.com/us/en/insights/industry/public-sector/government-trends/2023/regulatory-agencies-and-innovation.html
55.
McCradden  MD, Joshi  S, Anderson  JA, Mazwi  M, Goldenberg  A, Zlotnik Shaul  R.  Patient safety and quality improvement: ethical principles for a regulatory approach to bias in healthcare machine learning.   J Am Med Inform Assoc. 2020;27(12):2024-2027. doi:10.1093/jamia/ocaa085 PubMedGoogle ScholarCrossref