Importance
Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income.
Objective
To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity.
Evidence Review
The Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback.
Findings
The panel developed a conceptual framework to apply guiding principles across an algorithm’s life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms.
Conclusions and Relevance
Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.
Health care algorithms, defined as mathematical models used to inform decision-making, are ubiquitous and may be used to improve health outcomes. However, algorithmic bias has harmed minoritized communities in housing, banking, and education, and health care is no different.1 Thus, addressing algorithmic bias is an urgent issue, as exemplified by a Biden Administration Executive Order stating that “agencies shall consider opportunities to prevent and remedy discrimination, including by protecting the public from algorithmic discrimination.”2
An unbiased algorithm is one that ensures patients who receive the same algorithm score or classification have the same basic needs.3 Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, triage, and resource allocation. A biased algorithm that used race to estimate kidney function resulted in higher estimates for Black patients compared with White patients, leading to delays in organ transplant referral for Black patients.4 A commercial algorithm that risk-stratified patients to determine eligibility for chronic disease management programs effectively required Black individuals to be sicker than White individuals to qualify for such services.5 Potentially biased algorithms have been developed for heart failure, cardiac surgery, kidney transplantation, vaginal birth after cesarean delivery, rectal cancer, and breast cancer, often affecting access to or eligibility for interventions or services, and resource allocation.4
The Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) convened a panel to recommend core guiding principles for the development and use of clinical algorithms in health care, including data-driven, probability-based algorithms such as those using artificial intelligence and machine learning approaches. The panel’s core guiding principles also apply to rules-based approaches derived from data (eg, if acute myocardial infarction, give aspirin), since these rules may reflect the specific data sets and patient populations from which they were generated and the potential biases within.
The Council on Artificial Intelligence of the Organization for Economic Cooperation and Development defines an artificial intelligence system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems are designed to operate with varying levels of autonomy.”6 Machine learning is a subset of artificial intelligence that analyzes data using mathematical modeling to learn patterns that can make predictions or guide tasks.7 Traditional statistical regression techniques, often used in earlier risk prediction models, estimate relationships between predictors and outcomes. In contrast, machine learning models can “learn” by using mathematical techniques that infer relationships within large data sets to inform predictions.8
This article describes guiding principles for health care algorithms and key operational considerations. This work is not exhaustive because synergistic efforts, such as those of the Office of the National Coordinator for Health Information Technology (ONC), are ongoing.9,10 Algorithmic bias is neither inevitable nor merely a mechanical or technical issue. Conscious decisions by algorithm developers, algorithm users, health care industry leaders, and regulators can mitigate and prevent bias and proactively advance health equity.
The AHRQ received a congressional letter in fall 2020 inquiring about the contribution of clinical algorithms to racial and ethnic bias in health care. In response, the AHRQ published a request for information to elicit perspectives from public stakeholders on this topic and commissioned an evidence review to examine the impact of health care algorithms on health disparities and to identify potential solutions to mitigate biases.11 The subsequent evidence review underscored the limits of current knowledge and research about health care algorithms in the literature.
The AHRQ, the NIMHD, the US Department of Health and Human Services (HHS) Office of Minority Health, and the ONC collaboratively recruited 9 stakeholders with diverse backgrounds and expertise to serve on a panel to develop guiding principles to address racial and ethnic bias in health and health care resulting from algorithms. The panel heard from a group of national and international thought leaders involved in algorithm design, development, implementation, and oversight during a 2-day hybrid public meeting and received feedback on draft principles from patient and community representatives and the public during a subsequent virtual meeting.12 These perspectives were particularly important for the panel’s recommendations, given the limitations of the published literature. The panel’s work, including this article, was developed iteratively.
Conceptual Framework for Mitigating and Preventing Bias in Health Care Algorithms
The conceptual framework to mitigate and prevent bias in health care algorithms (Figure) built on a National Academy of Medicine13 algorithm life cycle framework adapted by Roski et al.14 Within the context of structural racism and discrimination,15 the goal is to promote health and health care equity for patients and communities. An algorithm’s life cycle comprises 5 phases that typically occur sequentially.16 Problem formulation (phase 1) defines the problem that the algorithm is designed to address, relevant actors, and priority outcomes. Problem formulation is followed by selection and management of the data used by the algorithm (phase 2) and subsequent development, training, and validation of the algorithm (phase 3). The algorithm is deployed and integrated in its intended setting (phase 4). Mechanisms should monitor performance and outcomes and maintain, update, or deimplement the algorithm accordingly (phase 5).
Guiding principles apply at each phase to mitigate and prevent bias in an algorithm. Operationalization of principles takes place at 3 levels: individual (developers and users), institutional (organizational policies and procedures), and societal (legislation, regulation, and private policy).
Guiding Principles for Mitigating and Preventing Racial and Ethnic Bias in Health Care Algorithms
Tables 1 and 2 list the guiding principles and their operational considerations. Each principle is described hereinafter.
Guiding Principle 1: Promote Health and Health Care Equity During All Phases of the Health Care Algorithm Life Cycle
Advancing health equity should be a fundamental objective of any algorithm used in health care.7 The World Health Organization defines equity as the “absence of unfair, avoidable, or remediable differences among groups of people, whether those groups are defined socially, economically, demographically, or geographically or by other dimensions of inequality (e.g., sex, gender, ethnicity, disability, or sexual orientation).”17 Algorithms should be designed with goals of advancing health equity, promoting fairness, and reducing health disparities.
Formulating the problem appropriately is critical (phase 1), and improving health and health care equity for patients and communities should be central.3 During the data selection, assessment, and management phase of the algorithm life cycle (phase 2), data used for algorithm development should be assessed for biases, accuracy, fitness for the intended purpose, and representativeness of the intended population. Engagement of key diverse stakeholders—which includes communities—during problem formulation (phase 1) and data selection (phase 2) is critical to avoid knowledge gaps. Any issues identified should be documented, and corrective actions should be taken before moving to algorithm development, training, and validation (phase 3).
It is critical to use rigorous methods, wise human judgment, and checks and balances in algorithm development to mitigate and prevent bias and ensure that conclusions are accurate, robust, and reproducible.24 Compared with traditional statistical techniques, in which statisticians have more manual control over the analyses, artificial intelligence models can be more opaque and more difficult to interpret. They risk being overfitted to the data at hand, threatening generalizability; they sometimes lack common sense; and they are more difficult to audit. Thus, rigorous methods and processes are essential for algorithm development.25-29
Algorithms should be validated across populations to ensure fairness in performance. After an algorithm is deployed, continuous monitoring for performance and data drift is necessary. Monitoring should assess the fairness and equity of the algorithm output as well as the impact of the algorithm on patients, populations, and society, including data privacy and resource allocation. Measurement and comparison of outcomes between advantaged and historically marginalized populations such as racial and ethnic minoritized groups or individuals with lower income should be assessed routinely by health care systems, algorithm vendors, and the research community and supported by research sponsors (eg, funders, scientific journals). Algorithm end users should supplement model outputs with human judgment. Furthermore, access to information technology for all should be ensured.
Guiding Principle 2: Ensure Health Care Algorithms and Their Use Are Transparent and Explainable
Algorithm developers, health care institutions, algorithm users, and regulators are responsible for ensuring that algorithms are transparent, easy to explain, and readily interpretable at all steps in the algorithm life cycle for diverse audiences.30,31 The HHS states that “all relevant individuals should understand how their data is being used and how AI systems make decisions; algorithms, attributes and correlations should be open to inspection.”20 Development of transparent and explainable algorithms requires algorithm developers and stewards to present evidence for impact on processes and outcomes and to provide understandable and accurate explanations to clinicians and patients to enable informed decision-making.32 In addition, an algorithm should only operate under the conditions for which it was designed, and outputs should only be used when there is confidence in the results.31
Transparency includes multiple domains, such as availability of technical information, algorithm oversight, and communication of impact to stakeholders.20,31,32 Algorithm developers should create profiles of the data used to train the algorithm, describing distribution of key aspects of the population in the data set (eg, race and ethnicity, gender, socioeconomic status, and age); they should also make data exploration analysis readily available for independent review. Algorithm developers should disclose types, sizes, and overall distributions in data sets used in their formulation, testing, and validation. Regulation should require algorithm information labels or model cards sufficient to assess design, validity, and the presence of bias.10,21 Implementers should disclose the purpose of algorithms and their impact. If biases have been identified in an algorithm, the developers, implementers, and users should disclose such biases. Any bias mitigation attempts should also be disclosed to all with a stake in the algorithm, including patients, caregivers, and communities. A structured reporting process could identify signals of emerging problems both locally and nationally and facilitate addressing such problems systematically.
Several reporting guidelines promote transparency of research examining algorithms.33 However, these guidelines do not include concrete ways to report on fairness, and they rarely make explicit mention of equity.34 Reporting guidelines for algorithms should therefore be updated with specific equity approaches as has been done for observational studies and randomized clinical trials.35,36
Guiding Principle 3: Authentically Engage Patients and Communities During All Phases of the Health Care Algorithm Life Cycle, and Earn Trustworthiness
Authentically engaging and partnering with patients and communities is essential to understand both a problem affecting them and its solutions.22 Moreover, it is an ethical imperative to engage with patients and communities around health care algorithms and earn their trust, as these tools can provide great benefit or harm. Patients and communities, including populations who have been historically marginalized, should be engaged authentically and ethically when identifying and assessing a problem that requires use of an algorithm as part of its solution and during algorithm data selection, development, deployment, and monitoring.
Early and intentional engagement can help identify priorities of patients and communities and any concerns they have regarding algorithm use.37 All patients and communities should be informed when an algorithm is used in their care, should be advised about impact of the algorithm on their treatment, and should be provided alternatives if appropriate.38 They should know how the algorithm performs for their demographic group compared with other groups and be made aware of any opportunities to opt out of algorithms or to pursue alternatives to algorithm-driven decisions.
Algorithms should be bound by concepts of data sovereignty, the idea that data are subject to legal regulations of countries, nations, or states. Sovereignty is of particular importance to Indigenous nations.39 Health care organizations, vendors, and other model developers earn trustworthiness through authenticity, ethical and transparent practices, security and privacy of data, and timely disclosures of algorithm use.
Guiding Principle 4: Explicitly Identify Health Care Algorithmic Fairness Issues and Trade-offs
The panel recommends that advancing health and health care equity for patients and communities should be the goal of health care algorithms. Advancing health equity requires expertise in algorithmic fairness—the field of identifying, understanding, and mitigating bias.40-42 Health care algorithmic fairness issues arise from both ethical choices and technical decisions at different phases of the algorithm life cycle.16,43 For example, fundamental ethical choices can arise during problem formulation (phase 1; eg, Is the goal of the algorithm to improve and advance equitable outcomes or is the primary goal to maximize profit?). Additionally, if a particular algorithm use involves choosing a cutoff point for action during model development and implementation, should that cutoff be chosen to maximize sensitivity of the tool to identify someone who might benefit from an intervention, or should it be chosen to maximize specificity of the tool so inappropriate patients are not exposed to unnecessary risk from the intervention? Trade-offs among competing fairness metrics and values are common. Different technical definitions of algorithmic fairness, such as sufficiency, separation, and independence, are mathematically mutually incompatible: satisfying one definition generally forces a trade-off between maximizing an algorithm's accuracy and minimizing differences among groups under the others.44 It is critical to make health care algorithm fairness issues and trade-offs explicit, transparent, and explainable. Thus, solutions to advance health equity with health care algorithms require ethical, technical, and social approaches—there is no simple cookie-cutter technical solution.43,45
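The cutoff trade-off described above can be made concrete with a toy sketch. All scores, labels, and threshold values below are synthetic illustrations, not drawn from any real clinical algorithm; lowering the decision threshold raises sensitivity at the cost of specificity, and raising it does the reverse.

```python
def sensitivity_specificity(scores, labels, threshold):
    """Classify score >= threshold as positive; return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical risk scores and true outcomes (1 = would benefit from intervention).
scores = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   1,   0,   1,   0,   1,   1]

low = sensitivity_specificity(scores, labels, 0.35)   # permissive cutoff favors sensitivity
high = sensitivity_specificity(scores, labels, 0.75)  # strict cutoff favors specificity
```

Neither cutoff is technically "correct"; the choice embeds a value judgment about which error (missing a patient who would benefit vs exposing a patient to unnecessary risk) matters more, which is precisely why it should be made explicitly and transparently.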
Technical methods for improving fairness in algorithms can be divided into stages of modeling: preprocessing (eg, repair biased data set), in-processing (eg, use fairness metrics in the model optimization process to maximize accuracy and fairness), and postprocessing (eg, transform model output to improve prediction fairness).46,47 Key issues for fairness metrics include prioritization of fairness for group or individual, binary classification (eg, qualifies for service or not) vs continuous classification (eg, regression output), and use of regularization methods (fairness metrics to balance accuracy and fairness), reweighting methods (weight samples from underrepresented groups more highly), or both.48 Of note, technical definitions and metrics of fairness often do not translate clearly or intuitively to ethical, legal, social, and economic conceptions of fairness.46,47 Thus, close collaboration and discussion are essential among stakeholders, including algorithm developers, algorithm users, and the communities to whom the algorithm will be applied.
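One of the preprocessing techniques named above, reweighting, can be sketched minimally: weight each sample inversely to its group's frequency so underrepresented groups contribute equally to the training loss. The group labels here are hypothetical placeholders, and real reweighting schemes (eg, as surveyed in the works cited) are more sophisticated.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to its group's
    frequency, so every group's total weight is equal."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical group labels; "b" is underrepresented in this sample.
groups = ["a", "a", "a", "b"]
weights = inverse_frequency_weights(groups)
# Each "a" sample is down-weighted and the lone "b" sample is up-weighted,
# so groups "a" and "b" carry equal total weight in a weighted loss.
```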
We recommend considering fairness of algorithms through the lens of distributive justice, the socially just distribution of outcomes and allocation of resources across different populations.49 Distributive justice metrics include clinical outcomes, resource allocation, and performance measures of algorithms (eg, sensitivity, specificity, and positive predictive value).8,43,50 When unfairness is identified, bias should be mitigated using both social (eg, diverse teams and stakeholder co-development) and technical (eg, algorithmic fairness toolkits, fairness metrics, data set collection, and deimplementation) mitigation methods.51 Algorithms and accompanying policies and regulations should also be viewed through frames of equity of harms and risks and explicit identification of trade-offs among different competing values and options.41-43 Algorithms with a higher risk of substantial harm and injustice should have stricter internal oversight by organizations and more stringent external regulation.20
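A distributive-justice audit of the performance measures named above (sensitivity, specificity, and positive predictive value) can be sketched as a group-stratified comparison. The predictions, outcomes, and group names below are entirely synthetic and illustrative; a real audit would use deployment data and clinically meaningful disparity thresholds.

```python
def group_metrics(preds, labels):
    """Sensitivity, specificity, and positive predictive value for one group."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Hypothetical (prediction, outcome) pairs stratified by synthetic group.
cohort = {
    "group_a": ([1, 1, 0, 1, 0, 0], [1, 1, 0, 0, 0, 1]),
    "group_b": ([1, 0, 0, 0, 1, 0], [1, 1, 0, 1, 0, 0]),
}

audit = {g: group_metrics(p, y) for g, (p, y) in cohort.items()}
sensitivity_gap = abs(audit["group_a"]["sensitivity"]
                      - audit["group_b"]["sensitivity"])
# A large sensitivity gap means the algorithm misses patients who would
# benefit at different rates across groups, a signal for bias mitigation.
```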
Guiding Principle 5: Establish Accountability for Equity and Fairness in Outcomes From Health Care Algorithms
Model developers and users, including vendors, health care organizations, researchers, and professional societies, should accept responsibility to achieve equity and fairness in outcomes from health care algorithms and be accountable for the performance of algorithms in different populations. Institutions such as vendors and health care provider organizations should establish processes at each phase of the algorithm life cycle to promote equity and fairness. Transparency in the types of training data, processes, and evaluations used is paramount. For example, an academic medical center recently published its framework for oversight and deployment of prediction models, which includes checkpoint gates and an oversight governance structure.52 Current evidence suggests that such governance infrastructure is rare.53
Organizations should maintain an inventory of their algorithms and conduct local, periodic evaluations, with processes that screen for and mitigate bias. It is crucial for organizations to engage stakeholders throughout the entire algorithm life cycle to ensure fairness and promote trust; these stakeholders include model developers, end users, health care administrators, clinicians, patient advocates, and community representatives. Different organizations and experts have recommended various accountability metrics and oversight structures.3
Regulations and incentives should support equity and fairness while also promoting innovation.54 There should be redress for persons and communities who have been harmed by biased algorithms. An ethical, legal, social, and administrative framework and culture should be created that redresses harm while encouraging quality improvement, collaboration, and transparency, similar to what is recommended for patient safety.55
ChatGPT and other artificial intelligence language models have spurred widespread public interest in the potential value and dangers of algorithms. Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithm bias in health care.47 Dedicated resources and the support of leaders and the public are critical for successful reform. It is our obligation to avoid repeating errors that tainted use of algorithms in other fields.
Accepted for Publication: October 5, 2023.
Published: December 15, 2023. doi:10.1001/jamanetworkopen.2023.45050
Open Access: This is an open access article distributed under the terms of the CC-BY-NC-ND License. © 2023 Chin MH et al. JAMA Network Open.
Corresponding Authors: Marshall H. Chin, MD, MPH, University of Chicago, 5841 S. Maryland Ave, MC2007, Chicago, IL 60637 (mchin@bsd.uchicago.edu); Lucila Ohno-Machado, MD, PhD, MBA, Yale School of Medicine, 333 Cedar St, New Haven, CT 06510 (lucila.ohno-machado@yale.edu).
Author Contributions: Dr Chin had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Chin, Afsar-Manesh, Bierman, Chang, Colón-Rodríguez, Duran, Fair, Hernandez-Boussard, Hightower, Jain, Jordan, Konya, R.H. Moore, Rodriguez, Shaheen, Srinivasan, Umscheid, Ohno-Machado.
Acquisition, analysis, or interpretation of data: Chin, Bierman, Chang, Dullabh, Duran, Hernandez-Boussard, Hightower, Jain, Jordan, Konya, R.H. Moore, T.T. Moore, Rodriguez, Snyder, Srinivasan, Umscheid, Ohno-Machado.
Drafting of the manuscript: Chin, Bierman, Dullabh, Hernandez-Boussard, Hightower, Jordan, T.T. Moore, Rodriguez, Shaheen, Snyder, Srinivasan, Ohno-Machado.
Critical review of the manuscript for important intellectual content: Chin, Afsar-Manesh, Bierman, Chang, Colón-Rodríguez, Duran, Fair, Hernandez-Boussard, Hightower, Jain, Jordan, Konya, R.H. Moore, T.T. Moore, Shaheen, Umscheid, Ohno-Machado.
Obtained funding: Bierman, Chang, Dullabh, Duran, Jain.
Administrative, technical, or material support: Chin, Bierman, Chang, Colón-Rodríguez, Duran, Fair, Jain, Jordan, Konya, R.H. Moore, T.T. Moore, Rodriguez, Shaheen, Snyder, Srinivasan, Umscheid, Ohno-Machado.
Supervision: Duran, Umscheid.
Conflict of Interest Disclosures: Dr Chin reported receiving grants or contracts from the Agency for Healthcare Research and Quality (AHRQ), the California Health Care Foundation, the Health Resources and Services Administration, Kaiser Foundation Health Plan Inc, the Merck Foundation, the Patient-Centered Outcomes Research Institute, and the Robert Wood Johnson Foundation; and receiving personal fees for advisory board service from the US Centers for Medicare & Medicaid Services, Bristol Myers Squibb Company, Blue Cross Blue Shield, the US Centers for Disease Control and Prevention, and the American College of Physicians outside the submitted work. Dr Chin also reported serving as a member of the Families USA Equity and Value Task Force Advisory Council, the Essential Hospitals Institute Innovation Committee, the Institute for Healthcare Improvement and American Medical Association National Initiative for Health Equity Steering Committee for Measurement, the National Committee for Quality Assurance Expert Work Group (on the role of social determinants of health data in health care quality measurement), and The Joint Commission Health Care Equity Certification Technical Advisory Panel outside the submitted work. Dr Chin also reported being a member of the National Advisory Council of the National Institute on Minority Health and Health Disparities (NIMHD), the Health Disparities and Health Equity Working Group of the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Academy of Medicine Council. Finally, Dr Chin reported receiving honoraria from the Oregon Health Authority and the Pittsburgh Regional Health Initiative and meeting and travel support from America’s Health Insurance Plans outside the submitted work. Dr Afsar-Manesh reported being employed by and holding equity in Oracle outside the submitted work, and their spouse is employed by and holds equity in Amgen. 
Dr Dullabh reported receiving grants from the AHRQ during the conduct of the study. Dr Hightower reported serving as cofounder and chief executive officer of Equality AI outside the submitted work. Dr Jordan reported engaging with this article as part of regular work duties as an employee of the American Medical Association. Dr Rodriguez reported receiving grants from the AHRQ during the conduct of the study. Dr Snyder reported receiving a federal contract from the US Department of Health and Human Services (to NORC at the University of Chicago) during the conduct of the study. Dr Srinivasan reported funding for this work under a contract (to NORC at the University of Chicago) from the AHRQ during the conduct of the study. No other disclosures were reported.
Funding/Support: This work was supported by funding from the AHRQ and the NIMHD. Dr Chin was supported, in part, by grant P30DK092949 from the National Institute of Diabetes and Digestive and Kidney Diseases to the Chicago Center for Diabetes Translation Research. Dr Hernandez-Boussard was supported, in part, by grant UL1TR003142 from the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH). Dr Ohno-Machado was supported, in part, by grants U54HG012510 and RM1HG011558 from the NIH.
Role of the Funder/Sponsor: Coauthors from the Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities participated in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: The findings and conclusions in this document are those of the authors, who are responsible for its content, and do not necessarily represent the views of the Agency for Healthcare Research and Quality, the Office of the National Coordinator for Health Information Technology, the Office of Minority Health, the National Institute on Minority Health and Health Disparities, the National Institutes of Health, or the US Department of Health and Human Services. No statement in this report should be construed as an official position of the Agency for Healthcare Research and Quality, the Office of the National Coordinator for Health Information Technology, the Office of Minority Health, the National Institute on Minority Health and Health Disparities, the National Institutes of Health, or the US Department of Health and Human Services. The thoughts and ideas expressed in this article are those of the authors and do not necessarily represent official American Medical Association policy. The thoughts and ideas expressed in this article are those of the authors and do not necessarily represent the views or policies of their employers or other organizations associated with the authors.
Meeting Presentation: This work was presented, in part, at an Agency for Healthcare Research and Quality virtual meeting (Opportunity for Feedback: Principles to Address the Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare); May 15, 2023; and at a symposium (Reconsidering Race in Clinical Algorithms: Driving Equity Through New Models in Research and Implementation) organized by the Doris Duke Foundation in partnership with the Gordon and Betty Moore Foundation, the Council of Medical Specialty Societies, and the National Academy of Medicine; June 27, 2023; Washington, DC.
1. O’Neil C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown; 2016.
21. Zuckerman BL, Karabin JM, Parker RA, Doane WEJ, Williams SR. Options and Opportunities to Address and Mitigate the Existing and Potential Risks, as well as Promote Benefits, Associated With AI and Other Advanced Analytic Methods. OPRE Report 2022-253. US Department of Health and Human Services Office of Planning, Research, and Evaluation, Administration for Children and Families; 2022. Accessed November 10, 2023. https://www.acf.hhs.gov/opre/report/options-opportunities-address-mitigate-existing-potential-risks-promote-benefits
23. Loi M, Heitz C, Christen M. A comparative assessment and synthesis of twenty ethics codes on AI and big data. In: Proceedings of the 2020 7th Swiss Conference on Data Science (SDS). IEEE; 2020:41-46. doi:10.1109/SDS49233.2020.00015
26. Vollmer S, Mateen BA, Bohner G, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ. 2020;368:l6927. doi:10.1136/bmj.l6927
31. Phillips PJ, Hahn CA, Fontana PC, et al. Four Principles of Explainable Artificial Intelligence. NIST Interagency/Internal Report (NISTIR). National Institute of Standards and Technology; 2021.
35. Antequera A, Lawson DO, Noorduyn SG, et al. Improving social justice in COVID-19 health research: interim guidelines for reporting health equity in observational studies. Int J Environ Res Public Health. 2021;18(17):9357. doi:10.3390/ijerph18179357
36. Welch VA, Norheim OF, Jull J, Cookson R, Sommerfelt H, Tugwell P; CONSORT-Equity and Boston Equity Symposium. CONSORT-Equity 2017 extension and elaboration for better reporting of health equity in randomised trials. BMJ. 2017;359:j5085. doi:10.1136/bmj.j5085
40. Drukker K, Chen W, Gichoya J, et al. Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment. J Med Imaging (Bellingham). 2023;10(6):061104. doi:10.1117/1.JMI.10.6.061104
48. Jung S, Park T, Chun S, Moon T. Re-weighting based group fairness regularization via classwise robust optimization. arXiv. Preprint posted online March 1, 2023. doi:10.48550/arXiv.2303.00442
51. Weinkauf D. When worlds collide—the possibilities and limits of algorithmic fairness (part 2). Privacy Tech-Know blog. Office of the Privacy Commissioner of Canada. April 5, 2023. Accessed August 31, 2023. https://www.priv.gc.ca/en/blog/20230405_02/
55. McCradden MD, Joshi S, Anderson JA, Mazwi M, Goldenberg A, Zlotnik Shaul R. Patient safety and quality improvement: ethical principles for a regulatory approach to bias in healthcare machine learning. J Am Med Inform Assoc. 2020;27(12):2024-2027. doi:10.1093/jamia/ocaa085