Perspectives of Oncologists on the Ethical Implications of Using Artificial Intelligence for Cancer Care

Key Points

Question: What are oncologists' views on ethical issues associated with the implementation of artificial intelligence (AI) in cancer care?

Findings: In this cross-sectional survey study, 84.8% of US oncologists reported that AI needs to be explainable by oncologists but not necessarily by patients, and 81.4% agreed that patients should consent to AI use for cancer treatment decisions. Fewer than half (47.1%) of oncologists viewed medico-legal problems arising from AI use as physicians' responsibility, and although most (76.5%) reported feeling responsible for protecting patients from biased AI, few (27.9%) reported feeling confident in their ability to do so.

Meaning: This study suggests that concerns about ethical issues, including explainability, patient consent, and responsibility, may impede optimal adoption of AI into cancer care.


Introduction
Artificial intelligence (AI) is an emerging set of technologies with the potential to advance cancer discovery and care delivery.1 Artificial intelligence models with applications for oncology have recently been approved by the US Food and Drug Administration (FDA),2 and the increasing complexity of personalized cancer care makes the field of oncology poised for an AI revolution.4,5

As the ethical deployment of AI in cancer care requires solutions that meet the needs of stakeholders, this study sought to examine oncologists' familiarity with AI and their perspectives on these issues. As familiarity with a technology changes stakeholder perceptions of it,6 and because academic research in AI is burgeoning, we hypothesized that responses would vary for oncologists practicing in academic settings compared with those in other practice settings.

Methods
From November 15, 2022, to July 31, 2023, we performed a cross-sectional survey study of oncologists practicing in the US. A draft instrument based on published ethical frameworks4,5 was developed by a team of oncologists, survey methodologists, bioethicists, and AI researchers (A.H., T.P.W., J.M.M., K.L.K., R.S., E.V.A., and G.A.A.). The instrument was iteratively refined through cognitive testing with 5 practicing oncologists until meaning saturation was achieved. The final instrument (eMethods in Supplement 1) contained 24 questions, including demographics and the following domains: AI familiarity, predictions, explainability, bias, deference, and responsibilities.

A random sample of oncologists was identified using the National Plan & Provider Enumeration System (eMethods in Supplement 1).7 Recruitment methods followed best practices,8 using mailed paper surveys with gift cards ($25), after which reminder letters with an electronic survey option and telephone calls were used for nonresponders.

The study was approved by the Dana-Farber Office for Human Research Studies. We received a waiver of written documentation of consent from the Dana-Farber Cancer Institute institutional review board. The survey instrument was introduced with a clear consent statement (a full page on paper and a full screen in the electronic version) describing the study, its voluntary nature, the participant's rights, and what participation entailed. Completing the survey constituted consent to participate in the study. This study followed the CROSS guidelines9 (eMethods in Supplement 1).
Responses were grouped for analysis as shown in the eMethods in Supplement 1. The χ2 test or the Fisher exact test assessed bivariate associations between responses and primary practice (academic hospital or clinic ["academic"] vs other), with odds ratios (ORs) and 95% CIs reported. The primary outcome was respondent views on the need for patients to provide informed consent for the use of an AI model during treatment decision-making. A multivariable logistic regression model assessed associations between respondent characteristics and the primary outcome; covariates with P ≤ .05 in bivariate testing were included. These covariates included sociodemographic characteristics (including self-reported race and ethnicity [racial and ethnic group categories were aligned with National Institutes of Health reporting guidelines under NOT-OD-15-089; race and ethnicity were assessed because a number of AI tools have been shown to perpetuate bias and racism that inordinately affects minoritized racial and ethnic groups]), practice setting, and prior training, defined as previous AI-specific education (eg, courses and lectures). Imputation was planned if question missingness was more than 5%. All P values were 2-sided; the significance level was P < .05 unless otherwise specified. Statistical analyses were performed using Stata, version 16 (StataCorp LLC).

Discussion
In this nationally representative, cross-sectional survey study assessing oncologists' views on ethical issues associated with AI in cancer care, we found associations between practice setting and AI-related predictions, deference, and explainability. Most participants reported that patients should consent to the use of AI during treatment decision-making, and those without prior training were more likely to view consent as necessary. Responses about decision-making were sometimes paradoxical; patients were not expected to understand AI tools but were expected to make decisions related to recommendations generated by AI. A gap was also seen between oncologist responsibilities and preparedness to combat AI-related bias. Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care.

There is relatively little known about AI's clinical implementation issues as they relate to clinical stakeholders.10 Our findings begin to bridge AI development with the expectations of end users so that tools can be appropriately applied. For example, oncologists' knowledge and training were relatively uncommon compared with self-reported obligations to patients and deference to AI. This finding complements normative discussions about the erosion of human responsibilities through AI overreliance11 and raises the question of whether such responsibilities will always be

Figure 1. Responses to 2 Questions Assessing Which Stakeholder Types (Researcher, Oncologist, or Patient) Should Be Able to Explain an Artificial Intelligence Model for It to Be Used in Clinic

Figure 2. Responses to a Scenario Where a US Food and Drug Administration-Approved Artificial Intelligence (AI) Model Selects a Different Regimen Than the Oncologist Planned to Recommend

Table 1. Self-Reported Respondent Characteristics

JAMA Network Open. 2024;7(3):e244077. doi:10.1001/jamanetworkopen.2024.4077. March 28, 2024.
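The bivariate analyses described in the Methods report odds ratios with 95% CIs alongside χ2 tests. As a minimal sketch of these calculations, the snippet below works through a single 2×2 table using the standard log-OR normal approximation and the Pearson χ2 statistic; the counts are entirely hypothetical (they are not the study's data, which were analyzed in Stata), and the p-value uses the identity that for 1 degree of freedom the χ2 survival function reduces to erfc(√(x/2)).

```python
import math

# Hypothetical 2x2 table (illustrative counts only, not study data):
# rows = practice setting (academic vs other), cols = response (agree vs disagree)
a, b = 60, 40   # academic: agree, disagree
c, d = 30, 70   # other:    agree, disagree

# Odds ratio and 95% CI via the log-OR normal approximation
or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)

# Pearson chi-square statistic for a 2x2 table (no continuity correction)
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# For df = 1, P(X > chi2) = erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2))

print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f}); chi2 = {chi2:.2f}; P = {p:.2e}")
```

With these invented counts the sketch reports an OR of 3.50 (95% CI, 1.95-6.29). Note that when any cell is small, the Fisher exact test (as used in the study) is preferred over the χ2 approximation.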
a Determined by the χ2 or Fisher exact test. b

Table 2. Multivariable Logistic Regression Model of Preference for Patient Consent to the Use of a Treatment Decision AI Model by Demographic Characteristicsa

Abbreviation: AI, artificial intelligence.
a Only characteristics with significant bivariate associations were retained.