Viewpoint
June 16, 2021

Does Intraoperative Artificial Intelligence Decision Support Pose Ethical Issues?

Author Affiliations
  • Markkula Center for Applied Ethics at Santa Clara University, Santa Clara, California
JAMA Surg. 2021;156(9):809-810. doi:10.1001/jamasurg.2021.2055

Artificial intelligence (AI) holds great promise for many aspects of health care, including surgery. When used intraoperatively, AI clinical decision support systems (AI CDSSs) may reduce errors and increase clinical accuracy.1 For example, AI CDSSs may assist surgeons in correctly identifying structures in the critical view of safety (CVS) during laparoscopic cholecystectomy (LC).2 However, little consideration has been given to ethical issues that arise from the use of an intraoperative AI CDSS, aside from those of bias and privacy.1 Adoption of this technology without recognizing and addressing these and other ethical issues risks long-term effects such as loss of public trust, overly restrictive regulation of AI systems, and rejection of the technology by patients and surgeons.
