Artificial intelligence (AI) holds great promise for many aspects of health care, including surgery. When used intraoperatively, AI clinical decision support systems (AI CDSSs) may reduce errors and increase clinical accuracy.1 For example, AI CDSSs may assist surgeons in correctly identifying structures in the critical view of safety (CVS) during laparoscopic cholecystectomy (LC).2 However, little consideration has been given to ethical issues that arise from the use of an intraoperative AI CDSS, aside from those of bias and privacy.1 Adoption of this technology without recognizing and addressing these and other ethical issues risks long-term effects such as loss of public trust, overly restrictive regulation of AI systems, and rejection of the technology by patients and surgeons.
Binkley CE, Green BP. Does Intraoperative Artificial Intelligence Decision Support Pose Ethical Issues? JAMA Surg. 2021;156(9):809–810. doi:10.1001/jamasurg.2021.2055