Figure 1 footnotes:
a. Formulated by Nelson, Pérez-Chada, Hartman, and Mostaghimi.
b. Simulated by Nelson, Pérez-Chada, Li, Lo, Pournamdari, and Tkachenko.
c. Reviewed by Barbieri, Ko, and Menon.
d. Conducted by Li, Lo, Pournamdari, and Tkachenko.
AI indicates artificial intelligence; CDS, clinician decision-support; DTP, direct-to-patient; and IRR, interrater reliability.
Figure 2 footnotes:
a. Coding by Nelson, Pérez-Chada, Hartman, and Mostaghimi.
b. Reviewed by Barbieri, Ko, and Menon.
c. Conducted independently by Creadore and Manjaly (24 interviews) and by Li and Lo (24 interviews).
d. Consensus of Nelson, Pérez-Chada, Creadore, Li, Lo, Manjaly, Hartman, and Mostaghimi.
Figure 3 caption: The size of each word or phrase is proportionate to the frequency of its use by patients to describe artificial intelligence.
Supplement contents:
eTable 1. Direct-to-Patient Semi-Structured Interview Guide
eTable 2. Clinician Decision-Support Semi-Structured Interview Guide
eTable 3. Survey
eTable 4. Codebook
Nelson CA, Pérez-Chada LM, Creadore A, et al. Patient Perspectives on the Use of Artificial Intelligence for Skin Cancer Screening: A Qualitative Study. JAMA Dermatol. 2020;156(5):501–512. doi:10.1001/jamadermatol.2019.5014
How do patients perceive the use of artificial intelligence for skin cancer screening?
A qualitative study conducted at the Brigham and Women’s Hospital and the Dana-Farber Cancer Institute evaluated 48 patients, 33% with a history of melanoma, 33% with a history of nonmelanoma skin cancer only, and 33% with no history of skin cancer. While 75% of the patients stated that they would recommend artificial intelligence to friends and family members, 94% expressed the importance of symbiosis between humans and artificial intelligence.
Patients appear to be receptive to the use of artificial intelligence for skin cancer screening if the integrity of the human physician-patient relationship is preserved.
The use of artificial intelligence (AI) is expanding throughout the field of medicine. In dermatology, researchers are evaluating the potential for direct-to-patient and clinician decision-support AI tools to classify skin lesions. Although AI is poised to change how patients engage in health care, patient perspectives remain poorly understood.
To explore how patients conceptualize AI and perceive the use of AI for skin cancer screening.
Design, Setting, and Participants
A qualitative study using a grounded theory approach to semistructured interview analysis was conducted in general dermatology clinics at the Brigham and Women’s Hospital and melanoma clinics at the Dana-Farber Cancer Institute. Forty-eight patients were enrolled. Each interview was independently coded by 2 researchers with interrater reliability measurement; reconciled codes were used to assess code frequency. The study was conducted from May 6 to July 8, 2019.
Main Outcomes and Measures
Artificial intelligence concept, perceived benefits and risks of AI, strengths and weaknesses of AI, AI implementation, response to conflict between human and AI clinical decision-making, and recommendation for or against AI.
Of 48 patients enrolled, 26 participants (54%) were women; mean (SD) age was 53.3 (21.7) years. Sixteen patients (33%) had a history of melanoma, 16 patients (33%) had a history of nonmelanoma skin cancer only, and 16 patients (33%) had no history of skin cancer. Twenty-four patients were interviewed about a direct-to-patient AI tool and 24 patients were interviewed about a clinician decision-support AI tool. Interrater reliability ratings for the 2 coding teams were κ = 0.94 and κ = 0.89. Patients primarily conceptualized AI in terms of cognition. Increased diagnostic speed (29 participants [60%]) and health care access (29 [60%]) were the most commonly perceived benefits of AI for skin cancer screening; increased patient anxiety was the most commonly perceived risk (19 [40%]). Patients perceived both more accurate diagnosis (33 [69%]) and less accurate diagnosis (41 [85%]) to be the greatest strength and weakness of AI, respectively. The dominant theme that emerged was the importance of symbiosis between humans and AI (45 [94%]). Seeking biopsy was the most common response to conflict between human and AI clinical decision-making (32 [67%]). Overall, 36 patients (75%) would recommend AI to family members and friends.
Conclusions and Relevance
In this qualitative study, patients appeared to be receptive to the use of AI for skin cancer screening if implemented in a manner that preserves the integrity of the human physician-patient relationship.
Artificial intelligence (AI) is a branch of computer science that focuses on the automation of intelligent behavior. Machine learning is a subfield of AI that uses data-driven techniques to uncover patterns and predict behavior.1,2 Artificial intelligence tools are being explored across the field of medicine. In dermatology, researchers are evaluating the potential of machine learning to classify skin lesions using images from standard and dermoscopic cameras.3-5 Direct-to-patient AI tools classify images obtained by patients outside of a clinical setting, and clinician decision-support AI tools classify images obtained by clinicians at the point of care.6
Artificial intelligence may significantly alter how patients engage in health care, and the medical literature in this field is rapidly expanding. For example, a recent issue of a journal focused on exploring benefits and risks of AI technology, ranging from gains in health care efficiency and quality to threats to patient privacy and confidentiality, informed consent, and autonomy.7 However, our current understanding of how patients perceive AI and its application to health care lacks clarity and depth.
Our primary aims in this study were to explore how patients conceptualize AI and view the use of direct-to-patient and clinician decision-support AI tools for skin cancer screening. Specifically, we sought to elucidate perceived benefits and risks, strengths and weaknesses, implementation, response to conflict between human and AI clinical decision-making, and recommendation for or against AI. Our secondary aims were to identify which entities patients view as responsible for AI accuracy and data privacy.
Patients were prospectively enrolled from general dermatology clinics at the Brigham and Women's Hospital and melanoma clinics at the Dana-Farber Cancer Institute. The study was conducted from May 6 to July 8, 2019. Our initial enrollment target was 48 patients equally distributed among 3 cohorts: history of melanoma, history of nonmelanoma skin cancer only, and no history of skin cancer. Exclusion criteria were age younger than 18 years and impaired decision-making capacity. The institutional review board of Partners HealthCare approved this study. The participants provided informed verbal consent. There was no financial compensation. This study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) reporting guideline for qualitative studies.8
Development of the semistructured interview guides exploring direct-to-patient and clinician decision-support AI tools followed the 5-step process presented in a systematic methodologic review by Kallio et al (Figure 1).9 First, we determined that our study fulfilled prerequisites for using semistructured interviews.10 Second, we conducted a literature review. Third, 4 of us (C.A.N., L.M.P.-C., R.I.H., and A.M.) formulated the main themes and follow-up questions for the guides. Fourth, pilot testing occurred in 3 stages. During a training session for 4 of us (S.J.L., K.L., A.B.P., and E.T.) in the semistructured interview technique by 2 of us (C.A.N. and L.M.P.-C.), we tested the guides using interview simulations. After the first round of revisions, we sent the guides for review by subject matter experts on technology applications to dermatology (J.S.B. and J.M.K.) and sociology (A.V.M.). After the second round of revisions, we field tested the guides using 6 patient interviews, which resulted in no further revisions. Fifth, the complete guides are presented (eTable 1 and eTable 2 in the Supplement).
Each patient was introduced to the study by their dermatologist (R.I.H. or A.M.). Additional information was provided by either a research assistant (S.J.L. or K.L.) or medical student (A.B.P. or E.T.). This individual, who had no previously established relationship with the patient, obtained verbal informed consent and conducted and recorded the interview. Half of the patients in each cohort were interviewed about a direct-to-patient AI tool and the other half were interviewed about a clinician decision-support AI tool. To minimize bias, we alternated the question order such that half of the patients in each cohort and for each AI tool were asked about benefits prior to risks and the other half were asked about risks prior to benefits. We used a standardized data collection instrument to abstract patient age, sex, race, ethnicity, and history of melanoma and/or nonmelanoma skin cancer from the electronic medical record. After each interview, we distributed a paper survey (eTable 3 in the Supplement) to collect data on patient ownership of electronic devices, usage of digital services, prior dermatology exposure, educational level, and total household income.
We used a grounded theory approach to develop the codebook (Figure 2). Four of us (C.A.N., L.M.P.-C., R.I.H., and A.M.) independently coded the first 12 interviews, and codes were refined until consensus was reached. The codebook was sent for expert review. After the first round of revisions, the codebook was distributed to 2 coding teams (A.C. and P.M. and S.J.L. and K.L.). Each team member independently coded 24 interviews, and novel codes were refined until consensus was reached among 8 of us (C.A.N., L.M.P.-C., A.C., S.J.L., K.L., P.M., R.I.H., and A.M.). The codebook was sent for expert review. After the second round of revisions, the codebook (eTable 4 in the Supplement) was distributed to each team. No novel codes were identified in the final 12 interviews, achieving thematic saturation. Team members independently finalized coding and reconciled discrepancies. Code frequencies were calculated based on reconciled codes.
Continuous variables were summarized with means and SDs. Categorical variables are reported as proportions and percentages. Interrater reliability was assessed using the Cohen κ coefficient. Qualitative and quantitative analyses were performed in NVivo, version 12.1 (QSR International).
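For reference, the Cohen κ coefficient compares observed rater agreement (p_o) with the agreement expected by chance (p_e): κ = (p_o − p_e) / (1 − p_e). The study computed κ in NVivo; the following minimal Python sketch (illustrative only, not the authors' implementation) shows how the statistic is derived from two raters' code assignments.

```python
from collections import Counter


def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items.

    rater_a, rater_b: equal-length sequences of category labels,
    one label per coded item.
    """
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must code the same number of items")
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned codes independently,
    # in proportion to their own marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b.get(c, 0) for c in count_a) / n**2
    # Kappa scales observed agreement beyond chance to the maximum possible.
    return (p_o - p_e) / (1 - p_e)
```

By convention (Landis and Koch), κ values above 0.80, such as the 0.94 and 0.89 reported for the two coding teams, indicate almost perfect agreement.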
All patients informed about the study consented to participate. A total of 48 patients were enrolled, and the mean (SD) interview duration was 22 (9) minutes. Patient characteristics are detailed in Table 1. The mean (SD) age was 53.3 (21.7) years; 26 participants (54%) were women. Most patients self-reported race and ethnicity as white (45 [94%]) and non-Hispanic (45 [94%]). According to the study design, 16 patients (33%) had a history of melanoma, 16 patients (33%) had a history of nonmelanoma skin cancer only, and 16 patients (33%) had no history of skin cancer. Forty-three patients (90%) reported ownership of at least 1 electronic device, most often a computer (39 [81%]) and/or smartphone (39 [81%]). Forty-six patients (96%) reported use of digital services and 43 patients (90%) reported use of digital services for health. Google was the most commonly used digital service in each category, reported by 44 patients (92%) for overall digital services and 39 patients (81%) for health. Forty-six patients (96%) reported a prior dermatology clinic visit. Patients had a high educational level (20 [42%] graduate or professional degree) and total household income (20 [42%] total household income ≥$150 000).
The interrater reliability between the coding teams was κ = 0.94 (A.C. and P.M.) and κ = 0.89 (S.J.L. and K.L.), indicating almost perfect agreement.11 Table 2 presents the overall frequency of each code along with frequency in the direct-to-patient and clinician decision-support interview groups.
When patients were asked, “What comes to mind when you think about AI?”, the predominant theme that emerged was cognition (36 [75%]), such as game playing. Other themes included machine (11 [23%]) and modernity (11 [23%]). Some patients linked cognition and machine, for example, describing AI as the “replacement of human observation and decision-making by machines.” Figure 3 illustrates patient conceptions of AI in a word cloud. Ten patients (21%) had no concept of AI prior to the study.
The most commonly perceived benefits of AI tools for skin cancer screening were increased diagnostic speed (29 [60%]) and health care access (29 [60%]). These benefits were identified more often by patients in the direct-to-patient compared with the clinician decision-support interview group. Patients associated increased diagnostic speed with early skin cancer detection and lifesaving potential. Increased health care access was multifaceted, deriving from gains in labor efficiency and time for physician-patient interaction, remote diagnosis, and unburdening of the health care system. One patient noted that AI “could reach people who don't have great access to health care but may have an iPhone.” Other perceived benefits included reduced health care cost (17 [35%]), reduced patient anxiety (16 [33%]), and increased triage efficiency (14 [29%]).
The greatest perceived risk of AI for skin cancer screening was increased patient anxiety (19 [40%]), identified more often by patients in the direct-to-patient interview group. One patient asked, “Those people who are broken by the idea of getting something scary like cancer, where do they turn? They can call and make an appointment, but that’s not going to help them feel better.” Other perceived risks included human loss of social interaction (18 [38%]), patient loss of privacy (14 [29%]), patient loss to follow-up (14 [29%]), nefarious use of AI (11 [23%]), and human deskilling (10 [21%]).
Patients perceived more accurate diagnosis (33 [69%]) as the greatest strength of AI compared with human skin cancer screening. This perception was based on the ability of AI to draw on more data or experience than humans, to learn and evolve, and to share data. One patient noted that AI “has a huge database of what diagnosis A is supposed to look like as opposed to a human who only has their own life experiences.” Another commonly perceived strength of AI was patient activation (19 [40%]) to seek both health information and health care. This strength was identified more often by patients in the direct-to-patient interview group. “Rather than…pondering for weeks or months whether it’s time to go see the doctor,” one patient noted that AI could be “an immediate indicator.” Other perceived strengths were more convenient diagnosis (14 [29%]), more consistent diagnosis (13 [27%]), more objective diagnosis (11 [23%]), and patient education (11 [23%]).
At the same time, patients perceived less accurate diagnosis (41 [85%]) as the greatest weakness of AI. This perception was based on the potential for false-negatives, false-positives, inaccurate or limited training set, lack of context, lack of physical examination, and operator dependence. “Examining a photograph that you get with your phone in variable light,” commented one patient, “is not a substitute for [an] in-person exam.” Other commonly perceived weaknesses of AI were lack of verbal communication (28 [58%]), lack of emotion (20 [42%]), and lack of nonverbal communication (19 [40%]). In the realm of verbal communication, patients called attention to the inability of AI to answer follow-up questions, discuss treatment options, and educate and reassure the patient. “People can…be really anxious, sad, fearful,” one patient commented, “and the app’s not going to be able to sense that.” In the realm of emotion, patients noted AI’s lack of compassion and empathy. One patient expressed, “You can’t write an algorithm to love somebody.” In the realm of nonverbal communication, patients called attention to AI’s lack of emotion perception, “eye contact,” and “human touch.”
The dominant theme in both interview groups was the importance of a symbiotic relationship between humans and AI (45 [94%]). Patients envisioned AI referring to a physician and providing a second opinion for a physician. “The problem comes from replacing a person with [AI],” which this patient described as a “tool for a dermatologist.” Credibility (30 [63%]) was another common theme that emerged. “I would probably need…feedback from a medical professional to…trust the app,” stated one patient, “because it’s like a black box…Algorithms with databases behind them…can make errors.” Patients perceived AI as both a dynamic and static diagnostic tool (25 [52%]). One patient suggested, “Maybe if a mole was changing, there would be a way to track that.” Patients also identified setting as important (21 [44%]) in terms of both the health care institution and the patient. “I would have to know that this application was set up by the dermatologist… or [my] medical group,” said one patient, “I wouldn’t want it to be in the hands of a private company.” Other themes included perception of AI as an information tool (18 [38%]) and implementation challenges, such as malpractice (17 [35%]).
The most common response in the event that human and AI reached conflicting diagnoses of melanoma and benign skin lesion was to seek a biopsy (32 [67%]). As one patient put it, "Let's get the biopsy and find out what the story is." The second most common response was to "put more faith in the doctor" (29 [60%]). The third most common response was to seek an opinion from another physician (20 [42%]). "I would get another opinion from another human," a patient stated, "another dermatologist." The fourth most common response was to seek longitudinal follow-up from the same physician (11 [23%]).
When asked to identify entities responsible for AI accuracy, patients most often named the technology company (25 [52%]) and the physician (20 [42%]), followed by the collective (12 [25%]) and the health care institution (11 [23%]). When asked to identify entities responsible for AI data privacy, patients most often named the health care institution (25 [52%]) and the technology company (19 [40%]).
Overall, 36 patients (75%) would recommend the AI tool to family members and friends, 9 patients (19%) were ambivalent, and 3 patients (6%) would not recommend it. Specifically, 17 patients (71%) would recommend the direct-to-patient AI tool and 19 patients (79%) would recommend the clinician decision-support AI tool.
The Institute of Medicine included patient centered as 1 of 6 specific aims for improving the health system, "ensuring that patient values guide all clinical decisions."12(p3) In an era of rapidly evolving technology, advancing our understanding of patient values is essential to optimize the quality of care.
The key finding of our study was that 75% of patients would recommend the use of AI for skin cancer screening to friends and family members; however, 94% expressed the importance of symbiosis between humans and AI. The term man-computer symbiosis was first used by Licklider13 to describe a form of teamwork in which humans provide strategic input while computers provide depth of analysis.14 Considering the use of AI for skin cancer screening, particularly the direct-to-patient tool, patients were enthusiastic about increased diagnostic speed (60%), increased health care access (60%), and patient activation (40%) but worried about increased patient anxiety (40%). Patients valued human verbal (58%) and nonverbal (40%) communication and emotion (42%). Rather than replacing a physician, patients envisioned AI referring to a physician and providing a second opinion for a physician. The significance of symbiosis to patients in this study suggests that they may be more receptive to augmented intelligence15 compared with AI tools for skin cancer screening.
A second key finding of our study was the emphasis that patients placed on the accuracy of AI for skin cancer screening. Patients viewed AI as a diagnostic tool (52%) but perceived accuracy to be both its greatest strength (69%) and greatest weakness (85%). Although this finding may seem contradictory, patients had a nuanced perspective. Patients recognized the ability of AI to draw on more data or experience than humans, learn and evolve, and share data; however, they were concerned about the potential for false-negatives, false-positives, inaccurate or limited training set, lack of context, lack of physical examination, and operator dependence. This finding highlights the importance of validating the accuracy of AI tools for skin cancer screening in a manner that is transparent to patients prior to implementation.
A third key finding of this study was the heterogeneity of patient perspectives on AI. Patients primarily conceptualized AI in terms of cognition (75%), but their words revealed both positive and negative vantage points. A multiplicity of themes emerged on the use of AI for skin cancer screening, but few were identified by most patients. For example, when asked to identify entities responsible for AI accuracy and data privacy, most patients named the technology company (52%) and health care institution (52%), respectively. However, the list of entities was long and diverse. This finding suggests that patient perspectives on AI have not yet solidified, substantiated by the emergence of credibility (63%) and setting (44%) as common themes under implementation. The most common responses in the event that a human and AI reached conflicting diagnoses of melanoma and benign skin lesion were to seek a biopsy (67%), trust the physician (60%), and seek an opinion from another physician (42%). This finding suggests that physicians are likely to play an important role in shaping patient perspectives on the use of AI for skin cancer screening moving forward.
Contextualizing our results within the medical literature is limited by the paucity of data on how patients perceive AI for skin cancer screening. Dieng et al16 found that patients lack confidence in their ability to undertake skin self-examination after treatment for localized melanoma and that some patients are receptive to assistance from new digital technologies. In line with the symbiosis theme in our study, Tran et al17 found that most patients were ready to accept the use of AI for skin cancer screening after reading a vignette but only under human control. In addition, Tong and Sopory18 found that people’s intentions to use AI for skin cancer detection were influenced by messaging that they received about the risks and benefits of AI.
Our results should be interpreted in the context of limitations regarding our study design. First, this was a qualitative study with a limited sample size. Because, to our knowledge, no prior study had elicited domains from patients regarding the use of AI for skin cancer screening, we adopted the semistructured interview approach with the goal of establishing a platform for future quantitative investigation. To ensure rigorous methods, we followed the COREQ checklist,8 developed the guides using an established framework,9 and measured interrater reliability.11 Expert reviewers were used to develop the guides and codebook. Second, the demographic characteristics of our patients may limit generalizability to other study populations. Future studies are essential to elucidate perspectives of patients with diverse racial, ethnic, and socioeconomic backgrounds and with varying levels of education and access to dermatologic care. This expansion is particularly important in light of concerns raised that AI tools may exacerbate health care disparities in dermatology.19 In addition, patients were interviewed about a hypothetical scenario involving an AI tool with which they lacked familiarity in practice.
Topol2(p44) opened a recent review article on the convergence of human and artificial intelligence by writing, "Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient-doctor relationship or facilitate its erosion remains to be seen." Our results indicate that most patients are receptive to the use of AI for skin cancer screening within the framework of human-AI symbiosis. Although additional research is required, the themes that emerged in this study have important implications across the house of medicine. Through patients' eyes, augmented intelligence may improve health care quality but should be implemented in a manner that preserves the integrity of the human physician-patient relationship.
Accepted for Publication: January 7, 2020.
Corresponding Author: Arash Mostaghimi, MD, MPA, MPH, Harvard Medical School, Department of Dermatology, Brigham and Women's Hospital, Brigham Dermatology Associates, 221 Longwood Ave, Boston, MA 02115 (firstname.lastname@example.org).
Published Online: March 11, 2020. doi:10.1001/jamadermatol.2019.5014
Author Contributions: Drs Nelson and Mostaghimi had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Nelson, Pérez-Chada, Lo, Tkachenko, Barbieri, Ko, Mostaghimi.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Nelson, Pournamdari.
Critical revision of the manuscript for important intellectual content: Pérez-Chada, Creadore, Li, Lo, Manjaly, Tkachenko, Barbieri, Ko, Menon, Hartman, Mostaghimi.
Statistical analysis: Nelson, Mostaghimi.
Administrative, technical, or material support: Nelson, Pérez-Chada, Creadore, Li, Pournamdari, Tkachenko, Ko, Mostaghimi.
Supervision: Pérez-Chada, Mostaghimi.
Conflict of Interest Disclosures: Dr Pérez-Chada reported receiving grants from the National Psoriasis Foundation outside the submitted work. Dr Ko reported serving as chair of the American Academy of Dermatology Task Force on Augmented Intelligence. Dr Mostaghimi reported receiving personal fees from 3Derm and Pfizer and serving as clinical investigator, with no personal financial compensation, for Incyte, Concert, Eli Lilly, and Aclaris outside the submitted work. No other disclosures were reported.
Funding/Support: Dr Barbieri is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health award T32-AR-007465 and receives partial salary support through a Pfizer Fellowship grant to the Trustees of the University of Pennsylvania. Dr Hartman is supported by American Skin Association Research Grant 120795.
Role of the Funder/Sponsor: The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: Dr Mostaghimi is an Associate Editor of JAMA Dermatology, but was not involved in any of the decisions regarding review of the manuscript or its acceptance.