Figure 1.  Flow Diagram for Development of the Semistructured Interview Guides

a. Formulated by Nelson, Pérez-Chada, Hartman, and Mostaghimi.
b. Simulated by Nelson, Pérez-Chada, Li, Lo, Pournamdari, and Tkachenko.
c. Reviewed by Barbieri, Ko, and Menon.
d. Conducted by Li, Lo, Pournamdari, and Tkachenko.

Figure 2.  Flow Diagram for Data Analysis

AI indicates artificial intelligence; CDS, clinician decision-support; DTP, direct-to-patient; and IRR, interrater reliability.

a. Coding by Nelson, Pérez-Chada, Hartman, and Mostaghimi.
b. Reviewed by Barbieri, Ko, and Menon.
c. Conducted independently by Creadore and Manjaly (24 interviews) and by Li and Lo (24 interviews).
d. Consensus of Nelson, Pérez-Chada, Creadore, Li, Lo, Manjaly, Hartman, and Mostaghimi.

Figure 3.  Word Cloud Representing the Frequency of Terminology Used by Patients to Describe Artificial Intelligence

The size of each word or phrase is proportionate to the frequency of its use by patients to describe artificial intelligence.

Table 1.  Patient Characteristics
Table 2.  Code Frequencies
    Original Investigation
    March 11, 2020

    Patient Perspectives on the Use of Artificial Intelligence for Skin Cancer Screening: A Qualitative Study

    Author Affiliations
    • 1Yale School of Medicine, Department of Dermatology, New Haven, Connecticut
    • 2Harvard Medical School, Department of Dermatology, Brigham and Women's Hospital, Boston, Massachusetts
    • 3Medical student, Boston University School of Medicine, Boston, Massachusetts
    • 4Medical student, School of Medicine, University of California, San Francisco
    • 5Medical student, University of Massachusetts Medical School, Worcester
    • 6Perelman School of Medicine at the University of Pennsylvania, Department of Dermatology, Philadelphia
    • 7Stanford University School of Medicine, Department of Dermatology, Palo Alto, California
    • 8Department of Sociology, Yale University, New Haven, Connecticut
    • 9Harvard Medical School, Center for Cutaneous Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts
    • 10Department of Dermatology, Veterans Affairs Integrated Service Network 1, Jamaica Plain, Massachusetts
    JAMA Dermatol. Published online March 11, 2020. doi:10.1001/jamadermatol.2019.5014
    Key Points

    Question  How do patients perceive the use of artificial intelligence for skin cancer screening?

    Findings  A qualitative study conducted at the Brigham and Women’s Hospital and the Dana-Farber Cancer Institute evaluated 48 patients, 33% with a history of melanoma, 33% with a history of nonmelanoma skin cancer only, and 33% with no history of skin cancer. While 75% of the patients stated that they would recommend artificial intelligence to friends and family members, 94% expressed the importance of symbiosis between humans and artificial intelligence.

    Meaning  Patients appear to be receptive to the use of artificial intelligence for skin cancer screening if the integrity of the human physician-patient relationship is preserved.

    Abstract

    Importance  The use of artificial intelligence (AI) is expanding throughout the field of medicine. In dermatology, researchers are evaluating the potential for direct-to-patient and clinician decision-support AI tools to classify skin lesions. Although AI is poised to change how patients engage in health care, patient perspectives remain poorly understood.

    Objective  To explore how patients conceptualize AI and perceive the use of AI for skin cancer screening.

    Design, Setting, and Participants  A qualitative study using a grounded theory approach to semistructured interview analysis was conducted in general dermatology clinics at the Brigham and Women’s Hospital and melanoma clinics at the Dana-Farber Cancer Institute. Forty-eight patients were enrolled. Each interview was independently coded by 2 researchers with interrater reliability measurement; reconciled codes were used to assess code frequency. The study was conducted from May 6 to July 8, 2019.

    Main Outcomes and Measures  Artificial intelligence concept, perceived benefits and risks of AI, strengths and weaknesses of AI, AI implementation, response to conflict between human and AI clinical decision-making, and recommendation for or against AI.

    Results  Of 48 patients enrolled, 26 participants (54%) were women; mean (SD) age was 53.3 (21.7) years. Sixteen patients (33%) had a history of melanoma, 16 patients (33%) had a history of nonmelanoma skin cancer only, and 16 patients (33%) had no history of skin cancer. Twenty-four patients were interviewed about a direct-to-patient AI tool and 24 patients were interviewed about a clinician decision-support AI tool. Interrater reliability ratings for the 2 coding teams were κ = 0.94 and κ = 0.89. Patients primarily conceptualized AI in terms of cognition. Increased diagnostic speed (29 participants [60%]) and health care access (29 [60%]) were the most commonly perceived benefits of AI for skin cancer screening; increased patient anxiety was the most commonly perceived risk (19 [40%]). Patients perceived both more accurate diagnosis (33 [69%]) and less accurate diagnosis (41 [85%]) to be the greatest strength and weakness of AI, respectively. The dominant theme that emerged was the importance of symbiosis between humans and AI (45 [94%]). Seeking biopsy was the most common response to conflict between human and AI clinical decision-making (32 [67%]). Overall, 36 patients (75%) would recommend AI to family members and friends.

    Conclusions and Relevance  In this qualitative study, patients appeared to be receptive to the use of AI for skin cancer screening if implemented in a manner that preserves the integrity of the human physician-patient relationship.

    Introduction

    Artificial intelligence (AI) is a branch of computer science that focuses on the automation of intelligent behavior. Machine learning is a subfield of AI that uses data-driven techniques to uncover patterns and predict behavior.1,2 Artificial intelligence tools are being explored across the field of medicine. In dermatology, researchers are evaluating the potential of machine learning to classify skin lesions using images from standard and dermoscopic cameras.3-5 Direct-to-patient AI tools classify images obtained by patients outside of a clinical setting, and clinician decision-support AI tools classify images obtained by clinicians at the point of care.6

    Artificial intelligence may significantly alter how patients engage in health care, and the medical literature in this field is rapidly expanding. For example, a recent issue of a journal focused on exploring benefits and risks of AI technology, ranging from gains in health care efficiency and quality to threats to patient privacy and confidentiality, informed consent, and autonomy.7 However, our current understanding of how patients perceive AI and its application to health care lacks clarity and depth.

    Our primary aims in this study were to explore how patients conceptualize AI and view the use of direct-to-patient and clinician decision-support AI tools for skin cancer screening. Specifically, we sought to elucidate perceived benefits and risks, strengths and weaknesses, implementation, response to conflict between human and AI clinical decision-making, and recommendation for or against AI. Our secondary aims were to identify which entities patients view as responsible for AI accuracy and data privacy.

    Methods

    Patients were prospectively enrolled from general dermatology clinics at the Brigham and Women's Hospital and melanoma clinics at the Dana-Farber Cancer Institute. The study was conducted from May 6 to July 8, 2019. Our initial enrollment target was 48 patients equally distributed among 3 cohorts: history of melanoma, history of nonmelanoma skin cancer only, and no history of skin cancer. Exclusion criteria were age younger than 18 years and impaired decision-making capacity. The institutional review board of Partners HealthCare approved this study. The participants provided informed verbal consent. There was no financial compensation. This study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) reporting guideline for qualitative studies.8

    Development of the semistructured interview guides exploring direct-to-patient and clinician decision-support AI tools followed the 5-step process presented in a systematic methodologic review by Kallio et al (Figure 1).9 First, we determined that our study fulfilled prerequisites for using semistructured interviews.10 Second, we conducted a literature review. Third, 4 of us (C.A.N., L.M.P.-C., R.I.H., and A.M.) formulated the main themes and follow-up questions for the guides. Fourth, pilot testing occurred in 3 stages. During a training session for 4 of us (S.J.L., K.L., A.B.P., and E.T.) in the semistructured interview technique by 2 of us (C.A.N. and L.M.P.-C.), we tested the guides using interview simulations. After the first round of revisions, we sent the guides for review by subject matter experts on technology applications to dermatology (J.S.B. and J.M.K.) and sociology (A.V.M.). After the second round of revisions, we field tested the guides using 6 patient interviews, which resulted in no further revisions. Fifth, the complete guides are presented (eTable 1 and eTable 2 in the Supplement).

    Each patient was introduced to the study by their dermatologist (R.I.H. or A.M.). Additional information was provided by either a research assistant (S.J.L. or K.L.) or medical student (A.B.P. or E.T.). This individual, who had no previously established relationship with the patient, obtained verbal informed consent and conducted and recorded the interview. Half of the patients in each cohort were interviewed about a direct-to-patient AI tool and the other half were interviewed about a clinician decision-support AI tool. To minimize bias, we alternated the question order such that half of the patients in each cohort and for each AI tool were asked about benefits prior to risks and the other half were asked about risks prior to benefits. We used a standardized data collection instrument to abstract patient age, sex, race, ethnicity, and history of melanoma and/or nonmelanoma skin cancer from the electronic medical record. After each interview, we distributed a paper survey (eTable 3 in the Supplement) to collect data on patient ownership of electronic devices, usage of digital services, prior dermatology exposure, educational level, and total household income.
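
    To make the counterbalancing concrete, the sketch below shows one way the 48 patients could be split evenly across the 3 cohorts, 2 AI tools, and 2 question orders described above. This is a minimal illustration in Python; the cohort, tool, and order labels are invented for the example and are not the study's actual instrument.

```python
import itertools
import random

COHORTS = ["melanoma", "nonmelanoma_only", "no_skin_cancer"]  # 16 patients each
TOOLS = ["direct_to_patient", "clinician_decision_support"]   # 8 per cohort each
ORDERS = ["benefits_first", "risks_first"]                    # 4 per cohort and tool

def build_assignments(patients_per_cohort=16, seed=0):
    """Return a balanced cohort x tool x question-order assignment,
    mirroring the alternation described in the Methods."""
    rng = random.Random(seed)
    assignments = []
    per_cell = patients_per_cohort // (len(TOOLS) * len(ORDERS))  # 16 // 4 = 4
    for cohort in COHORTS:
        cells = [
            {"cohort": cohort, "tool": tool, "order": order}
            for tool, order in itertools.product(TOOLS, ORDERS)
            for _ in range(per_cell)
        ]
        rng.shuffle(cells)  # randomize the sequence in which the cells are filled
        assignments.extend(cells)
    return assignments

assignments = build_assignments()
# 24 direct-to-patient and 24 clinician decision-support interviews, as reported
assert sum(a["tool"] == "direct_to_patient" for a in assignments) == 24
```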

    Statistical Analysis

    We used a grounded theory approach to develop the codebook (Figure 2). Four of us (C.A.N., L.M.P.-C., R.I.H., and A.M.) independently coded the first 12 interviews, and codes were refined until consensus was reached. The codebook was sent for expert review. After the first round of revisions, the codebook was distributed to 2 coding teams (A.C. and P.M. and S.J.L. and K.L.). Each team member independently coded 24 interviews, and novel codes were refined until consensus was reached among 8 of us (C.A.N., L.M.P.-C., A.C., S.J.L., K.L., P.M., R.I.H., and A.M.). The codebook was sent for expert review. After the second round of revisions, the codebook (eTable 4 in the Supplement) was distributed to each team. No novel codes were identified in the final 12 interviews, achieving thematic saturation. Team members independently finalized coding and reconciled discrepancies. Code frequencies were calculated based on reconciled codes.
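
    The frequency calculation itself reduces to simple set arithmetic: a code's frequency is the number of interviews whose reconciled code set contains it, counted at most once per interview. Below is a minimal Python sketch with invented interview data (the actual analysis was performed in NVivo, as noted in the next paragraph).

```python
from collections import Counter

# Hypothetical reconciled codes per interview (codes agreed on by both
# coders after discrepancies were resolved); the real data had 48 interviews.
reconciled = {
    "interview_01": {"cognition", "symbiosis", "increased_speed"},
    "interview_02": {"symbiosis", "less_accurate_diagnosis"},
    "interview_03": {"cognition", "symbiosis"},
}

def code_frequencies(coded_interviews):
    """Count, for each code, the number of interviews in which it appears."""
    counts = Counter()
    for codes in coded_interviews.values():
        counts.update(codes)  # a set, so each code counts once per interview
    return counts

n = len(reconciled)
for code, count in code_frequencies(reconciled).most_common():
    print(f"{code}: {count}/{n} ({100 * count / n:.0f}%)")
```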

    Continuous variables were summarized with means and SDs. Categorical variables are reported as proportions and percentages. Interrater reliability was assessed using the Cohen κ coefficient. Qualitative and quantitative analyses were performed in NVivo, version 12.1 (QSR International).
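
    For reference, the Cohen κ reported here compares observed agreement between two coders with the agreement expected by chance. A minimal sketch of the two-rater case follows; in practice one would typically rely on an existing implementation such as scikit-learn's cohen_kappa_score, and the toy ratings below are invented for illustration.

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from each rater's marginals."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (p_o - p_e) / (1 - p_e)

# Toy example: 1 = code applied, 0 = not applied, across 10 interview segments
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(round(cohen_kappa(a, b), 2))  # 0.8 ("substantial" per Viera and Garrett11)
```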

    Results

    All patients informed about the study consented to participate. A total of 48 patients were enrolled, and the mean (SD) interview duration was 22 (9) minutes. Patient characteristics are detailed in Table 1. The mean (SD) age was 53.3 (21.7) years; 26 participants (54%) were women. Most patients self-reported race and ethnicity as white (45 [94%]) and non-Hispanic (45 [94%]). According to the study design, 16 patients (33%) had a history of melanoma, 16 patients (33%) had a history of nonmelanoma skin cancer only, and 16 patients (33%) had no history of skin cancer. Forty-three patients (90%) reported ownership of at least 1 electronic device, most often a computer (39 [81%]) and/or smartphone (39 [81%]). Forty-six patients (96%) reported use of digital services and 43 patients (90%) reported use of digital services for health. Google was the most commonly used digital service in each category, reported by 44 patients (92%) for overall digital services and 39 patients (81%) for health. Forty-six patients (96%) reported a prior dermatology clinic visit. Patients had a high educational level (20 [42%] graduate or professional degree) and total household income (20 [42%] total household income ≥$150 000).

    The interrater reliability between the coding teams was κ = 0.94 (A.C. and P.M.) and κ = 0.89 (S.J.L. and K.L.), indicating almost perfect agreement.11 Table 2 presents the overall frequency of each code along with frequency in the direct-to-patient and clinician decision-support interview groups.

    When patients were asked, “What comes to mind when you think about AI?”, the predominant theme that emerged was cognition (36 [75%]), such as game playing. Other themes included machine (11 [23%]) and modernity (11 [23%]). Some patients linked cognition and machine, for example, describing AI as the “replacement of human observation and decision-making by machines.” Figure 3 illustrates patient conceptions of AI in a word cloud. Ten patients (21%) had no concept of AI prior to the study.
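
    A word-size encoding like that of Figure 3 can be reproduced directly from term frequencies. The sketch below uses the third-party wordcloud package with invented frequencies purely for illustration; the study's actual terms and counts are in Figure 3 and Table 2, not reproduced here.

```python
# pip install wordcloud matplotlib
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Invented term frequencies standing in for patients' descriptions of AI.
freqs = {
    "computers": 12, "robots": 9, "thinking": 7, "machine learning": 6,
    "decision-making": 5, "modern": 4, "data": 4, "game playing": 3,
}

# Word size scales with frequency, matching the Figure 3 legend.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(freqs)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```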

    The most commonly perceived benefits of AI tools for skin cancer screening were increased diagnostic speed (29 [60%]) and health care access (29 [60%]). These benefits were identified more often by patients in the direct-to-patient compared with the clinician decision-support interview group. Patients associated increased diagnostic speed with early skin cancer detection and lifesaving potential. Increased health care access was multifaceted, deriving from gains in labor efficiency and time for physician-patient interaction, remote diagnosis, and unburdening of the health care system. One patient noted that AI “could reach people who don't have great access to health care but may have an iPhone.” Other perceived benefits included reduced health care cost (17 [35%]), reduced patient anxiety (16 [33%]), and increased triage efficiency (14 [29%]).

    The greatest perceived risk of AI for skin cancer screening was increased patient anxiety (19 [40%]), identified more often by patients in the direct-to-patient interview group. One patient asked, “Those people who are broken by the idea of getting something scary like cancer, where do they turn? They can call and make an appointment, but that’s not going to help them feel better.” Other perceived risks included human loss of social interaction (18 [38%]), patient loss of privacy (14 [29%]), patient loss to follow-up (14 [29%]), nefarious use of AI (11 [23%]), and human deskilling (10 [21%]).

    Patients perceived more accurate diagnosis (33 [69%]) as the greatest strength of AI compared with human skin cancer screening. This perception was based on the ability of AI to draw on more data or experience than humans, to learn and evolve, and to share data. One patient noted that AI “has a huge database of what diagnosis A is supposed to look like as opposed to a human who only has their own life experiences.” Another commonly perceived strength of AI was patient activation (19 [40%]) to seek both health information and health care. This strength was identified more often by patients in the direct-to-patient interview group. “Rather than…pondering for weeks or months whether it’s time to go see the doctor,” one patient noted that AI could be “an immediate indicator.” Other perceived strengths were more convenient diagnosis (14 [29%]), more consistent diagnosis (13 [27%]), more objective diagnosis (11 [23%]), and patient education (11 [23%]).

    At the same time, patients perceived less accurate diagnosis (41 [85%]) as the greatest weakness of AI. This perception was based on the potential for false-negatives, false-positives, inaccurate or limited training set, lack of context, lack of physical examination, and operator dependence. “Examining a photograph that you get with your phone in variable light,” commented one patient, “is not a substitute for [an] in-person exam.” Other commonly perceived weaknesses of AI were lack of verbal communication (28 [58%]), lack of emotion (20 [42%]), and lack of nonverbal communication (19 [40%]). In the realm of verbal communication, patients called attention to the inability of AI to answer follow-up questions, discuss treatment options, and educate and reassure the patient. “People can…be really anxious, sad, fearful,” one patient commented, “and the app’s not going to be able to sense that.” In the realm of emotion, patients noted AI’s lack of compassion and empathy. One patient expressed, “You can’t write an algorithm to love somebody.” In the realm of nonverbal communication, patients called attention to AI’s lack of emotion perception, “eye contact,” and “human touch.”

    The dominant theme in both interview groups was the importance of a symbiotic relationship between humans and AI (45 [94%]). Patients envisioned AI referring to a physician and providing a second opinion for a physician. “The problem comes from replacing a person with [AI],” one patient explained, describing AI instead as a “tool for a dermatologist.” Credibility (30 [63%]) was another common theme that emerged. “I would probably need…feedback from a medical professional to…trust the app,” stated one patient, “because it’s like a black box…Algorithms with databases behind them…can make errors.” Patients perceived AI as both a dynamic and static diagnostic tool (25 [52%]). One patient suggested, “Maybe if a mole was changing, there would be a way to track that.” Patients also identified setting as important (21 [44%]) in terms of both the health care institution and the patient. “I would have to know that this application was set up by the dermatologist… or [my] medical group,” said one patient, “I wouldn’t want it to be in the hands of a private company.” Other themes included perception of AI as an information tool (18 [38%]) and implementation challenges, such as malpractice (17 [35%]).

    The most common response in the event that a human and AI reached conflicting diagnoses of melanoma and benign skin lesion was to seek a biopsy (32 [67%]). As one patient put it, “Let’s get the biopsy and find out what the story is.” The second most common response was to “put more faith in the doctor” (29 [60%]). The third most common response was to seek an opinion from another physician (20 [42%]). “I would get another opinion from another human,” a patient stated, “another dermatologist.” The fourth most common response was to seek longitudinal follow-up from the same physician (11 [23%]).

    When asked to identify entities responsible for AI accuracy, patients most often named the technology company (25 [52%]) and the physician (20 [42%]), followed by the collective (12 [25%]) and the health care institution (11 [23%]). When asked to identify entities responsible for AI data privacy, patients most often named the health care institution (25 [52%]) and the technology company (19 [40%]).

    Overall, 36 patients (75%) would recommend the AI tool to family members and friends, 9 patients (19%) were ambivalent, and 3 patients (6%) would not recommend it. Specifically, 17 patients (71%) would recommend the direct-to-patient AI tool and 19 patients (79%) would recommend the clinician decision-support AI tool.

    Discussion

    The Institute of Medicine included patient-centered care as 1 of 6 specific aims for improving the health system, “ensuring that patient values guide all clinical decisions.”12(p3) In an era of rapidly evolving technology, advancing our understanding of patient values is essential to optimize the quality of care.

    The key finding of our study was that 75% of patients would recommend the use of AI for skin cancer screening to friends and family members; however, 94% expressed the importance of symbiosis between humans and AI. The term man-computer symbiosis was first used by Licklider13 to describe a form of teamwork in which humans provide strategic input while computers provide depth of analysis.14 Considering the use of AI for skin cancer screening, particularly the direct-to-patient tool, patients were enthusiastic about increased diagnostic speed (60%), increased health care access (60%), and patient activation (40%) but worried about increased patient anxiety (40%). Patients valued human verbal (58%) and nonverbal (40%) communication and emotion (42%). Rather than replacing a physician, patients envisioned AI referring to a physician and providing a second opinion for a physician. The significance of symbiosis to patients in this study suggests that they may be more receptive to augmented intelligence15 compared with AI tools for skin cancer screening.

    A second key finding of our study was the emphasis that patients placed on the accuracy of AI for skin cancer screening. Patients viewed AI as a diagnostic tool (52%) but perceived accuracy to be both its greatest strength (69%) and greatest weakness (85%). Although this finding may seem contradictory, patients had a nuanced perspective. Patients recognized the ability of AI to draw on more data or experience than humans, learn and evolve, and share data; however, they were concerned about the potential for false-negatives, false-positives, inaccurate or limited training set, lack of context, lack of physical examination, and operator dependence. This finding highlights the importance of validating the accuracy of AI tools for skin cancer screening in a manner that is transparent to patients prior to implementation.

    A third key finding of this study was the heterogeneity of patient perspectives on AI. Patients primarily conceptualized AI in terms of cognition (75%), but their words revealed both positive and negative vantage points. A multiplicity of themes emerged on the use of AI for skin cancer screening, but few were identified by most patients. For example, when asked to identify entities responsible for AI accuracy and data privacy, most patients named the technology company (52%) and health care institution (52%), respectively. However, the list of entities was long and diverse. This finding suggests that patient perspectives on AI have not yet solidified, substantiated by the emergence of credibility (63%) and setting (44%) as common themes under implementation. The most common responses in the event that a human and AI reached conflicting diagnoses of melanoma and benign skin lesion were to seek a biopsy (67%), trust the physician (60%), and seek an opinion from another physician (42%). This finding suggests that physicians are likely to play an important role in shaping patient perspectives on the use of AI for skin cancer screening moving forward.

    Contextualizing our results within the medical literature is limited by the paucity of data on how patients perceive AI for skin cancer screening. Dieng et al16 found that patients lack confidence in their ability to undertake skin self-examination after treatment for localized melanoma and that some patients are receptive to assistance from new digital technologies. In line with the symbiosis theme in our study, Tran et al17 found that most patients were ready to accept the use of AI for skin cancer screening after reading a vignette but only under human control. In addition, Tong and Sopory18 found that people’s intentions to use AI for skin cancer detection were influenced by messaging that they received about the risks and benefits of AI.

    Limitations

    Our results should be interpreted in the context of limitations regarding our study design. First, this was a qualitative study with a limited sample size. Because, to our knowledge, no prior study had elicited domains from patients regarding the use of AI for skin cancer screening, we adopted the semistructured interview approach with the goal of establishing a platform for future quantitative investigation. To ensure rigorous methods, we followed the COREQ checklist,8 developed the guides using an established framework,9 and measured interrater reliability.11 Expert reviewers were used to develop the guides and codebook. Second, the demographic characteristics of our patients may limit generalizability to other study populations. Future studies are essential to elucidate perspectives of patients with diverse racial, ethnic, and socioeconomic backgrounds and with varying levels of education and access to dermatologic care. This expansion is particularly important in light of concerns raised that AI tools may exacerbate health care disparities in dermatology.19 In addition, patients were interviewed about a hypothetical scenario involving an AI tool with which they lacked familiarity in practice.

    Conclusions

    Topol2(p44) opened a recent review article on the convergence of human and artificial intelligence by writing, “Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient-doctor relationship or facilitate its erosion remains to be seen.” Our results indicate that most patients are receptive to the use of AI for skin cancer screening within the framework of human-AI symbiosis. Although additional research is required, the themes that emerged in this study have important implications across the house of medicine. Through patients’ eyes, augmented intelligence may improve health care quality but should be implemented in a manner that preserves the integrity of the human physician-patient relationship.

    Article Information

    Accepted for Publication: January 7, 2020.

    Corresponding Author: Arash Mostaghimi, MD, MPA, MPH, Harvard Medical School, Department of Dermatology, Brigham and Women's Hospital, Brigham Dermatology Associates, 221 Longwood Ave, Boston, MA 02115 (amostaghimi@bwh.harvard.edu).

    Published Online: March 11, 2020. doi:10.1001/jamadermatol.2019.5014

    Author Contributions: Drs Nelson and Mostaghimi had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

    Concept and design: Nelson, Pérez-Chada, Lo, Tkachenko, Barbieri, Ko, Mostaghimi.

    Acquisition, analysis, or interpretation of data: All authors.

    Drafting of the manuscript: Nelson, Pournamdari.

    Critical revision of the manuscript for important intellectual content: Pérez-Chada, Creadore, Li, Lo, Manjaly, Tkachenko, Barbieri, Ko, Menon, Hartman, Mostaghimi.

    Statistical analysis: Nelson, Mostaghimi.

    Administrative, technical, or material support: Nelson, Pérez-Chada, Creadore, Li, Pournamdari, Tkachenko, Ko, Mostaghimi.

    Supervision: Pérez-Chada, Mostaghimi.

    Conflict of Interest Disclosures: Dr Pérez-Chada reported receiving grants from the National Psoriasis Foundation outside the submitted work. Dr Ko reported serving as chair of the American Academy of Dermatology Task Force on Augmented Intelligence. Dr Mostaghimi reported receiving personal fees from 3Derm and Pfizer and serving as clinical investigator, with no personal financial compensation, for Incyte, Concert, Eli Lilly, and Aclaris outside the submitted work. No other disclosures were reported.

    Funding/Support: Dr Barbieri is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health award T32-AR-007465 and receives partial salary support through a Pfizer Fellowship grant to the Trustees of the University of Pennsylvania. Dr Hartman is supported by American Skin Association Research Grant 120795.

    Role of the Funder/Sponsor: The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

    Disclaimer: Dr Mostaghimi is an Associate Editor of JAMA Dermatology, but was not involved in any of the decisions regarding review of the manuscript or its acceptance.

    References
    1. Balthazar P, Harri P, Prater A, Safdar NM. Protecting your patients’ interests in the era of big data, artificial intelligence, and predictive analytics. J Am Coll Radiol. 2018;15(3, pt B):580-586. doi:10.1016/j.jacr.2017.11.035
    2. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7
    3. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. doi:10.1038/nature21056
    4. Haenssle HA, Fink C, Schneiderbauer R, et al; Reader Study Level-I and Level-II Groups. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29(8):1836-1842. doi:10.1093/annonc/mdy166
    5. Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018;138(7):1529-1538. doi:10.1016/j.jid.2018.01.028
    6. Zakhem GA, Motosko CC, Ho RS. How should artificial intelligence screen for skin cancer and deliver diagnostic predictions to patients? JAMA Dermatol. 2018;154(12):1383-1384. doi:10.1001/jamadermatol.2018.2714
    7. Rigby MJ. Ethical dimensions of using artificial intelligence in health care. AMA J Ethics. 2019;21(2):E121-E124. doi:10.1001/amajethics.2019.121
    8. Tong A, Sainsbury P, Craig J. Consolidated Criteria for Reporting Qualitative Research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-357. doi:10.1093/intqhc/mzm042
    9. Kallio H, Pietilä AM, Johnson M, Kangasniemi M. Systematic methodological review: developing a framework for a qualitative semi-structured interview guide. J Adv Nurs. 2016;72(12):2954-2965. doi:10.1111/jan.13031
    10. Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J. 2008;204(6):291-295. doi:10.1038/bdj.2008.192
    11. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-363.
    12. Institute of Medicine Committee on Quality of Health Care. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
    13. Licklider JCR. Man-computer symbiosis. IRE Trans Hum Factors Electron. 1960;HFE-1:4-11. doi:10.1109/THFE2.1960.4503259
    14. Nelson CA, Kovarik CL, Barbieri JS. Human-computer symbiosis: enhancing dermatologic care while preserving the art of healing. Int J Dermatol. 2018;57(8):1015-1016. doi:10.1111/ijd.14071
    15. Kovarik C, Lee I, Ko J; Ad Hoc Task Force on Augmented Intelligence. Position statement on augmented intelligence (AuI). J Am Acad Dermatol. 2019;81(4):998-1000. doi:10.1016/j.jaad.2019.06.032
    16. Dieng M, Smit AK, Hersch J, et al. Patients’ views about skin self-examination after treatment for localized melanoma. JAMA Dermatol. Published online 2019. doi:10.1001/jamadermatol.2019.0434
    17. Tran V-T, Riveros C, Ravaud P. Patients’ views of wearable devices and AI in healthcare: findings from the ComPaRe e-cohort. NPJ Digit Med. 2019;2:53. doi:10.1038/s41746-019-0132-y
    18. Tong ST, Sopory P. Does integral affect influence intentions to use artificial intelligence for skin cancer screening? a test of the affect heuristic. Psychol Health. 2019;34(7):828-849. doi:10.1080/08870446.2019.1579330
    19. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol. 2018;154(11):1247-1248. doi:10.1001/jamadermatol.2018.2348