Google users increasingly search the web for medical concerns; the results of these searches can influence outreach to clinicians by offering reassurance or generating concern. Accurate web information is therefore critical. For nondermatologic concerns, these queries rely on text cues. For dermatologic concerns, however, patients often lack the linguistic descriptors needed to accurately inform and narrow their searches. Google Reverse Image Search permits users to upload an image and search for similar images and content. We anticipate growing user adoption of Google Reverse Image Search to aid self-identification of skin lesions, as well as a role for this technology as an adjunct to clinical dermatologic expertise, given that patients inevitably search the web about their concerns before, during, and after clinical dermatologic consultation. While potentially informative, the results of such searches may generate concern, and missed diagnoses may have severe implications. We therefore tested Google’s ability to accurately identify similar images with matched diagnoses.
A set of 100 classic images representing 10 of the most common benign and malignant dermatologic neoplasms was selected from a non–web-indexed database of photographs taken at a US Navy Medical Center by board-certified dermatologists (Figure). Institutional review board approval was waived by Stanford University. Images were processed based on a previously described protocol,1 including cropping to remove the influence of anatomical location, and then uploaded into Google’s Reverse Image Search with or without the additional text descriptor “skin.” The top 10 “visually similar images” were analyzed for the presence of the correct diagnosis within these results.
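The scoring step described above—checking whether the correct diagnosis appears among the top 10 “visually similar images”—can be sketched as follows. This is an illustrative reconstruction only: the function name, record format, and example data are assumptions, not the study’s actual pipeline, and the search results themselves were collected manually.

```python
# Hypothetical sketch of the top-10 scoring step: given the true diagnosis
# for each uploaded image and the diagnoses matched in its top 10
# "visually similar images," compute the per-diagnosis hit rate.
from collections import defaultdict

def top10_hit_rates(records):
    """Fraction of images, per diagnosis, whose top-10 results
    contain the correct diagnosis."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for true_dx, top_results in records:
        totals[true_dx] += 1
        if true_dx in top_results[:10]:
            hits[true_dx] += 1
    return {dx: hits[dx] / totals[dx] for dx in totals}

# Illustrative example (not study data):
records = [
    ("melanoma", ["melanoma", "nevus", "seborrheic keratosis"]),
    ("melanoma", ["nevus", "lentigo", "insect"]),
]
print(top10_hit_rates(records))  # → {'melanoma': 0.5}
```

A hit rate of 1.0 would mean the correct diagnosis appeared in the top 10 results for every image of that neoplasm.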
For malignant conditions except invasive squamous cell carcinoma (SCC), the additional text descriptor “skin” significantly increased both the frequency with which the correct diagnosis appeared among the top results and the accuracy, defined as the proportion of correct diagnoses within the top 10 unfiltered results (Table); adding text to an image search therefore likely filters restrictively to pages matching both. Google achieved greater diagnostic accuracy for all malignant and premalignant conditions combined searched with “skin” than for benign conditions searched with “skin” (P < .001, by Fisher exact test). Even with the “skin” text cue, the correct diagnosis was absent from the most similar images 20% to 30% of the time for skin cancers and 30% to 100% of the time for benign neoplasms, highlighting the critical potential for misdiagnoses as well as uninformative results; returned search images included cosmetic products and insects.
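The malignant-vs-benign comparison above uses the Fisher exact test on a 2 × 2 table of correct vs incorrect matches. A minimal sketch of that test, implemented from the hypergeometric distribution, is below; the cell counts shown are hypothetical placeholders, since the study’s exact counts appear only in the Table.

```python
# Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]],
# summing hypergeometric probabilities of all tables at least as
# extreme as the one observed.
from math import comb

def fisher_exact_two_sided(table):
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_of(x):  # P(X = x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_of(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_of(x) for x in range(lo, hi + 1) if p_of(x) <= p_obs + 1e-12)

# Hypothetical counts: correct/incorrect matches for malignant or
# premalignant vs benign conditions searched with "skin" (not study data).
p = fisher_exact_two_sided([[40, 10], [15, 35]])
print(f"P = {p:.2e}")
```

On the classic “lady tasting tea” table [[3, 1], [1, 3]], this returns the textbook two-sided value of 34/70 ≈ 0.486, which serves as a sanity check of the implementation.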
However powerful Google’s visual object recognition algorithms may be, they are not geared toward biological image matching; Google’s “deep learning” methods use large amounts of presumably nonmedically annotated data to learn image features automatically,2,3 rather than relying on the historically used manual handcrafting of key image features, termed feature extraction.4 The latter is influenced by domain expertise more analogous to dermatologic training than is deep learning, with the potential to hierarchically weight clinically relevant features. Anticipating that patients will increasingly self-generate differential diagnoses for medical conditions by both text and image searches in the coming years, we highlight the present inadequacies and risks associated with the use of Google Reverse Image Search: although the current algorithms perform better with lesions representing or concerning for malignancy, as well as with text cues, the error rate remains, at best, too high for the results to be safely used by those without dermatologic training. These data highlight the need for a more medically oriented toolkit and offer caution against use of the current technology. We propose that Google partner with the medical community to develop a hybrid approach synergizing computer learning with human medical expertise, and we caution widely against the present inaccuracies and danger of patient-initiated differential diagnosis generation from input photo matches.
Corresponding Author: Kavita Y. Sarin, MD, PhD, Department of Dermatology, Stanford University School of Medicine, 450 Broadway St, Pavilion C, Second Flr, Redwood City, CA 94063 (firstname.lastname@example.org).
Accepted for Publication: May 12, 2016.
Published Online: June 29, 2016. doi:10.1001/jamadermatol.2016.2096
Author Contributions: Dr Sarin and Ms Ransohoff had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Ransohoff, Sarin.
Acquisition, analysis, or interpretation of data: Ransohoff, Li, Sarin.
Drafting of the manuscript: Ransohoff, Sarin.
Critical revision of the manuscript for important intellectual content: Ransohoff, Sarin.
Statistical analysis: Ransohoff, Li.
Administrative, technical, or material support: Sarin, Ransohoff.
Study supervision: Sarin, Ransohoff.
Conflict of Interest Disclosures: None reported.
Funding/Support: This study was funded by a Dermatology Foundation Medical Career Development Award (Dr Sarin).
Role of the Funder/Sponsor: The Dermatology Foundation had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
1. DD. Using Google Reverse Image Search to decipher biological images. Curr Protoc Mol Biol. 2015;111:19.13.1-4.