Brief Report
December 20, 2018

Visualizing Deep Learning Models for the Detection of Referable Diabetic Retinopathy and Glaucoma

Author Affiliations
  • 1Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
  • 2State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
JAMA Ophthalmol. Published online December 20, 2018. doi:10.1001/jamaophthalmol.2018.6035
Key Points

Question  Can the networks of 2 validated deep learning models for referable diabetic retinopathy and glaucomatous optic neuropathy be reliably visualized?

Findings  In this cross-sectional study, lesions typically observed in cases of referable diabetic retinopathy (exudate, hemorrhage, or vessel abnormality) were identified as the most important prognostic regions in 96 of 100 true-positive diabetic retinopathy cases. All 100 glaucomatous optic neuropathy cases displayed heat map visualization within traditional disease regions.

Meaning  These findings substantiate the validity of deep learning models, verifying the reliability of a visualization method that may promote clinical adoption of these models.

Abstract

Importance  Convolutional neural networks have recently been applied to ophthalmic diseases; however, the rationale for the outputs generated by these systems is inscrutable to clinicians. A visualization tool is needed that would enable clinicians to understand important exposure variables in real time.

Objective  To systematically visualize the convolutional neural networks of 2 validated deep learning models for the detection of referable diabetic retinopathy (DR) and glaucomatous optic neuropathy (GON).

Design, Setting, and Participants  The GON and referable DR algorithms were previously developed and validated (holdout method) using 48 116 and 66 790 retinal photographs, respectively, derived from a third-party database (LabelMe) of deidentified photographs from various clinical settings in China. In the present cross-sectional study, a random sample of 100 true-positive photographs and all false-positive cases from each of the GON and DR validation data sets were selected. All data were collected from March to June 2017. The original color fundus images were processed using an adaptive kernel visualization technique: a 28 × 28-pixel sliding window with a 3-pixel stride cropped each image into overlapping subimages, from which a feature map was produced. Threshold scales were adjusted to optimal levels for each model to generate heat maps highlighting localized landmarks on the input image. A single optometrist allocated each image to predefined categories based on the generated heat map.
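The sliding-window preprocessing described above can be sketched as follows. This is a minimal illustration in Python/NumPy of cropping an image into overlapping 28 × 28-pixel subimages with a 3-pixel stride; the function name and return format are illustrative assumptions, not the authors' implementation, and the per-patch scoring and thresholding that produce the final heat map are omitted.

```python
import numpy as np

def sliding_window_crops(image, window=28, stride=3):
    """Crop an image into overlapping subimages with a sliding window.

    Illustrative sketch (not the authors' code): a window of
    `window` x `window` pixels slides across the image with the given
    stride. Each crop would be scored by the model, and the grid of
    scores forms the feature map behind the heat map visualization.
    """
    h, w = image.shape[:2]
    crops = []
    positions = []  # top-left (row, col) of each crop
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            crops.append(image[y:y + window, x:x + window])
            positions.append((y, x))
    return np.stack(crops), positions
```

For example, a 64 × 64-pixel image yields 13 window positions per axis ((64 − 28) / 3 + 1), or 169 subimages in total, so the resulting feature map would be a 13 × 13 grid of per-patch scores.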

Main Outcomes and Measures  Visualization regions of the fundus.

Results  In the GON data set, 90 of 100 true-positive cases (90%; 95% CI, 82%-95%) and 15 of 22 false-positive cases (68%; 95% CI, 45%-86%) displayed heat map visualization within regions of the optic nerve head only. Lesions typically seen in cases of referable DR (exudate, hemorrhage, or vessel abnormality) were identified as the most important prognostic regions in 96 of 100 true-positive DR cases (96%; 95% CI, 90%-99%). In 39 of 46 false-positive DR cases (85%; 95% CI, 71%-94%), the heat map displayed visualization of nontraditional fundus regions with or without retinal venules.

Conclusions and Relevance  These findings suggest that this visualization method can highlight traditional regions in disease diagnosis, substantiating the validity of the deep learning models investigated. This visualization technique may promote the clinical adoption of these models.
