Kloosterboer A, Yannuzzi NA, Patel NA, Kuriyan AE, Sridhar J. Assessment of the Quality, Content, and Readability of Freely Available Online Information for Patients Regarding Diabetic Retinopathy. JAMA Ophthalmol. Published online August 22, 2019. doi:10.1001/jamaophthalmol.2019.3116
Key Points
Question
What are the quality, accuracy, and readability of online information on diabetic retinopathy?
Findings
In this cross-sectional study, 11 diabetic retinopathy websites were analyzed. All were of poor quality and showed substantial variation in content accuracy and readability.
Meaning
These data suggest that available online information on diabetic retinopathy is typically insufficient to support patients in making appropriate medical decisions.
Importance
Diabetic retinopathy is a global leading cause of blindness. Patients increasingly use the internet to search for health-related information that may affect medical decision-making, but to date, no standard exists across published websites.
Objective
To assess the quality, content, and readability of information found online for diabetic retinopathy.
Design and Setting
This cross-sectional study analyzed 11 medical sites with information on diabetic retinopathy. Twenty-six questions were composed to include information most relevant to patients, and each website was independently evaluated by 1 vitreoretinal surgeon and 2 vitreoretinal fellows. Readability was analyzed using an online readability tool. The JAMA benchmarks were used to evaluate the quality of each site. Data were collected from December 2018 to January 2019 and analyzed in February 2019.
Main Outcomes and Measures
A 26-question survey, JAMA benchmarks, Flesch reading ease score, Flesch-Kincaid grade level, Gunning Fog Index, Coleman-Liau Index, and Simple Measure of Gobbledygook Index.
Results
The mean (SD) questionnaire score for all websites was 55.76 (13.38; 95% CI, 47.85-63.67) of 104 possible points. A difference in content quality was found among the websites (H = 25.811, P = .004). The mean (SD) reading grade for all websites was 11.30 (1.79; 95% CI, 10.24-12.36). No correlation was found between content accuracy and the mean reading grade (r = 0.445, P = .17) or Google rank (r = −0.260, P = .43). No website achieved all 4 JAMA benchmarks, and only 1 website achieved 3 of the 4 JAMA benchmarks. No correlation was found between the accuracy of the content of the website and JAMA benchmarks (r = 0.422, P = .20). The interobserver reproducibility was similar among the 3 observers (r = 0.87 between observers 1 and 2, r = 0.83 between observers 1 and 3, and r = 0.84 between observers 2 and 3, P < .001).
Conclusions and Relevance
These findings suggest that freely available information online about diabetic retinopathy varies by source but is generally of low quality. The material presented seems difficult to interpret and exceeds the recommended reading level for health information. Based on the grading scheme used, most websites reviewed did not provide sufficient information to support patients in making medical decisions.
Diabetic retinopathy (DR) is the leading cause of vision loss in US adults.1 It is projected that by 2050, the number of people with DR and vision-threatening DR will triple.2 Vision loss is associated with lower quality of life, progression of disabilities, and increased functional dependency.3
Patients at risk for DR or with a new diagnosis of DR may attempt to supplement information from their ophthalmologist and primary care physicians by conducting research on the internet. The Pew Research Center found that 72% of adults who use the internet search for health-related material.4 Studies5,6 have found that information presented online can have an important role in patient decision-making and treatment. However, no standard exists to date regarding the accuracy or quality of the content displayed. Online information can therefore be incomplete, be inaccurate, or have an underlying commercial bias. Thus, the usefulness of these resources can vary greatly by source.
Another challenge associated with online health information is the readability of the text. Information must strike a balance between being factually correct and being comprehensible to laypeople without a background in medicine. Health literacy, as defined by the National Assessment of Adult Literacy, refers to “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”7 The latest report from the National Assessment of Adult Literacy states that the average adult can read and comprehend at the eighth-grade level.7 Thus, the US Department of Health and Human Services recommends that information targeted for patients should be written at or below the sixth-grade level. However, online health information can exceed the recommended reading level.8,9 Lower health literacy has been associated with more hospitalizations and poorer outcomes.10 It is essential that information accessed by patients is not only complete and accurate but also easy to read and comprehend.
Previous studies11,12 have found that online materials in ophthalmology exceed the recommended reading grade level. However, these analyses were limited to information provided by major ophthalmologic associations or the top 10 ranked patient-oriented websites, which were not named. Furthermore, these studies did not investigate the quality of the information provided. One group13 analyzed the quality of online sources on rhegmatogenous retinal detachment and found significant variation in the content provided. Because that study was centered on a specific ophthalmic condition, the information gained cannot be extrapolated to all diseases in ophthalmology. However, it highlights the importance of assessing the information patients are accessing, especially in a disease as prevalent as DR. Given the important role of health-related internet searches and the increasing rates of DR, the aim of this study was to evaluate freely available information for patients about DR by analyzing the content, quality, and readability of commonly used medical websites.
Methods
The keyword diabetic retinopathy was entered in a Google.com search, and major medical sites were selected for analysis. These sites included the American Academy of Ophthalmology (https://www.aao.org/), All About Vision (https://www.allaboutvision.com/), American Optometric Association (https://www.aoa.org/), American Society of Retina Specialists (https://www.asrs.org/), EyeWiki (https://eyewiki.org/Main_Page), Mayo Clinic (https://www.mayoclinic.org/), Medical News Today (https://www.medicalnewstoday.com/), MedicineNet (https://www.medicinenet.com/), National Eye Institute (https://nei.nih.gov/), WebMD (https://www.webmd.com/), and Wikipedia (https://www.wikipedia.org/). The rank of each website was recorded. We composed 26 questions based on questions frequently asked by patients in our practice. They were designed to encompass information generally conveyed during a patient evaluation and were used to assess the accuracy and completeness of the content on each website (Table 1). The evaluation was conducted by 1 vitreoretinal surgeon (J.S.) and 2 vitreoretinal surgery fellows (N.A.Y. and N.A.P.). A grading scheme was created on a scale of 0 to 4 to assess each question. A score of 0 indicates that no information for that question was available; 1 point corresponds to an answer that is unclear, is inaccurate, or omits significant information and displays poor organization; 2 points correspond to an answer that is partially complete and somewhat addresses the concept but has gaps in information and organization; 3 points correspond to an answer that provides the essential elements and addresses the most relevant points in a focused and organized way; and 4 points correspond to an answer that is accurate and thorough, explaining the information in a clear, focused, and organized manner.
Each question was independently assessed by each observer, and the interobserver reproducibility was measured using a Spearman correlation. The mean (SD) score among the 3 observers was used to compare the quality of each site. The mean score of each site was correlated with its rank on Google.com.
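As an illustration, the interobserver reproducibility step can be sketched with SciPy's Spearman rank correlation in place of the SPSS procedure the study used; the three observers' per-question scores below are hypothetical stand-ins, not the study's data.

```python
# Sketch of the interobserver reproducibility analysis: Spearman rank
# correlation between each pair of observers' per-question scores (0-4).
# The score lists are hypothetical; the study used 3 observers and
# 26 questions per website.
from itertools import combinations
from scipy.stats import spearmanr

scores = {
    "observer_1": [4, 3, 0, 2, 4, 1, 3, 2],
    "observer_2": [4, 2, 0, 2, 3, 1, 3, 2],
    "observer_3": [3, 3, 1, 2, 4, 1, 2, 2],
}

reproducibility = {}
for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    rho, p_value = spearmanr(a, b)
    reproducibility[(name_a, name_b)] = rho
```

The mean score across the three observers would then serve as each site's content score, as described above.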
The JAMA benchmarks were used to assess the accountability of each website.14 This instrument evaluates the presence of 4 key components: authorship, attribution, disclosure, and currency. To comply with the authorship requirement as outlined by JAMA, a website must list the authors and contributors as well as their affiliations and relevant credentials. All attributions, or references, should be clearly listed, and any disclosures and the currency (date of last update) should be reported.
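For illustration, tallying the 4 benchmarks for a site reduces to a simple checklist; the site record below is hypothetical, not drawn from the study.

```python
# Tally how many of the 4 JAMA benchmarks (authorship, attribution,
# disclosure, currency) a website satisfies.
JAMA_BENCHMARKS = ("authorship", "attribution", "disclosure", "currency")

def benchmark_score(site: dict) -> int:
    """Return the number of benchmarks (0-4) the site meets."""
    return sum(1 for b in JAMA_BENCHMARKS if site.get(b, False))

# Hypothetical record for one site that lists authors and a date of update
example_site = {"authorship": True, "currency": True}
score = benchmark_score(example_site)  # 2 of 4 benchmarks met
```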
Readability was analyzed by Flesch reading ease score, Flesch-Kincaid grade level, Gunning Fog Index, Coleman-Liau Index, and Simple Measure of Gobbledygook (SMOG) Index using an online readability tool (Readable).15 The Flesch reading ease score uses the total words, sentences, and syllables in a text to produce a score of 0 to 100, with a higher score indicating an easier-to-read text. It is generally accepted that a score of 70 to 80 indicates a passage that can be understood by a seventh grader. The Flesch-Kincaid grade level, Gunning Fog Index, Coleman-Liau Index, and SMOG Index report a US reading grade level.
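The two Flesch formulas are simple linear functions of word, sentence, and syllable counts. A minimal sketch (not the Readable tool the study used, and taking the counts as given rather than parsing them from text):

```python
# Flesch reading ease: 0-100 scale, higher = easier to read.
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Flesch-Kincaid grade level: an approximate US school grade.
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a 100-word passage with 8 sentences and 160 syllables.
ease = flesch_reading_ease(100, 8, 160)    # about 58.8: fairly difficult
grade = flesch_kincaid_grade(100, 8, 160)  # about 8.2: eighth-grade level
```

The Gunning Fog, Coleman-Liau, and SMOG indices are analogous linear combinations of text statistics that likewise output a US grade level.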
Statistical Analysis
Data were collected from December 2018 to January 2019 and analyzed in February 2019. All statistical analyses were performed with IBM SPSS Statistics for Windows, version 25.0 (IBM Corp). For the content analysis, the data were treated as ordinal variables and analyzed with the Kruskal-Wallis test (with H being the test statistic), and a post hoc Dunn-Bonferroni test was performed to evaluate pairwise comparisons. For the readability analysis, the mean reading grade level of each website was likewise compared using a Kruskal-Wallis test with a post hoc Dunn-Bonferroni test. Spearman correlation tests were used to evaluate the correlations between accuracy and ranking and between accuracy and readability. For both analyses, statistical significance was set at P ≤ .005 for the main comparisons and Spearman correlations and at P ≤ .05 for pairwise comparisons. All P values were 2-sided.
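The main comparisons can be sketched with SciPy in place of SPSS; all score lists and rank data below are hypothetical illustrations. (Dunn's post hoc test is not part of SciPy itself; packages such as scikit-posthocs provide it.)

```python
# Kruskal-Wallis H test across websites, treating 0-4 content scores as
# ordinal, plus a Spearman correlation between mean content score and
# Google rank. All data here are hypothetical, not the study's data.
from scipy.stats import kruskal, spearmanr

site_a = [4, 3, 4, 2, 3, 4]  # per-question scores for three sites
site_b = [2, 1, 2, 0, 1, 2]
site_c = [3, 2, 3, 2, 2, 3]

# H statistic and p value for the overall difference among sites
h_stat, p_kw = kruskal(site_a, site_b, site_c)

mean_scores = [3.3, 1.3, 2.5]  # mean content score per site
google_rank = [2, 3, 1]        # position in search results
rho, p_rho = spearmanr(mean_scores, google_rank)
```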
Results
Eleven websites were analyzed in our study. The interobserver reproducibility was similar among the 3 observers (r = 0.87 between J.S. and N.A.Y., r = 0.83 between J.S. and N.A.P., and r = 0.84 between N.A.Y. and N.A.P.; P < .001). The mean (SD) questionnaire score for all websites was 55.76 (13.38; 95% CI, 47.85-63.67) of 104 possible points. A statistically significant difference in content accuracy and completeness was found among the websites (H = 25.811, P = .004). Wikipedia was the top-scoring website, with a mean (SD) of 76.67 (0.98) points, representing 74% of total possible points. WebMD was the lowest-scoring website, with a mean (SD) of 33 (1.10) points, which represents 32% of total possible points (Table 2). A significant difference was found between the lowest-scoring website, WebMD, and the top-scoring website, Wikipedia (H = −26.667, P = .04). No significant correlation was found between the rank in Google.com and the content quality of the website (r = −0.260, P = .43).
No website achieved all 4 JAMA benchmarks, and only 1 of 11 websites (9.1%) achieved 3 of the 4 JAMA benchmarks (Table 3). The most commonly displayed attributes were authorship and currency. No correlation was found between the content quality of the website and JAMA benchmarks (r = 0.422, P = .20).
The mean (SD) Flesch reading ease score was 46.05 (11.92; 95% CI, 39.01-53.10). The mean (SD) reading grade for all websites was 11.30 (1.79; 95% CI, 10.24-12.36) (Table 4). A significant correlation was found between the Flesch reading ease score and mean reading grade level (r = −0.955, P < .001). A significant difference was found in the mean reading grade level of the websites analyzed (H = 33.828, P < .001). WebMD had a significantly lower reading level than EyeWiki (H = 34.500, P = .008) and Wikipedia (H = −33.125, P = .02). The American Academy of Ophthalmology also had a significantly lower reading level than EyeWiki (H = −30.500, P = .04). No significant correlation was found between website quality and the mean reading grade (r = 0.445, P = .17).
Discussion
Patients often turn to the internet to find health-related information.4 Resources must be accurate, reliable, and readable to aid patients in medical decision-making. Lack of standardization and regulation of available health care information often leads to marked variation in the quality of the material published.16 The aim of this study was to analyze the accuracy and completeness of the content displayed, as well as the quality and readability of selected websites regarding DR for patients.
We detected significant variation in the accuracy and completeness of patient information on DR. Of note, Wikipedia received the highest score of all 11 websites, with a mean (SD) score of 76.67 (0.98), which represents 74% of total possible points. This finding indicates that although Wikipedia is more informative than other sites, it is not the ideal reference for patients. Our study also found a significant difference among websites, highlighting the importance of directing patients to certain sources while avoiding others. No website achieved all 4 JAMA benchmarks, and even EyeWiki, a website launched by the American Academy of Ophthalmology, met only 3 of the 4 benchmarks. Of note, EyeWiki may be designed to educate medical professionals rather than laypeople. On average, websites displayed only 1 benchmark, indicating that their overall accountability is poor. Of interest, no correlation was found between the accountability of the website as defined by JAMA and the accuracy of the content displayed. This finding suggests that resources with inaccurate information may nonetheless appear accountable and therefore seem more reliable than they are.
Websites aim to have the top rank in Google.com searches because this is associated with higher traffic and authority.17 A previous study18 found that the number of visitors that a website receives markedly changes with position in Google.com searches, with nearly 30% of users clicking on the first link and only 10% clicking on the third link. In this study, no correlation was found between the content quality of a website and its rank in Google.com searches, suggesting that this metric cannot be used by patients to gauge the reliability of the information shown.
The mean reading grade level across the 11 websites was equivalent to an 11th-grade level, in contrast to the 6th-grade reading level recommended by US Department of Health and Human Services guidelines. WebMD was the least complex website, with a Flesch reading ease score of 71.8, which corresponds to a seventh-grade reading level, and a mean (SD) reading grade level of 7.60 (1.13). Although this website scored the lowest in our content analysis, no overall correlation was found between the mean reading level and mean questionnaire score. This analysis suggests that websites about DR are written at a level too complex for the average adult. Previous studies11,12 have found that online information regarding ophthalmic diseases is written at a reading level greater than what is recommended. This study corroborates those results while providing material to aid in the counseling of patients with DR. The readability results, the poor content quality, and the lack of accountability displayed by most websites suggest that physicians should counsel patients on online health information–seeking behavior.
Websites that are freely available and do not require a membership or subscription were analyzed. However, some websites targeted for physicians can also be accessed by patients, with some even targeting both audiences. For example, EyeWiki is a site primarily intended for ophthalmologists that also seeks to inform the layperson.19 In our analysis, this site received the highest reading grade level and the lowest Flesch reading ease score, indicating that it uses complex language. Therefore, the usability and actionability of the information presented when read by a layperson may be less than intended by the authors.
Limitations
This study has some inherent limitations. The analysis performed may not be applicable to other diseases in ophthalmology because it was limited to information regarding DR. The keyword diabetic retinopathy was used to select websites, which may differ from the terms that patients use. Another limitation is that patients' understanding of DR from reading the material presented was not surveyed and analyzed against our assessment of each website. Further studies should include patients, comparing how they search for information about DR online and what knowledge they gain from each website.
Conclusions
The freely available information online about DR varies by source but appears to be generally of low quality. The findings suggest that the material presented is difficult to read and exceeds the recommended reading level for health information. Most websites reviewed did not provide sufficient information to support the patient in making medical decisions.
Accepted for Publication: June 16, 2019.
Corresponding Author: Jayanth Sridhar, MD, Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, 900 NW 17th St, Miami, FL 33136 (email@example.com).
Published Online: August 22, 2019. doi:10.1001/jamaophthalmol.2019.3116
Author Contributions: Ms Kloosterboer and Dr Sridhar had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Kloosterboer, Yannuzzi, Patel, Sridhar.
Acquisition, analysis, or interpretation of data: Kloosterboer, Yannuzzi, Kuriyan, Sridhar.
Drafting of the manuscript: Kloosterboer, Yannuzzi, Kuriyan, Sridhar.
Critical revision of the manuscript for important intellectual content: Yannuzzi, Patel, Sridhar.
Statistical analysis: Kloosterboer, Yannuzzi.
Supervision: Yannuzzi, Patel, Kuriyan, Sridhar.
Conflict of Interest Disclosures: Dr Kuriyan reported receiving personal fees from Alimera Sciences, Regeneron, Allergan, and Valeant and grants from Genentech outside the submitted work. Dr Sridhar reported receiving personal fees from Alcon, Alimera, and Thrombogenics outside the submitted work. No other disclosures were reported.
Funding/Support: The Bascom Palmer Eye Institute received funding from core grant P30EY014801 from the National Institutes of Health, grant W81XWH-13-1-0048 from the US Department of Defense, and an unrestricted grant from Research to Prevent Blindness.
Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.