In recent years, there has been an explosion of research articles using large population databases in clinical research; ophthalmology is no exception. In 2012, using the British Columbia Ministry of Health database, we published one of the first population-based pharmacoepidemiologic studies1 that found an elevated risk of retinal detachment (RD) (relative risk, 4.50; 95% CI, 3.56-5.70) with oral fluoroquinolones (FQs). Since 2012, several large epidemiological studies on this topic have been published with conflicting results.2-4 For example, Kuo et al2 used the national health database of Taiwan to undertake a retrospective cohort study. Measured covariates and potential confounders were appropriately adjusted for using propensity score analysis. The overall relative risk for RD was found to be 2.07 (95% CI, 1.45-2.96). However, exposure to FQ was defined as more than 3 consecutive prescriptions, and it was not clear from the study why this definition of exposure was chosen. By design, participants could not have experienced an RD before receiving the first 3 prescriptions, leading to a form of survival bias referred to as immortal time bias, which usually underestimates the true risk.
Raguideau et al3 used the French national health insurance database to quantify the risk of RD associated with FQ use in a case-crossover study and a self-controlled case series study; both designs use only cases (those who experienced RD). Because the comparison of exposed and unexposed time is made within participants, these designs rigorously control for time-fixed confounders.3 The investigators found an odds ratio of 1.46 (95% CI, 1.15-1.87) comparing use of an FQ immediately before an RD (up to 10 days prior) with control periods (61-180 days prior to the RD event). The results were similar using the self-controlled case series design. Concordant results obtained by 2 different study designs are reassuring; however, antibiotic prescribing is subject to time trend bias, in which factors that influence prescribing, such as trends in FQ marketing, may change over time and produce different exposure distributions of FQs in the risk and control periods. The authors did not state why multiple control periods, which might have further strengthened the results, were not used.
Eftekhari et al4 used the Health Improvement Network database from the United Kingdom in a retrospective cohort study that compared the risk of retinal breaks among FQ users with that among nonusers. The adjusted risk ratios for RD among participants within 30 days and 365 days of FQ use were 0.80 (95% CI, 0.11-5.71) and 1.35 (95% CI, 0.89-2.06), respectively. Compared with our study,1 this study's strength was better adjustment for intraocular surgeries and identification of rhegmatogenous RD. However, the small number of events (1 event within 30 days of FQ use), which led to the wide confidence intervals, limits the interpretation of these findings. As the authors acknowledge, FQ use and RD diagnosis were captured only indirectly, through general practitioner reporting, which may have led to underreporting of both FQ use and RD events. The authors' description of the Health Improvement Network database allowed the reader to better understand its limitations in the context of this study question.
These 3 studies highlight the variability that exists in the reporting of the methodology of observational studies using population databases, which may often yield different results. There are many other similar examples outside the ophthalmology literature. A classic one is the case of 2 case-control studies5,6 that examined the risk of fractures with statin use. Despite using the same study design and the same database, they came to opposite conclusions. The authors asserted that differences in the way the data were analyzed may have led to the different results. In general, contradictory findings may arise from differences at various stages of the research process, including but not limited to study population selection, study design, adjustment for potential confounders, and exposure definition. One of the aims of the peer review process is to examine potential methodological weaknesses in the hope of highlighting limitations, improving the methodology prior to publication, and ultimately producing a more valid final research product. The challenge for reviewers is that many published observational studies using big data do not provide adequate detail on specific aspects of study design that would allow for detailed assessment of quality and limitations. Detailed, standardized reporting of methodology for big data studies would allow for more transparent, comprehensive, and meaningful review of these studies.
A number of high-ranking medical journals require that authors of all observational studies submit a Strengthening the Reporting of Observational Studies (STROBE) checklist, which requires authors to provide some detail on the different methodologic areas. Recently, more nuanced checklists for observational studies that use population databases have also become available. The Reporting of Studies Conducted Using Observational Routinely Collected Data (RECORD) statement is similar to the STROBE statement but mainly focuses on observational studies that use large population-based databases. There are also extensions of the STROBE statement tailored to different areas of research, including molecular epidemiology and nutrition.
Many high-ranking journals require authors of submitted manuscripts of randomized clinical trials to also submit the Consolidated Standards of Reporting Trials (CONSORT) checklist. JAMA and JAMA Ophthalmology require reporting guidelines for clinical trials, meta-analyses, diagnostic test studies, and cost-effectiveness analyses, but not for observational studies, including case-control studies, cohort studies, and the more complex case-only designs. Currently, none of the 3 highest-ranking journals of ophthalmology (JAMA Ophthalmology,7 Ophthalmology,8 and American Journal of Ophthalmology9) require authors to submit any reporting checklists specific to observational studies.
As the number of research questions that can only be answered by observational studies increases, so will the number of publications using these methodologies. It is evident that the increase in the number of observational studies with varying methods and data sources may have led to the publication of many contradictory studies. Some contradictory findings can be attributed to statistical uncertainty. However, the degree of discrepancy seems to be substantially higher than what statistical variation can explain. Different methods are undoubtedly a major source of such variation. We urge editors of journals of ophthalmology to emulate the journals that require authors of observational research studies to complete appropriate checklists for the particular study design used. Recent data have shown that studies published in journals that require the STROBE checklist have higher reporting quality than studies in journals that do not.10 Improving the reporting of observational studies through these checklists may increase the transparency and quality of observational research in ophthalmology.
Corresponding Author: Mahyar Etminan, PharmD, MSc, Department of Ophthalmology and Visual Sciences, Faculty of Medicine, University of British Columbia, 2550 Willow Street, Room 323-2550, Vancouver, BC, V5Z 3N9 Canada (email@example.com).
Correction: This article was corrected on June 14, 2018, to correct the affiliation of the second author, David A. L. Maberley. He is with the Department of Ophthalmology and Visual Sciences, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada, and not with the Department of Pharmaceutical and Therapeutics at the same university.
Published Online: April 26, 2018. doi:10.1001/jamaophthalmol.2018.0987
Conflict of Interest Disclosures: Both authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. No disclosures were reported.
Etminan M, Maberley DAL. Improving Reporting Quality in Ophthalmologic Observational Studies That Use Big Data: The Case of Retinal Detachment Associated With Fluoroquinolone Use. JAMA Ophthalmol. 2018;136(6):611–612. doi:10.1001/jamaophthalmol.2018.0987