Key Points
Question
Do clinical trials specify too many or too few outcomes, and how often do outcomes overlap?
Findings
In this study, among 49 outcome domains identified within 39 ClinicalTrials.gov records of publicly funded clinical trials for 3 ophthalmologic conditions, most domains were specified in a single record. Even when the same outcome domains were registered in multiple trials, the time point, specific metric, and method of aggregation were specified in multiple ways.
Meaning
Differences in how outcomes are measured across trials for a given condition make it difficult to compare and aggregate results; a minimum set of outcomes agreed on by a community of stakeholders can help harmonize outcomes and facilitate comparisons across trials.
Importance
For findings from clinical trials to be actionable, outcomes measured in trials must be fully defined and, when appropriate, defined consistently across trials. Otherwise, it is difficult to compare findings between trials, combine results in meta-analyses, or leverage findings collectively to inform health care decision-making.
Objective
To identify and characterize outcomes specified in ClinicalTrials.gov records for publicly funded clinical trials for 3 high-burden, high-prevalence eye conditions.
Design, Setting, and Participants
ClinicalTrials.gov, a registry of publicly and privately supported clinical studies, was searched on January 31, 2019, for records of clinical trials for age-related macular degeneration (AMD), dry eye, or refractive error. The search was limited to trials funded by the National Eye Institute; no date restriction was applied. Five elements of a well-specified outcome were extracted from each outcome stated in each record: the domain, method of measurement, specific metric, method of aggregation, and time points.
Main Outcomes and Measures
Number of outcome domains specified for trials for AMD, dry eye, and refractive error and the number of trial records specifying each unique domain.
Results
A total of 49 unique outcome domains were identified across 39 trial records. The median (interquartile range) number of records specifying each unique outcome domain was 1 (1-3), 1.5 (1-2), and 1 (1-1) for AMD, dry eye, and refractive error, respectively. Even when the same domains were registered across multiple trials, the time point, specific metric, and method of aggregation were specified in multiple ways.
Conclusions and Relevance
In the sample of trials examined, too many distinct outcomes were specified, with too little overlap across trials. Differences in how outcomes are measured across trials make it difficult to compare results, even for well-established domains, such as visual acuity. To reduce this waste in eye and vision research, the time is ripe for agreeing on what outcomes to measure.