The 2007 Food and Drug Administration (FDA) Amendments Act expanded requirements for ClinicalTrials.gov, a public clinical trial registry maintained by the National Library of Medicine, mandating results reporting within 12 months of trial completion for all FDA-regulated medical products. Reporting of mandatory trial registration information on ClinicalTrials.gov is fairly complete, although there are concerns about its specificity; optional trial registration information is less complete.1-4 To our knowledge, no studies have examined reporting and accuracy of trial results information. Accordingly, we compared trial information and results reported on ClinicalTrials.gov with corresponding peer-reviewed publications.
We conducted a cross-sectional analysis of clinical trials for which the primary results were published between July 1, 2010, and June 30, 2011, in Medline-indexed, high-impact journals (impact factor ≥10; Web of Knowledge, Thomson Reuters) and that were registered on ClinicalTrials.gov and reported results there. For each trial, we assessed reporting of the following information on ClinicalTrials.gov and in the corresponding publication and compared the information reported in the 2 sources: cohort characteristics (enrollment and completion, age/sex demographics), trial intervention, and primary and secondary efficacy end points and results. End points were considered comparable between the 2 sources if the described end point, time of ascertainment, and measurement scale matched. For comparable end points, reported results were categorized as concordant (ie, numerically equal), discordant (ie, not numerically equal), or could not be compared (ie, reported numerically in one source and graphically in the other). For discordant primary efficacy end point results, we determined whether the discrepancy altered study interpretation. Descriptive analyses were performed using Excel (version 14.3.1, Microsoft).
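The classification applied to each end point reported in both sources amounts to a simple decision rule. The following is a minimal Python sketch of that rule for illustration only; the authors performed their analyses in Excel, and the EndPointResult structure, field names, and example values here are assumptions rather than the study's actual workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EndPointResult:
    """One efficacy end point as reported in a single source
    (the ClinicalTrials.gov entry or the corresponding publication)."""
    description: str        # end point, time of ascertainment, measurement scale
    value: Optional[float]  # None if reported only graphically, not numerically

def classify_pair(registry: EndPointResult, publication: EndPointResult) -> str:
    """Classify one comparable end point reported in both sources,
    following the scheme described in the Methods (illustrative only)."""
    if registry.value is None or publication.value is None:
        return "could not be compared"  # numeric in one source, graphical in the other
    if registry.value == publication.value:
        return "concordant"             # numerically equal
    return "discordant"                 # not numerically equal

# Hypothetical example: a primary end point reported as 12.3 on
# ClinicalTrials.gov but as 12.8 in the publication counts as discordant.
print(classify_pair(
    EndPointResult("progression-free survival at 12 months, %", 12.3),
    EndPointResult("progression-free survival at 12 months, %", 12.8),
))  # -> "discordant"
```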
We identified 96 trials reporting results on ClinicalTrials.gov that were published in 19 high-impact journals. For 70 trials (73%), industry was the lead funder. The most common conditions studied were cardiovascular disease, diabetes, and hyperlipidemia (n = 21; 23%); cancer (n = 20; 21%); and infectious disease (n = 19; 20%). Trials were most frequently published in the New England Journal of Medicine (n = 23; 24%), the Lancet (n = 18; 19%), and JAMA (n = 11; 12%). Cohort, intervention, and efficacy end point information was reported for 93% to 100% of trials in both sources (Table 1). However, 93 of 96 trials had at least 1 discordance among reported trial information or reported results.
Among trials reporting each cohort characteristic and trial intervention information, discordance ranged from 2% to 22% and was highest for completion rate and trial intervention, for which differing descriptions of dosage, frequency, or duration of the intervention were common.
There were 91 trials defining 156 primary efficacy end points (5 trials defined only primary safety end points); 132 (85%) were described in both sources, 14 (9%) only on ClinicalTrials.gov, and 10 (6%) only in publications. Among the 132 end points described in both sources, results for 30 (23%) could not be compared and results for 21 (16%) were discordant. Most discordant results (n = 15) did not alter trial interpretation, although for 6 the discordance did (Table 2). Overall, 81 of 156 (52%) primary efficacy end points were described in both sources and reported concordant results.
There were 96 trials defining 2089 secondary efficacy end points, 619 (30%) of which were described in both sources, 421 (20%) only on ClinicalTrials.gov, and 1049 (50%) only in publications. Among 619 end points described in both sources, results for 228 (37%) could not be compared, whereas 53 (9%) were discordant. Overall, 338 of 2089 (16%) secondary efficacy end points were described in both sources and reported concordant results.
Among clinical trials published in high-impact journals that reported results on ClinicalTrials.gov, nearly all had at least 1 discrepancy in the cohort, intervention, or results reported between the 2 sources, including many discordances in reported primary end points. For discordances observed when both the publication and ClinicalTrials.gov reported the same end point, possible explanations include reporting and typographical errors as well as changes made during the course of the peer review process. For discordances observed when one source reported a result but not the other, possible explanations include journal space limitations and intentional dissemination of more favorable end points and results in publications.5
Our study was limited to a small number of trials that not only were registered on ClinicalTrials.gov and reported results there but also were published in high-impact journals. However, because articles published in high-impact journals generally represent the highest-quality research and undergo more rigorous peer review, the trials in our sample likely represent best-case scenarios with respect to the quality of results reporting. Our findings raise questions about the accuracy of both ClinicalTrials.gov and publications, because each source's reported results at times disagreed with the other. Further efforts are needed to ensure the accuracy of public clinical trial results reporting.
Corresponding Author: Joseph S. Ross, MD, MHS, Department of Internal Medicine, Yale University School of Medicine, PO Box 208093, New Haven, CT 06520 (joseph.ross@yale.edu).
Author Contributions: Ms Becker and Dr Ross had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Becker, Ben-Josef, Ross.
Acquisition of data: Becker, Ben-Josef, Ross.
Analysis and interpretation of data: Becker, Krumholz, Ross.
Drafting of the manuscript: Becker, Ross.
Critical revision of the manuscript for important intellectual content: Becker, Krumholz, Ben-Josef, Ross.
Statistical analysis: Becker, Ross.
Study supervision: Ross.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Drs Krumholz and Ross receive support from Medtronic to develop methods of clinical trial data sharing, from the Centers of Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting, and from the Food and Drug Administration to develop methods for postmarket surveillance of medical devices. Dr Krumholz reports that he chairs a scientific advisory board for UnitedHealthcare. Dr Ross reports that he is a member of a scientific advisory board for FAIR Health.
Funding/Support: This project was not supported by any external grants or funds. Dr Krumholz is supported by a National Heart, Lung, and Blood Institute Cardiovascular Outcomes Center Award (1U01HL105270-02). Dr Ross is supported by the National Institute on Aging (K08 AG032886) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program.
Previous Presentation: This study was presented at the Seventh International Congress on Peer Review and Biomedical Publication; Chicago, Illinois; September 9, 2013.
1. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med. 2009;6(9):e1000144.
2. Zarin DA, Tse T, Ide NC. Trial registration at ClinicalTrials.gov between May and October 2005. N Engl J Med. 2005;353(26):2779-2787.
3. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302(9):977-984.
4. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database: update and key issues. N Engl J Med. 2011;364(9):852-860.
5. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252-260.