Original Investigation
April 4, 2019

Assessment of Consistency Between Peer-Reviewed Publications and Clinical Trial Registries

Author Affiliations
  • 1Department of Ophthalmology, Eye Institute, Medical College of Wisconsin, Milwaukee
  • 2Medical College of Wisconsin, Milwaukee
  • 3Department of Pathology, University of Chicago, Chicago, Illinois
JAMA Ophthalmol. 2019;137(5):552-556. doi:10.1001/jamaophthalmol.2019.0312
Key Points

Question  Do clinical trial registries provide meaningful information regarding recently published clinical trials?

Findings  In this cross-sectional study of 106 clinical trials published in 3 ophthalmology journals in 2014, corresponding registry entries frequently could not be found. Among the registry entries that were identified, inconsistencies between published and registered information were common in trial design, results, and funding sources, most often because of missing data; in a few cases, the inconsistencies included explicitly discrepant data.

Meaning  These findings suggest a need for more attention to accuracy and consistency of reporting in clinical trial registries.

Abstract

Importance  Clinical trial registries are intended to increase clinical research transparency by nonselectively identifying and documenting clinical trial designs and outcomes. Inconsistencies in reported data undermine the utility of such registries and have previously been noted in general medical literature.

Objective  To assess whether inconsistencies in reported data exist between ophthalmic literature and clinical trial registries.

Design, Setting, and Participants  In this retrospective, cross-sectional study, interventional clinical trials published from January 1, 2014, to December 31, 2014, in the American Journal of Ophthalmology, JAMA Ophthalmology, and Ophthalmology were reviewed. Observational, retrospective, uncontrolled, and post hoc reports were excluded, yielding a sample size of 106 articles. Data collection was performed from January through September 2016. Data review and adjudication continued through January 2017.

Main Outcomes and Measures  If possible, articles were matched to registry entries listed in the ClinicalTrials.gov database or in 1 of 16 international registries indexed by the World Health Organization International Clinical Trials Registry Platform version 3.2 search engine. Each article-registry pair was assessed for inconsistencies in design, results, and funding (each of which was further divided into subcategories) by 2 reviewers and adjudicated by a third.

Results  Of 106 trials that met the study criteria, matching registry entries were found for 68 (64.2%), whereas no matching registry entries were found for 38 (35.8%). Inconsistencies were identified in study design, study results, and funding sources, including specific interventions in 8 (11.8%), primary outcome measure (POM) designs in 32 (47.1%), and POM results in 48 (70.6%). In addition, numerous data elements were unreported, including analysis methods in 52 (76.5%) and POM results in 38 (55.9%).

Conclusions and Relevance  Clinical trial registries were underused in this sample of ophthalmology clinical trials. For studies with registry data, inconsistency rates between published and registered data were similar to those previously reported for general medical literature. In most cases, inconsistencies involved missing data, but explicit discrepancies in methods and/or data were also found. Transparency and credibility of published trials may be improved by closer attention to their registration and reporting.

Introduction

Clinical trials are among the most reliable sources of evidence used to guide medical practice. Peer-reviewed publication is an important quality control practice in the dissemination of research results. In recent years, evidence of bias and selectivity in reporting of clinical trial results has emerged. In 2004, Chan et al1 reported significant inconsistency between randomized clinical trial protocols and published articles. Song et al2 reported the odds ratios of formal publication in a series of general medical clinical research trials at different time points in the inception-to-publication cycle. Trials with positive results were significantly more likely to be published at all points, as were trials with statistically significant results. A follow-up study by Jones et al3 noted that 29% of proposed clinical trials in internal medicine were never published in peer-reviewed journals and that nonpublication was significantly more common among trials with industry funding than among those without. At least one group4 investigated possible sources of publication bias and concluded that there was no increased acceptance of manuscripts with positive results, suggesting that bias likely occurred before submission. Although the financial incentive associated with positive results was proposed as the driving factor behind such bias,5 2 studies6,7 presented self-provided rationales of clinical principal investigators. Unimportant or negative results were given as a reason by approximately 12% of authors,7 but other factors included poor study quality or design, incomplete studies, manuscript in preparation or under review, fear of rejection or rejection by journals, lack of financial resources, lack of time or low priority, personal interest, and author or coauthor problems.

In response to reported publication biases, a web-based clinical trial registry, ClinicalTrials.gov, was proposed and implemented in the United States to promote transparency in clinical research.8-10 Similar registries were established in other countries, including Australia and New Zealand (Australian-New Zealand Clinical Trials Registry), China (Chinese Clinical Trial Registry), the European Union (EU Clinical Trials Register), the Netherlands (Netherlands National Trial Register), Germany (German Clinical Trials Register), and Japan (Japan Primary Registries Network).

The International Committee of Medical Journal Editors (ICMJE), a working group of editors of some of the most respected medical journals in the world, “requires, and recommends that all medical journal editors require, registration of clinical trials in a public trial registry at or before the time of first patient enrollment as a condition of consideration for publication.”11(p12) For a registry to be acceptable to the ICMJE, it must be “accessible to the public at no charge, open to all prospective registrants, managed by a not-for-profit organization, have a mechanism to ensure the validity of the registration data, and are electronically searchable. An acceptable registry must include the minimum 21-item trial registration dataset.”11(p13) Registries that comply with these requirements are listed in the World Health Organization International Clinical Trials Registry Platform (ICTRP) and accessible through the ICTRP search portal.12 In accordance with the above recommendations, the editors of JAMA Ophthalmology (formerly Archives of Ophthalmology), Ophthalmology, and American Journal of Ophthalmology jointly published an editorial announcing a policy that all clinical trials submitted for review must be registered effective March 1, 2006.13

Since the establishment of clinical trial registries, the number of null results reported in the literature has increased markedly.14 However, a series of studies15-17 from multiple groups provided evidence of notable inconsistencies in reporting of study populations, analytical methods, outcome measures, and adverse events in clinical trial registries vs peer-reviewed publications. These articles have again raised the question of transparency and reliability in clinical trials reporting. Hartung et al16 reported that 80% of surveyed trials contained inconsistencies in the number of secondary outcome measures reported, 15% contained inconsistencies in primary outcome values, and 20% contained inconsistencies in secondary outcomes. A total of 35% reported inconsistent numbers of individuals with serious adverse events (SAEs), and 28% of trials that reported deaths provided inconsistent data in ClinicalTrials.gov vs published results. A previous study18 found that clinical trials sponsored by large companies or involving new drugs were reliably reported. Nonetheless, registry reporting remains poor,16 and the necessity of more rigorous reporting has been recognized by the medical-scientific community at large.8

This study explores similar inconsistencies in ophthalmic literature. With the rationale that the physician is likely to base clinical decisions on peer-reviewed published results, we began with published results of prospective interventional clinical trials in 3 major ophthalmic journals (the American Journal of Ophthalmology, JAMA Ophthalmology, and Ophthalmology). We then compared clinical trial design and results reported in peer-reviewed publications with those reported on web-based clinical trial registries for consistency.

Methods
Selection of Trials

A list of articles was initially generated by searching PubMed with the following keywords: American Journal of Ophthalmology [journal] OR JAMA Ophthalmology [journal] OR Ophthalmology [journal] AND 2014/01/01 [date - publication] to 2014/12/31 [date - publication]. These results were then filtered for studies marked clinical trials, clinical trials phase 1, clinical trials phase 2, clinical trials phase 3, or clinical trials phase 4. This search created an initial database of 360 articles, which was imported into a citation manager. Only articles that reported the primary outcomes of prospective, interventional, controlled human trials were included, yielding a database of 106 studies. A single author (L.W.S.) determined initial inclusion and exclusion of studies.
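
For illustration, the search described above could be reproduced programmatically. The sketch below uses Biopython's Entrez module; the query string is an approximate reconstruction of the journal, date, and publication-type filters described here, not the authors' exact search, and the email address is a placeholder.

```python
# Hedged sketch: approximate reconstruction of the PubMed search described above,
# using Biopython's Entrez interface. Field tags and filter values are illustrative.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact address

query = (
    '("Am J Ophthalmol"[Journal] OR "JAMA Ophthalmol"[Journal] OR "Ophthalmology"[Journal]) '
    'AND ("2014/01/01"[Date - Publication] : "2014/12/31"[Date - Publication]) '
    'AND ("Clinical Trial"[Publication Type] '
    'OR "Clinical Trial, Phase I"[Publication Type] '
    'OR "Clinical Trial, Phase II"[Publication Type] '
    'OR "Clinical Trial, Phase III"[Publication Type] '
    'OR "Clinical Trial, Phase IV"[Publication Type])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # number of matching citations (360 in the authors' search)
print(record["IdList"])  # PubMed IDs that could be exported to a citation manager
```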

Data Collection and Comparison

Reviewers met as a team before evaluating the data set to discuss the review process and to establish rules for determining consistencies vs inconsistencies. Each of the 106 publications was independently reviewed by 2 reviewers from a team of 6 (D.J.L., J.A.C., T.C.C., K.R., S.J.S., and J.G.U.). Disagreements were adjudicated by a third reviewer (L.W.S.). Each publication was available to all reviewers until claimed by 2 reviewers; when a reviewer completed review of an article, he or she selected the next available article until all articles had been reviewed by 2 reviewers. The data abstraction form is in the eAppendix in the Supplement.

Articles were matched with clinical trial registries when available. For articles that provided a ClinicalTrials.gov database number, published results were compared against results posted on the corresponding ClinicalTrials.gov entry. For articles that did not provide a ClinicalTrials.gov database number, the ICTRP version 3.2 search engine,12 which catalogs 16 international clinical trial registries, was used to attempt to find a match. A clinical trial was determined to be a match for a given publication based on study intervention(s), condition, principal investigator (if supplied), and date of trial completion.
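
The matching criteria amount to a simple rule: a registry record was accepted as a match when the intervention(s), condition, principal investigator (when supplied), and completion date agreed. The sketch below is an illustrative encoding of that rule under those assumptions, not the authors' actual tooling; the record fields and their normalization are hypothetical.

```python
# Illustrative sketch of the publication-registry matching rule described above.
# Field names and normalization are hypothetical; this is not the authors' code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialRecord:
    interventions: set[str]              # normalized intervention names
    condition: str
    principal_investigator: Optional[str]
    completion_date: Optional[str]       # e.g. "2013-06"

def is_match(publication: TrialRecord, registry: TrialRecord) -> bool:
    if publication.interventions != registry.interventions:
        return False
    if publication.condition.lower() != registry.condition.lower():
        return False
    # Principal investigator is compared only when both sources report one
    if publication.principal_investigator and registry.principal_investigator:
        if publication.principal_investigator.lower() != registry.principal_investigator.lower():
            return False
    # Completion date is compared only when both sources report one
    if publication.completion_date and registry.completion_date:
        if publication.completion_date != registry.completion_date:
            return False
    return True
```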

Nine abstracted characteristics were compared between a peer-reviewed publication and its corresponding clinical trial registry record (publication-registry pair), divided into the following categories: study design, study results, and funding source. Within the study design category, specific interventions, planned sample size, primary outcome measure (POM) design, POM analysis methods, and secondary outcome measure (SOM) design were compared. Within the study result category, POM results, SOM results, and SAEs were compared.

For registered trials, the term inconsistency refers to differences between the information in the publication and the registry. Inconsistencies were subcategorized as discrepancies and omissions. The term discrepancies refers to information present in both sources that did not match. Numeric data were considered to be discrepant when reported values differed by greater than 0.1, whereas descriptive data were considered to be discrepant if deletions, transpositions, or additions were made within each element. The term omissions refers to missing data from the publication, the registry, or both. Data collection was performed from January through September 2016. Data review and adjudication continued through January 2017.
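
As a worked example of the numeric rule above (values differing by more than 0.1 are discrepant; a value missing from either source is an omission), the following minimal sketch classifies a single numeric element. It is an illustration under those stated assumptions; descriptive (textual) elements were judged by the reviewers and are not modeled here.

```python
# Minimal sketch of the numeric comparison rule described above.
from typing import Optional

def classify_numeric(pub_value: Optional[float], reg_value: Optional[float]) -> str:
    if pub_value is None or reg_value is None:
        return "omission"      # missing from the publication, the registry, or both
    if abs(pub_value - reg_value) > 0.1:
        return "discrepancy"   # present in both sources but not matching
    return "consistent"

# Example: a planned sample size of 120 in the publication vs 150 in the registry
# counts as a discrepancy; a registry entry with no sample size counts as an omission.
print(classify_numeric(120, 150))   # discrepancy
print(classify_numeric(120, None))  # omission
```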

Results

A total of 106 peer-reviewed, planned, prospective, controlled, interventional clinical trials published between January 1, 2014, and December 31, 2014, in the American Journal of Ophthalmology, JAMA Ophthalmology, and Ophthalmology were included in this study. Thirty-eight of these 106 trials (35.8%) could not be matched to a registry entry on ClinicalTrials.gov or on the international registries indexed by the ICTRP version 3.2. Of the 68 publication-registry pairs (64.2%), 6 were identified through the ICTRP search engine (1 of these in ClinicalTrials.gov, 1 in the Australian New Zealand Clinical Trials Registry, 1 in the Clinical Trials Registry-India, 1 in the University Hospital Medical Information Network Clinical Trials Registry, and 2 in the International Standard Randomized Controlled Trial Number registry); the remainder were found in ClinicalTrials.gov.

Interventions used in the 68 publication-registry pairs analyzed included 41 pharmaceutical trials (60.3%), 6 surgical trials (8.8%), 12 device trials (17.6%), 7 mixed trials (10.3%) that involved more than 1 type of intervention, and 2 trials (2.9%) that involved interventions that did not fall under one of the above categories, namely, medication adherence training and self-guided coping techniques. Funding sources listed in publications included 19 trials (27.9%) that used academic or government funding, 29 (42.6%) that used industry or private organizational funding, and 20 (29.4%) that used mixed sources of funding.

Widespread inconsistencies were found between publication-registry pairs analyzed. A total of 378 discrete inconsistencies were identified (mean [SD], 5.56 [1.84] per study; median, 6). A total of 125 inconsistencies (33.1%) were attributable to explicit discrepancies (mean [SD], 1.84 [1.45] per study; median, 2). A total of 253 inconsistencies (66.9%) were attributable to omissions of data in one or both sources (mean [SD], 3.72 [1.82] per study; median, 4), most frequently in the form of incomplete registry entries.
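
The per-pair averages follow directly from these totals (378, 125, and 253 inconsistencies across 68 pairs). The short sketch below, assuming the per-pair counts were tabulated as lists, reproduces that arithmetic with Python's statistics module.

```python
# Sketch of the summary arithmetic reported above. The per-pair counts themselves
# are not reproduced here; summarize() assumes such a list is available.
from statistics import mean, median, stdev

def summarize(counts: list[int]) -> dict:
    return {"mean": mean(counts), "sd": stdev(counts), "median": median(counts)}

# Aggregate check against the reported figures:
n_pairs = 68
print(round(378 / n_pairs, 2))  # 5.56 inconsistencies per pair
print(round(125 / n_pairs, 2))  # 1.84 discrepancies per pair
print(round(253 / n_pairs, 2))  # 3.72 omissions per pair
```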

Information regarding study interventions used in publication-registry pairs was analyzed and found to be 11.8% inconsistent; all of these inconsistencies were discrepancies between sources, and 0.0% were omissions in either publication or registry. Planned sample size was inconsistent in 60 of the registry-publication pairs (88.2%); in nearly every case, this was attributable to omission of data, and only 1 case (1.5%) was attributable to explicitly discrepant data. Actual sample sizes were inconsistent in 82.4% of pairs: 7.4% discrepant and 75.0% omitted. The POM designs were 47.1% discrepant and 0.0% omitted. Discrepancies in POM design arose from (1) transposition of POMs into SOMs and vice versa, (2) addition or deletion of certain specific POMs, (3) redefinition of a POM (typically from broader measures [eg, safety and tolerability of a given intervention] to more specific measures [eg, maximum tolerated dose of the aforementioned intervention]), and (4) redefinition of time frames at which POMs were measured. The POM analysis methods were 77.9% inconsistent: 1.5% discrepant and 76.5% omitted. In the single case in which the POM analysis method was discrepant, the statistical method reported in the publication differed from that in the registry; in an additional 52 cases, POM analysis methods were not reported in the registry (48 cases) or in either the registry or the publication (4 cases). The SOM design was inconsistent in 75.0% of pairs: 48.5% discrepant and 26.5% omitted. Discrepancies were largely attributable to deleted SOMs or SOMs that had been transposed to POMs. Certain SOMs were deleted in the publication in 3 cases, in the registry in 9 cases, and in both in 6 cases.

Study results were parsed into 3 categories. The POM results were 14.7% discrepant and 55.9% omitted, the SOM results were 26.5% discrepant and 60.3% omitted, and the SAEs were 20.6% discrepant and 66.2% omitted. Funding sources were 11.8% discrepant and 0.0% omitted. Additional numeric data from these analyses appear in the Table. Reliability of the reviewers' assessments was evaluated by examining agreement between the 2 first-pass reviewers' determinations of consistency for 612 elements (9 characteristics in each of the 68 publication-registry pairs); reviewers agreed in 504 cases (82.4%).

Discussion

On the basis of this review of published clinical trials in the ophthalmic literature, clinical trial registries appear to provide incomplete, inconsistent documentation of predetermined study designs and analysis plans. For more than one-third of publications evaluated, no corresponding registry entry could be found. When a corresponding registry entry was found, widespread inconsistencies were identified (378 total discrete inconsistencies, or a median of 6 per publication-registry pair). Documentation of key design features was often sparse, and the omission of information from one source or the other accounted for most inconsistencies (66.9%). For instance, analysis methods for the primary outcome measure were missing from most registry entries. When study results were provided in the registries, they were frequently discrepant with data in the publication, with 125 explicit (ie, not attributable to omission) discrepancies identified, or a median of 2 per publication-registry pair. Similar patterns of discrepancy have been seen in the nonophthalmic literature. In the study by Hartung et al,16 14.5% of POM descriptions, 80% of SOM descriptions, 9% of POM results, and 37% of SAEs were discrepant. Although complete data on omissions in trial design and results were not reported, it appears that SAEs were not reported in the publication or the registry in at least 29.8% of pairs.

In the present study, many of the inconsistencies were found in design elements. Nearly half of the publication-registry pairs analyzed exhibited inconsistent reporting of POM design, mainly attributable to addition or omission of POM elements, transposition of POM and SOM elements, redefinition of research time frames, or redefinition of POMs, typically from broader to more specific terms. An even larger proportion exhibited discrepancies or omissions in reporting of SOM design. It is unclear whether explicit discrepancies result from erroneous reporting, incomplete documentation, or deliberate alterations in trial design. Although changes in critical study design features may be done for legitimate reasons, it is well known that post hoc creativity in study design can substantially influence results and conclusions.19

Previous studies15-17 comparing registry data with published clinical trial data primarily identified studies from registries and secondarily searched for corresponding publications. This design can provide important clues about biases that influence the likelihood that a study will lead to a peer-reviewed record. In the current study, we began with peer-reviewed literature and searched for corresponding registry entries. This design provides information about the frequency with which those who publish clinical trials use registries but does not address the issue of clinical trials that are never published.

For a large number of trials, no registration could be found. Even when registry entries were found, inconsistencies between the publication and the registry were frequent, and incomplete data in the registries accounted for most of these inconsistencies. To minimize bias, any mismatch between corresponding items in the registry and publication was counted as a discrepancy.

Clear, prospective elucidation of investigators’ intent for the design, conduct, and analysis of a clinical trial is necessary for informed interpretation of results. Deviations from those prespecified features should be transparent and justified by the investigators. The ICMJE has recommended preregistration of clinical trials, including disclosure of key study features, as a prerequisite for publication. Greater adherence to these recommendations may provide valuable information for journals, peer reviewers, and ultimately consumers of medical literature.

Limitations

Limitations of the present study include the selection of journals sampled. We chose 3 prominent, high-impact-factor US journals with expressed, consistent policies on clinical trial registration. Thus, these results may not generalize to other journals. Similarly, conclusions may be limited to the selected time frame. The year 2014 was 8 years after the 3 journals adopted a registration policy consistent with recommendations of the ICMJE. It is likely that adherence to and enforcement of the policy have changed over time. Additional studies would be helpful to elucidate the dynamics of these behaviors. Although the studies that met the inclusion criteria were independently assessed by 2 reviewers, eligibility was determined by a single investigator. In future studies, independent determination by 2 or more authors may provide a more balanced selection process and allow for assessment of reproducibility in determination of eligibility.

Conclusions

Clinical trial registries provide an opportunity to verify the integrity of a clinical trial, but in the present study, we found variable use and incomplete data. For studies with registry data, inconsistency rates between published and registered data were similar to those previously reported for general medical literature. In most cases, inconsistencies involved missing data, but explicit discrepancies in methods and/or data were also found. Transparency and credibility of published trials may be improved by closer attention to their registration and reporting.

Article Information

Accepted for Publication: January 22, 2019.

Corresponding Author: David V. Weinberg, MD, Department of Ophthalmology, Medical College of Wisconsin, 925 N 87th St, Milwaukee, WI 53226 (dweinber@mcw.edu).

Published Online: April 4, 2019. doi:10.1001/jamaophthalmol.2019.0312

Author Contributions: Dr Weinberg had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Sun, Weinberg.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Sun, Collins, Carll, Weinberg.

Critical revision of the manuscript for important intellectual content: Sun, Lee, Ramahi, Sandy, Unteriner, Weinberg.

Statistical analysis: Sun, Collins.

Administrative, technical, or material support: Sun, Lee, Ramahi, Unteriner, Weinberg.

Supervision: Sun, Weinberg.

Conflict of Interest Disclosures: None reported.

Meeting Presentation: This paper was presented at the 2017 Annual Meeting of the Association for Research in Vision and Ophthalmology; May 11, 2017; Baltimore, Maryland.

Additional Contributions: Deborah Costakos, MD, MS, and Iris Kassem, MD, PhD, Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, provided numerous helpful discussions. They were not compensated for their work.

References
1. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457-2465. doi:10.1001/jama.291.20.2457
2. Song F, Parekh-Bhurke S, Hooper L, et al. Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol. 2009;9:79. doi:10.1186/1471-2288-9-79
3. Jones CW, Handler L, Crowell KE, Keil LG, Weaver MA, Platts-Mills TF. Non-publication of large randomized clinical trials: cross sectional analysis. BMJ. 2013;347:f6104. doi:10.1136/bmj.f6104
4. van Lent M, Overbeke J, Out HJ. Role of editorial and peer review processes in publication bias: analysis of drug trials submitted to eight medical journals. PLoS One. 2014;9(8):e104846. doi:10.1371/journal.pone.0104846
5. DeAngelis CD. The influence of money on medical science. JAMA. 2006;296(8):996-998. doi:10.1001/jama.296.8.jed60051
6. Kien C, Nußbaumer B, Thaler KJ, et al; UNCOVER Project Consortium. Barriers to and facilitators of interventions to counter publication bias: thematic analysis of scholarly articles and stakeholder interviews. BMC Health Serv Res. 2014;14:551. doi:10.1186/s12913-014-0551-z
7. Song F, Loke Y, Hooper L. Why are medical and health-related studies not being published? a systematic review of reasons given by investigators. PLoS One. 2014;9(10):e110418. doi:10.1371/journal.pone.0110418
8. Dickersin K, Rennie D. Registering clinical trials. JAMA. 2003;290(4):516-523. doi:10.1001/jama.290.4.516
9. Rennie D. Trial registration: a great idea switches from ignored to irresistible. JAMA. 2004;292(11):1359-1362. doi:10.1001/jama.292.11.1359
10. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986;4(10):1529-1541. doi:10.1200/JCO.1986.4.10.1529
11. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Updated December 2018. http://www.icmje.org/icmje-recommendations.pdf. Accessed March 3, 2019.
12. World Health Organization. International Clinical Trials Registry Platform. Version 3.6. http://apps.who.int/trialsearch. Accessed February 26, 2019.
13. Levin LA, Gottlieb JL, Beck RW, et al. Registration of clinical trials. Arch Ophthalmol. 2005;123(9):1263-1264. doi:10.1001/archopht.123.9.1263
14. Kaplan RM, Irvin VL. Likelihood of null effects of large NHLBI clinical trials has increased over time. PLoS One. 2015;10(8):e0132382. doi:10.1371/journal.pone.0132382
15. Dwan K, Altman DG, Clarke M, et al. Evidence for the selective reporting of analyses and discrepancies in clinical trials: a systematic review of cohort studies of clinical trials. PLoS Med. 2014;11(6):e1001666. doi:10.1371/journal.pmed.1001666
16. Hartung DM, Zarin DA, Guise JM, McDonagh M, Paynter R, Helfand M. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med. 2014;160(7):477-483. doi:10.7326/M13-0480
17. Wieseler B, Wolfram N, McGauran N, et al. Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data. PLoS Med. 2013;10(10):e1001526.
18. Miller JE, Wilenzick M, Ritcey N, Ross JS, Mello MM. Measuring clinical trial transparency: an empirical analysis of newly approved drugs and large pharmaceutical companies. BMJ Open. 2017;7(12):e017917. doi:10.1136/bmjopen-2017-017917
19. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359-1366. doi:10.1177/0956797611417632