eTable 1. Magnitude of Delay by Press Release Characteristics (Primary Analysis)
eTable 2. Magnitude of Delay by Press Release Characteristics (Sensitivity Analysis With Backdating)
eFigure 1. Time to Publication or Composite Primary Endpoint
eFigure 2. Sensitivity Analysis: Time to Publication or Composite Primary Endpoint
eFigure 3. Sensitivity Analysis: Time to Composite Primary Endpoint by Type of Results Reported
Qunaj L, Jain RH, Atoria CL, Gennarelli RL, Miller JE, Bach PB. Delays in the Publication of Important Clinical Trial Findings in Oncology. JAMA Oncol. 2018;4(7):e180264. doi:10.1001/jamaoncol.2018.0264
Question
How long does it take for complete data from potentially practice-changing industry-sponsored clinical trials in oncology to be published following the availability of important results?
Findings
In this review of 100 pharmaceutical company press releases issued for clinical trial findings, the median delay from available results to publication of complete data was 300 days, with negative findings taking considerably longer to reach the public.
Meaning
Delayed or incomplete data releases can have deleterious effects on patient care and impede scientific inquiry. Finding more rapid means by which full results can be made available to the scientific community and the public should be a policy priority.
Importance
The complete and timely dissemination of clinical trial data is essential to all fields of medicine, with delayed or incomplete data release having potentially deleterious effects on both patient care and scientific inquiry. While prior analyses have noted a substantial lag in the reporting of final clinical study results, we sought to refine these observations through use of a novel starting point for the measurement of dissemination delays: the date of a corporate press release regarding a phase 3 study’s results.
Objective
To measure the length of time elapsed between when a sponsor had results of study findings it deemed important to announce and when the medical community had access to them.
Design and Setting
Covering the years 2011 through 2016, we measured the delay between the date on which 8 large pharmaceutical companies issued a press release announcing completed analyses of phase 3 clinical trials in oncology and the date on which those results were shared publicly, either on ClinicalTrials.gov or in a peer-reviewed biomedical journal as found via PubMed or Google Scholar. Press releases announcing regulatory steps and presentation schedules for conferences were excluded, as were those announcing results from preclinical studies, follow-up analyses, and studies of supportive care therapies or various modes of infusion for the same therapy.
Main Outcomes and Measures
Time to public dissemination of clinical trial data.
Results
Of the 100 press releases in our sample, 70 (70%) reported positive results, but only 31 (31%) included the magnitude of study findings. Through the end of follow-up, 99 (99%) of the press releases had an associated peer-reviewed publication, complete data posting on ClinicalTrials.gov, or both, with a median time to reporting of 300 days (95% CI, 263-348 days). Positive findings were reported more quickly than negative ones (median, 272 days [95% CI, 211-318] vs 407 days [95% CI, 298-705]; log-rank P < .001).
Conclusions and Relevance
Even for the most pressing study findings, median publication delays approach 1 year. As publication delays hinder research progress and advancements in clinical care, policies that enable early preprint release or public posting of completed data analysis should be pursued.
The results of clinical research are the basis of the accretive learning necessary to improve clinical care and advance scientific inquiry, yet research findings are not disseminated quickly or consistently.1-5 Within the first 2 years of completion, only 36% of clinical trials conducted at academic medical centers have published results.6 Only 57% of trials relevant to US Food and Drug Administration (FDA) drug approvals in 2012 were even registered, only 20% had results posted on ClinicalTrials.gov, and only 56% had been published within 13 months following approval.7
Existing analyses of publication delay have some limitations, however. Some rely on the trial’s primary completion date (PCD) as the start date for measuring publication delay. The PCD is the last date on which data for the primary outcome measure from a study are collected, but not necessarily a date by which all relevant data have been captured and critical analyses completed, both of which can occur considerably later.8-12 For this reason, the PCD starts the measurement of delay too early in most cases. In particular, for studies with important interim findings, the PCD can come well after the relevant date from which to measure delays.
The most common alternative is to measure delays to publication following the date of abstract publication at a major medical conference.13-18 This approach systematically underestimates publication delay because abstracts must be prepared and submitted well in advance of their distribution by conference organizers, meaning that at a minimum the important information emerging from the trial was known months before presentation, at the time of abstract submission.
The heterogeneous sample of clinical trials analyzed in previous studies represents another limitation of the existing literature. Given the widely variable duration, size, and staffing of clinical programs across different therapeutic areas, phases of development, and sponsor types, many factors can confound the time needed to prepare a manuscript after a trial closes. Tam and colleagues19 addressed this shortcoming in a study focused exclusively on phase 3 trials in oncology, a context in which delayed publication of results has the greatest impact on clinical practice. They found that within 2 and 5 years of abstract presentation at the American Society of Clinical Oncology annual meeting, 38% and 77% of trials, respectively, had been published in a peer-reviewed journal. An expert panel of hematologists and oncologists concluded that none of the 54 unpublished trials in the analysis was expected to have a “critical impact” on clinical practice.19(p3136)
We endeavored to further understand the delays in publication of those phase 3 oncology trial findings that sponsors deemed most important, using a start time for delay when we could be confident that the results that merited publication were already available to the sponsor: the release of a corporate press release announcing the findings. Companies issue press releases to reach not only clinicians but also the media and their shareholders, and they seek to reach these audiences for the same reason: they anticipate a change in practice patterns related to their product that could affect their finances. As a result, delays in the release of trial findings that the sponsors considered meritorious enough to warrant a press release can plausibly be considered among the most important types of publication delay.
This study was exempt from institutional review board approval because we used only publicly available data and did not include any human subjects or patient health data. We compiled all press releases related to oncology trials (n = 745) posted from January 2011 to June 2016 from 8 pharmaceutical companies that manufacture oncology drugs. This group included Amgen, the company we had chosen for pilot testing our analytic approach, and the 7 pharmaceutical companies ranked highest by oncology sales worldwide in 2015 in a database from EvaluatePharma.20 These 8 companies collectively accounted for 72% of all oncology sales in 2015. They all post their historical press releases on their corporate websites. Novartis takes releases down after 2 years but provided us with archived press releases covering our study period. We excluded press releases that contained only the following: announcements of a regulatory step; announcements of a presentation schedule for a given conference; announcements of results from preclinical and phase 1 or 2 trials; long-term analyses of phase 3 trials; retrospective, subgroup, or meta-analyses; studies of supportive care therapies; and studies examining various modes of infusion for the same therapy (Figure 1). We retained only the first press release from each study, and excluded that press release if there was any indication within the text that the trial’s results had been reported previously. If a single announcement contained findings from more than 1 clinical research study, we considered it as if it were multiple announcements, making the count of actual press releases modestly less than the number of analytic units (which we refer to as press releases). Two authors (L.Q. and R.H.J.) applied the described exclusion criteria in a blinded fashion to the initial sample of press releases, with the senior author (P.B.B.) making the final determination in cases of disagreement.
The same 2 authors (L.Q. and R.H.J.) then independently read each press release, with the senior author resolving discrepancies, and captured into a standardized database the reported study identifiers such as the National Clinical Trial (NCT) identifier, the study drug’s FDA approval status, the indication under evaluation, and whether the reported findings were characterized as positive or negative. Findings were considered “positive” if the study met its primary end point with statistical significance, whether for superiority or noninferiority. We considered press releases to include quantitative study findings if they included estimated effect sizes, but not if they included only subjective characterizations of study outcomes or reports on statistical significance tests. From ClinicalTrials.gov we obtained, when available, the study’s primary completion date, the results first received date, and the results first posted date. In our experience, the lag from receipt to posting of results often exceeds 1 month.
Our primary end point was the time elapsed between the date of the first press release and the first date of the public release of study results, either through publication in peer-reviewed literature or via posting on ClinicalTrials.gov. Most publications of the data first reported in a press release are announced in subsequent press releases, but we also searched for the associated publications on PubMed, Google Scholar, and in ClinicalTrials.gov fields where publications can be reported. For publications, we used the date on which the article was first available online according to the journal’s own website.
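The composite end point described above can be sketched as a small calculation: the delay is measured from the press release to the earlier of the two dissemination dates, and a release with neither by the study's censoring date contributes a censored observation. This is a minimal illustration under our reading of the methods; the function name and example dates are ours, not the authors'.

```python
from datetime import date
from typing import Optional, Tuple

CENSOR_DATE = date(2017, 12, 5)  # date of the authors' last queries

def composite_delay(press_release: date,
                    publication: Optional[date],
                    ctgov_posting: Optional[date]) -> Tuple[int, bool]:
    """Days from press release to first public release of results.

    The composite end point is the earlier of peer-reviewed publication
    and posting on ClinicalTrials.gov; a release with neither by the
    censoring date yields a censored observation (event flag False).
    """
    candidates = [d for d in (publication, ctgov_posting) if d is not None]
    if candidates and min(candidates) <= CENSOR_DATE:
        return ((min(candidates) - press_release).days, True)
    return ((CENSOR_DATE - press_release).days, False)
```

For example, a press release on January 1, 2015, with a journal publication on November 1, 2015, and a ClinicalTrials.gov posting two months later contributes an observed delay of 304 days, driven by the earlier of the two dates.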
In many cases, we knew that our estimate of the date that companies had completed analyses of study data would be biased forward, such as when the press release announcing results occurred on the same day as the presentation at a scientific meeting or publication of an article. For both abstracts and articles, we know that submissions preceded publication, so in sensitivity analyses we backdated press release dates by 120 days for standard abstracts and 90 days for “late-breaking” abstracts. For articles, we backdated by 120 days when releases coincided with the article’s publication. These intervals parallel the minimum plausible lead times in each scenario: the conventional and late-breaking deadlines for major oncology conferences such as the American Society of Clinical Oncology and American Society of Hematology annual meetings, and a highly conservative lag between manuscript submission and publication. Just the final step from manuscript acceptance to publication has been shown to take more than 100 days.21 The PCD, which can be actual or estimated or even missing, could not be considered as an alternative start time for our analysis because many (n = 32) of the press releases in our analysis reference interim study findings, findings that are arrived at prior to the completion of study data collection.
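The backdating rule in the sensitivity analysis amounts to shifting the start date earlier by a fixed lead time that depends on what the press release coincided with. A minimal sketch, assuming the three categories and intervals stated above (the function and dictionary names are illustrative, not from the paper):

```python
from datetime import date, timedelta
from typing import Optional

# Backdating intervals from the sensitivity analysis: conventional and
# late-breaking abstract deadlines, and a conservative submission-to-
# publication lag for simultaneously published articles.
BACKDATE_DAYS = {
    "standard_abstract": 120,
    "late_breaking_abstract": 90,
    "simultaneous_article": 120,
}

def backdate(press_release_date: date, coincided_with: Optional[str]) -> date:
    """Adjusted start date for measuring publication delay.

    `coincided_with` is None when the press release did not coincide with
    a presentation or publication, in which case no adjustment is made.
    """
    if coincided_with is None:
        return press_release_date
    return press_release_date - timedelta(days=BACKDATE_DAYS[coincided_with])
```

A press release issued alongside a standard conference abstract on June 1, 2015, would thus be treated as if the results had been in hand by February 1, 2015.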
We conducted time-to-event analyses with a censoring date of December 5, 2017, the date of our last queries, considering all releases together, and with log-rank tests across subgroups by company, positive or negative findings, premarket vs approved status of the study drug, and reporting or nonreporting of study effect size. Statistically significant results from univariate analyses were entered into a Cox proportional hazards multivariate model, controlling for company fixed effects. Log-rank tests with a 2-sided significance level of .05 were used for all unadjusted analyses. Analyses were performed in SAS, version 9.2.
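The unadjusted medians reported below rest on the Kaplan-Meier product-limit estimator, which accounts for censored observations. As a rough illustration of the calculation (the study itself used SAS; the function name and toy data here are ours), the median is the first time at which the estimated survival probability, i.e., the share of results not yet public, falls to 0.5:

```python
from typing import List, Tuple

def km_median(observations: List[Tuple[int, bool]]) -> float:
    """Kaplan-Meier median of time to public release.

    Each observation is (days, event_observed); event_observed is False
    for results still unreleased at the censoring date. Returns the first
    time at which the survival estimate falls to 0.5 or below, or
    infinity if it never does.
    """
    survival = 1.0
    for t in sorted({d for d, e in observations if e}):
        at_risk = sum(1 for d, _ in observations if d >= t)   # risk set at t
        events = sum(1 for d, e in observations if e and d == t)
        survival *= 1.0 - events / at_risk                    # product-limit update
        if survival <= 0.5:
            return float(t)
    return float("inf")
```

With four toy observations of which one is censored, the estimate steps down at each observed release and the median falls at the second event time; a sample in which no release is ever observed has no finite median.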
Of the 100 studies reported in press releases that were included in the final analysis (the first press release that included results for each trial), 70 (70%) reported positive study findings (Table). Few (31 [31%]) included a quantitative estimate of effect size, although such estimates seemed to be more frequent in press releases that contained positive findings than in those with negative findings (34% vs 23%; P = .28). Most press releases (66 [66%]) reported outcomes of studies involving drugs already approved by the FDA for some indication. Press releases regarding preapproval drugs appeared less likely to contain quantitative reports of effect sizes (18% vs 38%; P = .04).
The median time from a press release referencing trial results until either publication or posting on ClinicalTrials.gov was 300 days (95% CI, 263-348 days); 90% of the press release results were either posted, published, or both within 2 years (eTable 1 in the Supplement). Through the end of follow-up (mean, 4.2 years [interquartile range, 3.0-5.4 years]), we could identify an associated peer-reviewed publication for 99 (99%) of the press releases and were able to access trial results on ClinicalTrials.gov for 78 (78%) of the studies. Delays were longer for trials with negative data than for those with positive data (median, 407 days [95% CI, 298-705] vs 272 days [95% CI, 211-318]; P < .001) (Figure 2). Of the studies announcing negative results, only 21 (70%) published or posted results within 2 years. In a multivariate Cox model, reporting positive results remained a significant predictor of shorter time to public release even when controlling for company (P < .001). Delays were longer with time to first publication as the sole end point (meaning that posting to ClinicalTrials.gov was not considered public release of data), with a median time to publication of 350 days (95% CI, 286-374 days) (eFigure 1 in the Supplement). There was no difference in the median delay between trials studying approved vs unapproved drugs. Delays were significantly different across the 8 sampled companies.
Backdating press releases that coincided with a conference presentation or article publication lengthened the median time to publication but did not meaningfully alter the other findings (eTable 2 in the Supplement). The median delay in the composite end point across the full sample was 39 days longer, and the time to publication alone was 25 days longer (eFigure 2 in the Supplement). The specific increases varied by company and other press release characteristics, but no estimates were reduced and all trends remained the same as in our primary analysis. For instance, median delays for trials with negative results were still more than 100 days longer than for those with positive results (443 days [95% CI, 335-825] vs 303 days [95% CI, 278-351]; P < .001) (eFigure 3 in the Supplement). Overall, 88 (88%) of the press release results were either posted, published, or both within 2 years.
The speed at which pharmaceutical companies publish findings from clinical trials has come under scrutiny based on data that estimated the median publication delay from study completion to peer-reviewed publication to be approximately 2 years. These findings rely on measuring the time from either the trial’s PCD or conference abstract release to journal publication and are aggregated across a broad range of study types. We focused on the narrower but potentially more relevant question of how long it takes from when a corporate sponsor has analyzed data that are likely to be clinically meaningful to the time when they are available to clinicians, researchers, and the public.
Focusing on press releases associated with phase 3 trials in oncology allowed us to ascertain the latest possible date at which the sponsor had available to them a completed data analysis. Although the press release can follow that date, it cannot precede it because companies are cautious about putting data into press releases that could subsequently change. We considered either posting to ClinicalTrials.gov or peer-reviewed publication in a biomedical journal to constitute comprehensive reporting of results. Although abstracts represent another mechanism for data release and may inform clinical and research stakeholders, data featured in conference abstracts can differ from those in the study publication. One analysis found a discrepancy of more than 5% in the primary end point reported in 42% of phase 3 trial publications when compared with the conference abstract.22 Using this approach, we observed a shorter time to public dissemination of clinical trial results than has been reported previously, with a median delay to our composite end point of slightly less than 1 year. For already marketed drugs, clinical trial data have the potential to directly and immediately affect prescribing habits. Delays in the release of data regarding unapproved treatments, which in our study were not significantly different from delays overall, have other implications. Similar concurrent studies are likely recruiting patients during this delay; if negative data were published promptly, the limited pool of research money and clinical research participants could be devoted to more fruitful investigations.
We found that negative results were announced less frequently and took more than 4 months longer than positive results to be posted on ClinicalTrials.gov or be published in a peer-reviewed medical journal.23,24 Our study does not shed light on how many studies with negative results were never mentioned in press releases, nor does it further illuminate past observations that negative findings are sometimes misleadingly presented as if the study contains positive results.25,26 Regardless, delayed and unreleased negative findings can also put patients at increased risk of harm.
Our study should be viewed within the context of its limitations. We focused on clinical research findings that companies deemed important enough to warrant press releases. Because we examined only press releases from the dominant oncology companies, we can draw no inferences regarding publication and posting practices of earlier-stage and smaller companies. Because there are no regulations stipulating how quickly a public statement must be made following the completion of data analysis, we also cannot account for the time between availability of trial conclusions and issuing of a press release, which is likely influenced by various company-specific factors.
While it is true that companies also issue press releases to notify their shareholders of other material events, we focused on press releases related to clinical research findings relevant to the company’s products, findings that thus likely have important clinical implications. Moreover, companies are required to disclose findings that could be material for their financial future, an obligation that dovetails with the release of potentially market-moving clinical research findings.
If more rapid full public release of clinical research findings were the objective, several approaches merit consideration. Preprinting, the practice of publishing “draft” findings prior to, or in tandem with, peer review, has made inroads in several scientific fields. It is a common practice in the social and physical sciences but has so far been limited in its use among biomedical researchers.27-29 Two of the existing repositories for early dissemination of research data in the life sciences are bioRxiv, managed by the Cold Spring Harbor Laboratory, and the Yale University Open Data Access (YODA) Project. Through preprints, companies could post all their relevant study findings, including measured outcomes and toxic effects, and would also publish the study protocol. Medical journals and societies would have to agree that preprint release does not violate embargo policies for either outlet, as some already have.30,31 Given that the majority of journals already permit the dissemination of meeting abstracts or company press releases prior to manuscript submission, we view preprint publication as a reasonable extension.
One limitation of preprinting is that the peer review process can lead to substantive changes and improvements in the analysis and interpretation of study data, meaning that preprints may ultimately be superseded by peer-reviewed publications. Another related concern is the validity of preprint releases and the potential for them to lead to premature clinical or research decision making. To safeguard against these quality issues, preprint servers or data repositories can require that sponsors provide proof of institutional review board approval or exemption, have registered the trial or even posted preliminary results to ClinicalTrials.gov, and include conspicuous labels or watermarks reinforcing the incomplete nature of the findings. These submissions would not be intended to replace ClinicalTrials.gov postings, but rather supplement the raw data that sponsors already upload to the registry with additional commentary and analysis.
Notwithstanding these efforts to highlight the preliminary nature of early data releases, there is a risk that prepublication may lead some clinicians to make patient care decisions based on conclusions that materially change during the subsequent review process. As an alternative, regulatory agencies, medical journals, and medical meeting program committees could require that results be posted to ClinicalTrials.gov at the time of a press release or other public announcement of study findings, the same way that registration of trials prior to their commencement is required. Section 801 of the Food and Drug Administration Amendments Act of 2007 does not currently impose such a requirement because it mandates the posting of final results for only a subset of trials. Moreover, a 2015 analysis found that only 13% of more than 13 000 surveyed trials had posted results to the registry within 1 year of study completion.32 But when results are posted, the data on ClinicalTrials.gov tend to be more complete than the corresponding publication with respect to participant flow, efficacy, and adverse events in the majority of cases.33,34
While the proposed strategies focus on minimizing delays in data dissemination at the company level, trial sponsors are not exclusively responsible for the delays that we report in this study, and addressing delays at every step toward publication should be the goal. Publishers, for instance, must continue to consider innovative efforts to responsibly accelerate the peer review process, or to further shorten the time between manuscript acceptance and availability of data. Any changes in existing review practices must be made cautiously so as to not interfere with their rigor. Furthermore, publishers should consistently give comparable consideration to reports of both positive and negative results.
We do foresee possible objections from pharmaceutical companies, scientific journals, and medical societies. Embargoed, timed data releases are a potent tool for marketing study findings and drawing traffic to the journal or meeting where the results are presented. However, these interests are narrow; the needs of science more generally, and of the patients with the conditions that these companies have studied, matter more.
Accepted for Publication: January 22, 2018.
Corresponding Author: Peter B. Bach, MD, MAPP, 485 Lexington Ave, Second Floor, New York, NY 10017 (email@example.com).
Published Online: April 12, 2018. doi:10.1001/jamaoncol.2018.0264
Author Contributions: Dr Bach had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Qunaj, Jain, Miller, Bach.
Acquisition, analysis, or interpretation of data: Qunaj, Jain, Atoria, Gennarelli, Miller.
Drafting of the manuscript: Qunaj, Jain, Gennarelli, Bach.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Qunaj, Atoria, Gennarelli.
Administrative, technical, or material support: Qunaj, Jain, Bach.
Study supervision: Miller, Bach.
Conflict of Interest Disclosures: Dr Bach reports personal fees from Association of Community Cancer Centers, America’s Health Insurance Plans, AIM Specialty Health, American College of Chest Physicians, American Society of Clinical Oncology, Barclays, Defined Health, Express Scripts, Genentech, Goldman Sachs, McKinsey and Co, MPM Capital, National Comprehensive Cancer Network, Biotechnology Industry Organization, American Journal of Managed Care, Boston Consulting Group, Foundation Medicine, Anthem Inc, Novartis, and Excellus Health Plan and grants from the National Institutes of Health and Kaiser Foundation Health Plan. No other disclosures are reported.
Funding/Support: Drs Miller and Bach are funded by the Laura and John Arnold Foundation.
Role of the Funder/Sponsor: The Laura and John Arnold Foundation had no role in the design and conduct of the study; in the collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.