A substantial proportion of randomized clinical trials (RCTs) initially presented in conference proceedings are never published as full articles.1,2 For example, a survey of 510 large phase 3 RCTs presented at American Society of Clinical Oncology (ASCO) annual meetings over 10 years found that 26% were not published.1 Thus, a meeting abstract may provide the only permanent record of an RCT that was conducted. The 2008 Consolidated Standards of Reporting Trials (CONSORT) for Abstracts defined a minimum list of essential items to include when reporting RCTs in abstracts.3 Herein, we assess the quality of ASCO abstracts of RCTs against the CONSORT guidelines.
We identified primary reports of comparative RCTs in the 2007 and 2015 ASCO proceedings. The data abstraction form was modified from the CONSORT checklist and consisted of 21 items (Table).4 Two trained assessors (X.L. and Y.-P.C.) carried out single data extraction, with uncertainties resolved by consensus. Concordance was good (Cohen κ, 0.6-1.0) in 15 randomly selected abstracts assessed by both reviewers. The number of items scored as “yes” was compared using a nonparametric test. Statistical analyses were performed using Stata statistical software (version 12; StataCorp). A 2-sided α of less than .05 was considered significant.
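For readers unfamiliar with the concordance statistic above, the Cohen κ used to check inter-assessor agreement can be sketched as follows. This is a generic illustration, not the authors' analysis code, and the sample ratings are hypothetical, not study data.

```python
# Cohen's kappa: chance-corrected agreement between two raters
# scoring the same items as "yes" (1) or "no" (0).

def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two raters' binary scores over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal "yes" rate
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

# Hypothetical item scores from two assessors on 10 checklist items
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.74, within the "good" 0.6-1.0 range
```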
Of 3711 abstracts identified, 402 (178 in 2007 and 224 in 2015) met eligibility criteria (Figure). In total, 55 of 178 abstracts (30.9%) in 2007 and 47 of 224 (21.0%) in 2015 did not report the phase of the trial. Whether the primary end point was met was unclear in 53 of 178 (29.8%) and 23 of 224 (10.3%) abstracts in 2007 and 2015, respectively.
The median number of items scored as “yes” increased from 9 in 2007 to 11 in 2015 (Table). Compared with 2007, a greater proportion of 2015 reports provided the primary outcome (87.1% vs 67.4%; difference, 19.6%; 95% CI, 11.5%-27.8%; P < .001), the number of participants randomized to each group (46.0% vs 34.3%; difference, 11.7%; 95% CI, 2.2%-21.3%; P = .018), the precision of the effect size (40.6% vs 21.7%; difference, 20.4%; 95% CI, 11.7%-29.1%; P < .001), and trial registry information (93.3% vs 1.7%; difference, 91.6%; 95% CI, 87.8%-95.4%; P < .001), and a greater proportion scored “yes” for blinding (16.1% vs 5.6%; difference, 10.5%; 95% CI, 4.6%-16.3%; P < .001).
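The between-year differences and 95% CIs above are of the form produced by a standard two-proportion comparison. A minimal sketch, using a Wald (normal-approximation) interval and counts back-calculated from the reported percentages (an assumption; the authors' exact method and counts are not stated), reproduces the registry-information result:

```python
from math import sqrt

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in two proportions with a Wald 95% CI (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Registry information: about 209/224 (93.3%) in 2015 vs 3/178 (1.7%) in 2007
# (counts back-calculated from the reported percentages; an assumption)
diff, lo, hi = prop_diff_ci(209, 224, 3, 178)
print(f"{diff:.1%} (95% CI, {lo:.1%}-{hi:.1%})")  # → 91.6% (95% CI, 87.8%-95.4%)
```

The output matches the reported difference of 91.6% (95% CI, 87.8%-95.4%).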
In 2015, only 1 abstract (0.4%) described the method of allocation concealment. Most abstracts (156 [69.6%]) did not report blinding status, and no abstract specified who was blinded. Sixty-one abstracts (27.2%) reported the number of participants analyzed in each group. Only 136 abstracts (60.7%) provided the effect size, and 91 (40.6%) provided its precision. Reporting among phase 3 RCTs was comparable with that among all trials.
Poorly reported RCTs are associated with exaggerated estimates of intervention efficacy.5 Word limits are a frequently cited reason why essential information is missing from abstracts. However, this is unlikely to be a main barrier to better reporting of RCTs in ASCO abstracts, which have a higher word limit (300-350 words) than that recommended by CONSORT (250-300 words).3
Large conferences receive considerable numbers of abstracts; the 2015 ASCO Annual Meeting, for example, received 5945. Reviewing so many abstracts is time-consuming and stressful, and reviewers’ evaluations may not be as reproducible as expected.6 If authors were to follow a comprehensive guide such as CONSORT for Abstracts, heterogeneity in reporting could be minimized, which would facilitate the review process and improve the quality of abstracts, as has been observed in journal abstracts.4
Although encouraging improvement in the reporting of RCTs in ASCO abstracts has occurred in recent years, the overall reporting quality remains below an acceptable level.
Corresponding Author: Jun Ma, MD, Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, 651 Dongfeng Rd E, Guangzhou 510060, Guangdong, China (majun2@mail.sysu.edu.cn).
Published Online: November 23, 2016. doi:10.1001/jamaoncol.2016.4899
Author Contributions: Dr Ma had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs Liu, Chen, and Li contributed equally to this work.
Concept and design: Liu, Chen, Li, Sun, Eisbruch, Ma.
Acquisition, analysis, or interpretation of data: Liu, Chen, Li, Guo, Ma.
Drafting of the manuscript: Liu, Chen, Li.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Liu, Chen, Li, Guo.
Obtained funding: Ma.
Administrative, technical, or material support: Li.
Study supervision: Sun, Eisbruch, Ma.
Conflict of Interest Disclosures: None reported.
Funding/Support: This work was supported by grants from the National Science & Technology Pillar Program during the Twelfth 5-year Plan Period (2014BAI09B10), the Health and Medical Collaborative Innovation Project of Guangzhou City, China (201400000001), the Planned Science and Technology Project of Guangdong Province (2013B020400004), and the Science and Technology Project of Guangzhou City, China (14570006).
Role of the Funder/Sponsor: The funders had no role in design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.
Additional Contributions: We thank Xiao-Jing Du, MD, and Lei Chen, MD, both from the Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China for their contribution in the interpretation of results and drafting of the manuscript. They received no compensation for their role.
1. Krzyzanowska MK, Pintilie M, Tannock IF. Factors associated with failure to publish large randomized trials presented at an oncology meeting. JAMA. 2003;290(4):495-501.
2. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007;(2):MR000005.
3. Hopewell S, Clarke M, Moher D, et al; CONSORT Group. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med. 2008;5(1):e20.
4. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.
5. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408-412.
6. Rubin HR, Redelmeier DA, Wu AW, Steinberg EP. How reliable is peer review of scientific abstracts? Looking back at the 1991 Annual Meeting of the Society of General Internal Medicine. J Gen Intern Med. 1993;8(5):255-258.