Key Points
Question
What is the frequency of effect size and confidence interval reporting in JAMA Otolaryngology–Head & Neck Surgery from January 2012 to December 2015?
Findings
In this study of 121 articles, we found that approximately half reported an effect size, and approximately half of those articles provided confidence intervals for the effect size. Fewer than 10% of the articles provided an a priori power analysis or an a priori mention of an effect size measure.
Meaning
This study identifies opportunities to improve results reporting in the otolaryngology literature by shifting away from null hypothesis significance testing with P values toward the reporting of effect sizes accompanied by confidence intervals.
Importance
Effect sizes and confidence intervals (CIs) are critical for the interpretation of the results for any outcome of interest.
Objective
To evaluate the frequency of reporting effect sizes and CIs in the results of analytical studies.
Design, Setting, and Participants
Descriptive review of analytical studies published from January 2012 to December 2015 in JAMA Otolaryngology–Head & Neck Surgery.
Methods
A random sample of 121 articles was reviewed; descriptive studies were excluded from the analysis. Seven independent reviewers evaluated the articles, with 2 reviewers assigned to each article. The review process was standardized: the Methods and Results sections of each article were reviewed for the outcomes of interest, and descriptive statistics were calculated and reported for each outcome.
Main Outcomes and Measures
Primary outcomes of interest included the presence of effect size and associated CIs. Secondary outcomes of interest included a priori descriptions of statistical methodology, power analysis, and expectation of effect size.
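As an illustrative aside not drawn from the reviewed articles, the following minimal sketch shows what an a priori power analysis for a two-group comparison can look like; the assumed effect size (Cohen d = 0.5), significance level, and power target are hypothetical choices for demonstration only.

```python
# Minimal sketch of an a priori power analysis (hypothetical parameters).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed standardized difference in means (Cohen d)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired statistical power
)
print(f"Required sample size per group: {n_per_group:.0f}")  # approximately 64
```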
Results
A total of 107 articles were included in the analysis. The majority were retrospective cohort studies (n = 36 [36%]), followed by cross-sectional studies (n = 18 [17%]). A total of 58 articles (55%) reported an effect size for an outcome of interest. The most common effect size was the difference of means, followed by the odds ratio and the correlation coefficient, reported 17 (16%), 15 (13%), and 12 (11%) times, respectively. Confidence intervals accompanied 29 of these effect sizes (27%), and 9 of these articles (8%) included an interpretation of the CI. A description of the statistical methodology was provided in 97 articles (91%), whereas 5 (5%) provided an a priori power analysis and 8 (7%) provided a description of the expected effect size.
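To illustrate the style of reporting the study advocates, the following minimal sketch (using hypothetical data, not drawn from any reviewed article) computes the most common effect size observed here, a difference in means, together with its 95% CI rather than a P value alone.

```python
# Illustrative sketch with hypothetical data: reporting a difference in means
# with a 95% confidence interval (Welch approach, unequal variances allowed).
import numpy as np
from scipy import stats

# Hypothetical outcome measurements for two groups
group_a = np.array([12.1, 14.3, 13.8, 15.2, 12.9, 14.7, 13.5, 14.0])
group_b = np.array([10.4, 11.8, 12.2, 11.1, 10.9, 12.5, 11.6, 11.3])

diff = group_a.mean() - group_b.mean()            # effect size: difference in means
va = group_a.var(ddof=1) / len(group_a)
vb = group_b.var(ddof=1) / len(group_b)
se = np.sqrt(va + vb)                             # standard error of the difference

# Welch-Satterthwaite degrees of freedom
df = (va + vb) ** 2 / (va ** 2 / (len(group_a) - 1) + vb ** 2 / (len(group_b) - 1))

t_crit = stats.t.ppf(0.975, df)                   # critical t value for 95% CI
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

print(f"Difference in means: {diff:.2f} (95% CI, {ci_low:.2f} to {ci_high:.2f})")
```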
Conclusions and Relevance
Improving results reporting is necessary to enhance the reader’s ability to interpret the results of any given study. This can be achieved only by increasing the reporting of effect sizes and CIs rather than relying on P values alone to convey both statistical significance and clinical meaningfulness.