June 5, 2002

Publication Bias in Editorial Decision Making

Author Affiliations: JAMA, Chicago, Ill (Drs Olson, Rennie, and Cook, Mss Flanagin and Reiling, and Mr Pace); Department of Medicine, Division of Emergency Medicine, University of Washington, Seattle (Dr Olson); Institute for Health Policy Studies, University of California, San Francisco (Dr Rennie); Departments of Medicine, Clinical Epidemiology, and Biostatistics, McMaster University, Hamilton, Ontario (Dr Cook); and Department of Community Health (Drs Dickersin and Hogan and Ms Zhu) and Center for Statistical Sciences (Dr Hogan), Brown University, Providence, RI.

JAMA. 2002;287(21):2825-2828. doi:10.1001/jama.287.21.2825
Abstract

Context Studies with positive results are more likely to be published than studies with negative results (publication bias). One reason this occurs is that authors are less likely to submit manuscripts reporting negative results to journals. There is no evidence that publication bias occurs once manuscripts have been submitted to a medical journal. We assessed whether submitted manuscripts that report results of controlled trials are more likely to be published if they report positive results.

Methods Prospective cohort study of manuscripts submitted to JAMA from February 1996 through August 1999. We classified results as positive if there was a statistically significant difference (P<.05) reported for the primary outcome. Study characteristics and indicators for quality were also appraised. We included manuscripts that reported prospective studies in which participants were assigned to an intervention or comparison group and statistical tests compared differences between groups.

Results Among 745 manuscripts, 133 (17.9%) were published: 78 (20.4%) of 383 with positive results, 51 (15.0%) of 341 with negative results, and 4 (19.0%) of 21 with unclear results. The crude relative risk for publication of studies with positive results compared with negative results was 1.36 (95% confidence interval [CI], 0.99-1.88). After being adjusted simultaneously for study characteristics and quality indicators, the odds ratio for publishing studies with positive results was 1.30 (95% CI, 0.87-1.96).

Conclusions Among submitted manuscripts, we did not find a statistically significant difference in publication rates between those with positive vs negative results.

Publication bias refers to the greater likelihood that studies with positive results will be published.1-3 Publication bias has been demonstrated in several cohort studies that followed up protocols approved by research ethics committees,2,4,5 ongoing trials funded by the National Institutes of Health,6 medical doctoral dissertations,7 and abstracts presented at scientific meetings.1,8

Although such studies show that researchers are more likely to submit reports with positive results, it is less clear whether journal editors are more likely to publish them. Researchers may assume that reports of research with negative results will be rejected,9 but few researchers associated with unpublished results actually submitted a manuscript.2,4,10 One case-control study11 examined 100 published and 100 rejected manuscripts submitted to either of 2 Spanish medical journals and found no evidence for publication bias. However, that study may have been too small to identify a meaningful association. We are unaware of any evidence that editors are more likely to publish studies with positive results.

Methods

Hypothesis, Design, and Setting

Our objective was to examine whether publication bias operates in editorial decision making. We hypothesized that editors preferentially publish studies with positive results. We performed a prospective cohort study to learn which manuscripts submitted to JAMA were published.

About 4000 manuscripts are submitted to JAMA annually. Each is assigned to a reviewing editor. About half are rejected after internal review and half undergo external peer review. Although the reviewing editor may reject a manuscript at any point, manuscripts are accepted only by the editor-in-chief or designated deputy. Four JAMA editors participated in this investigation; none had ultimate responsibility for accepting manuscripts at JAMA.

Inclusion Criteria

Manuscripts were included in the cohort if they reported results of a study that was prospective, assigned participants to an intervention, had at least 1 comparison group, and used a statistical test to compare differences in outcomes between groups. Such studies were typically randomized. Eligibility for inclusion was assessed in 2 steps. First, an editor-investigator working in the JAMA office screened all manuscripts for adherence to the first 3 criteria. Copies of these manuscripts were then sent to 2 of 3 editor-investigators, who verified that each met all 4 criteria.

Data Abstraction

Data were abstracted during the editorial process, so investigators were unaware of the publication status of a manuscript when they abstracted its data. Data were abstracted independently by 2 of 3 investigators, who then completed data forms by consensus and forwarded them to other investigators for data entry and analysis. These other investigators independently extracted information on publication status from JAMA's database and sent it directly for data entry and analysis.

Definitions

Our primary outcome was publication in JAMA. The primary predictor of publication was whether study results were positive, determined by applying previously published sequential steps.12 We classified results as positive if they showed a statistically significant effect (P<.05; 95% confidence interval [CI] for a difference excluding 0 or 95% CI for a ratio excluding 1) on the study's primary outcome and negative if they did not. If no primary outcome was stated or discernible, we classified results according to the majority of reported outcomes. Results were unclear if they could not be classified as positive or negative, typically when many outcomes were reported, an equal number were positive and negative, and none was designated primary.
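
As an illustrative sketch only (the function name and arguments below are ours, not part of the published protocol), the classification rule can be expressed as follows in Python:

# Illustrative sketch of the classification rule described above; the function
# name and arguments are hypothetical, not part of the authors' protocol.
from typing import Optional

def classify_result(p_value: Optional[float] = None,
                    ci_low: Optional[float] = None,
                    ci_high: Optional[float] = None,
                    null_value: float = 0.0) -> str:
    """Classify a primary-outcome result as positive, negative, or unclear.

    null_value is 0 for a difference measure and 1 for a ratio measure.
    """
    if p_value is not None:
        return "positive" if p_value < 0.05 else "negative"
    if ci_low is not None and ci_high is not None:
        # Positive if the 95% CI excludes the null value.
        return "negative" if ci_low <= null_value <= ci_high else "positive"
    return "unclear"

# Example: a risk ratio of 1.4 (95% CI, 1.1-1.8) excludes 1 and is classified as positive.
print(classify_result(ci_low=1.1, ci_high=1.8, null_value=1.0))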

We abstracted information on study characteristics that others have examined for association with publication1,2,4-6,13-17 and on objective indicators of study quality and reporting transparency2,13 (Table 1). We also recorded the clinical topic, the sex and age of study participants, and the funding sources as reported in the manuscript. If a company supplied only drugs or devices, we did not count it as having funded the study.

Table 1. Characteristics of Controlled Trials and Their Relationship to Publication

Ethics

Before our investigation, JAMA editors agreed to take part in studies relating to peer review and editorial decision making. We did not tell editors the details of this investigation or request informed consent for participation, since their awareness might have influenced their publication decisions.18 For several years, JAMA's Instructions to Authors have told authors that their work might be included in a study.19 We did not tell authors about this investigation, because the standard editorial process was unchanged and the confidentiality of the author-editor relationship was maintained. The University of Washington's Human Subjects Review Committee approved the protocol by waiver of consent, under the condition that investigators inform the other editors about the study after all publication decisions had been made.

Analysis

We estimated the number of manuscripts needed for this investigation, assuming that manuscripts with positive and negative results would be submitted in equal proportions and that the overall publication rate would be 16%. With a 2-sided significance level of .05 and power of 0.80, 708 manuscripts would be required to detect a difference between publication rates of 12% and 20%. We increased the target sample size to 750 to allow for manuscripts with unclear significance of results.
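
The formula behind this estimate is not reported; a minimal sketch in Python, assuming the standard normal-approximation calculation for two independent proportions with the Fleiss continuity correction, reproduces approximately 354 manuscripts per group (708 in total):

# Minimal sketch, assuming a two-proportion sample-size formula with the
# Fleiss continuity correction; the exact method used by the authors is not
# stated. Under the stated assumptions it yields roughly 354 per group (708 total).
import math
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for detecting a difference between two proportions."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_b = norm.ppf(power)           # power term
    p_bar = (p1 + p2) / 2
    d = abs(p1 - p2)
    n = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    n_corrected = n / 4 * (1 + math.sqrt(1 + 4 / (n * d))) ** 2  # continuity correction
    return math.ceil(n_corrected)

n = n_per_group(0.20, 0.12)
print(n, 2 * n)  # approximately 354 per group, 708 manuscripts in total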

Proportions of studies published were examined by significance of results and other variables. Associations between independent variables and publication were estimated with relative risks (RRs) and 95% CIs. The P values were not adjusted for multiple comparisons, and P<.05 was considered statistically significant. To adjust for several variables simultaneously, we used multiple logistic regression and calculated the odds ratio (OR) as the measure of association. Data were entered into an Access (version 2.0, Microsoft Corporation, Redmond, Wash) database and analyzed with SAS software (version 6.12, SAS Institute, Cary, NC).
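
For example, a standard log-RR confidence interval applied to the publication counts reported in the Results (78/383 with positive results vs 51/341 with negative results) reproduces the crude RR of 1.36 (95% CI, 0.99-1.88); the sketch below is illustrative and is not taken from the authors' analysis code:

# Illustrative only: a standard log-RR confidence interval, not the authors' code.
import math

def relative_risk(a: int, n1: int, b: int, n2: int, z: float = 1.96):
    """Return the relative risk and 95% CI for event proportions a/n1 vs b/n2."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Publication of manuscripts with positive (78/383) vs negative (51/341) results
rr, lo, hi = relative_risk(78, 383, 51, 341)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR 1.36 (95% CI, 0.99-1.88)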

Awareness of the Study

Because awareness of the investigation may have influenced the decision making of the 4 JAMA editors who were also investigators, we performed additional analyses that excluded their assigned manuscripts. Editors who were not investigators were asked whether they had been aware of the investigation. They rated their awareness on a continuous scale from 0 to 10, with 0 indicating they were unaware that any investigation was in progress, 5 indicating they knew some investigation was in progress but were unaware that it concerned publication bias, and 10 indicating they knew everything about the investigation, including its hypothesis.

Results

From February 1996 through August 1999, 13 569 manuscripts were submitted to JAMA; 745 met all inclusion criteria. Manuscripts were distributed among 20 JAMA editors for primary responsibility. Among the 745 manuscripts, 383 (51.4%) had positive results, 341 (45.7%) had negative results, and 21 (2.8%) had unclear significance of results.

JAMA published 133 (17.9%) of these manuscripts: 78 (20.4%) of 383 with positive results, 51 (15.0%) of 341 with negative results, and 4 (19.0%) of 21 with unclear significance of results. After studies with an unclear significance of results were excluded, those with positive results were not significantly more likely to be published (unadjusted RR, 1.36; 95% CI, 0.99-1.88). The association of other factors with publication is shown in Table 1.

After simultaneous adjustment for all study characteristics and quality indicators, the OR for publication of manuscripts with positive vs negative results was 1.30 (95% CI, 0.87-1.96) (Table 2). Multicenter status, enrollment of participants in the United States, and reporting of a sample size calculation were significantly associated with publication (Table 2).

Table 2. Logistic Regression Analysis of Characteristics Associated With Publication

After manuscripts for which any of the investigators were editors were excluded, the adjusted OR for publishing manuscripts with positive vs negative results was 1.32 (95% CI, 0.84-2.07). Of the 16 JAMA editors who were not investigators, 14 responded to the questionnaire about their awareness of the study. The median score was 4 (range, 0-10) and the mean score was 3.7, indicating editors were generally aware some investigation was in progress but were unaware of its hypothesis.

Comment

We found that submitted manuscripts reporting results of controlled trials were not significantly more likely to be published if they reported positive results.

In our cohort, studies with indicators of higher quality were more likely to be published. Similarly, in a case-control study11 of submitted manuscripts, higher study quality was associated with publication. In contrast, methodologic quality of abstracts, letters, short reports,13 and meta-analyses submitted to JAMA20 was not associated with publication of full results.

We also found that studies enrolling some participants in the United States had an increased adjusted OR of publication. Previous studies21,22 found an association between journals' nationality and the national origin of the reports they publish. However, those studies did not account for differences in submissions to journals by nationality. Another study15 found that manuscripts submitted to Gastroenterology, a US journal, were more likely to be published if authors were from the United States. However, that study did not adjust for quality of the submitted manuscripts.

In our investigation, a statistically significant difference (P<.05) in outcomes between groups was used to determine a positive study. Some definitions of positive require that the significant result favor the experimental intervention13 or be beneficial.1 Another definition is that a positive result will change current thinking or the standard of care.3,23

Researchers may interpret our results as confirming editors' publication bias, but any such bias is small compared with that demonstrated repeatedly for researchers. We found an adjusted OR of 1.30 for publication of controlled trials with positive results. A meta-analysis10 of controlled trials identified at funding or approval by an ethics committee found an OR for publication of 5.96 (95% CI, 2.33-15.22). Although authors2,9 and others24 have assumed that editors preferentially publish manuscripts with positive results, researchers are more likely to write and submit manuscripts for studies with positive results.2,4,10

Editors may interpret our results as showing no evidence for publication bias. The Declaration of Helsinki25 charges authors and publishers to make available negative and positive results of investigations. Registries of clinical trials26 and online journals may facilitate the reporting of trials with negative results.27 Editors must guard against basing publication on the significance of a study's results; they should also judge manuscripts on the clinical question addressed and quality of the research methods.28,29

Assessing the influence of reviewers on editors' decisions is difficult. In one study,30 reviewers were no more likely to recommend publication of manuscripts reporting positive results.

The strengths of our investigation include its large sample size; prospective design; consideration of consecutive manuscripts submitted to a large-circulation, high-impact, general medical journal; objective inclusion criteria; data abstraction by 2 independent investigators blinded to publication status; and analysis of confounding variables.

Our results may not be generalizable to manuscripts describing studies other than controlled trials. Publication bias may affect studies of various designs differently.2,4,11,31 Our results apply specifically to the editorial process at JAMA, for a specific period and set of editors; our findings may not be generalizable to specialty journals or journals with fewer submissions, fewer editors, or lower circulation.32

We accrued more manuscripts than required under the assumptions of our sample-size calculation. The difference in publication rates (5.4%) was smaller than we hypothesized (8%), decreasing power to detect a difference between groups. In addition, adjusting for covariates decreased the power to detect publication bias.33 We used a conservative 2-sided test for significance, which allows for the possibility that studies with negative results are published at a higher rate than studies with positive results; using 1-sided tests for our analysis would have given statistically significant results.
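
To illustrate the 1-sided vs 2-sided point, a standard two-proportion z test on the published counts (the specific test the authors used is not stated, so this is a hedged check only) gives a 2-sided P of about .06 but a 1-sided P of about .03:

# Hedged check using a standard two-proportion z test on the published counts;
# the authors do not state which test they used, so this is illustrative only.
import math
from scipy.stats import norm

pos_pub, pos_total = 78, 383  # positive-result manuscripts: published / submitted
neg_pub, neg_total = 51, 341  # negative-result manuscripts: published / submitted

p1, p2 = pos_pub / pos_total, neg_pub / neg_total
pooled = (pos_pub + neg_pub) / (pos_total + neg_total)
se = math.sqrt(pooled * (1 - pooled) * (1 / pos_total + 1 / neg_total))
z = (p1 - p2) / se

print(f"z = {z:.2f}")                       # about 1.90
print(f"2-sided P = {2 * norm.sf(z):.3f}")  # about .058, not significant
print(f"1-sided P = {norm.sf(z):.3f}")      # about .029, significant at .05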

References
1. Callaham ML, Wears RL, Weber EJ, Barton C, Young G. Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA. 1998;280:254-257 [published correction appears in JAMA. 1998;280:1232].
2. Easterbrook P, Berlin J, Gopalan R, Matthews D. Publication bias in clinical research. Lancet. 1991;337:867-872.
3. Olson CM. Publication bias. Acad Emerg Med. 1994;1:207-209.
4. Dickersin K, Min Y, Meinert C. Factors influencing publication of research results. JAMA. 1992;267:374-378.
5. Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ. 1997;315:640-645.
6. Dickersin K, Min Y. NIH clinical trials and publication bias. Online J Curr Clin Trials. 1993;Doc No 50:[4967 words; 53 paragraphs].
7. Vogel U, Windeler J. Factors modifying frequency of publications of clinical research results exemplified by medical dissertations [in German]. Dtsch Med Wochenschr. 2000;125:110-113.
8. De Bellefeuille C, Morrison C, Tannock I. The fate of abstracts submitted to a cancer meeting: factors which influence presentation and subsequent publication. Ann Oncol. 1992;3:187-191.
9. Dickersin K, Chan S, Chalmers TC, et al. Publication bias and clinical trials. Control Clin Trials. 1987;8:343-353.
10. Dickersin K, Min YI. Publication bias: the problem that won't go away. Ann N Y Acad Sci. 1993;703:135-146.
11. Campillo C. Publication bias in two Spanish medical journals. Presented at: The International Congress on Biomedical Peer Review and Global Communications; September 19, 1997; Prague, Czech Republic.
12. Moher D, Dulberg C, Wells G. Statistical power, sample size, and their reporting in randomized controlled trials. JAMA. 1994;272:122-124.
13. Chalmers I, Adams M, Dickersin K, et al. A cohort study of summary reports of controlled trials. JAMA. 1990;263:1401-1405.
14. Gilbert J, Williams E, Lundberg G. Is there gender bias in JAMA's peer review process? JAMA. 1994;272:139-142.
15. Link AM. US and non-US submissions: an analysis of reviewer bias. JAMA. 1998;280:246-247.
16. Misakian AL, Bero LA. Publication bias and research on passive smoking. JAMA. 1998;280:250-253.
17. Scherer R, Dickersin K, Langenberg P. Full publication of results initially reported in abstracts: a meta-analysis. JAMA. 1994;272:158-162.
18. Feinstein AR. Construction, consent, and condemnation in research on peer review. J Clin Epidemiol. 1991;44:339-341.
19. JAMA instructions for authors. JAMA. 1995;274:91.
20. Stroup DF, Thacker SB, Olson CM, et al. Characteristics of meta-analyses related to acceptance for publication in a medical journal. J Clin Epidemiol. 2001;54:655-660.
21. Ernst E, Kienbacher T. Chauvinism. Nature. 1991;352:560.
22. Braun T, Glanzel W, Schubert A. National publication patterns and citation impact in the multidisciplinary journals Nature and Science. Scientometrics. 1989;17:11-14.
23. Wilson A. Meta-analysis, part 2: assessing the quality of published meta-analyses. Med J Aust. 1992;156:173-187.
24. Altman LK. Experts see bias in drug data. New York Times. April 19, 1997:B7, B12.
25. World Medical Association. Declaration of Helsinki. 2000. Available at: http://www.wma.net/e/policy/17-c_e.html. Accessibility verified March 26, 2002.
26. Smith R, Roberts I. An amnesty for unpublished trials. BMJ. 1997;315:622.
27. Song F, Eastwood A, Gilbody S, Duley L. The role of electronic journals in reducing publication bias. Med Inform Internet Med. 1999;24:223-229.
28. Angell M. Negative studies. N Engl J Med. 1989;321:464-466.
29. Iverson C, Flanagin A, Fontanarosa PB, et al. American Medical Association Manual of Style. 9th ed. Baltimore, Md: Williams & Wilkins; 1998.
30. Abbot NC, Ernst E. Publication bias: direction of outcome less important than scientific quality. Perfusion. 1998;11:182-184.
31. Petticrew M, Gilbody S, Song F. Lost information? The fate of papers presented at the 40th Society for Social Medicine Conference. J Epidemiol Community Health. 1999;53:442-443.
32. Ioannidis JP, Cappelleri JC, Sacks HS, Lau J. The relationship between study design, results, and reporting of randomized clinical trials of HIV infection. Control Clin Trials. 1997;18:431-444.
33. Agresti A. An Introduction to Categorical Data Analysis. New York, NY: Wiley & Sons; 1996.