Context.— It is not known whether peer review of research abstracts submitted
to scientific meetings influences subsequent attempts at publication.
Objective.— To determine why research submitted to a scientific meeting is not subsequently
published. We hypothesized that authors of abstracts rejected by a meeting
are less likely to pursue publication than those whose abstracts are accepted,
regardless of research quality.
Design and Participants.— Blinded review of abstracts submitted to a medical specialty meeting
in 1991 and not published as full manuscripts within 5 years. In 1996, authors
of 266 unpublished studies were asked to complete questionnaires.
Main Outcome Measures.— Submission of a full manuscript to a journal between 1991 and 1996;
failure to submit a manuscript to a journal because the investigator believed
it would not be accepted for publication.
Results.— Questionnaires were returned for 223 (84%) of the unpublished studies.
Only 44 (20%) had submitted manuscripts to a journal. Manuscript submission
was not associated with abstract quality (odds ratio [OR], 1.16; 95% confidence
interval [CI], 0.80-1.64), positive results (OR, 0.75; 95% CI, 0.31-1.57),
or other study characteristics. Having an abstract accepted for presentation
at the meeting weakly predicted submission of a manuscript to a journal (OR,
1.88; 95% CI, 0.84-4.10). Authors of accepted abstracts were significantly
less likely to believe a journal would not publish their manuscript than were
authors of rejected abstracts (OR, 0.23; 95% CI, 0.0001-0.61).
Conclusions.— Study characteristics do not predict attempts to publish research submitted
to a scientific meeting. Investigators whose research is rejected by a meeting
are pessimistic about chances for publication and may make less effort to
publish.
Underreporting of research is a well-recognized problem with serious
implications for clinical practice.1 Most unpublished
research is never submitted to a journal for consideration.2-6
Previous investigations suggest researchers are more likely to attempt to
publish studies with positive outcomes (publication bias).2-4
Much of the research submitted to scientific meetings is never published,5-12
yet it is not known whether acceptance or rejection of the abstract for a
meeting influences subsequent efforts to publish a full manuscript.
The purpose of this study was to determine what factors influence an
investigator's attempts to publish research that was submitted to a scientific
meeting. We hypothesized that authors of abstracts rejected by the meeting
were less likely to pursue publication than those whose abstracts had been
accepted, regardless of the study quality.
Methods
We obtained copies of all abstracts submitted to the 21st annual meeting
of the Society for Academic Emergency Medicine (SAEM) held in 1991. In November
1995 and again in March 1996, we searched MEDLINE, EMBASE, and the Cochrane
Collaboration to identify which of these abstracts had been subsequently published
in a peer-reviewed journal. A peer-reviewed publication was defined as a full-length
manuscript (not letters or editorials) appearing in a journal that uses a
process of external review. Multiple search strategies were used, beginning
with the names of several authors and then, if necessary, combinations of
author, title, and keywords. When needed to confirm the identity of a
publication, the online abstract was compared with the original abstract
submitted to the SAEM meeting in 1991.
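As a rough illustration of such a tiered strategy, the sketch below matches a submitted abstract against database records by authors alone, then by progressively narrower combinations. All field names and matching logic are hypothetical; the actual queries were run interactively against MEDLINE and EMBASE.

```python
# Minimal sketch of a tiered literature search (hypothetical fields;
# the actual MEDLINE/EMBASE queries were run interactively).

def find_candidates(abstract, records, max_hits=20):
    """Try broad author matching first; refine with title words and
    keywords when the broad search is unhelpful."""
    by_author = [r for r in records
                 if set(abstract["authors"]) & set(r["authors"])]
    if 0 < len(by_author) <= max_hits:
        return by_author  # small enough to compare by hand
    # Refine: also require overlapping title words
    refined = [r for r in by_author
               if abstract["title_words"] & r["title_words"]]
    if refined:
        return refined
    # Fall back to keyword overlap across all records
    return [r for r in records if abstract["keywords"] & r["keywords"]]
```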
In October 1996, a questionnaire6,13
was mailed to the first author of each abstract for which no publication was
found in the search. The questionnaire asked whether the study had been published
and, if so, requested a citation. For unpublished research, authors were asked
whether they had submitted a manuscript to a journal and, if not, to select
a reason for not doing so. Investigators who did not respond within 3 months
were sent another copy of the questionnaire. After 2 attempts, the questionnaire
was sent to another author of the abstract, with a repeat mailing after 3
months if necessary.
Research was considered unpublished if an article was not found in the
search and either the responding author confirmed that the study was not published
or no questionnaire was returned. Published studies, whether identified through
the database search or the questionnaire, were excluded from further analysis.
Unpublished abstracts were randomly assigned to 2 of the investigators
for classification of study characteristics.9
Reviewers were blinded with respect to author, submitting institution, and
whether the abstract had been accepted for presentation at SAEM. Disagreements
regarding study characteristics were resolved by a third investigator (M.L.C.).
Reviewers assigned global ratings for scientific quality (overall scientific
solidity) on a 5-point Likert scale and originality on a 3-point scale.14,15 The scores of the 2 reviewers were
averaged. The intraclass correlation for scientific quality was 0.44 (95%
confidence interval [CI], 0.37-0.51) and for originality was 0.29 (95% CI,
0.21-0.37).
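For concreteness, a one-way random-effects ICC for 2 raters can be computed from the ANOVA mean squares as sketched below. The paper does not state which ICC form or software was used, so this form is an illustrative assumption.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects x n_raters)
    array of ratings; a sketch, assuming this ICC form was used."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)
    # Between-subject and within-subject mean squares from one-way ANOVA
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# e.g., quality ratings from the 2 blinded reviewers, one row per abstract:
# icc_oneway([[3, 2], [4, 4], [2, 3]])
```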
The main outcomes were whether or not a manuscript was submitted to
a journal (pursuit of publication) and whether or not authors who failed to
submit a manuscript stated that a journal would be unlikely to accept it for
publication (pessimism). Potential predictors were assessed by fitting a separate
logistic regression model to each of these outcomes. The predictors were whether
the abstract had been accepted or rejected for presentation at the meeting,
quality and originality scores, whether or not the study was randomized, sample
size, presence or absence of positive results, and submitting institution's
ordinal ranking in federal grant dollars.16
The presence or absence of positive results was noted for studies with a hypothesis
and control group. Results were considered positive if the author reported
that the intervention was more effective than the control2,6,13,17;
statistical significance was not required. The statistical analysis was done
using S-Plus, Version 3.3 (MathSoft Inc, Seattle, Wash).
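The original analysis was done in S-Plus and its code is not given in the paper; a present-day sketch of an equivalent logistic regression in Python, with hypothetical file and variable names, might look like this. A second model of the same form would be fit to the pessimism outcome among nonsubmitting authors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; one row per unpublished study.
df = pd.read_csv("unpublished_studies.csv")

# Outcome: manuscript submitted to a journal (1) or not (0).
# Predictors mirror those listed above.
model = smf.logit(
    "submitted ~ accepted + quality + originality + randomized"
    " + sample_size + positive_results + grant_rank",
    data=df,
).fit()

# Express coefficients as odds ratios with 95% confidence intervals.
or_table = np.exp(model.conf_int())
or_table.columns = ["ci_low", "ci_high"]
or_table["OR"] = np.exp(model.params)
print(or_table)
```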
Results
Of the 492 studies submitted to SAEM in 1991, 266 (54%) were never published
(Figure 1). Investigators completed
questionnaires for 223 (84%) of the unpublished studies. Response rate did
not differ for authors of abstracts rejected by the meeting and those accepted
(P>.99). Abstracts of respondents and nonrespondents
were similar in quality score (2.48 vs 2.33, P =.33)
and originality (1.58 vs 1.56, P =.79).
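The comparisons behind these P values can be reproduced in outline with standard tests; the counts and score vectors below are placeholders, since the paper reports only the summary statistics.

```python
from scipy import stats

# Hypothetical 2x2 table: rows = abstract accepted/rejected at the
# meeting, columns = questionnaire returned/not returned.
table = [[120, 22],
         [103, 21]]
_, p_response = stats.fisher_exact(table)

# Quality scores of respondents vs nonrespondents, compared with a
# two-sample t test (placeholder values).
quality_resp = [2.5, 3.0, 2.0, 2.5, 2.5]
quality_nonresp = [2.0, 2.5, 2.5, 2.0, 2.5]
t_stat, p_quality = stats.ttest_ind(quality_resp, quality_nonresp)

print(f"P(response) = {p_response:.2f}, P(quality) = {p_quality:.2f}")
```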
A full manuscript had been submitted to a journal for 44 (20%) of the
223 unpublished studies, with a mean of 1.67 (SD, 0.95) submissions per study.
No association was found between manuscript submission and study characteristics
(Table 1). There was a trend suggesting
that investigators whose abstracts had been accepted for presentation at the
SAEM meeting were more likely to submit full manuscripts to a journal than
those whose abstracts had been rejected by the meeting (odds ratio [OR], 1.88;
95% CI, 0.84-4.1).
Table 1.—Relationship of Meeting Decision and Study Characteristics to Submission of a Manuscript to a Journal*
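To unpack the odds ratio: with a crude (unadjusted) 2x2 table of hypothetical counts consistent with the reported totals (44 submitters among 223 studies; the accepted/rejected split is not given), an OR and Wald 95% CI are computed as below. A CI that spans 1.0, as reported here, marks the association as a trend rather than a statistically significant effect.

```python
import math

# Hypothetical counts consistent with the reported totals (44 of 223
# studies submitted); the paper reports only the adjusted OR and CI.
#                 submitted   not submitted
a, b = 32, 108    # abstract accepted at the meeting
c, d = 12, 71     # abstract rejected at the meeting

odds_ratio = (a * d) / (b * c)                 # crude OR
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# The CI includes 1.0, so this crude association is not statistically
# significant at the 5% level.
```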
Reasons for Failure to Publish
Among the 179 investigators who never submitted a full manuscript to
a journal, the most common reason selected was lack of time (Table 2). Only 7 investigators said they did not submit a manuscript
because the statistical analysis was not positive, even though 43 controlled
studies had negative results.
Table 2.—Reasons for Failure to Submit a Manuscript to a Journal*
Two variables predicted whether or not an investigator selected the
response "thought journals unlikely to accept." Authors of abstracts that
had been accepted for the meeting chose this option significantly less frequently
than those whose abstracts had been rejected (OR, 0.23; 95% CI, 0.0001-0.61).
Authors from institutions ranking higher in federal grant dollars chose the
response "journals unlikely to accept" (OR, 1.71; 95% CI, 1.2-2.9) more frequently
than those from lower-ranking institutions. Study quality, originality, design,
sample size, and the presence of a positive outcome did not predict whether
or not an investigator chose this response.
Comment
Why are the results of many studies never published? Our findings confirm
prior reports that most unpublished research is never submitted to a journal
for review.2-6
Only 20% of the unpublished studies originally submitted to the SAEM meeting
were later submitted as a full manuscript to a journal. Moreover, investigators
were easily dissuaded, submitting a manuscript, on average, to fewer than
2 journals before giving up.
To our knowledge, the current study is the first to investigate the
relationship among study characteristics, meeting decision, and an author's
efforts to publish research submitted to a scientific meeting. Authors whose
abstracts were rejected from the meeting were significantly more pessimistic
about the chances of publication, and there was also a trend suggesting that
authors of rejected abstracts were less likely to pursue full publication.
There was no association between publication efforts and study quality, originality,
sample size, design, or results.
Unlike previous studies, we found no evidence of publication bias among
our investigators. This is most likely because of the difference in study
populations.2-4
Previous investigations focused on fully funded research projects from a single
institution or fully funded randomized controlled trials, and are therefore
representative of only a minority of unpublished research. The studies in
our analysis came from 144 different institutions and included many projects
that were not funded. Additionally, our population of researchers had all
undergone the review process for a scientific meeting.
The 84% response rate for our questionnaire equals or surpasses that
of other studies of unpublished research.2-6,13
In addition, abstracts of respondents and nonrespondents were similar in quality,
originality, and acceptance by SAEM. We identified with certainty the publication
fate of 92% of all submitted abstracts. The SAEM meeting is comparable with
the meetings of 31 other specialty societies with regard to attendance, number
of abstracts submitted, and subsequent publication rate.18
The 5-year interval from submission may have influenced our ratings and the
authors' responses, but was necessary to allow ample time for publication.5-7,9
For research submitted to scientific meetings, subsequent publication
efforts are not predicted by the specifics of the research, but may be affected
by the meeting's decision to accept or reject the abstract. Investigators
appear to be easily discouraged by rejection at both the meeting and journal
stages of publication. Because failure to publish completed research affects
medical practice, it is imperative that specialty societies understand the
potential impact of their decisions and make additional efforts to encourage
all investigators, not just those whose abstracts are accepted for presentation,
to pursue full publication.
References
1. Chalmers I. Underreporting research is scientific misconduct. JAMA. 1990;263:1405-1408.
2. Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials [serial online]. April 28, 1993; doc 50.
3. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337:867-872.
4. Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H Jr. Publication bias and clinical trials. Control Clin Trials. 1987;8:343-353.
5. De Bellefeuille C, Morrison CA, Tannock IF. The fate of abstracts submitted to a cancer meeting: factors which influence presentation and subsequent publication. Ann Oncol. 1992;3:187-191.
6. Scherer RW, Dickersin K, Langenberg P. Full publication of results initially presented in abstracts: a meta-analysis. JAMA. 1994;272:158-162.
7. Goldman L, Loscalzo A. Fate of cardiology research originally published in abstract form. N Engl J Med. 1980;303:255-259.
8. McCormick M, Holmes J. Publication of research presented at the pediatric meetings. AJDC. 1985;139:122-126.
9. Callaham ML, Wears RL, Weber EJ, Barton C, Young G. Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA. 1998;280:254-257.
10. Gorman RL, Oderda GM. Publication of presented abstracts at annual scientific meetings: a measure of quality? Vet Hum Toxicol. 1990;32:470-472.
11. Yentis SM, Campbell FA, Lerman J. Publication of abstracts presented at anaesthesia meetings. Can J Anaesth. 1993;40:632-634.
12. Juzych MS, Shin DH, Coffey J, Juzych L, Shin D. Whatever happened to abstracts from different sections of the Association for Research in Vision and Ophthalmology? Invest Ophthalmol Vis Sci. 1993;34:1879-1882.
13. Dickersin K, Min YI, Meinert CL. Factors influencing publication of research results: follow-up of applications submitted to two institutional review boards. JAMA. 1992;267:374-378.
14. Oxman AD, Guyatt GH, Singer J, et al. Agreement among reviewers of review articles. J Clin Epidemiol. 1991;44:91-98.
15. Nylenna M, Riis P, Karlsson Y. Multiple blinded reviews of the same two manuscripts: effects of referee characteristics and publication language. JAMA. 1994;272:149-151.
16. Gallagher E, Goldfrank L, Anderson G, et al. Current status of academic emergency medicine within academic medicine in the United States. Acad Emerg Med. 1994;1:41-46.
17. Chalmers I, Adams M, Dickersin K, et al. A cohort study of summary reports of controlled trials. JAMA. 1990;263:1401-1405.
18. Wuerz RE, Holliman CJ. Attendance and research abstract activity at the 1993 annual meetings of the academic medical societies [abstract]. Acad Emerg Med. 1994;1:A59.