Author Affiliations: VA Outcomes Group, Department of Veterans Affairs Medical Center, White River Junction, Vt (Drs Schwartz and Woloshin, and Ms Baczek); Center for the Evaluative Clinical Sciences, Dartmouth Medical School, Hanover, NH (Drs Schwartz and Woloshin); and Norris Cotton Cancer Center, Lebanon, NH (Drs Schwartz and Woloshin).
Context Although they are preliminary and have undergone only limited peer review,
research abstracts at scientific meetings may receive prominent attention
in the news media. We sought to describe news coverage of abstracts, characterize
the research, and determine subsequent full publication in the medical literature.
Methods We searched Lexis-Nexis to identify news stories printed in the 2 months
following 5 scientific meetings held in 1998 (12th World AIDS Conference,
American Heart Association, Society for Neuroscience, American Society of
Clinical Oncology, and the Radiological Society of North America). We searched
MEDLINE and contacted authors to determine subsequent publication in the medical
literature within 3-3.5 years of the meetings.
Results A total of 252 news stories reported on 147 research abstracts (average,
50 news stories per meeting); 16% of the covered abstracts were nonhuman studies, 24% randomized
trials, and 59% observational studies. Twenty-one percent of the human studies
were small (ie, involving <30 subjects). In the 3 years after the meetings,
50% of the abstracts were published in high-impact journals (based on Institute
for Scientific Information ratings), 25% in low-impact journals, and 25% remained
unpublished. The publication record of the 39 abstracts receiving front-page
newspaper coverage was almost identical to the overall rate. Meeting organizers
issued press releases for 43 abstracts; these were somewhat more likely to
receive prominent news coverage (35% covered on front page vs 23%, P = .14), but were no more likely to be published.
Conclusions Abstracts at scientific meetings receive substantial attention in the
high-profile media. A substantial number of the studies remain unpublished,
precluding evaluation in the scientific community.
"I'm pretty well plugged in to what's going
on in research," he remarked. "I hear on the news ‘Major breakthrough
in cancer!' And I think, Gee, I haven't heard anything major recently. Then
I listen to the broadcast and realize that I've never heard of this breakthrough.
And then I never hear of it again."—Dr Richard Klausner, Former
Director, National Cancer Institute1
The press translates medical research into news. How the press chooses
stories to cover and how they present the findings are important, since the
media can have a powerful influence on public perceptions. Research abstracts
presented at scientific meetings often receive prominent media attention.
It is easy to understand why. The general public has a strong desire to know
about the latest developments in science and medicine, and the meetings hold
the promise of dramatic stories about new cures, discoveries, and breakthroughs.
Press coverage also may be attractive to the sponsors of the meetings, the
scientists, their institutions, and funding agencies2,3;
such coverage generates publicity that may help the organizations raise funds,
and may help the scientists advance in academia.
But it is also easy to understand why media coverage of scientific meetings
could be a problem.4,5 Scientific
meetings are intended to provide a forum for researchers to present new work
to colleagues; the work presented may be preliminary and may have undergone
only limited peer review. Frequently, the presentations represent work in
progress. Unfortunately, many projects fail to live up to their early promise;
in some cases, fatal flaws emerge. Press coverage at this early stage may
leave the public with the false impression that the data are in fact mature,
the methods valid, and the findings widely accepted. As a consequence, patients
may experience undue hope or anxiety or may seek unproved, useless, or even
dangerous tests and treatments.
While others have reported on the fate of scientific meeting abstracts6,7—approximately half are ultimately
published7—we know of no attempts to
follow those abstracts highlighted in the news and therefore most likely to
influence the public. Herein, we report on the media coverage of abstracts
at high-profile scientific meetings, describe the research, and determine
whether these abstracts are subsequently published as full reports in the medical literature.
To identify high-profile meetings that attract substantial media attention,
we sought advice from journalists who actually cover the meetings. We consulted
the medical editor of the Associated Press (AP), who provided us with a list
of all meetings routinely covered by the wire service. In addition, we posted
a request on the National Association of Science Writers listserv for the
most important science meetings in terms of press coverage and were directed
to their posted "list of great meetings" (ie, important meetings to cover).
All the "great meetings" were included on the list provided by the AP wire service.
Our goal was to identify the highest-profile medical meetings. To identify
the meetings receiving the most media attention within each medical topic,
we performed Lexis-Nexis searches on all 1998 meetings nominated by the AP
medical editor and the science writers. Because we were interested in coverage
of diverse medical topics, we considered only 1 meeting per medical specialty
(eg, only 1 cardiology meeting). Based on the search, we selected the meetings
held in 1998 with the highest number of citations: 12th International Conference
on acquired immunodeficiency syndrome (AIDS) (World AIDS Conference), American
Heart Association, American Society of Clinical Oncology, Radiological Society
of North America, and the Society for Neuroscience. We contacted each organization
to learn each meeting's process for scientific review and publicity (Table 1).
To identify media coverage of abstracts at these 5 meetings, we searched
the general news database of Lexis-Nexis for stories appearing 2 months after
each meeting. Our search strategy used the "more options" form, which conducts
full-text searches for combinations of phrases: (name of meeting) w/10 (scientific session$ OR conference
OR meeting$). The "w/10" feature identifies news
stories in which "scientific session," "conference," or "meeting" appears
within 10 words before or after the name of each meeting. For the AIDS meeting,
we searched for 12th International Conference on AIDS
OR 12th World AIDS Conference. We included only those
news stories reporting a specific research finding (n = 252; 117 not reporting
findings were excluded).
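The proximity logic of the "w/10" connector can be illustrated with a simple token-window check. This is a simplified sketch, not the actual Lexis-Nexis implementation; the tokenizer, the single-word keywords (standing in for phrases such as "scientific session$"), and the prefix matching used to mimic the "$" wildcard are all assumptions made for illustration:

```python
import re

def words(text):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z0-9$']+", text.lower())

def within_n_words(text, meeting, keywords, n=10):
    """Return True if any keyword occurs within n words before or
    after the meeting name, mimicking the Lexis-Nexis w/10 operator.
    Keywords are matched as prefixes to approximate the $ wildcard."""
    toks = words(text)
    mtoks = words(meeting)
    # positions where the full meeting phrase starts
    starts = [i for i in range(len(toks) - len(mtoks) + 1)
              if toks[i:i + len(mtoks)] == mtoks]
    hits = [i for i, t in enumerate(toks)
            if any(t.startswith(k) for k in keywords)]
    # a hit counts if it falls within n words of either end of the phrase
    return any(abs(h - s) <= n or abs(h - (s + len(mtoks) - 1)) <= n
               for s in starts for h in hits)

story = ("Researchers at the American Heart Association scientific "
         "sessions reported new findings on heart disease.")
print(within_n_words(story, "American Heart Association",
                     ["scientific", "conference", "meeting"]))  # True
```

A story that mentions the organization without any meeting-related word nearby (e.g., coverage of a guideline release) would return False, which is how the search strategy screens out stories unrelated to the meetings themselves.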
To match each story to an abstract, we searched the meeting proceedings
(compendium of abstracts, schedules of presentations) for the author (if quoted),
as well as the topic or specifics of the study. To characterize the research
designs, we recorded the following from each abstract: study design (randomized
trial, meta-analysis, observational study); participants (human subjects,
animals, lab specimens); and number of participants (for human studies).
In February 2002, we performed MEDLINE searches to identify subsequent
publication of abstracts (3-3.5 years elapsed after each meeting). About 95%
of meeting abstracts that are ever published are published within 3 years
of the time of the meeting presentation.7 We
searched MEDLINE and PREMEDLINE for all abstract authors. If we did not identify
a MEDLINE publication, we directly contacted abstract authors to determine
whether a manuscript was ever submitted and to learn its publication status.
We received responses from all but 6 authors reporting on the status of a
relevant manuscript. We considered abstracts to be published only if we could
match the abstract and article in terms of the study objective, design, and
approximate number of patients.
We used publication as a proxy (admittedly imperfect) for the validity
and importance of the work. Our hypothesis was that any medical research of
sufficient importance for the general media would also appear in the medical
literature. Given the variable quality of journals, we further categorized
publication based on the Institute for Scientific Information impact factor.8 We classified journals as "high impact" if their impact
factor was among the top 10 in the general medicine, research and experimental
medicine, or relevant specialty topic lists. While not a universally accepted
metric, many feel the impact factor reflects a journal's value to the scientific community.
We used the χ2 test to compare differences in proportions,
and used STATA v7.0 (Stata Corp, College Station, Tex) for analyses; α
was set at .05.
Table 1 summarizes the review
process for selecting submitted abstracts for presentation at each meeting,
and the publicity process. The Society for Neuroscience had no explicit scientific
review process; consequently, all 15 000 submitted abstracts were accepted
for presentation. Abstract submissions to the other 4 meetings underwent scientific
review either by an individual or a committee ranking procedure; acceptance
rates varied from 25% (American Heart Association meeting) to 90% (World AIDS
Conference). Organizers at each meeting actively sought press coverage, generally
by issuing pre-meeting press releases, conducting media briefings before and
during the meeting, distributing press packets at the meeting, and arranging
for interviews with selected authors.
How Much Media Coverage Did Meeting Abstracts Receive?
We found a total of 252 news stories reporting on 147 abstracts within
2 months of each meeting—an average of 50 news stories per meeting (Table 2). Some stories reported on multiple
presentations; several reported on the same presentations. The AIDS meeting
received the most coverage (84 stories), followed by those of the American
Heart Association (65), the Society for Neuroscience (42), the American Society
of Clinical Oncology (39), and the Radiological Society of North America (22).
With the exception of the Wall Street Journal (where
only 1 story appeared), 9 or more stories about meeting abstracts appeared
in each of the other 4 of the nation's 5 highest-circulation10
newspapers (USA Today, New York Times, Los Angeles Times, Washington Post).
Thirty-nine of the 147 (27%) abstracts received front-page (ie, page
1) coverage in at least 1 newspaper. The distribution of front-page coverage
varied across the meetings, with ASCO (50%) and AHA (35%) presentations most
likely to receive front-page coverage (P = .01).
What Kinds of Studies Received Media Coverage?
Sixteen percent of the covered abstracts were nonhuman (ie, animal or
laboratory) studies, 24% were randomized trials, and 59% were observational
studies. Twenty-one percent of the human studies were small (ie, involving <30 subjects).
How Many Abstracts Were Subsequently Published?
In the 3 to 3.5 years after the meetings, 50% of the 147 abstracts were
published in high-impact journals, while 25% were published in low-impact
journals, and 25% remained unpublished (Figure
1). Of the 37 unpublished abstracts, 25 were never submitted as
full manuscripts, 3 manuscripts were rejected, and 3 manuscripts were currently
under review (6 authors did not respond to our inquiries). Publication in
high-impact journals varied by meeting, ranging from 27% for the Society for
Neuroscience to 64% for the American Society of Clinical Oncology (P = .01).
The publication record for the 39 presentations that received prominent
(ie, front page) newspaper coverage was almost identical to the overall publication
rate. Meeting organizers issued official press releases for 43 abstracts.
These abstracts were somewhat more likely to receive front-page coverage (35%
vs 23%, P = .14), but were slightly less likely to
be published at all (67% vs 78%, P = .18) or in high-impact
journals (42% vs 53%; P = .36).
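The front-page comparison above can be reproduced with a standard Pearson χ2 test (no continuity correction). The cell counts below are reconstructed from the reported percentages (35% of the 43 press-release abstracts, ie, about 15, vs 23% of the remaining 104, ie, about 24, receiving front-page coverage); they are an approximation for illustration, not counts reported in the article:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]], with its P value (1 df)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # survival function of the chi-square distribution with 1 df
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Rows: press release issued (yes/no); columns: front-page coverage (yes/no).
chi2, p = chi2_2x2(15, 28, 24, 80)
print(f"chi2 = {chi2:.2f}, P = {p:.2f}")  # P close to the reported .14
```

With these reconstructed counts the test yields P of about .14, matching the reported value and illustrating why the difference, while suggestive, does not reach the prespecified α of .05.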
We found that research abstracts presented at prominent scientific meetings
often receive substantial attention in the news media. This prepublication
dissemination of medical research often brings findings to the public before
the validity and importance of the work has been established in the scientific
community.4,11 Adding to this
concern, many of the abstracts receiving media attention have weak designs,
are small, or are based on animal or laboratory studies; 25% remained unpublished
more than 3 years after the meeting. Interestingly, presentations that receive
front-page coverage are no more likely to be published than abstracts receiving
less prominent coverage.
These findings should be interpreted in light of 2 limitations. First,
we did not examine the extent to which the public pays attention to or is
influenced by the news coverage. While there is evidence that the premature
dissemination of research can affect both patient and physician behavior,4,11 the impact of the news coverage we
have highlighted is unknown. Second, subsequent publication is an imperfect
way to measure the scientific quality of the meeting abstracts receiving press
coverage. Some research may not be published because it is never submitted
(true for two thirds of the unpublished abstracts in our study), which may
reflect lack of time or concerns about validity.6,7
Poor-quality editorial or peer review may result in rejection of papers that
should have been published or in publication of papers that should have been
rejected. Finally, the proliferation of new medical journals may be diluting
the meaning of publication. We used journal impact factor as a proxy for journal
quality; a third of the meeting abstracts that were eventually published in
our study appeared in low-impact journals.
We believe that our findings both stem from and highlight 2 competing
purposes of scientific meetings. On one hand, the meetings serve a scientific
purpose by enabling communication among researchers. In this context, it is
not only appropriate but desirable that scientists share work in progress
to get feedback and ideas for moving forward, perhaps the purest form of peer
review. On the other hand, the meetings serve a public relations purpose,
generating support for the meetings' sponsors and for the agencies funding
research, and drawing attention to individual investigators and their institutions.
The most direct way to reduce public exposure to misleading preliminary
findings is for meeting organizers to have more rigorous standards for issuing
press releases. The selection of abstracts for promotion to the media should
be based primarily on scientific merit assessed by scientists. Press releases
also should be carefully written to convey the preliminary nature of the work
and fairly depict the science; ideally they should be critically reviewed
before release. Two examples from our study illustrate problems that might
have been avoided had releases undergone a higher level of scrutiny. First,
the headline of the press release for an abstract presented at the 1998 American
Society of Clinical Oncology meeting (and reported on page 1 of the New York Times) read, "Canadian study is first to show
screening reduces prostate cancer death"; however, the study is now widely
criticized for serious methodological flaws apparent at the time of presentation
and the data are not considered evidence that screening is beneficial.12-14 Second, a press release
reporting the early results of a raloxifene trial stated in the headline that
"[raloxifene] may reduce risk of endometrial cancer in postmenopausal women";
by the time the final report was published, no such risk reduction was evident.
In addition, news organizations might also consider raising their threshold
for reporting on scientific meeting abstracts at all. If they do choose to
report on such presentations, they might make a concerted effort to emphasize
the preliminary nature of data presented, and apply the same level of skepticism
in covering these stories that they do in reporting on political matters.
In this way, the press might help readers to develop a healthy skepticism
about the breakthroughs they repeatedly encounter in the news. Scientists
presenting at meetings can also help by routinely emphasizing the limitations
of their work when interviewed by the press.
The current press coverage of scientific meetings may be characterized
as "too much, too soon." Results are frequently presented to the public as
scientifically sound evidence rather than as preliminary findings with still
uncertain validity. With some effort on the part of meeting organizers, journalists,
and scientists, it will be possible to better serve the public.
Schwartz LM, Woloshin S, Baczek L. Media Coverage of Scientific Meetings: Too Much, Too Soon? JAMA. 2002;287(21):2859–2863. doi:10.1001/jama.287.21.2859