Context To compare the quality, presentation, readability, and clinical relevance
of review articles published in peer-reviewed and "throwaway" journals.
Methods We reviewed articles that focused on the diagnosis or treatment of a
medical condition published between January 1 and December 31, 1998, in the
5 leading peer-reviewed general medical journals and high-circulation throwaway
journals. Reviewers independently assessed the methodologic and reporting
quality, and evaluated each article's presentation and readability. Clinical
relevance was evaluated independently by 6 physicians.
Results Of the 394 articles in our sample, 16 (4.1%) were peer-reviewed systematic
reviews, 135 (34.3%) were peer-reviewed nonsystematic reviews, and 243 (61.7%)
were nonsystematic reviews published in throwaway journals. The mean (SD)
quality scores were highest for peer-reviewed articles (0.94 [0.09] for systematic
reviews and 0.30 [0.19] for nonsystematic reviews) compared with throwaway
journal articles (0.23 [0.03], F2,391 = 280.8, P<.001). Throwaway journal articles used more tables (P = .02), figures (P = .01), photographs (P<.001), color (P<.001),
and larger font sizes (P<.001) compared with peer-reviewed
articles. Readability scores were more often in the college or higher range for peer-reviewed journal articles than for throwaway journal articles (104 [77.0%] vs 156 [64.2%]; P = .01). Peer-reviewed article
titles were judged less relevant to clinical practice than throwaway journal
article titles (P<.001).
Conclusions Although lower in methodologic and reporting quality, review articles
published in throwaway journals have characteristics that appeal to physician
readers.
Throwaway" journals are characterized as journals that contain no original
investigations, are provided free of charge, have a high advertisement-to-text
ratio, and are nonsociety publications.1 Large
circulations1 and readership polls2 suggest that throwaway journals are more widely read
than some peer-reviewed journals in the same subject areas. Despite their
popularity, throwaway journals are judged disparagingly as a source of "instant
cookbook medicine"3 and journals that are given
away.4 Indeed, throwaway journal articles1 are seldom peer reviewed and are almost never cited
in the medical literature. They are considered to be of poor quality compared
with peer-reviewed journal articles, despite the lack of formal quality comparisons.1 Given the success of throwaway publications, we sought
to understand why so many physicians read them. We assessed the quality, presentation,
readability, and clinical relevance of review articles published in a sample
of peer-reviewed journals compared with those published in a sample of throwaway
journals.
We identified all review articles that focused on the diagnosis or treatment
of medical conditions published in 5 leading peer-reviewed general medical
journals (Annals of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine) and
the throwaway journals with the highest circulation5
(Consultant, Hospital Practice, Patient Care, and Postgraduate Medicine) between January 1 and December 31,
1998. A 3-stage process was used to identify clinically relevant review articles
for inclusion in our sample. First, we identified sections of each peer-reviewed
journal that published review articles. Throwaway journals have no designated
review sections; therefore, we identified sections most likely to contain
review articles. Second, we excluded all review article subsections where
the primary focus was not on the clinical diagnosis or treatment of a specific
medical condition or sections that published only case studies. Third, two authors
(A.M.B. and P.A.R.) excluded 68 peer-reviewed and 72 throwaway journal articles
that did not meet our inclusion criteria. Our cohort included 394 review articles.
Each article was classified as either a systematic or a nonsystematic
review. To identify systematic reviews, we used an approach based on the comprehensive
search strategy outlined by Hunt and McKibbon.6
Two trained reviewers (J.L.G. and Y.C.K.) independently evaluated methodologic
and reporting quality using the Barnes and Bero7
quality scoring assessment tool. This instrument is a modification of the
Oxman et al8,9 and Mulrow10 instruments. The quality score was based on 12 questions
that evaluated the purpose of the review, review strategy, inclusion and exclusion
criteria, quality assessment, combining of study results, summarizing of study
findings, limitations, and support provided for conclusions. Each question
was scored as 0 (no), 1 (partial), or 2 (yes). The final score was expressed as the proportion of the maximum possible points (range, 0-1), with higher scores indicating better quality. As an additional measure of quality, we counted the number of references cited.
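The scoring arithmetic is a simple proportion of points earned. A minimal sketch follows (in Python; the item responses are hypothetical, since the 12 questions are not reproduced here):

    # Quality score as described above: 12 items, each scored 0 (no),
    # 1 (partial), or 2 (yes); the result is the proportion of the
    # 24 possible points, so higher values indicate better quality.
    def quality_score(item_scores):
        assert len(item_scores) == 12
        assert all(s in (0, 1, 2) for s in item_scores)
        return sum(item_scores) / (2 * len(item_scores))

    # A hypothetical article scoring 2 on 11 items and 1 on the twelfth
    # earns 23/24 = 0.96, near the 0.94 mean reported for systematic reviews.
    print(quality_score([2] * 11 + [1]))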
Presentation was evaluated using the article's font size (ie, small
or large), use of color, and numbers of tables and figures. To quantify readability,
we used 2 validated readability formulas11:
the Flesch reading ease index12 and the Gunning
Frequency of Gobbledygook (FOG) index.13 Scores
were based on sentence and word length. The Flesch index generates scores
from 0 to 100 (higher scores indicate easier reading); a score of 30 or lower
was associated with a college-level reading ability. The Gunning FOG index
scores also reflect reading difficulty (lower scores indicate easier reading);
a score of 17 or more was considered too difficult for medical writing.11
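Neither formula is reproduced in the article; their commonly stated published forms, sketched below in Python, reduce to counts of sentences, words, and syllables (for the Gunning FOG index, "complex" words are those with 3 or more syllables):

    # Flesch reading ease (Flesch, 1948) and Gunning FOG (Gunning, 1952)
    # in their standard published forms; both depend only on sentence-
    # and word-level counts.
    def flesch_reading_ease(words, sentences, syllables):
        # 0-100 scale; higher scores indicate easier reading.
        # A score of 30 or lower corresponds to college-level material.
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    def gunning_fog(words, sentences, complex_words):
        # Lower scores indicate easier reading; a score of 17 or more
        # was considered too difficult for medical writing.
        return 0.4 * ((words / sentences) + 100.0 * (complex_words / words))

    # Hypothetical passage: 250 words, 10 sentences, 450 syllables,
    # 55 words of 3 or more syllables.
    print(flesch_reading_ease(250, 10, 450))  # about 29.2 (college level)
    print(gunning_fog(250, 10, 55))           # 18.8 (too difficult)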
Six physicians who were recent graduates in full-time clinical practice
(see "Review Article Study Group") independently rated the clinical relevance
of all 394 articles in 2 ways. First, physicians blinded to the journal name
read a computer-generated random list of all article titles and indicated
their agreement (1 = strongly disagree; 5 = strongly agree) to 2 statements:
(1) this article may provide useful information for my practice, and (2) I
would consider reading this article. Second, the reviewers evaluated the clinical
relevance of all 30 heart disease articles (heart disease was one of the most
frequent topics). The physicians independently read each article and used
the scale to respond to the following statements: (1) the article addresses
an important issue; (2) the topic is of interest to me; (3) the topic is relevant
to my practice; (4) the article provides practical strategies for physicians
such as myself; and (5) I will use the information to help care for patients.
The reviewers also evaluated the tables in these articles.
The quality scores obtained by the 2 reviewers were very consistent;
hence, the quality score assigned was the mean score. To evaluate the clinical
relevance of all of the 394 review article titles and the subset of the 30
heart disease articles, we calculated the mean score obtained from the 6 physician
reviewers. Differences in continuous variables among the 3 types of articles
(ie, peer-reviewed systematic review, peer-reviewed nonsystematic review,
and nonsystematic review articles published in the throwaway journals) were
compared using analysis of variance. We used χ2 tests to assess
differences in categorical variables. Analyses were performed using SPSS version 10 (SPSS Inc, Chicago, Ill), and P<.05 was considered significant for all tests.
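Although the analyses were run in SPSS, the same tests are straightforward to reproduce; the sketch below (in Python with scipy, using fabricated scores in place of the study data) mirrors the analysis plan:

    # Illustration of the analysis plan (fabricated quality scores; the
    # study itself used SPSS version 10).
    import numpy as np
    from scipy import stats

    # Mean quality score per article, one array per article type.
    systematic   = np.array([0.95, 0.90, 0.88, 0.99])
    nonsys_peer  = np.array([0.35, 0.25, 0.40, 0.20])
    nonsys_throw = np.array([0.22, 0.24, 0.21, 0.25])

    # One-way analysis of variance across the 3 article types.
    f_stat, p_value = stats.f_oneway(systematic, nonsys_peer, nonsys_throw)

    # Chi-square test for a categorical variable, here color use (yes/no)
    # among the 378 nonsystematic reviews, using the counts reported
    # later in the article.
    color_table = np.array([[228, 15],    # throwaway: color, no color
                            [77,  58]])   # peer-reviewed: color, no color
    chi2, p, dof, expected = stats.chi2_contingency(color_table)
    print(f_stat, p_value, chi2, p)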
Of the 394 articles in our sample, 16 (4.1%) were classified as peer-reviewed
journal systematic review articles, 135 (34.3%) as peer-reviewed journal nonsystematic
review articles, and 243 (61.7%) as throwaway journal review articles. Most
peer-reviewed articles (n = 126, 83.4%) were classified by MEDLINE as tutorial
reviews. Systematic reviews were published exclusively in the peer-reviewed
journals.
Quality scores were highest for the 16 systematic review articles. The
mean (SD) quality score was 0.94 (0.09) for the peer-reviewed systematic review
articles compared with 0.30 (0.19) for the peer-reviewed nonsystematic review
articles and 0.23 (0.03) for nonsystematic review articles published in throwaway
journals (F2,391 = 280.8, P<.001).
Peer-reviewed journal articles provided significantly more references than
throwaway journal articles (53.6 [36.8] vs 14.4 [11.6]; P<.001).
As outlined in Table 1,
throwaway journal articles were more likely to use tables, figures, color,
and larger font size compared with review articles published in peer-reviewed
journals. Among the 378 nonsystematic review articles, 228 throwaway journal
articles (93.8%) used color compared with only 77 (57.0%) of the peer-reviewed
journal articles. All of the throwaway journal articles and none of the peer-reviewed
journal articles used a large font size. Articles published in throwaway journals
were judged to be easiest to read. Among the 378 nonsystematic review articles,
the mean Flesch score was significantly higher in throwaway journal articles
than in the peer-reviewed journal articles (23.7 [15.4] vs 15.8 [17.7]), indicating
that throwaway journal articles were easier to read (P<.001).
A greater proportion of the peer-reviewed journal articles than of the throwaway journal articles scored in the college level or higher range (104 [77.0%] vs 156 [64.2%]; P = .01). Using the Gunning FOG index,
mean (SD) scores were significantly lower in the throwaway journal articles
compared with the peer-reviewed journal articles (17.2 [2.9] vs 19.2 [3.3]),
indicating that throwaway journal articles were easier to read (P<.001). Peer-reviewed journal articles were significantly more
likely than throwaway journal articles to score in the range judged too difficult
even for medical writing (67.4% vs 53.5%; P = .009).
Table 1. Presentation and Readability of Review Articles (N = 394) Published in Peer-Reviewed and Throwaway Journals in 1998*
Peer-reviewed journal article titles were judged to be significantly
less relevant to clinical practice than throwaway journal article titles.
When the physicians reviewed the article titles and were asked whether the
article provided useful information for their clinical practice, throwaway
journal articles were rated more relevant (mean [SD], 3.89 [0.55]) compared
with peer-reviewed nonsystematic (3.50 [0.67]) or systematic (3.41 [0.73])
review articles (F2,391 = 20.7, P<.001).
Similarly, when the physicians reviewed article titles and were asked whether
they would consider reading the article, throwaway journal articles were rated
as an article they were more likely to read (3.74 [0.59]) compared with peer-reviewed
nonsystematic (3.34 [0.69]) or systematic (3.17 [0.84]) review articles (F2,391 = 20.2, P<.001).
Table 2 outlines the reviewers'
assessment of the clinical relevance of the subset of 30 heart disease management
articles. Compared with the peer-reviewed journal articles, throwaway journal
articles were judged more likely to address important issues and be a topic
of interest to the physicians. Furthermore, throwaway journal articles provided
tables that were significantly easier to understand (F2,27 = 5.5, P = .01), helped to clarify the text (F2,27
= 9.5, P = .001), and provided information relevant
to clinical practice (F2,27 = 13.5, P<.001).
Table 2. Clinical Relevance of Review Articles on Heart Disease (N = 30) Published in Peer-Reviewed and Throwaway Journals in 1998*
We found that review articles published in throwaway journals were easier
to read than review articles published in peer-reviewed medical journals.
Review articles published in throwaway journals were rated consistently better
than articles published in peer-reviewed journals on virtually all measures
of presentation, readability, and the clinical relevance of the message. As
expected, peer-reviewed journal articles were of superior methodologic and
reporting quality relative to articles published in throwaway journals. These
findings are consistent with the large body of evidence showing that peer-reviewed
medical journals produce articles of superior quality compared with those
published in non–peer-reviewed journals.14-16
The simplest way of writing is not always the best.17
Complex messages may require complex writing to convey accurate information.
Through the use of color,18,19
larger font size,18-20
and the incorporation of more graphics,19 many
peer-reviewed journals have attempted to improve the appeal of the scientific
material they publish to their readership. Despite these efforts, our findings
suggest that peer-reviewed journal articles lag behind the throwaway journal
articles in these communication techniques.
Our study has several limitations. First, review article quality scoring
instruments reward articles that are systematic reviews. Many articles in
our sample were not intended to be systematic reviews. Nonsystematic reviews
can provide valuable information. However, systematic review articles are
the only type of review that has been shown to minimize bias. Second, our
physician reviewers may not be representative of all physicians; all had a
clinical focus and were recent graduates. Third, titles may not be the best way to judge clinical relevance, but they play an important role in attracting readers' attention and in influencing the decision of whether to read an article.
A balance needs to be achieved between presenting high-quality information
and communicating the message. Throwaway journals do not serve the same markets
as peer-reviewed journals and are largely supported by advertising; therefore,
their editors may choose to publish articles for which there are enthusiastic
sponsors. In contrast, peer-reviewed journals may be more likely to tackle
difficult and sometimes less popular topics. Although lower in methodologic
and reporting quality, review articles published in throwaway journals possess
characteristics that are appealing to physician readers.
1. Rennie D, Bero LA. Throw it away, Sam: the controlled circulation journals. CBE Views. 1990;13:31-35.
2. Finklestein D. Oh, the times! tabloids and other non–peer-reviewed publications. Arch Ophthalmol. 1985;103:1641-1642.
3. Soffer A. What is a practical clinical journal? Arch Intern Med. 1980;140:1419.
4. Cook D, Meade MO, Fink MP. How to keep up with the critical care literature and avoid being buried alive. Crit Care Med. 1996;24:1757-1768.
5. The Bowker International Serials Database: Ulrich's International Periodical Directory [database online]. 36th ed. New Providence, NJ: RR Bowker; 1998.
6. Hunt DL, McKibbon KA. Locating and appraising systematic reviews. Ann Intern Med. 1997;126:532-538.
7. Barnes DE, Bero LA. Why review articles on the health effects of passive smoking reach different conclusions. JAMA. 1998;279:1566-1570.
8. Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271-1278.
9. Oxman AD, Guyatt GH, Singer J, et al. Agreement among reviewers of review articles. J Clin Epidemiol. 1991;44:91-98.
10. Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106:485-488.
11. Roberts JC, Fletcher RH, Fletcher SW. Effects of peer review and editing on the readability of articles published in Annals of Internal Medicine. JAMA. 1994;272:119-121.
12. Flesch RF. A new readability yardstick. J Appl Psychol. 1948;32:221-223.
13. Gunning R. The Technique of Clear Writing. New York, NY: McGraw-Hill International Book Co; 1952.
14. Rochon PA, Gurwitz JH, Cheung CM, et al. Evaluating the quality of articles published in journal supplements compared with the quality of those published in the parent journal. JAMA. 1994;272:108-113.
15. Barnes DE, Bero LA. Scientific quality of original research articles on environmental tobacco smoke. Tob Control. 1997;6:19-26.
16. Cho MK, Bero LA. The quality of drug studies published in symposium proceedings. Ann Intern Med. 1996;124:485-489.
17. Goodman NW. Too many words? Mozart 1, Emperor 0. JAMA. 1995;273:1087-1088.
18. Delamothe T, Smith R. Redesigning the journal: having your say. BMJ. 1996;312:232-234.
19. Flanagin A, Murphy PJ, Lundberg GD. JAMA's new look: a New Year's gift to readers. JAMA. 1999;281:85.