Context Technical editing is intended to improve the accuracy and clarity of journal articles. We examined evidence of its effects on research reports in biomedical
journals.
Methods Subset of a systematic review using Cochrane methods, searching MEDLINE, EMBASE, and other databases from their earliest entries to February 2000 with inclusive search terms and hand searching relevant journals. We selected comparative
studies of the effects of editorial processes on original research articles
between acceptance and publication in biomedical journals. Two reviewers assessed
each study and performed independent data extraction.
Results The 11 studies of technical editing indicate that it slightly improves the readability of articles (as measured by Gunning Fog and Flesch reading ease scores),
may improve other aspects of their quality, can increase the accuracy of references
and quotations, and raises the quality of abstracts. Supplying authors with
abstract preparation instructions had no discernible effect.
Conclusions Considering the time and resources devoted to technical editing, remarkably
little is known about its effects or the effects of imposing different house
styles. Studies performed at 3 journals employing relatively large numbers
of professional technical editors suggest that their editorial processes are
associated with increases in readability and quality of articles, but these
findings may not be generalizable to other journals.
Most articles in biomedical journals undergo some form of editing between
acceptance and publication. The intensity and type of editing vary
between journals, but the process often includes applying house style to references, abbreviations, and numbers; checking articles for consistency, clarity, and completeness; and correcting grammatical errors. For this review, we examined any processes applied to articles
between acceptance and publication that were designed to improve
accuracy or clarity or impose a predefined style.
We performed a systematic review using Cochrane methods,1
searching MEDLINE, EMBASE, Current Contents, and 10 other databases from earliest
entries to February 2000. We used inclusive search terms such as writing, editing, accuracy, and readability. Relevant journals (eg, Learned Publishing, Journal of Information Science, and JAMA
special issues on peer review) were hand searched. We contacted researchers
working in the field to request publications that we had missed. The full
search strategy is described in the protocol.1
The full review included any comparative studies of the effects of processes
designed to improve the quality of accepted research articles in biomedical
journals.1 Those on editorial decision making
and peer review formed the basis of a separate review.2
Although we sought evidence about biomedical journals, we did not restrict
our search to studies published in such journals. Studies of readability or
comprehension had to involve journal readers (ie, health care professionals).
We included articles in any language, although our searches for evidence about improving writing style were restricted to English. This article presents the findings
from the subset of articles that focused on editorial processes between acceptance
and publication. Details of studies on the effects of other processes, such
as providing instructions to contributors and imposing a structured abstract
format, can be found in the Cochrane review.1
Because few studies used comparable methods, we performed a descriptive
review of 11 articles (Table).
Table. Summary of Studies on Technical Editing
The study at the Dutch Medical Journal3 demonstrated significant improvements in papers between
acceptance and publication, as measured by readers using a purpose-designed
scoring system. Another study at Annals of Internal Medicine4 showed similar improvements but compared
submitted and published versions and so was unable to distinguish the effects
of peer review from technical editing.
Two studies compared readability scores and demonstrated improvements
between submission and publication.5,6
Although the improvements were statistically significant, the absolute increases
were small, and scores indicate that, even after editing, the articles remained
difficult to read. Guidelines for using the Flesch readability scale suggest
that scores of 50 to 60 are desirable for standard documents, but the articles in these studies scored around 30.14 The Gunning
Fog test, which gives lower scores for clearer writing, indicated that papers
remained difficult to read, with scores of 15 to 17, on a par with legal contracts
or corporate reports (as opposed to quality newspapers, which score about
9).15 Neither study reported which editorial interventions were thought to affect readability.
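Neither report reproduced the formulas behind these scores. For context, the standard formulations (Flesch's 1948 index and Gunning's Fog index) compute readability from surface features alone:

$$\text{Flesch reading ease} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)$$

$$\text{Gunning Fog index} = 0.4\left[\frac{\text{total words}}{\text{total sentences}} + 100\left(\frac{\text{complex words}}{\text{total words}}\right)\right]$$

where complex words are those of 3 or more syllables. The Fog index estimates the years of formal education needed to understand a text on first reading, which is why lower scores indicate clearer writing.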
Three studies considered the effects of technical editing on the accuracy
of reference citations and quotations.7-9
Those that compared submitted and published versions showed clear improvements
in accuracy.7,8 A comparison of
journals with different checking policies was less conclusive.9
A study of abstract completeness and consistency indicated variation
between journals but could not identify the processes associated with better
results,10 while 2 studies relating to the
introduction of more intensive abstract editing at JAMA showed clear improvements
in accuracy after the intervention.11,12
Another study by Pitkin and Branagan13 examined
the effects of sending abstract preparation instructions to authors after
their article had been accepted but detected no improvement in the quality of abstracts from authors who had received them.
Remarkably little research into the effects of technical editing performed
on articles in biomedical journals has been published.
The quality of study methods varied; 6 compared different versions of
articles, 2 used before and after designs, 2 made comparisons between journals,
and 1 used a randomized design. Some of the studies comparing different versions
of articles (eg, accepted vs published) attempted to mask the articles' status,
but often unsuccessfully (eg, in the Dutch Medical Journal study, 72% of assessors correctly identified the version of the article3). Comparisons between journals are hard to interpret
because findings may be influenced by extraneous factors. Comparisons over
time may be similarly confounded.
Another aspect of methodological quality is the use of appropriate rating
scales and assessors. Two studies used Flesch and Gunning readability scores,5,6 which have not been validated on scholarly
literature and are based not on adult readers, but on the ways in which children
learn to read. Both produce scores based on sentence and word length but do
not take into account word familiarity or sentence complexity. Therefore,
they are at best surrogate markers of the comprehensibility of a research
report. A study in which students assessed structured and unstructured abstracts
found that the structured version was considered significantly easier to read,
despite the fact that Flesch scores were similar for both versions.16 Further work is needed to test the ability of these
scores to measure the comprehensibility of biomedical articles.
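To make the surface-level nature of these indices concrete, the sketch below computes both scores from raw counts alone; the tokenization and the vowel-group syllable heuristic are simplifying assumptions for illustration and are not taken from any of the reviewed studies.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowel letters."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch reading ease, Gunning Fog index) for a text.

    Both scores use only sentence length and syllable counts; word
    familiarity and syntactic complexity play no part.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(syllables) / len(words)
    # Words of 3 or more syllables count as "complex" for the Fog index.
    complex_fraction = sum(1 for s in syllables if s >= 3) / len(words)
    flesch = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fog = 0.4 * (words_per_sentence + 100 * complex_fraction)
    return flesch, fog
```

Because the inputs are only sentence length and syllable counts, replacing an obscure polysyllabic term with an equally long familiar word leaves both scores unchanged.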
Only one study recruited journal readers to act as assessors.3 In other cases, the investigators themselves, experts,
or professional editors rated the quality of papers. Although this may have reduced interrater variability, it may not have reflected the everyday use of journals, and there is evidence that readers do not always agree with experts on the quality of papers.17 Several
studies used specially devised scales to rate the quality of papers or abstracts.
In most cases, no details of interrater reliability or face validity were
given. However, the study by Goodman et al4
reported poor reliability (intraclass correlation coefficient of 0.12) for
its scoring system.
Definitions of Technical Editing and Generalizability
Most studies that examined changes between acceptance and publication
did not specify the processes that took place. However, the Dutch Medical Journal investigators note that "[d]uring editing, the
information in the article is checked scientifically and linguistically, corrected
and clarified if necessary, numbers are checked when possible, and the references
are made to conform to the so-called Vancouver system."3
Studies of reference accuracy indicate how the intensity of editing
varies between journals. For example, editors at the Dutch
Medical Journal and British Journal of Dermatology check references against MEDLINE,7
while those at the Journal of the American Academy of Dermatology check references only from their own journal.9
Lowry, who studied letters in the BMJ in 1985, explains
that "[a]lthough the journal does not check all references . . . the subeditors
correct any obvious errors. References are put into the house style, which
allows many mistakes to be spotted, especially where the fault is an incomplete
reference, which is inevitably corrected."8
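As a present-day illustration of this kind of check (not part of any study reviewed here), a cited title can be searched against MEDLINE through the NCBI E-utilities interface; a journal's actual matching policy for authors, volume, and pages would be more elaborate, and the title-only test shown is an assumption.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def medline_lists_title(title: str) -> bool:
    """Return True if a PubMed title search finds at least one record.

    A real reference-checking workflow would also compare authors,
    journal, year, and pagination; this sketch tests only the title.
    """
    query = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f'"{title}"[Title]',
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{query}") as response:
        data = json.load(response)
    return int(data["esearchresult"]["count"]) > 0
```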
Many of the studies of technical editing relate to general medical journals
that employ professional technical editors. Such editors are likely to have received more training and to have more time than academic editors who edit a journal in addition to holding a full-time job. Thus, Goodman et al4
comment that "the relatively large editorial staff at Annals is not typical of any but the largest medical journals, and the generalization
to others with different . . . editing processes cannot easily be made."
Three of the 4 studies examining overall quality or readability failed
to distinguish between changes occurring between submission and acceptance,
which might be considered part of peer review, and those between acceptance
and publication, which meet our definition of technical editing.4-6
Only the Dutch study specifically examined and demonstrated improvements occurring
between acceptance and publication.3 A more
recent study at Annals of Internal Medicine found
that only 3% of substantive changes introduced in manuscripts resulted from
technical editing, although the origin of 47% of changes was unknown, so the true proportion may have been substantially higher.18
It is also possible that other textual changes (that were not considered in
the Annals study) contributed to manuscript clarity
and readability (Frank Davidoff, oral communication, September 2001).
Need for Further Research
Nearly all journals impose a house style that includes elements of typographic
design, such as typeface, and scientific conventions, such as number format.
The only aspect of journal style that has attracted research is the structuring
of abstracts. Although studies comparing readability scores are inconclusive,
structured abstracts are preferred by readers and are more comprehensive but
are also longer than unstructured ones.16,19-24
Apart from this, we found no research about the effects of different styles
on legibility or readability, and we conclude that the imposition of such
styles is not evidence based, unless journals have undertaken unpublished
research.
Our review suggests a pattern of improvement in the accuracy and readability of research articles occurring between acceptance and publication. However,
few studies have attempted to determine which processes contribute most to
this improvement or how variations in these processes might affect the quality
of published papers. Without elucidation of the processes involved, these
findings may not be generalizable. Despite the time and resources spent on
technical editing, articles remain difficult to read, although intensive editing
and checking may lead to improvements in the accuracy of references and abstracts.
Few studies have consulted journal readers about their needs or views or worked
with authors to improve manuscript quality.
References
1. Wager E, Middleton P. Technical editing of research reports in biomedical journals [protocol for Cochrane review on CD-ROM]. Oxford, England: The Cochrane Library, Update Software; 2002; issue 1.
2. Alderson P, Davidoff F, Jefferson TO, Wager E. Editorial peer review for improving the quality of reports of biomedical studies [protocol for Cochrane review on CD-ROM]. Oxford, England: The Cochrane Library, Update Software; 2002; issue 1.
3. Pierie J-P, Walvoort HC, Overbeke AJ. Readers' evaluation of effect of peer review and editing on quality of articles in the Nederlands Tijdschrift voor Geneeskunde. Lancet. 1996;348:1480-1483.
4. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med. 1994;121:11-21.
5. Biddle C, Aker J. How does the peer review process influence AANA Journal article readability? AANA J. 1996;64:65-68.
6. Roberts JC, Fletcher RH, Fletcher SW. Effects of peer review and editing on the readability of articles published in Annals of Internal Medicine. JAMA. 1994;272:119-121.
7. Hobma SO, Overbeke AJPM. Fouten in literatuurverwijzingen in het Nederlands Tijdschrift voor Geneeskunde [Errors in literature references in the Nederlands Tijdschrift voor Geneeskunde]. Ned Tijdschr Geneeskd. 1992;136:637-641.
8. Lowry SR. How accurate are quotations and references in medical journals? BMJ. 1985;291:1421.
9. George PM, Robbins K. Reference accuracy in the dermatologic literature. J Am Acad Dermatol. 1994;31:61-64.
10. Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA. 1999;281:1110-1111.
11. Pitkin RM, Branagan MA, Burmeister LF. Effectiveness of a journal intervention to improve abstract quality. JAMA. 2000;283:481.
12. Winker MA. The need for concrete improvement in abstract quality. JAMA. 1999;281:1129-1130.
13. Pitkin RM, Branagan MA. Can the accuracy of abstracts be improved by providing specific instructions? a randomized controlled trial. JAMA. 1998;280:267-269.
15. Gunning R. The Technique of Clear Writing. New York, NY: McGraw-Hill; 1952.
16. Hartley J, Benjamin M. An evaluation of structured abstracts in journals published by the British Psychological Society. Br J Educ Psychol. 1998;68:443-456.
17. Justice AC, Berlin J, Fletcher SW, Fletcher RH, Goodman SN. Do readers and peer reviewers agree on manuscript quality? JAMA. 1994;272:117-119.
18. Purcell GP, Donovan SL, Davidoff F. Changes in manuscripts and quality: the contribution of peer review. Presented at: Fourth International Congress on Peer Review in Biomedical Publication; September 14-16, 2001; Barcelona, Spain.
19. Comans ML, Overbeke AJPM. De gestructureerde samenvatting: een hulpmiddel voor lezer en auteur [The structured summary: a tool for reader and author]. Ned Tijdschr Geneeskd. 1990;134:2338-2343.
20. Hartley J, Sydes M, Blurton M. Obtaining information accurately and quickly: are structured abstracts more efficient? J Inform Sci. 1996;22:349-356.
21. Hartley J, Sydes M. Are structured abstracts easier to read than traditional ones? J Res Reading. 1997;20:122-136.
22. Scherer RW, Crawley B. Reporting of randomized clinical trial descriptors and use of structured abstracts. JAMA. 1998;280:269-272.
23. Taddio A, Pain T, Fassos FF, Boon H, Ilersich AL, Einarson TR. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal, and the Journal of the American Medical Association. CMAJ. 1994;150:1611-1615.
24. Trakas K, Addis A, Kruk D, Buczek Y, Iskedjian M, Einarson TR. Quality assessment of pharmacoeconomic abstracts of original research articles in selected journals. Ann Pharmacother. 1997;31:423-428.