Peer Review Congress
July 15, 1998

The Reported Training and Experience of Editors in Chief of Specialist Clinical Medical Journals

Author Affiliations

From the LOCKNET Peer Review Research Group: European Journal of Clinical Nutrition (Dr Garrow) and BMJ Publishing Group (Mr Butterfield, Ms Marshall, and Mrs Williamson), London, England.

JAMA. 1998;280(3):286-287. doi:10.1001/jama.280.3.286

Context.— The majority of the peer-reviewed clinical literature is edited by editors whose training in editorial matters may be limited or nonexistent. We suspect that editors are selected for their clinical or academic rather than editorial ability.

Objective.— To test the hypothesis that editors of medical specialist clinical journals were recruited from active clinicians rather than those with evident ability or training as editors.

Design, Setting, and Subjects.— Anonymous mail survey to editors of the 262 peer-reviewed clinical journals that had received at least 1000 citations in the 1994 Science Citation Index.

Main Outcome Measures.— Training and editorial practices of editors.

Results.— Replies were received from 191 editors (73%): in 1994 the journals they edited had 6060 citations (maximum/minimum, 27300/1000), 234 source items (740/31), and an impact factor of 2.10 (18.3/0.2); nonresponders' journals had similar characteristics. Of the responding editors, 181 (95%) were part-time, 132 (69%) treated patients, and 164 (86%) were recruited by one of the following methods: election by a scientific society (49 [30%]), nomination by the previous editor (41 [25%]), or response to an advertisement (29 [18%]). There was no strong association between method of recruitment or formal editorial training and the status of the journal. Only 9% of editors in the United States sent at least half of the papers to reviewers outside their own country, compared with 41% of editors in the United Kingdom and 73% in other countries, and 69% did not feel bound to follow the advice they received concerning acceptance of papers.

Conclusions.— Clinical journals are usually edited by practicing clinicians who are self-taught, part-time editors but are willing to accept further training. They usually consult 2 reviewers but exercise independent judgment on the acceptability of papers.

RESEARCH in biomedical peer review is largely driven by editors of large weekly general medical journals, who work full-time in an office with many professional colleagues. These journals have high citation rates: in 1994 there were 276000 citations to articles in the Annals of Internal Medicine, BMJ, JAMA, The Lancet, and The New England Journal of Medicine (the 5 big "Vancouver group" journals, hereafter called "V5 journals"). However, these V5 citations were only a small minority of the total: 86% of citations were to specialist clinical journals (SCJs), which may be edited by clinicians who are relatively isolated from editorial colleagues. We therefore investigated the recruitment, training, and use of external reviewers by the editors of SCJs.

We also looked for differences in the selection and practice of editors of small SCJs with low impact factor, compared with those of larger SCJs of higher impact factor.


Methods

A clinical medical journal was defined as a journal that included in its title a word indicating a medical specialty (a list of the words used can be obtained from the authors). Review journals were excluded, since we were primarily interested in editors of journals that received original research papers relevant to clinical practice; the V5 journals were also excluded. The Science Citation Index for 1993 was manually searched for journals meeting these criteria, but for logistic reasons, journals that received fewer than 1000 citations in that year were not included in the analysis. This search yielded 277 eligible journals. Those selected for the study covered a wide range of citation rates (1000-63000 per year) and impact factors (0.2-18). In late 1995, letters were sent to the editors in chief of these journals, explaining our objectives and asking whether the editor would be willing to reply to a questionnaire of not more than 20 questions. In May 1996 we sent a second letter to those who agreed to respond, enclosing a questionnaire concerning their demographic characteristics, training, and use of reviewers. Respondents were assured that the results would be published in a manner that would not permit identification of individual journals or editors. The final analysis is based on the 262 journals that met the selection criteria in the 1994 report.
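As an illustration only, the inclusion rule described above (a specialty word in the journal title and at least 1000 citations in the index year) can be sketched in a few lines of Python; the records, titles, and specialty words below are hypothetical, and the actual search was carried out by hand in the printed Science Citation Index.

    # Sketch of the stated inclusion rule; all data here are invented.
    SPECIALTY_WORDS = {"cardiology", "dermatology", "nephrology"}  # illustrative subset

    def is_eligible(journal):
        """Keep specialty journals with at least 1000 citations in the index year."""
        title_words = set(journal["title"].lower().split())
        return bool(title_words & SPECIALTY_WORDS) and journal["citations"] >= 1000

    journals = [
        {"title": "Journal of Hypothetical Cardiology", "citations": 4200},
        {"title": "Archives of Hypothetical Dermatology", "citations": 800},
    ]
    eligible = [j for j in journals if is_eligible(j)]
    print(len(eligible), "eligible journal(s)")  # prints: 1 eligible journal(s)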

The size of journals was ranked according to the number of source items published per year, and the quality by the impact factor. For the purpose of analysis, the journals were divided into tertiles (low, medium, and high) by size and by quality.
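The tertile split can be sketched as follows; the per-journal values are invented and serve only to illustrate the ranking rule described above.

    # Rank journals and assign each to a low/medium/high tertile; data are invented.
    def tertile_labels(values):
        """Label each value 'low', 'medium', or 'high' by its rank tertile."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        n = len(values)
        labels = [None] * n
        for rank, idx in enumerate(order):
            if rank < n / 3:
                labels[idx] = "low"
            elif rank < 2 * n / 3:
                labels[idx] = "medium"
            else:
                labels[idx] = "high"
        return labels

    source_items = [120, 740, 31, 260, 410, 95]      # hypothetical source items per year
    impact_factors = [0.4, 6.2, 0.2, 2.1, 3.3, 1.0]  # hypothetical impact factors
    size_group = tertile_labels(source_items)        # journal size tertile
    quality_group = tertile_labels(impact_factors)   # journal quality tertile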

Results

Response Rate

The characteristics of the journals whose editors responded were similar to those of the journals whose editors did not respond, as shown in Table 1. The average number of source items per year and the impact factor of nonresponders' journals were slightly higher than those of responders' journals, but there was a large overlap in the range within each group.

Table 1. Characteristics of Journals From Which the Editor Did or Did Not Respond
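The comparison summarized in Table 1 amounts to a grouped summary of journal size and impact factor by response status; a minimal sketch with invented records is shown below.

    # Grouped summary of journal characteristics by response status; data are invented.
    from statistics import mean

    journals = [
        {"responded": True,  "source_items": 234, "impact_factor": 2.1},
        {"responded": True,  "source_items": 150, "impact_factor": 1.4},
        {"responded": False, "source_items": 300, "impact_factor": 2.6},
        {"responded": False, "source_items": 180, "impact_factor": 1.9},
    ]

    for flag, label in ((True, "responded"), (False, "did not respond")):
        rows = [j for j in journals if j["responded"] == flag]
        print(
            f"{label}: n={len(rows)}, "
            f"mean source items={mean(j['source_items'] for j in rows):.0f}, "
            f"mean impact factor={mean(j['impact_factor'] for j in rows):.2f}"
        )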

Geographic Location of Editorial Office. Postal addresses showed that 50% of the editorial offices were in the United States, 28% in the United Kingdom, 20% in other European countries, and 2% elsewhere.

Characteristics of Responding Editors. Ninety-five percent of editors were part-time, 69% treated patients, 69% were 50 to 69 years old, 21% were younger than 50 years, 10% were older than 69 years, and 96% were men.

Method by Which Editor Was Selected. Of the 191 responding editors, 164 (86%) were selected by one of the suggested methods: recommendation from the previous editor (41 [25%]), competitive interview after advertisement (29 [18%]), or election by a scientific society or college (49 [30%]). However, many also checked the "other" response; alternative routes to the editorial chair (with the number of respondents reporting each) were selection by the publisher (12), election by the editorial board (7), selection by a research committee (6), and having founded or revitalized the journal (5).

Editors in chief of journals of small size and low impact factor were typically in the 50- to 59-year-old age range; larger journals of higher impact factor typically had editors in the 60- to 69-year-old age range.

Experience and Editorial Training. Among 188 editors who answered the question, 66 (35%) had not served on the editorial team of the journal before becoming editor, but 122 (65%) had done so; of these 122 editors, 65 (53%) had served for 5 years or more. Editors with less than 5 years of experience on the editorial board, and those who reported training in editing, were associated with small, low-impact journals, but there was no indication that long experience was more common among editors of large, high-impact journals than among those of medium-ranked journals. Of the 191 responding editors, 132 (69%) thought some form of training would be helpful to editors, but 49 thought it would not. Of the 85 respondents who said they had no formal training in editing skills, 53 (62%) thought that some training would be valuable, so there is evidently an unmet demand for such training.

Selection and Use of External Reviewers. Forty-six percent of respondents said they personally reviewed every paper submitted, but 54% did not. External reviewers were used as follows: 63% of editors used 2, 25% used 3, and 4% used more than 3. Editors of larger journals (n = 66) were more likely than those of smaller journals (n = 57) to use 3 or more reviewers for each paper (38.0% vs 15.8%, P=.05), were more likely to blind authors to reviewers (97.0% vs 86.0%, P=.05), were less likely to blind reviewers to authors (10.9% vs 21.0%, P=.09), and were less likely to be bound by the majority advice of reviewers to accept or reject a manuscript (21.0% vs 34.4%, P=.08). Differences in behavior by impact factor did not achieve statistical significance. The proportion of editors who sent at least half the manuscripts received to reviewers outside their own country was 9% for editorial offices in North America, 41% in the United Kingdom, and 73% for other countries.
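The report does not state which statistical test produced these P values; as a generic illustration, two such proportions could be compared with a Fisher exact test on a 2 x 2 table. The counts below are invented and are not the study data.

    # Compare two proportions with a Fisher exact test; the 2 x 2 counts are invented.
    from scipy.stats import fisher_exact

    # Rows: larger vs smaller journals; columns: use vs do not use 3 or more reviewers.
    table = [[20, 46],
             [9, 48]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")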


Comment

The purpose of this study was to learn more about the selection, training, and use of reviewers by "amateur" editors of SCJs who are collectively responsible for far more articles and citations than the "professional" editors in charge of the big weekly general medical journals. Answers to the questionnaire confirmed our impression that SCJ editors were usually practicing clinicians with no formal training in editorial skills—they had essentially learned the craft of editing by apprenticeship to more experienced editors.

We expected to find major differences between the practices of editors of small, low-impact journals (who are usually isolated, untrained, and virtually unpaid amateurs) and those of editors in chief of large, high-impact journals, who are closer to the situation of the professional editors of the large weekly general journals, since they work with a group of fellow professionals whom they can teach and from whom they can learn. In fact, we found rather small differences in the answers to our questions when the journals were ranked by size or impact factor.

The presumed "amateur" editors of small, low-impact journals, compared with the editors of large, high-impact journals, are somewhat younger, more likely to be full-time, more likely to have been selected by the recommendation of the previous editor, and less likely to have served 5 years on the editorial team before appointment. They use fewer external reviewers and are more likely to consider themselves bound by the majority opinion of the reviewers.

We suggest that many "amateur" editors would welcome a system that allowed them to make a formative assessment of their editorial competence without taking them away from their clinical work for too long. For example, there could be a floppy disk with interactive questions on the problems that editors face and the pitfalls into which the novice editor regularly falls. Publishers should take some responsibility for providing training facilities for the editors of their journals.
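As a rough sketch of what such an interactive self-assessment might look like (the questions and expected answers below are invented for illustration):

    # Minimal interactive quiz of the kind suggested above; content is invented.
    QUESTIONS = [
        ("How many external reviewers do most specialist journals consult per paper?", "2"),
        ("Should a reviewer with a direct financial interest in the work be used (yes/no)?", "no"),
    ]

    def run_quiz():
        score = 0
        for prompt, expected in QUESTIONS:
            answer = input(prompt + " ")
            if answer.strip().lower() == expected:
                score += 1
        print(f"You gave the expected answer to {score} of {len(QUESTIONS)} questions.")

    if __name__ == "__main__":
        run_quiz()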