Schroter S, Tite L, Hutchings A, Black N. Differences in Review Quality and Recommendations for Publication Between Peer Reviewers Suggested by Authors or by Editors. JAMA. 2006;295(3):314–317. doi:10.1001/jama.295.3.314
Context Many journals give authors who submit papers the opportunity to suggest reviewers. Use of these reviewers varies by journal and little is known about the quality of the reviews they produce.
Objective To compare author- and editor-suggested reviewers to investigate differences in review quality and recommendations for publication.
Design, Setting, and Participants Observational study of original research papers sent for external review at 10 biomedical journals. Editors were instructed to make decisions about their choice of reviewers in their usual manner. Journal administrators then requested additional reviews from the author's list of suggestions according to a strict protocol.
Main Outcome Measure Review quality using the Review Quality Instrument and the proportion of reviewers recommending acceptance (including minor revision), revision, or rejection.
Results There were 788 reviews for 329 manuscripts. Review quality (mean difference in Review Quality Instrument score, −0.05; P = .27) did not differ significantly between author- and editor-suggested reviewers. The author-suggested reviewers were more likely to recommend acceptance (odds ratio, 1.64; 95% confidence interval, 1.02-2.66) or acceptance or revision as opposed to rejection (odds ratio, 2.66; 95% confidence interval, 1.43-4.97). This difference was larger in the open reviews of BMJ than among the blinded reviews of other journals for acceptance (P = .02). Where author- and editor-suggested reviewers differed in their recommendations, the final editorial decision to accept or reject a study was evenly balanced (50.9% of decisions consistent with the preferences of the author-suggested reviewers).
Conclusions Author- and editor-suggested reviewers did not differ in the quality of their reviews, but author-suggested reviewers tended to make more favorable recommendations for publication. Editors can be confident that reviewers suggested by authors will complete adequate reviews of manuscripts, but should be cautious about relying on their recommendations for publication.
Peer review plays a central role in determining what research is published. Peer reviewers are responsible for identifying methodological flaws and for improving the quality of manuscripts. Several factors are associated with review quality (reviewer age, being a current investigator, and postgraduate training in epidemiology or statistics).1,2 Many journals give authors the opportunity to suggest reviewers for their own paper, but editors vary in their use of these suggestions because some are concerned that author-suggested reviewers might favor the author. However, many journals find it hard to recruit good-quality reviewers and, as such, are willing to try authors' suggestions.
The only study to evaluate author-suggested reviewers found that these reviewers were less critical than those suggested by editors in terms of the scientific importance of an article and the decision to publish.3 However, the generalizability of this finding is uncertain because it was based on 1 journal and did not use a validated outcome measure. We describe a large study of 10 journals across a range of medical specialties to investigate whether author-suggested reviewers differed from editor-suggested reviewers in terms of review quality and recommendation for publication.
The study included 10 journals (Table 1) that routinely request that authors suggest potential peer reviewers as part of their electronic manuscript management systems. Original research papers submitted between April 1, 2003, and December 31, 2003 (April 1, 2003–August 31, 2003, for BMJ) and sent out for peer review were eligible for inclusion. Papers were excluded if the author did not spontaneously suggest a reviewer, as were reviews conducted by journals' statistical reviewers.
We needed 92 papers with discordant recommendations between author- and editor-suggested reviewers to detect a 2-fold difference in the odds of recommendation with 90% power at 2-sided α = .05. A total of 110 papers would be sufficient to detect a difference in review quality of 0.4 (SD of difference, 1.2; 2-sided α = .05; power, 90%) on the Review Quality Instrument (RQI).4,5
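As a rough check, the second sample-size calculation can be reproduced with the standard normal-approximation formula for a paired difference, n = ((z₁₋α/₂ + z₁₋β)·σ_d/Δ)². This is a sketch, not necessarily the authors' exact method; the function name is my own, and the published figure of 110 presumably includes an allowance beyond the bare minimum.

```python
from math import ceil
from statistics import NormalDist

def paired_n(delta: float, sd_diff: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Minimum number of pairs for a paired comparison (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    return ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# Difference of 0.4 RQI points, SD of differences 1.2:
print(paired_n(0.4, 1.2))  # 95 pairs before any allowance for attrition
```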
Editors chose reviewers in their usual manner. Using the journals' electronic tracking systems, administrators requested an additional review from the top of the author's list of suggestions. If the editor had already requested a review from someone on the author's list, the administrator did not request an additional review. If the first person on the list declined the review, the next reviewer on the list was contacted until a reviewer was found.
We did not seek ethics committee approval for this study because it did not involve human participants or medical records. We did not seek consent from individual reviewers because we did not interfere with the usual editorial process and reviewers were not recruited into the study. Raters of the reviews volunteered to participate and were blinded to the identity and status of the reviewer.
Review Quality. Each review was rated independently using the RQI4 (Box) by 2 of 16 trained raters who were blinded to the identity and source of the reviewer. The reliability and validity of the RQI have been reported previously.4-7
Recommendation to Publish. Six of the 10 participating journals ask reviewers to provide a recommendation about publication. For the purpose of this study, BMJ also asked reviewers to provide a recommendation. We reclassified the journals' existing response categories as: accept (a recommendation to accept or accept with minor revisions), revise (major revisions), and reject (reject or revise and reconsider).
Papers were denoted as being preferred by author-suggested reviewers if at least 1 of the author-suggested reviewers rated the paper more favorably than the highest-rating editor-suggested reviewer, or if at least 1 editor-suggested reviewer rated it lower than the lowest-rating author-suggested reviewer. For preference by editor-suggested reviewers, the rule was reversed. A paper that fell into both categories (eg, author-suggested reviewers recommending accept and reject while all editor-suggested reviewers recommended revise), or for which the range of recommendations from author- and editor-suggested reviewers was the same, was interpreted as showing no preference.
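The classification rule above can be sketched as a small function. The numeric ordering (reject < revise < accept) and the function name are my own assumptions for illustration.

```python
RANK = {"reject": 0, "revise": 1, "accept": 2}

def preference(author_recs, editor_recs):
    """Classify a paper by which reviewer group rated it more favorably."""
    a = [RANK[r] for r in author_recs]
    e = [RANK[r] for r in editor_recs]
    author_pref = max(a) > max(e) or min(e) < min(a)
    editor_pref = max(e) > max(a) or min(a) < min(e)
    if author_pref and not editor_pref:
        return "author"
    if editor_pref and not author_pref:
        return "editor"
    # Falls into both categories, or identical ranges: no preference.
    return "none"

print(preference(["accept"], ["revise"]))            # author
print(preference(["accept", "reject"], ["revise"]))  # none (both categories)
print(preference(["revise"], ["revise"]))            # none (same range)
```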
Missing data for individual items of the RQI were imputed by best subset regression from the remaining items using data from both raters. The agreement between raters was assessed using the weighted κ statistic.8 To compare RQI scores, we first calculated the mean of the 2 raters' scores. Where there were 2 or more author-suggested reviews for a study, we calculated the mean and repeated this for studies with 2 or more editor-suggested reviews. The difference in the mean RQI scores between author- and editor-suggested reviewers was assessed using a paired t test.
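The final comparison in this paragraph, a paired t test on the per-paper mean RQI scores, can be sketched with the standard library. The example data are hypothetical, and a p value would additionally require the t distribution with n − 1 df (not computed here).

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(author_scores, editor_scores):
    """Paired t test on per-paper mean RQI scores; returns (t statistic, df)."""
    diffs = [a - e for a, e in zip(author_scores, editor_scores)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical per-paper mean RQI scores (author- vs editor-suggested):
t, df = paired_t([3.0, 4.0, 5.0], [2.0, 2.0, 2.0])  # t ≈ 3.46 with 2 df
```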
Differences between author- and editor-suggested reviewers in their recommendations to accept (as opposed to revise or reject) were assessed using odds ratios (ORs) from conditional logistic regression (conditional on the paper) and repeated for a recommendation to accept or revise (as opposed to reject). The data were first analyzed excluding data from BMJ. We then examined whether the effect of reviewer source on recommendation differed between papers submitted to BMJ (in which the identity of reviewers is known to authors) and the other journals (in which authors are blinded to reviewer identity) by using a likelihood ratio test on the interaction between reviewer source and whether the reviewer's identity was revealed.
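When each paper contributes exactly one author- and one editor-suggested review, the conditional logistic odds ratio for a binary recommendation reduces to the ratio of discordant pairs (the matched-pair result underlying McNemar's test). The sketch below assumes that 1:1 case; the counts in the example and the function name are hypothetical, not taken from the study data.

```python
from math import exp, log, sqrt

def discordant_pair_or(b: int, c: int, z: float = 1.96):
    """Matched-pair odds ratio with a Wald 95% CI on the log scale.

    b: pairs where the author-suggested reviewer recommends acceptance
       but the editor-suggested reviewer does not; c: the reverse.
    """
    or_ = b / c
    se = sqrt(1 / b + 1 / c)  # SE of log(OR) for discordant pairs
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical discordant-pair counts:
print(discordant_pair_or(30, 15))  # OR 2.0, 95% CI ~1.08-3.72
```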
For papers where author- and editor-suggested reviewers differed in their recommendations, we assessed whether the final journal decision (accept or reject) was more likely to reflect the author- or editor-suggested reviewers' preferences. The reject category included cases in which authors failed to resubmit a revised manuscript.
For all comparisons between author- and editor-suggested reviewers, the unit of analysis was the paper. All statistical analyses were performed using STATA software version 8.2 (Stata Corporation, College Station, Tex).
In 48% (1471/3014) of papers sent out for review, the authors suggested at least 1 reviewer (Table 1). There were 329 manuscripts for which at least 1 author-suggested and 1 editor-suggested reviewer were obtained and there were 788 reviews of these manuscripts. Agreement between raters was moderate (κw, 0.56; 95% confidence interval, 0.49-0.63) but consistent with previous research.5
Review quality did not differ significantly between author- and editor-suggested reviewers (Table 2). However, author-suggested reviewers were more likely to provide a favorable recommendation (accept and revise) in the 6 journals that solicited recommendations with blinded reviews. The extent to which author-suggested reviewers provided more favorable recommendations for acceptance was even greater for open (BMJ) reviews (test for interaction P = .02).
There were 106 manuscripts in which author- and editor-suggested reviewers differed in their recommendations to publish, with author-suggested reviewers giving more favorable recommendations for 75 (70.8%) of these manuscripts (Table 3). However, the final editorial decision to accept or reject a study was evenly balanced with 54 (50.9%) decisions consistent with author-suggested reviewers' preferences (30 with more favorable recommendations accepted, 24 with less favorable recommendations rejected). Decisions about the other 52 (49.1%) studies were consistent with the editor-suggested reviewers' preferences.
Author- and editor-suggested reviewers of manuscripts did not differ in the quality of their reviews but author-suggested reviewers tended to make more favorable recommendations for publication, particularly if the reviewers' identity was unblinded to the author. Editors' decisions showed no overall preference between author- and editor-suggested reviewers' recommendations. This is consistent with findings from a study of reviewers at a surgical journal3 and a recent unpublished study of reviewers for BioMed Central's journals.9 Our results are more applicable to other medical journals because we had a larger sample of reviewers from 10 journals in different specialties.
Author-suggested reviewers might make more favorable recommendations because they know the author personally or have received a positive review from the author in the past. However, it is not necessarily the case that authors know their suggested reviewer. A more plausible reason is that authors recommend experts in their field of research who will recognize the importance of their paper. In contrast, while editor-suggested reviewers might work in the authors' specialty, they may be less interested in the issues raised in the paper and even keen to see it rejected.
Our finding that the tendency of author-suggested reviewers to make more positive recommendations was greater in the open reviews of BMJ than among the blinded reviews of other journals should be treated with some caution because there may be other journal characteristics that explain the difference.10,11
There may have been a Hawthorne effect, ie, while editors were instructed to choose reviewers in their usual manner, they were aware of the objectives of the study and may have altered their behavior. In addition, editorial decisions about manuscripts may have been influenced by the existence of additional reviews from author-suggested reviewers solicited by journal administrators. It is unclear what biases, if any, such factors may have introduced. Only a small proportion of the total number of papers sent for review during the study period were included (Table 1), largely because the reviews solicited often did not result in a completed pair of reviews. We conducted an observational study and did not alter the decision-making process.
Our findings suggest that editors can make use of author-suggested reviewers and expect reviews of similar quality, but with the caveat that the recommendation to publish may be more favorable. The latter is not a problem for many journals, including BMJ, because they do not ask reviewers to make a recommendation. The decision to publish is an editorial decision based not only on the scientific review but a number of other factors.
Corresponding Author: Sara Schroter, PhD, BMJ Editorial Office, BMA House, Tavistock Square, London WC1H 9JR, England (firstname.lastname@example.org).
Author Contributions: Dr Schroter and Mr Hutchings had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Schroter, Black.
Acquisition of data: Schroter, Tite.
Analysis and interpretation of data: Schroter, Hutchings.
Drafting of the manuscript: Schroter, Hutchings, Black.
Critical revision of the manuscript for important intellectual content: Schroter, Tite, Hutchings.
Statistical analysis: Hutchings.
Administrative, technical, or material support: Schroter, Tite.
Study supervision: Schroter, Black.
Financial Disclosures: None reported.
Funding/Support: This study was funded by the BMJ Publishing Group's research budget.
Previous Presentation: Presented in part at the Fifth International Congress on Peer Review and Biomedical Publication; September 16-18, 2005; Chicago, Ill.
Acknowledgment: We thank Michael Healy, retired statistical advisor emeritus, Archives of Disease in Childhood, and Ben Armstrong, reader in epidemiology statistics, London School of Hygiene & Tropical Medicine, for their unpaid and independent statistical advice.