Journal peer review is often time-consuming, arduous, and fraught with suspicion, not least because the identities of reviewers usually remain hidden from the authors. Despite these flaws, and the fundamental problem that the efficiency and effectiveness of peer review have yet to be measured satisfactorily, authors, editors, reviewers, and readers have become comfortable with it.1,2 Editors like anointing colleagues as experts, reviewers appreciate peer review because it tends to confirm their own impressions of themselves as experts, and no one has created a better system to vet the validity of scientific reports. Authors may complain but also may be grateful for expert appraisal and criticism and subsequent improvement in their manuscripts. They realize that their work has been taken seriously and recognize that the incorporation of reviewers democratizes this part of the scientific enterprise beyond the editor.3 Readers feel assured that articles have been evaluated by experts, and the public at large, imagining a much deeper degree of scrutiny than is customary or possible, assumes that articles published in most peer-reviewed journals have passed some universal standard of quality.
Given the central place of publication in the diffusion of information, the editors of JAMA and the BMJ have held conferences every 4 years since 1989 to present research into the quality of publication processes, including editorial peer review. The Eighth International Congress on Peer Review and Scientific Publication was held in Chicago in September 2017.4 This time, we were joined in the planning by the newly founded Meta-Research Innovation Center at Stanford (METRICS), and the name of the Congress was changed to replace “biomedical” with “scientific” in an effort to broaden the scope and engage researchers, editors, and others in all sciences.
Why Did This Initiative Start in Biomedicine?
In the late 1970s and throughout the 1980s, spectacular cases of what came to be called research misconduct were widely publicized in the press, and the publicity forced medical journal editors to think hard about the processes that had allowed fraudulent research to be published despite their systems for expert peer review.5,6 It seemed natural that the processes of peer review and publication should be put under the same sort of examination that editors demanded of authors when reporting science.7,8 Most importantly, in medicine, the clinical stakes are particularly high, so the pressure to get things right is intense. For example, the best evidence for the effectiveness of health care treatments derives from the results of randomized clinical trials. As the methods for performing clinical trials have been refined over the past 70 years, the underlying expectations have been extended. Clinical trial registration at inception is now required by US law, and it is assumed from the start that the data and analysis behind each trial report will become part of a wider meta-analysis and be made accessible for tests of reproducibility and other secondary analyses. Studies of all types, including trials and meta-analyses, have increased in volume and complexity, making the need to evaluate the quality of their reporting ever more important.
Moreover, the congresses took place during an era of explosive growth in the speed and volume of communication. The internet was developing rapidly during the 1980s, but in 1989, few who attended the first Peer Review Congress would have heard of internet service providers, email, or instant messaging. In 1993, 1% of 2-way information was carried by the internet, but by 2007, this had increased to more than 97%.9 The World Wide Web became publicly available in August 1991.10 These developments radically changed the mechanics of journal peer review and scientific publication. The processes sped up substantially, but the quality of peer review and publication actually has little to do with speed.11 In scientific publication, quality depends on the critical eyes and integrity of peer reviewers and editors, and that is difficult to reconcile with speed.
Having secured from the owner of JAMA, the American Medical Association, the crucial financial backing to allow us to proceed with the first Peer Review Congress, we decided on a few simple rules. First, the object was to present research into the processes of selection and refinement of scientific manuscripts. The congress was emphatically not an attempt to set up rules governing publication, to settle matters by consensus, or to dictate how scientists and journal editors should conduct themselves. Second, with few exceptions, the congress program would be determined by the abstracts submitted by researchers, with priority given to data-driven studies. Third, there would be no simultaneous breakout or parallel sessions. All attendees could hear every presentation and participate in every discussion. And fourth, the audience would be given equal time to debate the presentations. Participants who have responded to postmeeting surveys have resoundingly told us that this format is one of the keys to the success of the congresses.
The number of registrants has increased from fewer than 300 in 1989 to nearly 600 in 2017. When the congresses began, the world literature on scientific peer review and publication amounted to fewer than 5 articles a year.12 This changed rapidly; by 1999, the literature had grown to around 200 articles per year. The number of abstracts submitted to the Peer Review Congresses increased from 50 in 1989 to 260 in 2017. A striking change has been in the proportion of women at the congresses. In 1989, 24 papers were presented from the podium, with 3 (13%) by women. At the eighth congress, of the 50 plenary session presenters, 29 (58%) were women.4 In addition, 7 studies evaluated the role of gender bias in peer review conducted by journals and funders.
The 1989 congress included 3 influential presentations on the history of peer review.6 Several papers were given on specific aspects of the peer review process, such as blinded review,13 and on the extent and importance of publication bias and research misconduct.14-17 Several important trials on blinding during peer review were presented at subsequent congresses.18,19 It was also noted at the time that further studies were needed on the registration of trials at inception, to prevent bias in publication, as well as on blinded peer review. This contributed, in part, to the adoption by the International Committee of Medical Journal Editors of the requirement that all clinical trials be registered before enrollment of the first patient.
After the first few congresses, there were fewer plenary session abstracts presented on peer review as a process, and by the sixth and seventh congresses in 2009 and 2013, there were no plenary abstracts on the actual process. Parallel developments that had led to the foundation of the Cochrane Collaboration had shown the considerable effects of bias on the published literature, and soon the papers presented at the Peer Review Congress were examining the nature of these biases and how they related to the peer review and publication processes.14-16 Many realized that how peer review was organized (eg, blinded or not, use of author-recommended reviewers or not) had become less important compared with the extraordinary distortions caused by biases. This understanding was confirmed by numerous studies that evaluated the causes and extent of bias in scientific publication, particularly conflicts of interest, industry funding, and defective and often deceptive reporting. In addition, many studies examined the importance of transparency and accountability in authorship.
In 2017, researchers again found plenty to criticize. At earlier congresses, much evidence had been presented that financial conflicts of interest were a powerful reason so much industry-related research was demonstrably biased. Abstracts presented at the 2017 congress showed that these problems continue20,21 and that biases undermine peer review by journals and funders. These biases are attributable to spin in the presentation of results; failure to publish final results; and gender, geographic, and author-prestige factors that distort the record across all sciences. Other researchers reported on specific problems with quality control of scientific images and nucleotide sequences. Several other studies documented the limited amount of data sharing actually taking place, despite early experiences demonstrating demand for shared data and willingness to share.22,23
A major contribution of the congresses has been to get researchers to focus on where problems may exist and to identify and test solutions. For example, in 2017 an interventional study was presented that evaluated a common problem and a potential improvement—the effect of introducing a simple but mandatory checklist into the editorial process for Nature journals publishing in the life sciences.24 The checklist included 4 methodologic criteria (randomization, blinding, sample size calculations, and exclusions) that, if properly reported, might reduce the risk of bias. In this before-and-after study, a substantial improvement in the reporting of these methodologic standards was seen in the participating journals. Another trial evaluated the effect of a mandatory 108-item checklist vs the standard editorial process for manuscripts submitted to PLOS One and found much less favorable results.25 Discussants following the presentation of these 2 complementary studies speculated that the shorter 4-item checklist was perhaps easier to implement and thus more successful. These multijournal initiatives provide a good model for future research into improvements in the publication process.
Other areas of inquiry into the state of improvements included assessments of early positive experiences with data sharing and with the requirement for registration of randomized clinical trials and trial results,22,23 as well as the effect of these requirements on complete and consistent reporting of results in trial protocols, registries, and publications.26,27 Some promises of improvement remain in limbo. For example, Zarin et al28 showed that among trials registered at ClinicalTrials.gov, 33% of completed trials and 57% of terminated trials had no corresponding published articles. Other studies evaluated trends in the increased use of statistical review and appropriate reporting of statistical results. Also in 2017, there was an encouraging burst of abstracts on the quality of peer review used to assess grant applications and on mechanisms to improve funder processes.
While previous trials have compared the quality of various forms of peer review (double-blind, single-blind, and open), several 2017 studies evaluated new processes that offer authors, reviewers, and editors multiple options for choosing different forms of peer review. These studies assessed rates of uptake and views of the usefulness of different types of peer review across a range of scientific disciplines, multiple journals, and postpublication media. Several studies examined the effect of including patients in the peer review processes of journals and funders. Other studies assessed the roles of preprints and new forms of postpublication metrics,29 online-only supplements,30 replacement of articles with pervasive errors,31 and online commenting,32 which continues to appear infrequently compared with the huge body of the world’s scientific literature.
Although the eighth Peer Review Congress saw improvements in the scope and diversity of research and participants, we were disappointed that we did not receive as many abstracts as expected on other important issues and threats to the scientific enterprise, such as reproducibility, fake peer review, and predatory journals. Research on these topics is important.
Peer review can be looked on as a test, and most tests are evaluated before they are used in practice with patients. However, as Moher33 lamented during his plenary address on editors and peer reviewers as custodians of high-quality science: “What we have accomplished to date is still not optimal. This is not the best way to instill confidence in readers, provide value for money for funders, or ensure the public can trust the research record.” The credibility of journals depends on robust quality assurance mechanisms. This requires continued and more rigorous testing of the operating characteristics of peer review and publication to make sure that all the labor and costs are justified.8 Large multijournal (and multifunder) controlled trials of at least 2 sorts of peer review (before, during, and after publication), such as those conducted admirably by the Nature and PLOS journals, are still needed. Previous experience testing blinding in peer review shows that this will be expensive and time-consuming.19 As is usual when measuring quality, a major obstacle will be finding reliable and credible end points.
Many have come to rely on peer review and accept it as a normative process.1,2 Despite continued criticism of peer review, few would vote for its abolition even if it were proved to cause harm; most would instead advocate for its improvement, and that requires more research. That is what the congresses have shown. The eighth congress was once again full of discussion and argument, but, as one observer noted, it was remarkably good-humored.34 Plans for the Ninth International Congress on Peer Review and Scientific Publication are counting on the continued existence, use, and need for assessment of peer review and publication in all their evolving forms.
Corresponding Author: Annette Flanagin, RN, MA, JAMA and the JAMA Network, 330 N Wabash Ave, Chicago, IL 60611 (annette.flanagin@jamanetwork.org).
Conflict of Interest Disclosures: The authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Rennie served as director and Ms Flanagin as executive director for the Peer Review Congress. Ms Flanagin reported grants from multiple companies and organizations that sponsored the Peer Review Congress.
Funding/Support: The Eighth Peer Review Congress was supported with grants from Meta-Research Innovation Center at Stanford (METRICS), Wolters Kluwer Health, the New England Journal of Medicine, eJournal Press, Annals of Internal Medicine, Aries Systems Corporation, BioMed Central/Springer Nature, Public Library of Science, Peer-Review Evaluation, Silverchair Information Systems, Copyright Clearance Center, and HighWire Press.
Role of the Funder/Sponsor: The funders had no role in the review and decisions of abstracts for the Eighth Peer Review Congress, or the preparation, review, approval, or decision to submit the manuscript for publication.
Additional Contributions: We thank Howard Bauchner, MD; Michael Berkwits, MD, MSCE; Fiona Godlee, FRCP; Trish Groves, MBBS, MRCPsych; Theo Bloom, PhD; John P.A. Ioannidis, MD; Steven N. Goodman, MD, PhD; and the Peer Review Congress advisory board members for important support and guidance for the Eighth International Congress on Peer Review and Scientific Publication.
References

3. Rennie D. Editorial peer review: its development and rationale. In: Godlee F, Jefferson T, eds. Peer Review in Health Sciences. London, England: BMJ Books; 1999.

4. American Medical Association. Eighth International Congress on Peer Review and Scientific Publication: enhancing the quality and credibility of science: program and abstracts. https://peerreviewcongress.org/index.html. Accessed December 19, 2017.

5. Broad W, Wade N. Betrayers of the Truth: Fraud and Deceit in the Halls of Science. New York, NY: Simon & Schuster; 1983.

6. Rennie D, Gunsalus CK. Scientific misconduct: new definition, procedures, and office—perhaps a new leaf. JAMA. 1993;269(7):915-917.

7. Guarding the guardians: research on editorial peer review: selected proceedings from the First International Congress on Peer Review in Biomedical Publication. JAMA. 1990;263(10):1317-1441.

13. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review: a randomized trial. JAMA. 1990;263(10):1371-1376.

17. Garfield E, Welljams-Dorof A. The impact of fraudulent research on the scientific literature: the Stephen E. Breuning case. JAMA. 1990;263(10):1424-1426.

18. van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA. 1998;280(3):234-237.

19. Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D; PEER Investigators. Does masking author identity improve peer review quality? a randomized controlled trial. JAMA. 1998;280(3):240-242.

20. Grundy Q, Dunn AG, Bourgeois FT, Coiera E, Bero L. Prevalence of disclosed conflicts of interest in biomedical research and associations with journal impact factors and Altmetric scores. JAMA. doi:10.1001/jama.2017.20738

21. Hansen C, Lundh A, Rasmussen K, Frandsen TF, Gøtzsche P, Hróbjartsson A. The influence of industry funding and other financial conflicts of interest on the outcomes and quality of systematic reviews. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 12, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0222. Accessed December 19, 2017.

23. Tannenbaum S, Ross JS, Krumholz HM, et al. Early experiences with journal data sharing policies: a survey of published clinical trial investigators. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 11, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0186. Accessed December 19, 2017.

24. Macleod M; NPQIP Collaborative Group. Impact of a change in editorial policy at Nature Publishing Group (NPG) on their reporting of biomedical research. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 12, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0165. Accessed December 19, 2017.

25. Sena E; Intervention to Improve Compliance With the ARRIVE Guidelines (IICARus) Collaborative Group. Impact of an intervention to improve compliance with the ARRIVE guidelines for the reporting of in vivo animal research. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 11, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0296. Accessed December 19, 2017.

26. Chan AW, Pello A, Kitchen J, et al. Association of trial registration with reporting of primary outcomes in protocols and publications. JAMA. 2017;318(17):1709-1711.

27. Woloshin S, Schwartz LM, Bagley PJ, Blunt HB, White B. Characteristics of interim publications of randomized clinical trials and comparison with final publications. JAMA. doi:10.1001/jama.2017.20653

28. Zarin DA, Tse T, Williams RJ, Rajakannan T, Fain KM. Evaluation of the ClinicalTrials.gov results database and its relationship to the peer-reviewed literature. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 11, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0267. Accessed December 19, 2017.

30. Flanagin A, Christiansen S, Borden C, et al. Editorial evaluation, peer review, and publication of research reports with and without supplementary online content. JAMA. doi:10.1001/jama.2017.20650

31. Marasović T, Utrobičić A, Marušić A. Analysis of indexing practices of corrected and republished articles in MEDLINE, Web of Science, and Scopus. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 12, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0314. Accessed December 19, 2017.

32. Vaught MD, Jordan DC, Bastian H. A cross-sectional study of commenters and commenting in PubMed, 2014-2016: who’s who in PubMed Commons. Paper presented at: Eighth International Congress on Peer Review and Scientific Publication; September 12, 2017; Chicago, IL. https://peerreviewcongress.org/prc17-0269. Accessed December 19, 2017.

33. Moher D. Custodians of high-quality science: are editors and peer reviewers good enough? Presented at: Eighth International Congress on Peer Review and Scientific Publication; September 11, 2017; Chicago, IL.