
Editorial
April 18, 2001

CONSORT Revised—Improving the Reporting of Randomized Trials

Author Affiliation: Dr Rennie is Deputy Editor, JAMA.

JAMA. 2001;285(15):2006-2007. doi:10.1001/jama.285.15.2006

If physicians are to base treatment decisions on the evidence in the medical literature, all the relevant results of trials must be available easily and consistently. Yet it is common to have trouble identifying the hypothesis, the research question, and the design of a published trial. It is even more common to lose count of the participants or to be unable to tell who received which therapies and what type of analysis was used. As a result, it is often impossible to know whether the conclusions are justified by the data.

In February 1995, Schulz and colleagues1 published an important article that drew attention to this sad state of affairs and to the importance of complete reporting of clinical trials if bias was to be avoided. At that time 2 groups, responding to the widespread perception that reporting of study results was highly variable, had held meetings to try to articulate the standards of good reporting. The first, the Standards of Reporting Trials (SORT) group, met in Ottawa, Ontario, in 1993 and published its recommendations in December 1994.2 The second group met at Asilomar, Calif, in 1994 and also published its recommendations for reporting of clinical trials at the end of that year.3

In 1995, JAMA editors persuaded the authors of a clinical trial already accepted for publication to rewrite their article to conform with all the SORT recommendations.4,5 The report turned out to be an extraordinary, one-of-a-kind prototype. The article was structured into more than 30 parts, was apparently agonizing to write, and was certainly a torture to read. But the experiment was instructive because the authors, and numerous readers, let us know their reactions and suggestions. It was apparent that the SORT recommendations were too inflexible, too mechanistic, and too little concerned with the external validity or applicability of the trial results.5

Under the leadership of David Moher, the 2 groups then pooled their results.5 In 1996, the result of this cooperation—the Consolidated Standards of Reporting Trials (CONSORT) statement—was published.6,7 The statement described the process, provided the rationale for the various reporting requirements listed, and, crucially, included a mechanism that permitted considerable individual freedom on the part of journal editors in the way they set forth the information. In addition, CONSORT recommended a flow diagram so that the reader might easily follow the progress of participants through the various stages of the trial, and in every arm of the study.

Some trialists confused CONSORT with the earlier SORT and so damned CONSORT for the very reasons that had brought about the changes found in CONSORT.8,9 They objected to what they perceived as unilateral imposition of excessively prescriptive rules by nontrialists.8 This view ignored all the experienced trialists who had taken part in the lengthy process of feedback, of trial and error, that had informed the final document.9 Fortunately, no substantive criticisms were raised9 and the general reaction was strongly favorable.

CONSORT was republished in, and endorsed by, many journals, in several languages (http://www.consort-statement.org). Its use is recommended by the International Committee of Medical Journal Editors, the Council of Science Editors, and the World Association of Medical Editors. CONSORT has been so successful that similar groups of scientists and editors have set up standards, using a similar template and based on empirical evidence whenever possible, to improve the quality of reporting of meta-analyses of randomized trials (QUOROM)10 and of meta-analyses of observational studies (MOOSE).11 An effort led by Jeroen Lijmer to establish standards to improve the reporting of studies assessing diagnostic tests (STARD) is proceeding, and this group is expected to publish its recommendations soon. Matthias Egger and colleagues are setting up similar groups for improving the reporting of case-control and cohort studies.

Anyone who has read through many hundreds of randomized controlled trials (RCTs) is immediately struck by the fact that when the authors have used the CONSORT checklist and flow diagram, it takes a fraction of the time to get the essential information necessary to assess the quality of a trial. In this issue of THE JOURNAL, Moher et al12 show that in 3 large medical journals, including JAMA, that adopted the CONSORT statement, the reporting of trials improved more than in one such journal that chose not to adopt it. However, lest CONSORT be considered some sort of panacea, Moher et al also demonstrate that even in journals that have made a strong commitment to CONSORT, reporting was deficient in many ways, for example, in detailing concealment of allocation.12 This shows that editors have as much difficulty as trialists in learning new behaviors.

But the CONSORT statement, developed after a great deal of discussion and some experimentation, was never intended to be set in concrete. From the first, it was assumed that CONSORT would have to change as new evidence accumulated on the importance of various items of reporting. In 1998, Meinert13 published a detailed and cogent critique of CONSORT and made several important suggestions for its improvement. In that same issue of JAMA, Moher detailed the often woeful ways in which trials were still being reported and stressed the importance of evidence in improving what he considered to be "an evolving tool," CONSORT.14

Also in this issue of THE JOURNAL is an article by Egger et al,15 who examine the use of flow diagrams recommended in the CONSORT statement to show the path of participants from enrollment to analysis. Egger et al found that use of the diagrams was associated with more complete reporting and recommend that all reports of RCTs include them. However, the authors noted problems with the use of the recommended flow diagram, with few of the reports they studied including the number of participants who received treatments as allocated. Moreover, as Meinert pointed out,13 there were deficiencies in the diagram itself, such as failure to recommend that the number of participants included in the main analysis be specified, a number needed to assess whether an intention-to-treat analysis had been performed. Egger et al make some sensible suggestions for revision of the flow diagram15 and these revisions have been incorporated in the revised CONSORT statement.
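
To make the structure of the participant flow reporting concrete, the following is a minimal illustrative sketch in Python; it is not part of the editorial or of the CONSORT statement itself, and the field names and trial numbers are assumptions chosen for clarity. It records, for each arm, the counts the flow diagram asks authors to report—allocation, receipt of the allocated intervention, loss to follow-up, and inclusion in the main analysis—and it checks the point raised by Meinert: whether the number analyzed matches the number allocated, as an intention-to-treat analysis requires.

    from dataclasses import dataclass

    @dataclass
    class ArmFlow:
        # Hypothetical per-arm record of the counts a CONSORT-style flow
        # diagram reports; names are illustrative, not official CONSORT labels.
        arm: str
        allocated: int               # participants randomized to this arm
        received_as_allocated: int   # participants who received the allocated intervention
        lost_to_follow_up: int       # participants lost before outcome assessment
        analyzed: int                # participants included in the main analysis

        def consistent_with_itt(self) -> bool:
            # An intention-to-treat analysis should account for everyone allocated.
            return self.analyzed == self.allocated

    # Hypothetical two-arm trial: 200 assessed for eligibility, 20 excluded.
    assessed, excluded = 200, 20
    arms = [
        ArmFlow("intervention", allocated=90, received_as_allocated=85,
                lost_to_follow_up=5, analyzed=90),
        ArmFlow("control", allocated=90, received_as_allocated=90,
                lost_to_follow_up=8, analyzed=82),
    ]

    assert assessed - excluded == sum(a.allocated for a in arms)
    for a in arms:
        print(f"{a.arm}: allocated {a.allocated}, received as allocated "
              f"{a.received_as_allocated}, analyzed {a.analyzed}, "
              f"ITT-consistent: {a.consistent_with_itt()}")

In this sketch the control arm fails the check, which is precisely the kind of discrepancy that explicit reporting of these counts is meant to make visible.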

Publication of the revised CONSORT statement in this issue of THE JOURNAL16 coincides with its simultaneous publication in the Annals of Internal Medicine and The Lancet, 2 journals that have, in addition to JAMA, strongly supported CONSORT. Just as the original SORT and Asilomar statements were revised in response to public comments and experience, so the CONSORT statement, flow diagram, and checklist have all been revised. The checklist, as before, consists of items that empirical evidence has shown must be included if reporting bias is to be minimized. The changes are listed in the new statement.16 To help with the use and dissemination of the revised statement, an explanation and elaboration article is being published with the CONSORT statement in the Annals of Internal Medicine.17 The revision of CONSORT, which is clearer and more flexible, should make it even easier for authors and editors to use and should greatly improve the transparency of reporting. And since the reporting of a trial is inseparable from the rest of its conduct, I expect these revised standards eventually to serve a valuable educational function and improve the way trials are conducted.18

Finally, in this issue of THE JOURNAL, Devereaux and colleagues19 report that there is a long way to go to achieve an acceptable degree of precision in thinking about, as well as in reporting, clinical trials. The authors assessed whom physicians understood to have been blinded (masked) when a study was reported as using single, double, and triple blinding. They found that physicians' interpretations were quite variable, with the respondents offering 10, 17, and 15 unique interpretations, respectively, of the 3 sorts of blinding. When the authors looked at recently published textbooks, they found 5, 9, and 7 different interpretations, respectively. Lewis Carroll's Humpty Dumpty could say: "When I use a word . . . it means just what I choose it to mean—neither more nor less,"20 but in the interpretation of science, there is no place for such ambiguity. As Devereaux et al suggest, the answer must be for authors to describe exactly and completely what they did, which is what CONSORT and these other initiatives are all about.

The whole of medicine depends on the transparent reporting of clinical trials. There is plenty of evidence for biased reporting due to commercial influences.21 To retain credibility, trialists, other researchers, and editors have to show the profession and the public evidence that we are making an earnest attempt to achieve the very highest standards of transparent reporting.

References
1. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. JAMA. 1995;273:408-412.
2. The Standards of Reporting Trials Group. A proposal for structured reporting of randomized controlled trials. JAMA. 1994;272:1926-1931.
3. Working Group on Recommendations for Reporting Clinical Trials in the Biomedical Literature. Call for comments on a proposal to improve reporting of clinical trials in the biomedical literature. Ann Intern Med. 1994;121:894-895.
4. Williams JW, Holleman DR, Samsa GP, Simel DL. Randomized controlled trial of 3 versus 10 days of trimethoprim/sulfamethoxazole for acute maxillary sinusitis. JAMA. 1995;273:1015-1021.
5. Rennie D. Reporting randomized controlled trials: an experiment and a call for responses from readers. JAMA. 1995;273:1054-1055.
6. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA. 1996;276:637-639.
7. Rennie D. How to report randomized controlled trials: the CONSORT statement. JAMA. 1996;276:649.
8. Meade TW, Wald N, Collins R. CONSORT statement on the reporting standards in clinical trials. BMJ. 1997;314:1126.
9. Altman DG, Moher D, Rennie D. CONSORT statement on the reporting standards of clinical trials. BMJ. 1997;314:1127.
10. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF; for the QUOROM Group. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Lancet. 1999;354:1896-1900.
11. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA. 2000;283:2008-2012.
12. Moher D, Jones A, Lepage L; for the CONSORT Group. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285:1992-1995.
13. Meinert CL. Beyond CONSORT. JAMA. 1998;279:1487-1489.
14. Moher D. CONSORT: an evolving tool to help improve the quality of reports of randomized controlled trials. JAMA. 1998;279:1489-1491.
15. Egger M, Jüni P, Bartlett C; for the CONSORT Group. Value of flow diagrams in reports of randomized controlled trials. JAMA. 2001;285:1996-1999.
16. Moher D, Schulz KF, Altman D; for the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987-1991.
17. Altman DG, Schulz KF, Moher D, et al; for the CONSORT Group. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134:663-694.
18. O'Toole LB. CONSORT statement on the reporting standards of clinical trials: MRC uses checklist similar to CONSORT's. BMJ. 1997;314:1127.
19. Devereaux PJ, Manns BJ, Ghali WA, et al. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285:2000-2003.
20. Carroll L. Through the Looking-Glass and What Alice Found There. New York, NY: William Morrow & Co; 1887: chap 6.
21. Rennie D. Fair conduct and fair reporting of clinical trials. JAMA. 1999;282:1766-1768.