JAMA Classics
May 6, 2009

Assessing Authority

Author Affiliations

Author Affiliations: James Lind Library, Oxford, England (Dr Chalmers). Dr Rennie (drummond.rennie@ucsf.edu) is Deputy Editor, JAMA.

JAMA. 2009;301(17):1819-1821. doi:10.1001/jama.2009.559

A Comparison of Results of Meta-analyses of Randomized Control Trials and Recommendations of Clinical Experts: Treatments for Myocardial Infarction


At the beginning of the 1990s, Antman and a team led by Tom Chalmers and Fred Mosteller used retrospective cumulative meta-analysis to show that the treatment recommendations of authorities in review articles and textbook chapters published over the previous 30 years had not reflected the best contemporary research evidence. These gaps between evidence and advice, which had sometimes lasted more than a decade, meant that both effective and dangerous treatments had been overlooked. The article by Antman et al published in JAMA in 1992 provided powerful evidence that traditional, unsystematic, narrative reviews did not serve patients well, and that better systems for gathering, analyzing, and disseminating clinical information were urgently required.

See PDF for full text of the original JAMA article.


In 1992, JAMA published an article coauthored by Elliott Antman, Joseph Lau, Bruce Kupelnick, Frederick Mosteller, and Thomas Chalmers entitled “A Comparison of Results of Meta-analyses of Randomized Control Trials and Recommendations of Clinical Experts.”1 The article showed that traditional review articles and textbooks had often given treatment advice that was dangerously inconsistent with the evidence available at the time they had been written. The article by Antman et al rapidly became a citation classic, having been cited 680 times, making it 134th among the most-cited articles in JAMA (Eugene Garfield, PhD, Thomson Reuters, written communication, February 25, 2009). This classic JAMA article has exceptional relevance to the task of providing reliable information to guide treatment decisions.

The Background

In the late 1980s, the 2 senior authors, Frederick Mosteller (a statistician) and Thomas Chalmers (a hepatologist),1 had joined forces as codirectors of a small technology assessment group housed in the basement of the Harvard School of Public Health. Both authors had been involved in the early development of controlled trials2,3 and in pioneering systematic approaches for synthesizing evidence from separate but similar studies.4 Improving methods for research synthesis had become a necessity. This was partly because it made no scientific sense to base conclusions on informal analyses of potentially biased “convenience samples” of studies and also because health professionals could not be expected to cope with the unmanageable volume of studies of potential relevance to their practice.5

In the mid-1970s, Tom Chalmers et al4 had used a systematic approach to identifying, assessing, and synthesizing the results of controlled trials of anticoagulants in patients with myocardial infarction. One of us (D.R.) helped handle the manuscript while serving as deputy editor of the New England Journal of Medicine and remembers how it seemed to settle, at one blow, an argument that had raged for decades. The analysis showed how methods could be used to synthesize the results of separate but similar studies to provide more scientifically robust estimates of the direction and size of treatment effects.

A decade later, Mulrow6 showed that reviews published in major general medical journals had usually ignored basic scientific principles. She and others in the late 1980s, including Tom Chalmers and his colleagues,7 suggested standards to decrease bias and random errors in reviews of evidence. These proposed standards included calls for full descriptions of the methods used to search for articles, criteria for inclusion and exclusion of studies, and the statistical methods used to achieve quantitative synthesis of data from separate studies—a technique that had been dubbed “meta-analysis” by a US social scientist a decade earlier.8 The 1980s witnessed increasing use of these methods in medicine. In one sphere—care during pregnancy and childbirth—efforts were made to identify, assess, and make sense of all of the controlled trials that could be identified. Importantly, from 1988 onward, the new medium of electronic publication was exploited to update these analyses cumulatively as new evidence became available.9
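The idea behind cumulative meta-analysis described above can be illustrated with a small sketch. The code below uses fixed-effect inverse-variance pooling of log odds ratios (a standard textbook method; it does not reproduce the exact statistical choices of Lau, Antman, and colleagues) and re-pools the evidence each time a new trial is added. The trial data are invented purely for illustration.

```python
import math

def pooled_or(trials):
    """Fixed-effect (inverse-variance) pooling of odds ratios.

    Each trial is (events_treated, n_treated, events_control, n_control).
    Returns (pooled OR, 95% CI lower bound, 95% CI upper bound).
    """
    sum_w = sum_wy = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))   # log odds ratio for this trial
        var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf's variance estimate
        w = 1 / var                            # inverse-variance weight
        sum_w += w
        sum_wy += w * log_or
    mean = sum_wy / sum_w
    se = math.sqrt(1 / sum_w)
    return (math.exp(mean),
            math.exp(mean - 1.96 * se),
            math.exp(mean + 1.96 * se))

def cumulative_meta_analysis(trials):
    """Re-pool after each new trial is added, as in a cumulative meta-analysis."""
    return [pooled_or(trials[:k]) for k in range(1, len(trials) + 1)]

# Illustrative (invented) data: (events_rx, n_rx, events_ctl, n_ctl)
trials = [(12, 100, 20, 100), (30, 250, 42, 250), (55, 500, 80, 500)]
for k, (or_, lo, hi) in enumerate(cumulative_meta_analysis(trials), 1):
    print(f"after trial {k}: OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The point of the cumulative display is that the confidence interval typically narrows as trials accumulate, so the moment at which an effect "could have been shown" becomes visible retrospectively.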

The Distinct Contribution of the Analysis by Antman et al

Discarding a venerable system of expert reviewing was a radical idea that could scarcely have been adopted unless it had been demonstrated that a real problem existed with important implications for the well-being of patients. The distinct and important contribution made by the analysis reported by Antman et al1 in JAMA in 1992 was that it provided clear evidence that the old system of reviews simply did not work, at least as far as treatment for myocardial infarction was concerned.

The authors' comparisons of the recommendations of clinical experts writing reviews and book chapters over a period of 30 years with what could have been known had the experts used systematic reviews and meta-analyses made clear that effective as well as dangerous treatments had been overlooked. For example, thrombolytic drugs “did not begin to be recommended even for specific indications by more than half the experts until 13 years after they could have been shown to be effective.” In 1992, 7 years after “an approximately 20% reduction in the risk of death was established at the P<.001 level (OR, 0.78; 95% CI, 0.69 to 0.90), 14 reviews did not mention the treatment or felt it was still experimental.” Antiplatelet drugs “did not begin to be recommended for routine use by more than half of the reviewers until 1986, 10 years after they could have been shown to be effective by cumulative meta-analyses, and 6 years after the first published meta-analysis.”1 Type 1 antiarrhythmic drugs were found to have statistically significant adverse effects on mortality, and serious doubt was cast on the safety of calcium channel blockers. The authors concluded by calling for more timely reviews and the “dissemination of clinical trial results in a format that will facilitate better published clinical guidelines.”1

Tom Chalmers, seen in the Figure working on the manuscript with Joseph Lau, was the corresponding author for the article, and its publication was surrounded by confusion and some ill will. The coincidence of topic and content with an article that appeared in the New England Journal of Medicine 2 weeks later was an unpleasant surprise to the editors of both journals.10 Chalmers had implied to the editors at JAMA that the other manuscript, which he called “a description of the cumulative meta-analysis methodology,” had been sent to a specialized statistical journal. Because of personal trust, he was never asked for further clarification, but readers accused the New England Journal of Medicine of duplicate publication.11 Looking back 17 years, after the dust has settled, the editors at JAMA explained Tom Chalmers' doubtful behavior by one of his most notable characteristics—relentless competitiveness.

Figure. Thomas Chalmers and Joseph Lau Discussing Cumulative Meta-analysis

Thomas Chalmers (left) and Joseph Lau (right), circa 1991 at the Boston (Jamaica Plain) VA Medical Center (now part of the VA Boston Health Care System) in Massachusetts.

The Evolution of a New Approach to Research Reviews

Thousands of systematic reviews and meta-analyses have been published, and they are now the most frequently cited form of clinical research.12 However, the challenge of keeping reviews up to date as new evidence accumulates has not yet been solved. The 1992 articles in JAMA1 and the New England Journal of Medicine10 showed retrospectively what could have been known about treatments for myocardial infarction had the results of each new trial been added to those already at hand. Their findings gave urgency to the recognition that clinicians were not making use of evidence already published, and that a better system for disseminating good evidence was badly needed. Failure to make use of all available evidence sometimes had lethal consequences.

A new system was emerging with the creation of the Cochrane Collaboration, a nonprofit international organization that was inaugurated formally in 1993 to prepare, maintain, and disseminate systematic reviews of the effects of health care.13 The growth of the Cochrane Collaboration was very rapid, partly because of the large numbers of interested individuals who volunteered to help achieve its objectives, but also because the Internet and the spread of personal computers provided easy, fast, and inexpensive communication. These electronic resources also provided the perfect medium for updating evidence, in contrast to reviews published in print journals and textbooks.

However, the challenge of keeping existing systematic reviews up to date has not yet been cracked by any organization in the world, including the Cochrane Collaboration, and authors and editors of journals are still not taking seriously the need for new results to be set systematically in the context of relevant existing evidence.14 Therefore, the problem identified so clearly in the article by Antman et al1 has still not been overcome, and this means that patients continue to receive treatments that do not necessarily reflect the best available evidence.

Tom Chalmers—A Tribute

Tom Chalmers' publishing career in clinical trials began in 1955 with a remarkable report of a randomized factorial trial of bed rest and diet for hepatitis.3 In a personal reflection on the importance of this article, David Sackett15 wrote: “Reading this paper not only changed my treatment plan for my patient. It forever changed my attitude toward conventional wisdom, uncovered my latent iconoclasm, and inaugurated my career in what I later labeled ‘clinical epidemiology.’” In the early 1990s, after being shown early versions of the analyses that would form the basis of the article by Antman et al,1 one of us (I.C.) suggested to Tom Chalmers that it would come to be regarded as the most important of his many important publications. This Commentary on the article is a tribute to all of the authors of the article by Antman et al,1 but to Tom Chalmers particularly. He died 4 years after it was published, but it has enduring importance for clinicians and patients alike.

Article Information

Financial Disclosures: None reported.

1. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts: treatments for myocardial infarction. JAMA. 1992;268(2):240-248.
2. Petrosino A. Charles Frederick [Fred] Mosteller (1916-2006). James Lind Library Web site. http://www.jameslindlibrary.org/trial_records/20th_Century/1970s/bunker/bunker_biog.html. Accessibility verified April 6, 2009.
3. Chalmers TC, Eckhardt RD, Reynolds WE, et al. The treatment of acute infectious hepatitis: controlled studies of the effects of diet, rest, and physical reconditioning on the acute course of the disease and on the incidence of relapses and residual abnormalities. J Clin Invest. 1955;34(7 part II):1163-1235.
4. Chalmers TC, Matta RJ, Smith H Jr, Kunzler AM. Evidence favoring the use of anticoagulants in the hospital phase of acute myocardial infarction. N Engl J Med. 1977;297(20):1091-1096.
5. Chalmers I, Hedges L, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25(1):12-37.
6. Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485-488.
7. Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analysis of randomized controlled trials. N Engl J Med. 1987;316(8):450-455.
8. Glass GV. Primary, secondary and meta-analysis of research. Educ Res. 1976;10:3-8.
9. Chalmers I, Enkin M, Keirse MJNC. Preparing and updating systematic reviews of randomized controlled trials of health care. Milbank Q. 1993;71(3):411-437.
10. Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med. 1992;327(4):248-254.
11. Federman DJ, Mutgi AB. Redundant publication? N Engl J Med. 1992;327(18):1316.
12. Patsopoulos NA, Analatos AA, Ioannidis JP. Relative citation impact of various study designs in the health sciences. JAMA. 2005;293(19):2362-2366.
13. Chalmers I. The Cochrane Collaboration: preparing, maintaining and disseminating systematic reviews of the effects of health care. In: Warren KS, Mosteller F, eds. Doing More Good Than Harm: The Evaluation of Health Care Interventions. New York, NY: Annals of the New York Academy of Sciences; 1993:156-163.
14. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. In press.
15. Sackett D. A 1955 clinical trial report that changed my career. James Lind Library Web site. http://www.jameslindlibrary.org/trial_records/20th_Century/1950s/chalmers_et_al/chalmers-commentary.pdf. Accessibility verified April 6, 2009.