Author Affiliations: James Lind Library, Oxford, England (Dr Chalmers). Dr Rennie (email@example.com) is Deputy Editor, JAMA.
At the beginning of the 1990s, Antman and a team led by Tom Chalmers and Fred Mosteller used retrospective cumulative meta-analysis to show that the treatment recommendations of authorities in review articles and textbook chapters published over the previous 30 years had not reflected the best contemporary research evidence. These gaps between evidence and advice, which had sometimes lasted more than a decade, meant that both effective and dangerous treatments had been overlooked. The article by Antman et al published in JAMA in 1992 provided powerful evidence that traditional, unsystematic, narrative reviews did not serve patients well, and that better systems for gathering, analyzing, and disseminating clinical information were urgently required.
In 1992, JAMA published an article coauthored by Elliott Antman, Joseph Lau, Bruce Kupelnick, Frederick Mosteller, and Thomas Chalmers entitled “A Comparison of Results of Meta-analyses of Randomized Control Trials and Recommendations of Clinical Experts.”1 The article showed that traditional review articles and textbooks had often given treatment advice that was dangerously inconsistent with the evidence available at the time they had been written. The article by Antman et al rapidly became a citation classic, having been cited 680 times, making it 134th among the most-cited articles in JAMA (Eugene Garfield, PhD, Thomson Reuters, written communication, February 25, 2009). This classic JAMA article has exceptional relevance to the task of providing reliable information to guide treatment decisions.
In the late 1980s, the 2 senior authors, Frederick Mosteller (a statistician) and Thomas Chalmers (a hepatologist),1 had joined forces as codirectors of a small technology assessment group housed in the basement of the Harvard School of Public Health. Both authors had been involved in the early development of controlled trials2,3 and in pioneering systematic approaches for synthesizing evidence from separate but similar studies.4 Improving methods for research synthesis had become a necessity. This was partly because it made no scientific sense to base conclusions on informal analyses of potentially biased “convenience samples” of studies and also because health professionals could not be expected to cope with the unmanageable volume of studies of potential relevance to their practice.5
In the mid-1970s, Tom Chalmers et al4 had used a systematic approach to identifying, assessing, and synthesizing the results of controlled trials of anticoagulants in patients with myocardial infarction. One of us (D.R.) helped handle the manuscript while serving as deputy editor of the New England Journal of Medicine and remembers how it seemed to settle, at one blow, an argument that had raged for decades. The analysis showed how methods could be used to synthesize the results of separate but similar studies to provide more scientifically robust estimates of the direction and size of treatment effects.
A decade later, Mulrow6 showed that reviews published in major general medical journals had usually ignored basic scientific principles. She and others in the late 1980s, including Tom Chalmers and his colleagues,7 suggested standards to decrease bias and random errors in reviews of evidence. These proposed standards included calls for full descriptions of the methods used to search for articles, criteria for inclusion and exclusion of studies, and the statistical methods used to achieve quantitative synthesis of data from separate studies—a technique that had been dubbed “meta-analysis” by a US social scientist a decade earlier.8 The 1980s witnessed increasing use of these methods in medicine. In one sphere—care during pregnancy and childbirth—efforts were made to identify, assess, and make sense of all of the controlled trials that could be identified. Importantly, from 1988 onward, the new medium of electronic publication was exploited to update these analyses cumulatively as new evidence became available.9
Discarding a venerable system of expert reviewing was a radical idea that could scarcely have been adopted unless it had been demonstrated that a real problem existed with important implications for the well-being of patients. The distinct and important contribution made by the analysis reported by Antman et al1 in JAMA in 1992 was that it provided clear evidence that the old system of reviews simply did not work, at least as far as treatment for myocardial infarction was concerned.
The authors' comparisons of the recommendations of clinical experts writing reviews and book chapters over a period of 30 years with what could have been known had the experts used systematic reviews and meta-analyses made clear that effective as well as dangerous treatments had been overlooked. For example, thrombolytic drugs “did not begin to be recommended even for specific indications by more than half the experts until 13 years after they could have been shown to be effective.” In 1992, 7 years after “an approximately 20% reduction in the risk of death was established at the P<.001 level (OR, 0.78; 95% CI, 0.69 to 0.90), 14 reviews did not mention the treatment or felt it was still experimental.” Antiplatelet drugs “did not begin to be recommended for routine use by more than half of the reviewers until 1986, 10 years after they could have been shown to be effective by cumulative meta-analyses, and 6 years after the first published meta-analysis.”1 Type 1 antiarrhythmic drugs were found to have statistically significant adverse effects on mortality, and serious doubt was cast on the safety of calcium channel blockers. The authors concluded by calling for more timely reviews and the “dissemination of clinical trial results in a format that will facilitate better published clinical guidelines.”1
Tom Chalmers, seen in the Figure working on the manuscript with Joseph Lau, was the corresponding author for the article, and its publication was surrounded by confusion and some ill will. The coincidence of topic and content with an article that appeared in the New England Journal of Medicine 2 weeks later was an unpleasant surprise to the editors of both journals.10 Chalmers had implied to the editors at JAMA that the other manuscript, which he called “a description of the cumulative meta-analysis methodology,” had been sent to a specialized statistical journal. Because of personal trust, he was never asked for further clarification, but readers accused the New England Journal of Medicine of duplicate publication.11 Looking back 17 years, now that the dust has settled, the editors at JAMA attribute Tom Chalmers' questionable behavior to one of his most notable characteristics: relentless competitiveness.
Thomas Chalmers (left) and Joseph Lau (right), circa 1991 at the Boston (Jamaica Plain) VA Medical Center (now part of the VA Boston Health Care System) in Massachusetts.
Thousands of systematic reviews and meta-analyses have been published, and they are now the most frequently cited form of clinical research.12 However, the challenge of keeping reviews up to date as new evidence accumulates has not yet been solved. The 1992 articles in JAMA1 and the New England Journal of Medicine10 showed retrospectively what could have been known about treatments for myocardial infarction had the results of each new trial been added to those already at hand. Their findings gave urgency to the idea that not only were clinicians failing to make use of evidence already published, but also that a system was very much needed to improve the dissemination of good evidence. Failure to make use of all available evidence sometimes had lethal consequences.
A new system was emerging with the creation of the Cochrane Collaboration, a nonprofit international organization that was inaugurated formally in 1993 to prepare, maintain, and disseminate systematic reviews of the effects of health care.13 The growth of the Cochrane Collaboration was very rapid, partly because of the large numbers of interested individuals who volunteered to help achieve its objectives, but also because the Internet and the spread of personal computers provided easy, fast, and inexpensive communication. These electronic resources also provided the perfect medium for updating evidence, in contrast to reviews published in print journals and textbooks.
However, the challenge of keeping existing systematic reviews up to date has not yet been cracked by any organization in the world, including the Cochrane Collaboration, and authors and editors of journals are still not taking seriously the need for new results to be set systematically in the context of relevant existing evidence.14 Therefore, the problem identified so clearly in the article by Antman et al1 has still not been overcome, and this means that patients continue to receive treatments that do not necessarily reflect the best available evidence.
Tom Chalmers' publishing career in clinical trials began in 1955 with a remarkable report of a randomized factorial trial of bed rest and diet for hepatitis.3 In a personal reflection on the importance of this article, David Sackett15 wrote: “Reading this paper not only changed my treatment plan for my patient. It forever changed my attitude toward conventional wisdom, uncovered my latent iconoclasm, and inaugurated my career in what I later labeled ‘clinical epidemiology.’” In the early 1990s, after being shown early versions of the analyses that would form the basis of the article by Antman et al,1 one of us (I.C.) suggested to Tom Chalmers that it would come to be regarded as the most important of his many important publications. This Commentary on the article is a tribute to all of the authors of the article by Antman et al,1 but to Tom Chalmers particularly. He died 4 years after it was published, but it has enduring importance for clinicians and patients alike.
Financial Disclosures: None reported.
Drummond Rennie, Iain Chalmers. Assessing Authority. JAMA. 2009;301(17):1819–1821. doi:10.1001/jama.2009.559