The term evidence-based medicine has been defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”1(p71) Few other concepts in contemporary medical practice have achieved such universal adulation among practitioners, academicians, and payers. As a profession, we have placed evidence-based medicine on a golden pedestal as the ultimate expression of our desire to make clinical decisions in a systematic and scientific fashion. In an ideal world, evidence-based medicine drives the content of clinical guidelines and informs decisions by payers about what tests or procedures should be performed and reimbursed. In developing guidelines, authors typically integrate the findings from all available published studies to provide the best possible advice to clinicians. Accordingly, the quality of guidelines is only as good as the published studies on which they are based.
Unfortunately, there is a dark secret that corrupts nearly every aspect of our profession and undermines societal efforts to promote evidence-based medicine. Many relevant studies that inform evidence-based medicine are never published, a phenomenon often termed publication bias or positive publication bias. In 1979, this practice was graphically described by psychologist Robert Rosenthal as the “file drawer problem.”2 Rosenthal wrote that “the extreme view of the file drawer problem is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.”2(p638) In commercially sponsored pharmaceutical trials, the phenomenon of publication bias has been well described. Pediatric studies of antidepressant drugs showing increased suicidality in children and adolescents went systematically unpublished until public outcry resulted in disclosure and ultimately regulatory action.3 In a 2007 meta-analysis evaluating the cardiovascular risks of rosiglitazone, 35 of 42 trials conducted by the drug maker were unpublished and became available only after a lawsuit by the State of New York.4 The problem of publication bias became so pervasive in drug research that legislative action was ultimately required, resulting in congressional action to create a clinical trial registry maintained by the National Library of Medicine that now includes reporting of results.5
The article by Tzoulaki et al6 in this issue of JAMA Internal Medicine documents another troubling aspect of publication bias—selective reporting of associations between biomarkers and cardiovascular outcomes. Their findings are striking and disturbing, demonstrating that most meta-analyses of biomarkers commonly used in cardiovascular medicine show evidence of publication bias. Some of the biomarkers with strong evidence for selective reporting are commonly used to assess cardiovascular risk and guide therapy. In a few cases, the evidence for publication bias was extreme. For example, routine measurement of carotid intima-media thickness has been advocated as a means to predict cardiovascular risk and select patients for treatment, but the current analysis demonstrates a greater than 12-fold excess in the number of favorable studies compared with the number that would be predicted.
How strong is the evidence for selective reporting in biomarker studies? Unfortunately, investigating potential publication bias is a difficult and complex task. By definition, when results are selectively reported, the missing studies reside in a file drawer rather than the public domain. Accordingly, identifying potentially missing studies requires extraordinary statistical detective work. In the current study, the authors used 3 methods: evidence of large heterogeneity in meta-analyses, an excess of “positive” results in small studies, and a statistically significant excess of favorable studies compared with the number that would be predicted. It must be emphasized, as the authors acknowledge, that none of the 3 methods is definitive, but taken together they provide a much clearer picture of the potential unreliability of biomarker analyses.
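The logic of the third method, the excess significance test, can be sketched in a few lines. The numbers below are entirely hypothetical (they do not come from the article): we assume a meta-analysis of 20 studies in which 14 report statistically significant results, while the average power of the individual studies, estimated against the summary effect, is only 0.4, so roughly 8 significant results would be expected.

```python
from math import comb

def binomial_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical meta-analysis: 20 studies, 14 with significant results,
# but an assumed average study power of 0.4 against the summary effect.
n_studies = 20
observed_significant = 14
mean_power = 0.4  # assumption, not a value from the article

expected_significant = n_studies * mean_power  # 8 expected "positive" studies

# One-sided test: is the observed count of positive studies implausibly high?
p_value = binomial_sf(observed_significant, n_studies, mean_power)
print(f"expected {expected_significant:.0f}, observed {observed_significant}, "
      f"p = {p_value:.3f}")
```

Under these assumed inputs, the observed count of positive studies far exceeds what the studies' power could plausibly produce, which is the signature of missing negative studies.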
Strikingly, of 49 meta-analyses with statistically significant findings, only 13 showed no evidence of selective reporting of positive results. In some surprising examples, biomarkers under consideration for inclusion in practice guidelines showed strong evidence for publication bias. For example, apolipoprotein B has been widely advocated as a better predictor of the risk of coronary heart disease than low-density lipoprotein cholesterol. In a meta-analysis, the summary relative risk (RR) of coronary heart disease for the top vs bottom tertile of apolipoprotein B was implausibly large, 1.98 (95% CI, 1.65-2.38),7 compared with the study with the largest number of cardiovascular events (RR, 1.32; 95% CI, 1.09-1.60). Furthermore, in the meta-analysis, a measure of heterogeneity, I2, was very large (80%), indicating substantial study-to-study differences among analyses evaluating this biomarker. Finally, there was a 1.5-fold excess of studies with significant results in the meta-analysis compared with the number that would be predicted (P = .02). Accordingly, the writers currently updating national guidelines for lipid management may want to consider these findings before recommending measurement of apolipoprotein B, rather than low-density lipoprotein cholesterol, to select patients for treatment of hyperlipidemia.
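For readers unfamiliar with the I2 statistic, it can be computed directly from the individual study estimates. The sketch below uses invented log relative risks and standard errors (not the apolipoprotein B data) to show how Cochran's Q and I2 quantify study-to-study heterogeneity around an inverse-variance summary estimate.

```python
import math

# Invented per-study relative risks and standard errors of log(RR)
log_rr = [math.log(x) for x in [2.4, 2.1, 1.9, 1.3, 2.8, 1.2]]
se = [0.30, 0.25, 0.20, 0.10, 0.35, 0.12]

# Fixed-effect (inverse-variance) summary estimate on the log scale
w = [1 / s**2 for s in se]
summary = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)

# Cochran's Q: weighted squared deviations of studies from the summary
q = sum(wi * (y - summary) ** 2 for wi, y in zip(w, log_rr))
df = len(log_rr) - 1

# I^2: percentage of total variation attributable to heterogeneity
i_squared = max(0.0, (q - df) / q) * 100
print(f"summary RR = {math.exp(summary):.2f}, I^2 = {i_squared:.0f}%")
```

An I2 near 80%, as reported for the apolipoprotein B meta-analysis, means most of the observed variation reflects genuine between-study differences rather than sampling error, which is itself a warning sign when combined with an excess of small positive studies.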
The example of apolipoprotein B is typical of the findings described in the current article. Many biomarkers are likely associated with cardiovascular risk, but the magnitude of the association is probably much smaller than the published meta-analyses suggest. Thus, although these biomarkers may be reliably associated with increased risk, publication bias appears to overemphasize the value of individual biomarkers and to inflate estimates of the strength of the association. The authors point out that the best-studied biomarkers typically show RRs of 1.1 to 1.5 for each SD change. Prudence suggests a skeptical approach to biomarker studies reporting an RR larger than 1.5. Cardiovascular disease is a complex polygenic disorder. Studies showing an RR approaching 2.0 for any single biomarker are therefore biologically implausible and should be interpreted cautiously.
Who is to blame for the shameful current state of affairs? Unfortunately, we must all take responsibility. As academic leaders, we have failed to emphasize to colleagues and trainees that every study, regardless of the findings, deserves publication. Science advances only when the totality of available information is shared widely within the academic community. We commonly hear a rationalization from authors who leave negative studies in the file drawer. Physician-scientists often claim it is difficult or impossible to get journals to accept studies with negative findings. That is just nonsense. The editors of the top journals have been ardent advocates of trial registration and make every effort to publish carefully performed negative studies.8 Furthermore, there are now so many published journals that it is inconceivable that any reasonable study cannot find a journal interested in publishing the findings.
How do we ameliorate the problem of publication bias in medicine? First, it is essential that investigators register every study with ClinicalTrials.gov, regardless of sponsorship. Registration is intended not only for studies of therapeutic interventions; it applies equally to assessments of markers of risk. Academic medical centers should consider adopting a policy under which failure to attempt to publish negative findings constitutes an ethical violation of standards of conduct. Although ClinicalTrials.gov operates a results registry, it supports only minimal data sets. Therefore, society must consider funding the National Library of Medicine to create a public website where authors can post the detailed results of studies that they were unable to publish despite submitting to multiple journals. Finally, we must emphasize to colleagues and trainees that all studies contribute to scientific understanding. We have a moral obligation to our patients to make all research findings available to the broader scientific community.
Published Online: March 25, 2013. doi:10.1001/jamainternmed.2013.4074
Correspondence: Dr Nissen, Department of Cardiovascular Medicine, The Cleveland Clinic Foundation, 9500 Euclid Ave, Desk F25, Cleveland, OH 44195 (firstname.lastname@example.org).
Conflict of Interest Disclosures: None reported.
Nissen SE. Biomarkers in Cardiovascular Medicine: The Shame of Publication Bias Comment on “Bias in Associations of Emerging Biomarkers With Cardiovascular Disease”. JAMA Intern Med. 2013;173(8):671–672. doi:10.1001/jamainternmed.2013.4074