December 7, 2005

Contradictions in Highly Cited Clinical Research

JAMA. 2005;294(21):2695-2696. doi:10.1001/jama.294.21.2695-c

To the Editor: Dr Ioannidis1 notes that randomized clinical trials failed to be replicated in 9 of 39 cases, whereas nonrandomized (epidemiological) studies failed to be replicated in 5 of 6 cases. Pocock et al2 have also noted high false-positive rates in epidemiological studies. A possibly underappreciated explanation for failure of replication is the multiple comparisons problem.3 Researchers typically ask many questions of the same data set and formulate hypotheses in multiple ways (eg, covariate adjustment vs no adjustment, percentage change from baseline vs raw score, parametric vs nonparametric test), yielding a multiplicity of results from which to choose, and the most dramatic results will naturally be emphasized in the publication. Failure of replication is essentially the same problem as investing in a so-called hot stock based on its 50% return last year and expecting the same 50% this year. Multiplicity invites selection, which in turn leads to regression to the mean.
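The selection mechanism described above can be sketched with a minimal simulation. The parameters here are hypothetical and purely illustrative (not from the letter): a researcher runs 20 equivalent analyses of data with a true effect of zero, publishes the most dramatic estimate, and a replication study then measures the same effect once, without selection.

```python
import random

random.seed(42)

def simulate(n_questions=20, true_effect=0.0, noise_sd=1.0, n_studies=10000):
    """Average the 'published' (selected) estimate and a fresh, unselected
    replication estimate over many simulated studies.

    Assumes every question probes the same true effect with independent
    Gaussian noise -- an illustrative simplification.
    """
    selected, replicated = [], []
    for _ in range(n_studies):
        # Original study: many noisy estimates of the same true effect.
        estimates = [random.gauss(true_effect, noise_sd)
                     for _ in range(n_questions)]
        # The most dramatic result is the one that gets emphasized.
        selected.append(max(estimates))
        # Replication: a single fresh estimate, no selection involved.
        replicated.append(random.gauss(true_effect, noise_sd))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(selected), mean(replicated)

published, replication = simulate()
print(f"mean published effect:   {published:.2f}")   # inflated by selection
print(f"mean replication effect: {replication:.2f}")  # near the true effect (0)
```

Even though the true effect is zero everywhere, the selected "best of 20" estimate averages well above zero, while the replication regresses to the true mean, which is exactly the pattern a failed replication exhibits.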
