Editor's Note
October 2014

Randomized Clinical Trials and Observational Studies Are More Often Alike Than Unlike

JAMA Intern Med. 2014;174(10):1557. doi:10.1001/jamainternmed.2014.3366

Reconciling the results of randomized clinical trials (RCTs) and observational studies remains a substantial challenge for clinical medicine. There are numerous examples of therapies that seemed effective, or that appeared to confer risk, when investigated by observational methods, findings that were later contradicted by evidence from RCTs, and vice versa. Several factors may account for why these 2 types of studies arrive at dissimilar findings, including selection bias, confounding, statistical power, and differential adherence and follow-up. Another critical difference is generalizability: RCTs tend to evaluate interventions under ideal conditions among highly selected populations, whereas observational studies examine effects in “real world” settings.

In this issue of JAMA Internal Medicine, Hue et al1 provide another example of this phenomenon. Observational studies had suggested that treatment with bisphosphonates reduced the risk of developing postmenopausal breast cancer. Examining women without a prior breast cancer diagnosis enrolled in 2 large RCTs of bisphosphonate treatment for fracture prevention, they found that neither alendronate nor zoledronic acid use was associated with decreased breast cancer risk. This difference might be explained by the failure of observational studies to account for “healthy user” bias, the same reason proposed for the discordant results of observational studies and RCTs examining estrogen therapy for the prevention of myocardial infarction.

Whereas these findings highlight why it is so important for new therapies to be evaluated using RCTs, they also reinforce the importance of assessing the methodological rigor of observational studies before interpreting real-world effects. Just as we closely scrutinize RCT design, so must we understand the quality and statistical power of the data used for observational studies, how participants were identified, the duration of follow-up, the end points examined, and the analytic strategy used. Observational studies are particularly valuable for clinical situations unlikely to be tested using RCTs, and many provide valid and reliable real-world evidence. In fact, a recent Cochrane review2 found little evidence that the results of observational studies and RCTs systematically disagreed. Thus, whereas we all can remember examples of when RCTs and observational studies differed, less memorable are the even more numerous examples in which results were consistent. In the end, we should be open to all types of evidence and rely on rigorous clinical science to guide practice.

1. Hue TF, Cummings SR, Cauley JA, et al. Effect of bisphosphonate use on risk of postmenopausal breast cancer: results from the randomized clinical trials of alendronate and zoledronic acid [published online August 11, 2014]. JAMA Intern Med. doi:10.1001/jamainternmed.2014.3634.
2. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014;4:MR000034.