Editorial
February 21, 2017

Using Design Thinking to Differentiate Useful From Misleading Evidence in Observational Research

Author Affiliations
  • 1Meta-research Innovation Center at Stanford (METRICS), Division of Epidemiology, Department of Health Research and Policy, Division of Primary Care and Population Health, Department of Medicine, Stanford University School of Medicine, Stanford, California
  • 2Harvard Medical School, Division of Pharmacoepidemiology, Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
  • 3Stanford Prevention Research Center, Department of Medicine, Department of Statistics, and Division of Epidemiology, Stanford University School of Medicine, Stanford, California
JAMA. 2017;317(7):705-707. doi:10.1001/jama.2016.19970

Few issues are more important to physicians or patients than ensuring that treatment decisions are based on reliable information about benefits and harms. While randomized clinical trials (RCTs) are generally regarded as the most valid source of evidence about benefits and some harms, concerns about their generalizability, costs, and heterogeneity of treatment effects have led to the search for other sources of information to augment or possibly replace trials. This is embodied in the recently passed 21st Century Cures Act, which mandates that the US Food and Drug Administration develop rules for the use of “real world evidence” in drug approval, defined as “data…derived from sources other than randomized clinical trials.”1 A second push toward the use of nontrial evidence is based on the perception that the torrent of electronic health-related data—medical record, genomic, and lifestyle (ie, “Big Data”)—can be transformed into reliable evidence with the use of powerful modern analytic tools.
