We don't really know how to get more value for the health care dollars we spend . . . the next step [is] doing what needs to be done to make a difference.
Carolyn Clancy1
Comparative effectiveness research is “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care.”2 The term has been used in different ways and with different methodologies, including: (1) randomized, controlled trials to compare the efficacy of alternative approaches; (2) large simple trials, clinical databases, and claims analyses to determine the effectiveness of different interventions in real-world settings; and (3) various approaches to incorporating costs and/or benefits in cost-effectiveness analyses.3 The focus has evolved from determining whether something works, to evaluating what works better, and finally to asking, “is it worth the expenditure for the value conferred?”4

The impetus to compare treatments arises not only from the scientific question of demonstrating relative performance but also from the seemingly inexorable rise of health care costs in the United States. In 2010, US national health expenditures are estimated to have been more than $2.6 trillion, approximately 17.6% of the country's gross domestic product, and they are expected to grow to $4.6 trillion, or 19.8% of GDP, by 2020.5 Thus, while the current approach to comparing treatments embodied in the Patient Protection and Affordable Care Act is framed in terms of clinical effectiveness and prohibits policy decisions based on treatment costs,6 cost-effectiveness data will likely become more prominent as comparative effectiveness research grows in importance, generating useful data to help patients, clinicians, and payors choose among a myriad of possible therapies.