Less Is More
December 2015

Producing Evidence to Reduce Low-Value Care

Author Affiliations
  • 1Department of Health Policy and Management, Rollins School of Public Health and Winship Cancer Center, Emory University, Atlanta, Georgia
  • 2Cancer Outcomes, Public Policy, and Effectiveness Research Center, Section of General Internal Medicine, Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut
JAMA Intern Med. 2015;175(12):1893-1894. doi:10.1001/jamainternmed.2015.5453

Efforts to reduce low-value care can decrease spending while helping patients avoid potentially harmful treatments. However, these efforts face enormous challenges. The current approach focuses on 2 strategies to reduce low-value care. First, professional societies are exhorting physicians to consider costs and value when deciding among alternative treatments. For example, the Choosing Wisely campaign has brought together more than 50 medical specialty societies to develop lists of low-value services.1 These initiatives appeal to a sense of professionalism by promoting the idea that physicians are stewards of societal resources. Second, payers are promoting health care system innovation and implementing payment reforms to reduce the incentives to provide ineffective treatments. Unfortunately, discussions about how to reduce low-value care often gloss over a crucial third strategy: a sustained effort to generate evidence that will distinguish between high- and low-value care.

High-quality evidence is critical for identifying and discouraging the use of low-value care. In the absence of this evidence, health care system leaders and administrators may have difficulty convincing front-line physicians to change practice patterns. Frequently, however, such evidence is lacking. Many of the interventions cited in the Choosing Wisely campaign were deemed low value precisely because they have not been tested in trials. For example, the American Academy of Neurology recommended against carotid endarterectomy for patients with asymptomatic carotid stenosis because no recent randomized clinical trials have compared carotid endarterectomy with medical management.2 Observational studies are useful in identifying potentially low-value services, but trials that ensure the comparability of the treatment and control groups through random assignment are the best approach to convincingly determine whether a service is low value.

Although it is not feasible to conduct a trial of every service mentioned in the Choosing Wisely recommendations, the lack of high-quality evidence will hamper efforts to reduce low-value care. Clinical training increasingly focuses on the use of evidence to guide patient management. Clinicians are more likely to respond to a recommendation to avoid a low-value treatment if the recommendation is based on sound evidence than if it is justified on the grounds that the treatment has never been proven to be superior to another treatment. The absence of evidence is not the same as evidence of ineffectiveness.

Why are there few randomized clinical trials that evaluate established medical practices? Trials are expensive, and there is currently no agency or funder that devotes substantial resources to studies that compare established medical practices with less costly alternatives. The health care industry is unlikely to sponsor a trial of a widely used therapy if there is only the downside risk that the trial will have a negative result and discourage use of the company’s product. In addition, patients and physicians may be reluctant to participate in a trial of an intervention that has already been disseminated into clinical practice when they have strong prior beliefs that the practice is effective. For example, patients with breast cancer were reluctant to enroll in the trials that rejected the hypothesis that high-dose chemotherapy followed by hematopoietic stem cell transplantation is superior to conventional therapy.3

There is a widespread perception that trials of established medical practices with negative results have little effect on what physicians do. This perception may discourage researchers from conducting and funders from supporting trials of established medical practices. However, claims about the ineffectiveness of evidence seem to have gained credibility more through frequent repetition than through a careful examination of the effect of evidence on practice. There are examples in which negative results from trials have failed to decrease the use of established treatments, such as percutaneous coronary intervention for persistently totally occluded infarct-related arteries after myocardial infarction4 and tight glycemic control in adult patients in the intensive care unit.5 There are also examples, however, in which trials with negative results led to immediate and meaningful changes in practice. Trials with sustained effects on practice include those reporting negative results for arthroscopic surgery in patients with osteoarthritis,6 epidermal growth factor receptor inhibitors in metastatic colorectal cancer,7 intermittent positive-pressure breathing therapy,8 and percutaneous coronary intervention for patients with stable angina.9 Although results from trials are not always immediately or completely incorporated into practice, they often have major effects, independent of changes to coverage and payment policies.

We propose that research funders and other interested parties, such as the National Institutes of Health, the Patient-Centered Outcomes Research Institute (PCORI), the Agency for Healthcare Research and Quality, large payers, and patient advocacy groups, undertake a major initiative to identify and fund randomized clinical trials of costly and established but untested treatments. Studies that test the equivalence (or noninferiority) of treatments with markedly different costs should be prioritized based on the potential effect of their findings on health care spending. When the costs of various treatments or approaches to practice are unknown, studies should include costs as an end point. Although other randomized clinical trials aim to identify treatments with superior outcomes, value-based research would answer the question, “Can we achieve equivalent outcomes at a lower cost?” In addition, implementation science can shed light on which trials have an effect on clinical practice and identify mechanisms to increase the effect of new evidence. Developing an infrastructure for conducting these types of trials will become even more important if the US Congress passes the 21st Century Cures Act,10 which would lower the barriers to US Food and Drug Administration approval of certain medical devices and drugs. If the act is enacted, even more treatments will likely enter practice with limited evidence of effectiveness, and information from postapproval studies will be even more important than it is today.

Reducing costs and improving value were major reasons that the US Congress established PCORI in 2010 and increased funding for comparative effectiveness research. Earlier this year, PCORI, the main funder of comparative effectiveness research, awarded several large grants to fund pragmatic clinical trials. These trials have the potential to reduce health care spending and improve value. For example, one trial of patients with back pain compares a comprehensive intervention (physical therapy and cognitive behavioral therapy) with usual care. Another trial compares usual care and personalized breast cancer screening strategies, which may entail longer intervals between mammograms for some women. Although PCORI’s decision to fund these trials is encouraging, we believe PCORI and other funders could do more to reduce low-value care. At least on paper, PCORI’s funding priorities give little weight to the effect of research on value and health care spending. Economic considerations are classified within the category of health system performance, which itself is 1 of 9 parallel priorities. The Patient Protection and Affordable Care Act (ACA) prohibits PCORI from establishing a threshold to determine whether a service is cost-effective but does not prohibit it from funding studies in which cost is an outcome. In addition, the ACA does not preclude the consideration of the financial effect on patients or society as an important input in the prioritization of studies. In fact, the ACA specifically directs PCORI to consider “the effect on national expenditures associated with a health care treatment” when identifying research priorities. Given the financial challenges facing Medicare and other federal insurance programs, the National Institutes of Health, the Agency for Healthcare Research and Quality, and particularly PCORI should devote more of their budgets (which in aggregate total >$30 billion annually) to funding value-based research.

A comprehensive initiative to fund trials comparing established medical treatments with less costly alternatives should complement ongoing efforts to reduce low-value care through physician stewardship and innovations in health care. There is a need for evidence that will guide decisions about clinical care. Instead of asking, “Does evidence affect practice?” we ought to be asking, “How can we produce more of it?”

Article Information

Corresponding Author: David H. Howard, PhD, Department of Health Policy and Management, Rollins School of Public Health and Winship Cancer Center, Emory University, 1518 Clifton Rd NE, Atlanta, GA 30322 (david.howard@emory.edu).

Published Online: October 12, 2015. doi:10.1001/jamainternmed.2015.5453.

Conflict of Interest Disclosures: Dr Howard reported receiving research funding from the Agency for Healthcare Research and Quality, to which he has applied for future funding; receiving research funding from Pfizer; and planning to apply for funding from the National Institutes of Health and the William T. Grant Foundation. Dr Gross reported receiving research funding from 21st Century Oncology and funding from Medtronic and Johnson & Johnson related to the sharing of clinical trial data. No other disclosures were reported.

References

1. ABIM Foundation. Choosing Wisely. http://www.choosingwisely.org. Accessed May 1, 2015.
2. Langer-Gould AM, Anderson WE, Armstrong MJ, et al. The American Academy of Neurology’s top five Choosing Wisely recommendations. Neurology. 2013;81(11):1004-1011.
3. Rettig RA, Jacobson PD, Farquhar CM, Aubry WM. False Hope: Bone Marrow Transplantation for Breast Cancer. New York, NY: Oxford University Press; 2007.
4. Deyell MW, Buller CE, Miller LH, et al. Impact of National Clinical Guideline recommendations for revascularization of persistently occluded infarct-related arteries on clinical practice in the United States. Arch Intern Med. 2011;171(18):1636-1643.
5. Niven DJ, Rubenfeld GD, Kramer AA, Stelfox HT. Effect of published scientific evidence on glycemic control in adult intensive care units. JAMA Intern Med. 2015;175(5):801-809.
6. Howard D, Brophy R, Howell S. Evidence of no benefit from knee surgery for osteoarthritis led to coverage changes and is linked to decline in procedures. Health Aff (Millwood). 2012;31(10):2242-2249.
7. Dotan E, Li T, Hall MJ, Meropol NJ, Beck JR, Wong YN. Oncologists’ response to new data regarding the use of epidermal growth factor receptor inhibitors in colorectal cancer. J Oncol Pract. 2014;10(5):308-314.
8. Duffy SQ, Farley DE. The protracted demise of medical technology. The case of intermittent positive pressure breathing. Med Care. 1992;30(8):718-736.
9. Howard DH, Shen YC. Trends in PCI volume after negative results from the COURAGE trial. Health Serv Res. 2014;49(1):153-170.
10. The Library of Congress. H.R. 6–21st Century Cures Act. http://thomas.loc.gov/cgi-bin/query/z?c114:H.R.6. Accessed June 22, 2015.