Figure.  Differential Changes in Use of Low-Value Services in Accountable Care Organizations (ACOs) vs Control Group, by Baseline Use

Adjusted differential changes in the annual rate of low-value service use for beneficiaries attributed to Pioneer ACOs vs the control group from the precontract period (2009-2011) to the postcontract period (2012) are presented for ACOs that served areas with a 2008 mean adjusted rate of low-value service use per beneficiary in the control group that was higher vs lower than that of the median service area among ACOs, as well as ACOs with an adjusted rate of low-value service use per beneficiary in 2008 that was above vs below that of the control group within the ACO’s service area. The number of ACOs within each subgroup is indicated parenthetically. Squares indicate differential reductions; limit lines, 95% CI.

Table 1.  Summary of Low-Value Care Measures
Table 2.  Beneficiary Characteristics Before and After Start of Pioneer ACO Contracts
Table 3.  Differential Changes in Use of Low-Value Services in ACO vs Control Groups
Original Investigation
November 2015

Changes in Low-Value Services in Year 1 of the Medicare Pioneer Accountable Care Organization Program

Author Affiliations
  • 1Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts
  • 2Division of General Internal Medicine and Primary Care, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
  • 3Division of General Internal Medicine and Primary Care, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts
JAMA Intern Med. 2015;175(11):1815-1825. doi:10.1001/jamainternmed.2015.4525
Abstract

Importance  Wasteful practices are widespread in the US health care system. It is unclear if payment models intended to improve health care efficiency, such as the Medicare accountable care organization (ACO) programs, discourage the provision of low-value services.

Objective  To assess whether the first year of the Medicare Pioneer ACO program was associated with a reduction in use of low-value services.

Design, Setting, and Participants  In a difference-in-differences analysis, we compared use of low-value services between Medicare fee-for-service beneficiaries attributed to health care provider groups that entered the Pioneer program (ACO group) and beneficiaries attributed to other health care providers (control group) before (2009-2011) vs after (2012) Pioneer ACO contracts began. Data analysis was conducted from December 1, 2014, to June 27, 2015. Comparisons were adjusted for beneficiaries’ sociodemographic and clinical characteristics as well as for geography. We decomposed estimates according to service characteristics (clinical category, price, and sensitivity to patient preferences) and compared estimates between subgroups of ACOs with higher vs lower baseline use of low-value services.

Main Outcomes and Measures  Use of, and spending on, 31 services in instances that provide minimal clinical benefit, measured as annual service counts per 100 beneficiaries and price-standardized annual service spending per 100 beneficiaries.

Results  During the precontract period, trends in the use of low-value services were similar for the ACO and control groups. The first year of ACO contracts was associated with a differential reduction (95% CI) of 0.8 low-value services per 100 beneficiaries for the ACO group (−1.2 to −0.4; P < .001), corresponding to a 1.9% differential reduction in service quantity (−2.9% to −0.9%) and a 4.5% differential reduction in spending on low-value services (−7.5% to −1.4%; P = .004). Differential reductions were similar for services less sensitive vs more sensitive to patient preferences and for higher- vs lower-priced services. ACOs with baseline levels of low-value service use above their markets’ mean experienced greater service reductions (−1.2 services per 100 beneficiaries; −1.7 to −0.7; P < .001) than did ACOs with use below the mean (−0.2 services per 100 beneficiaries; −0.6 to 0.2; P = .41; P = .003 for test of difference between subgroups).

Conclusions and Relevance  During its first year, the Pioneer ACO program was associated with modest reductions in low-value services, with greater reductions for organizations providing more low-value care. Accountable care organization–like risk contracts may be able to discourage use of low-value services even without specifying services to target.

Introduction

Reducing unnecessary health care utilization, a source of substantial spending,1 is a central goal of many government2-4 and private5,6 initiatives. Recent efforts, such as the American Board of Internal Medicine’s Choosing Wisely campaign,7 have drawn attention to specific services that provide minimal clinical benefit.8,9 However, little is known about what strategies can effectively discourage the use of these services. Distinguishing high-value from low-value use of the same service is often challenging because value depends on clinical context. As a result, efforts to directly limit overuse of specific services through coverage restrictions or other payment incentives may produce unintended consequences or achieve minimal gains.8,10,11

Other strategies intended to improve health care efficiency do not target specific services. Among the most prominent of these strategies are alternative payment models, such as the model used in the Medicare Pioneer accountable care organization (ACO) program, which places spending for all services under a global budget with incentives to stay within the budget and improve performance on quality measures. This approach has been associated with lower overall spending and improved or stable performance on standard quality measures.12-16

However, it is unknown whether payment reforms such as these are associated with disproportionate reductions in the use of low-value services. Although ACO-like payment models are intended to discourage the provision of services that contribute to spending but not to health, the combination of lower overall spending and improved performance on quality measures that has been observed may have resulted from reductions in high-value services affecting unmeasured dimensions of quality rather than from reductions in low-value services. More generally, because risk-based contracts do not incentivize reductions in overuse directly, it is unclear whether providers under these contracts are targeting low-value services in their broader efforts to control overall spending. If ACO-like payment models succeed in reducing the use of low-value services, there should be observable reductions in the delivery of the low-value services that can be measured directly. Moreover, if health care providers respond to ACO contracts by targeting low-value services, their efforts should result in greater reductions in spending on low-value services than in overall spending.

We constructed 31 claims-based measures of low-value services (ie, services that provide minimal clinical benefit on average). Using these measures and 2009-2012 Medicare fee-for-service claims, we conducted a difference-in-differences analysis comparing the use of low-value services between beneficiaries served by Pioneer ACOs and beneficiaries served by non-ACO providers before vs after the start of Pioneer contracts in 2012.

Methods
Background on the Pioneer ACO Program

In 2012, a total of 32 health care provider organizations volunteered to participate in the Medicare Pioneer ACO program in which participating organizations receive a bonus payment or are penalized if overall spending for an attributed patient population falls sufficiently below or above a financial benchmark, respectively. Performance on 33 quality measures determines the proportion of savings or losses shared by the ACO, although ACOs were only required to report on these measures to be eligible for maximum savings in 2012.17 None of the quality measures in Medicare ACO contracts assesses overuse of medical services. Data analysis was conducted from December 1, 2014, to June 27, 2015. The Harvard University Committee on the Use of Human Subjects in Research and the National Bureau of Economic Research institutional review board approved the study and waived the requirement of informed consent.

Study Population
Data and Inclusion Criteria

We examined services provided from 2009 to 2012 using Medicare claims for a random 20% sample of beneficiaries; in a given year, this sample includes members from the previous year plus a 20% sample of newly eligible beneficiaries. For each year, we included beneficiaries in the study sample if they were continuously enrolled in Parts A and B of traditional Medicare while alive during that year and the entire prior year. We used the previous year of claims to collect diagnoses and procedures used for case-mix adjustment or for assessing the appropriateness of service use. In each study year, beneficiaries were excluded if they did not receive primary care services necessary for attribution to provider organizations or if the beneficiaries were attributed to any of the 114 organizations that entered the Medicare Shared Savings Program later in 2012. Medicare Shared Savings Program ACOs faced weaker incentives than Pioneer ACOs to reduce spending and were active for only part of 2012. Thus, if Medicare Shared Savings Program ACOs took early steps to limit low-value services, inclusion of their beneficiaries in the control group could have biased our estimates.

ACO Group and Control Group

Each of the 32 organizations that entered the Pioneer ACO program was defined as the collection of National Provider Identifiers for physicians listed by the ACO as participating in the ACO contract (eMethods in the Supplement). Our definition of ACOs as sets of National Provider Identifiers reflects the organizations’ ability to include only a subset of affiliated physicians in their ACO contracts. Following the Medicare Shared Savings Program attribution rules and previously described methods,16 for each year in the study period, each beneficiary was assigned to the ACO (ACO group) or non-ACO (control group) practice that accounted for the greatest fraction of that beneficiary’s annual allowed charges for primary care services (eMethods in the Supplement). Non-ACO practices were defined by taxpayer identification numbers, which identify the billing practice, provider organization, or individual physician.
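To make the attribution rule concrete, the following is a minimal sketch of the assignment step under simplifying assumptions; the claims layout, column names, and values are hypothetical and do not reflect the study’s actual data. Each beneficiary-year is assigned to the taxpayer identification number (TIN), ACO or non-ACO, that accounts for the largest share of allowed charges for primary care services.

```python
import pandas as pd

# Hypothetical primary care claim lines; column names and values are
# illustrative only, not the study's actual claims format.
claims = pd.DataFrame({
    "bene_id": ["A", "A", "A", "B", "B"],
    "year":    [2012, 2012, 2012, 2012, 2012],
    "tin":     ["TIN1", "TIN1", "TIN2", "TIN3", "TIN4"],  # billing practice
    "allowed": [120.0, 80.0, 60.0, 90.0, 50.0],           # allowed charges, primary care only
})

# Sum primary care allowed charges by beneficiary-year and billing TIN,
# then attribute each beneficiary-year to the TIN with the largest total.
totals = claims.groupby(["bene_id", "year", "tin"], as_index=False)["allowed"].sum()
attributed = totals.loc[totals.groupby(["bene_id", "year"])["allowed"].idxmax()]
print(attributed[["bene_id", "year", "tin"]])
# Beneficiary A is attributed to TIN1 ($200 of $260); beneficiary B to TIN3.
```

In the study, beneficiaries attributed to TINs participating in Pioneer contracts formed the ACO group, and beneficiaries attributed to all other practices formed the control group.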

Study Variables
Measures of Low-Value Services

We constructed 31 claims-based measures of services that are low value, which we defined as providing minimal or no average clinical benefit in specific clinical scenarios. The measures, 26 of which were drawn from a prior study,8 were derived from evidence-based lists of low-value services. As described previously,8 we surveyed the following sources for candidate low-value services: the American Board of Internal Medicine Foundation’s Choosing Wisely initiative,18 the US Preventive Services Task Force D recommendations,19 the Canadian Agency for Drugs and Technologies in Health technology assessments,20 and peer-reviewed medical literature.21 Services selected for measure development met 3 criteria: the service was relevant to the Medicare population, evidence that the service confers minimal clinical benefit had been established before the start of the study period, and claims and enrollment data were sufficient to distinguish high-value use from low-value use with reasonable accuracy.

For each measure, we created an operational definition of low-value service occurrence based on characteristics of patients and the service they received. Relevant patient variables included demographic characteristics and diagnoses present in concurrent or past claims. In addition to the type of service received, some measure definitions incorporated the timing of the service (eg, time since an inpatient discharge) and the site of care. We defined low-value services conservatively, opting for more specific definitions that reduced the likelihood of classifying a high-value service as low value.8 We detected service occurrences meeting these definitions on the basis of claims data elements, including Current Procedural Terminology (CPT) service codes,22 International Classification of Diseases, Ninth Revision (ICD-9)23 patient diagnosis codes, data from Medicare enrollment files, and condition indicators from the Chronic Condition Data Warehouse.24 Details regarding service identification, including codes used for service detection, are presented in the eMethods and eTable in the Supplement. To avoid duplicative counting of services, we did not count service instances occurring within 7 days of a prior instance of the same service.
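As a rough illustration of how such an operational definition can be applied to claims, the sketch below flags claim lines that match a service code for qualifying beneficiaries and then suppresses repeat occurrences within 7 days. It is a simplified example only: the CPT code, age threshold, and data layout are illustrative and omit qualifiers (eg, sex and prior diagnoses) that the actual measure definitions in the eTable specify.

```python
import pandas as pd

# Hypothetical claim lines; the code, ages, and dates are illustrative and
# deliberately simplified relative to the measure definitions in the eTable.
lines = pd.DataFrame({
    "bene_id": ["A", "A", "A", "B"],
    "age":     [78, 78, 78, 66],
    "cpt":     ["84153", "84153", "84153", "84153"],  # example service code (PSA test)
    "date":    pd.to_datetime(["2012-03-01", "2012-03-04", "2012-06-01", "2012-05-10"]),
})

# Example rule in the spirit of the measures: count the test as low value
# only for beneficiaries who meet the qualifying condition (here, age >= 75).
flagged = lines[(lines["cpt"] == "84153") & (lines["age"] >= 75)].sort_values(["bene_id", "date"])

# Suppress instances occurring within 7 days of a previously counted instance
# of the same service for the same beneficiary, to avoid duplicative counting.
kept, last_counted = [], {}
for row in flagged.itertuples():
    prev = last_counted.get((row.bene_id, row.cpt))
    if prev is None or (row.date - prev).days > 7:
        kept.append(row.Index)
        last_counted[(row.bene_id, row.cpt)] = row.date
low_value_events = flagged.loc[kept]
print(len(low_value_events))  # 2: the March 4 repeat falls within 7 days and is not counted
```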

The primary outcome of this study was use of low-value services, defined as the annual count of all measured services. We chose this measure as our primary outcome because overall service counts equally weight clinical decisions to provide different services, whereas spending on low-value services is influenced heavily by use of more expensive services. We examined price-standardized spending on low-value services as a secondary outcome to compare changes in low-value spending with changes in overall spending associated with the Pioneer ACO program that were estimated previously using similar methods (see eMethods in the Supplement for price standardization methods).16

To assess whether any changes in low-value service use associated with Pioneer ACO contracts were concentrated in a specific clinical area or evident in multiple areas, we categorized the 31 low-value services into the following clinical categories: cancer screening, diagnostic and preventive testing, preoperative testing, imaging, cardiovascular testing and procedures, and other invasive procedures. We also categorized services as being priced higher (standardized price, $180-$13 331) or lower ($5-$117) than the median service price because ACOs would be unlikely to reduce higher-priced services in the absence of new payment incentives, whereas ACOs might restrict provision of lower-priced wasteful services even under fee-for-service incentives to improve quality without major reductions in revenue. Thus, reductions in the use of higher-priced, low-value services would provide stronger evidence of changes related specifically to ACO contract incentives.

Finally, to explore the possibility that patient preferences moderated providers’ responses to ACO contracts, we categorized services as less vs more sensitive to patient preferences (Table 1). For example, we considered testing for hypercoagulability following deep venous thrombosis as less sensitive to patient preferences because most patients would be unaware that such testing could be done. Table 1 presents each measure’s source and supporting literature, operational definition, and assigned categories of price and preference sensitivity.

Covariates

For each beneficiary, the following demographic and clinical covariates were assessed from Medicare claims and enrollment files: age (<65, 65-69, 70-74, 75-79, 80-84, and ≥85 years), sex, race/ethnicity, disability as the original reason for Medicare entitlement, presence of end-stage renal disease, presence of 27 chronic conditions in the Chronic Condition Data Warehouse by the start of each study year (including indicators for each condition and indicators for having ≥2 to ≥9 conditions), and the patient’s hierarchical condition category risk score. Because most low-value service measures do not apply to all beneficiaries (eg, low-value prostate-specific antigen tests were considered those for men aged ≥75 years), we also created indicators for whether beneficiaries qualified for potential receipt of each low-value service (see eMethods in the Supplement for definitions of these qualifying indicators).
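A minimal sketch of how indicator-style covariates of this kind can be derived from beneficiary-level data is shown below; the variable names and values are hypothetical, and in the study the condition flags and risk scores come from Medicare enrollment files and the Chronic Condition Data Warehouse.

```python
import pandas as pd

# Hypothetical beneficiary-year records; names and values are illustrative.
df = pd.DataFrame({
    "age": [63, 72, 88],
    "n_chronic_conditions": [1, 4, 10],  # count among the 27 tracked conditions
})

# Age-group indicators matching the study's categories.
bins = [0, 65, 70, 75, 80, 85, 200]
labels = ["<65", "65-69", "70-74", "75-79", "80-84", ">=85"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels, right=False)
df = pd.get_dummies(df, columns=["age_group"])

# Indicators for having at least k chronic conditions, k = 2 through 9.
for k in range(2, 10):
    df[f"ge{k}_conditions"] = (df["n_chronic_conditions"] >= k).astype(int)
print(df)
```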

ACO Baseline Levels of Low-Value Services

Because organizations providing more low-value services may have more opportunities to limit wasteful care, we assessed baseline use of low-value services for each ACO and tested whether changes in low-value service use associated with ACO contracts differed between ACOs with higher vs lower baseline use. We decomposed an ACO’s baseline level of low-value service use into 2 components. First, we assessed whether the ACO had a greater or lesser risk-adjusted count of low-value services per beneficiary than the control group in the ACO’s service area (eMethods in the Supplement). Second, we assessed whether the risk-adjusted rate of low-value service use among the control group in each ACO’s service area was greater or less than that of the median among ACO service areas.

This decomposition allowed us to examine whether an organization’s prior performance relative to its service area or service area performance relative to a national mean predicted changes under ACO contracts. This distinction bears on whether ACO contracts might be associated with convergence in provider practices within regions or across regions. Baseline levels of low-value care were assessed in 2008 to avoid bias from regression to the mean between the precontract period (2009-2011) and 2012; we found no evidence of regression to the mean in the precontract period (eMethods in the Supplement).
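The decomposition can be summarized as two binary classifications per ACO, sketched below with hypothetical 2008 risk-adjusted rates; the actual risk adjustment and service-area definitions are described in the eMethods.

```python
import pandas as pd

# Hypothetical 2008 risk-adjusted low-value service rates per beneficiary.
aco_rate = pd.Series({"ACO_1": 0.48, "ACO_2": 0.35, "ACO_3": 0.41})
local_control_rate = pd.Series({"ACO_1": 0.42, "ACO_2": 0.40, "ACO_3": 0.41})

baseline = pd.DataFrame({
    # Component 1: ACO above or below the control group in its own service area.
    "aco_above_local_control": aco_rate > local_control_rate,
    # Component 2: ACO's service area above or below the median ACO service area.
    "area_above_median_area": local_control_rate > local_control_rate.median(),
})
print(baseline)
```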

Statistical Analysis

We conducted a difference-in-differences analysis to quantify changes in the annual per-beneficiary rate of low-value services in the ACO group that differed from concurrent changes in the control group from the precontract period (2009-2011) to the postcontract period (2012), while adjusting for geography and any coincident changes in the groups’ measured patient characteristics. Specifically, we fit the following linear regression model64:

E(Y_i,t,k,h) = β0 + β1ACO_indicators_k + β2(HRR_indicators_h × year_t) + β3ACO_contract_k,t + β4covariates_i,t,

with E(Y_i,t,k,h) denoting the expected value of outcome Y (ie, count of low-value services) for beneficiary i during year t assigned to ACO or non-ACO taxpayer identification number k living in a hospital referral region (HRR) h. ACO_indicators is a vector of indicators specifying each organization in the ACO group, omitting the control group as the reference group; HRR_indicators × year is a vector of indicators for each HRR in each year of the sample with a reference HRR-year combination omitted; ACO_contract is an indicator specifying attribution to a Pioneer ACO in 2012; and covariates include the patient sociodemographic and clinical covariates described above. The ACO indicators adjust for an organization’s mean level of low-value services in the precontract period and for changes in the distribution of ACO-assigned beneficiaries across ACOs between the precontract and postcontract periods. The HRR indicators mean that estimates are based on comparisons of each beneficiary in the ACO group with control group beneficiaries in the same geographic area, and the interaction of HRR and year indicators adjusts for region-specific trends in the use of low-value services in the control group.

Thus, the quantity of interest (β3) is the mean differential change in low-value services for ACO-attributed beneficiaries relative to local changes in low-value service use in the control group. To compare ACOs with higher vs lower baseline levels of low-value service use, we added to the model 2 interactions between the β3 term and each of the 2 measures of ACOs’ baseline low-value service use.
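A minimal sketch of this specification with simulated data is shown below. It is illustrative only: the variable names, simulated outcome, and single combined cluster variable are assumptions, whereas the study’s model included the full sets of ACO and HRR-by-year indicators and beneficiary covariates, with clustering at the ACO level for the ACO group and the HRR level for the control group.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated beneficiary-years; purely illustrative, not the study's data.
df = pd.DataFrame({
    "aco_id": rng.choice(["none", "ACO_1", "ACO_2"], size=n, p=[0.9, 0.05, 0.05]),
    "hrr": rng.choice([f"HRR_{j}" for j in range(10)], size=n),
    "year": rng.choice([2009, 2010, 2011, 2012], size=n),
    "risk_score": rng.gamma(2.0, 0.5, size=n),
})
df["aco"] = (df["aco_id"] != "none").astype(int)
df["aco_contract"] = df["aco"] * (df["year"] == 2012).astype(int)
df["low_value_count"] = rng.poisson(0.4 + 0.1 * df["risk_score"] - 0.05 * df["aco_contract"])

# Linear model: ACO indicators, HRR-by-year indicators, the ACO-contract term
# (the difference-in-differences coefficient of interest), and covariates.
formula = "low_value_count ~ C(aco_id) + C(hrr):C(year) + aco_contract + risk_score"

# Cluster-robust variance; one combined cluster variable approximates
# clustering at the ACO level (ACO group) and the HRR level (control group).
df["cluster"] = np.where(df["aco"] == 1, df["aco_id"], df["hrr"])
fit = smf.ols(formula, data=df).fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})
print(fit.params["aco_contract"], fit.bse["aco_contract"])
```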

A key assumption of this difference-in-differences analysis is that the difference in adjusted rates of low-value service use between the ACO group and the control group in the precontract period would have remained constant in the postcontract period in the absence of the Pioneer program.65 We tested this assumption by comparing trends in low-value service use between the ACO group and control group during the 2009-2011 precontract period (eMethods in the Supplement).

We conducted several sensitivity analyses to test for potential sources of bias. First, we adjusted for any differences in trends in low-value service use between the ACO and control groups in the precontract period (eMethods and eFigure in the Supplement). Second, we excluded indicators of service qualification as covariates in case ACO contracts were associated with changes in the likelihood of patients satisfying qualifying conditions. Third, we tested for differential changes in sociodemographic and clinical characteristics from the precontract to postcontract periods between the ACO and control groups. If the composition of the ACO and control groups did not change differentially in these observed dimensions, it is less likely that there were differential changes in other unobserved dimensions. All analyses used robust variance estimators clustered at the level of ACOs (for the ACO group) or HRRs (for the control group).66,67

Results

The study sample included 693 218 person-years in the ACO group and 17 453 423 in the control group. In analyses adjusted for geographic area, beneficiary characteristics during the 2009-2011 precontract period were similar in the ACO and control groups, and differential changes in the ACO group were minimal (Table 2).

During the precontract period, the adjusted annual rate of low-value service use in the ACO group was 1.8 services per 100 beneficiaries lower (P = .02) than that of the control group (Table 3), but precontract trends were similar (0.1 services per 100 beneficiaries per year greater for the ACO group; P = .74). Total spending on low-value services in the precontract period was similar for the ACO and control groups ($256 per 100 beneficiaries higher in the control group; P = .13), and trends were also similar ($20 per 100 beneficiaries per year greater for the control group; P = .88). The following results are reported as differential reductions (95% CIs). In year 1 of Pioneer contracts, there was a differential reduction in the use of low-value services for the ACO group (−0.8 services per 100 beneficiaries; −1.2 to −0.4; P < .001), or a reduction of 1.9% (−2.9% to −0.9%) relative to the expected 2012 mean for the ACO group of 41.0 services per 100 beneficiaries. This differential reduction in use corresponded to a 4.5% differential reduction in spending on low-value services (−7.5% to −1.4%; P = .004).

All clinical categories of low-value services except for preoperative services contributed to the overall differential reduction in the ACO group (Table 3). The differential reductions were statistically significant for 3 clinical categories (cancer screening, imaging, and cardiovascular testing and procedures). The greatest absolute reductions in service use occurred for the most frequently delivered services: cancer screening and imaging (Tables 1 and 3). Cardiovascular testing and procedures underwent the greatest differential reduction in relative terms (−6.3% for the ACO group; P = .05). In relative terms, differential reductions in low-value service use (differential relative reduction; 95% CI) were similar in magnitude for higher-priced services (−1.4%; −3.3% to 0.4%) and lower-priced services (−2.1%; −3.5% to −0.7%), as well as for services that were more and less sensitive to patient preferences (−1.7%; −3.2% to −0.3% vs −2.2%; −3.7% to −0.7%).

As shown in the Figure, ACOs with higher baseline levels of low-value service use than their service area exhibited a differential reduction of 1.2 services per 100 beneficiaries (95% CI, −1.7 to −0.7; P < .001 for test of estimate vs zero), and ACOs with lower baseline rates experienced a statistically insignificant differential reduction of 0.2 services per 100 beneficiaries (95% CI, −0.6 to 0.2; P = .41 for test of estimate vs zero; P = .003 for test of difference in differential reductions between ACO subgroups). Differential reductions in low-value service use were similar for ACOs serving areas with higher or lower baseline levels of low-value service use in the control group (P = .41).

Estimates were not substantially affected by adjusting for small differences in trends in low-value service use during the precontract period or by omitting service qualification indicators from regression models (eMethods and eFigure in the Supplement).

Discussion

Although many studies have examined the effects of various provider payment reforms on health care spending and patient outcomes, the use of low-value services has not been a focus of this literature.68 Use of such services has been shown to fall somewhat following the publication of clinical trials demonstrating their lack of effectiveness,69,70 but whether payment reforms further discourage use of these services has not been assessed.

We found that the first year of the Medicare Pioneer ACO program was associated with a modest reduction in use of low-value services that could be measured directly with claims data. These results are consistent with the hypothesis that alternative payment models with global budgets can discourage overuse even while preserving broad provider discretion in determining which services are of low value. Notably, the first year of the Pioneer program was associated with a 4.5% differential reduction in spending on low-value services, substantially larger than the 1.2% reduction in overall spending previously estimated with the same methods.16 This finding suggests that Pioneer ACOs targeted low-value services in their efforts to reduce spending despite a lack of financial incentives or quality reporting requirements specifically concerning overused services.

Utilization changes occurred broadly across multiple clinical categories. Relative reductions were similar for higher-priced and lower-priced services, suggesting that overall reductions in low-value service use were not simply driven by restrictions on service use that could have occurred without causing significant losses in reimbursement under fee-for-service payment.71 Differential reductions in low-value service use were also similar for services that were more or less sensitive to patient preferences. This finding is consistent with providers in ACOs recommending fewer low-value services and with research68,72,73 demonstrating that patient preferences may not be major obstacles to reducing low-value service use.

Reductions in low-value service use were concentrated among ACOs with higher baseline levels of use of these services relative to their service areas, whereas baseline performance of ACO geographic service areas did not predict reductions in low-value service use. First, these findings suggest that ACO initiatives may produce greater reductions in overuse if they encourage participation of provider organizations with more wasteful practices at baseline than other providers in their area. Second, these findings highlight the importance of practice variation within regional markets rather than across markets in predicting organizations’ prospects for improving efficiency.74 In service areas where overuse is especially common, providers may face difficulties in markedly reducing low-value service use below local norms.

Several limitations of this study warrant discussion. First, organizations opting to participate in the voluntary Pioneer program may have been uniquely well positioned to identify and reduce wasteful practices. Consequently, similar results may not be achieved if the Pioneer program or similar programs are expanded to include a different set of provider organizations. Second, although our difference-in-differences study design controls for fixed differences between the ACO group and control group, and even though we detected no significant difference in temporal trends in low-value service use between these groups, it is nevertheless possible that an independent contemporaneous factor affecting ACOs produced a differential change in 2012. It is also possible that organizations entering the Pioneer program may have differentially reduced low-value service use even in the absence of the program. However, we found no evidence that these organizations were experiencing faster reductions in low-value service use before the ACO contracts. In addition, reductions in use of higher-priced low-value services would cause substantial losses in fee-for-service revenue in the absence of ACO contracts, and we found that reductions were unrelated to service price.

Third, given the limited number of organizations participating in the ACO program, we could not assess the many organizational characteristics that might modify reductions in the use of low-value services. Fourth, we only examined the first year of the Pioneer ACO program, which was the only year for which claims data were available. Although prior studies13 have shown increasing effects of commercial ACO contracts over time, the same pattern may not hold in Medicare. Finally, our results do not constitute conclusive evidence of value improvement among Pioneer ACOs. It is possible that important high-value services also experienced reductions in 2012.

Conclusions

The Pioneer ACO program was associated with a modest reduction in low-value services, with greater reductions within organizations providing more low-value care. Despite the limitations of the study, our findings, taken together with those of studies demonstrating spending reductions greater than Medicare bonus payments16 and improved or stable performance on measures of patient experiences and quality,12 are consistent with the conclusion that the overall value of health care provided by Pioneer ACOs improved after their participation in an alternative payment model. Finally, our study demonstrates the utility of novel measures of low-value service use for evaluating the effects of health care policy initiatives.

Article Information

Accepted for Publication: June 22, 2015.

Corresponding Author: J. Michael McWilliams, MD, PhD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115 (mcwilliams@hcp.med.harvard.edu).

Published Online: September 21, 2015. doi:10.1001/jamainternmed.2015.4525.

Author Contributions: Drs Schwartz and McWilliams had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: All authors.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Schwartz.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Schwartz, McWilliams.

Obtained funding: Schwartz, McWilliams.

Administrative, technical, or material support: Chernew.

Study supervision: Chernew, Landon, McWilliams.

Conflict of Interest Disclosures: Drs Schwartz and McWilliams report consulting for the Medicare Payment Advisory Commission on the use of measures of low-value care. Dr Chernew reports that he is a partner in VBID Health, LLC, which has a contract with Milliman to develop and market a tool to help insurers and employers quantify spending on low-value services. No other disclosures are reported.

Funding/Support: This study was supported by grants from the National Institute on Aging (P01 AG032952-06A1 and F30 AG044106-01A1) and Laura and John Arnold Foundation. We also acknowledge funding from the National Institute of Mental Health (grant U01 MH103018) for work involving the development and operationalizing of measures of low-value care.

Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: Adam Elshaug, PhD (Menzies Centre for Health Policy, University of Sydney), contributed to the selection of low-value services for measurement; Pasha Hamed, MA, provided statistical programming consultation; and Jesse B. Dalton, MA (Department of Health Care Policy, Harvard Medical School), assisted with research. There was no financial compensation.

References
1.
Berwick  DM, Hackbarth  AD.  Eliminating waste in US health care.  JAMA. 2012;307(14):1513-1516.PubMedGoogle ScholarCrossref
2.
Burwell  SM.  Setting value-based payment goals—HHS efforts to improve U.S. health care.  N Engl J Med. 2015;372(10):897-899.PubMedGoogle ScholarCrossref
3.
Coulam  RF, Gaumer  GL.  Medicare’s prospective payment system: a critical appraisal.  Health Care Financ Rev Annu Suppl. 1991:45-77.PubMedGoogle Scholar
4.
McGuire  TG, Newhouse  JP, Sinaiko  AD.  An economic history of Medicare part C.  Milbank Q. 2011;89(2):289-332.PubMedGoogle ScholarCrossref
5.
Song  Z, Chokshi  DA.  The role of private payers in payment reform.  JAMA. 2015;313(1):25-26.PubMedGoogle ScholarCrossref
6.
Choudhry  NK, Rosenthal  MB, Milstein  A.  Assessing the evidence for value-based insurance design.  Health Aff (Millwood). 2010;29(11):1988-1994.PubMedGoogle ScholarCrossref
7.
Cassel  CK, Guest  JA.  Choosing Wisely: helping physicians and patients make smart decisions about their care.  JAMA. 2012;307(17):1801-1802.PubMedGoogle ScholarCrossref
8.
Schwartz  AL, Landon  BE, Elshaug  AG, Chernew  ME, McWilliams  JM.  Measuring low-value care in Medicare.  JAMA Intern Med. 2014;174(7):1067-1076.PubMedGoogle ScholarCrossref
9.
Colla  CH, Morden  NE, Sequist  TD, Schpero  WL, Rosenthal  MB.  Choosing Wisely: prevalence and correlates of low-value health care services in the United States.  J Gen Intern Med. 2015;30(2):221-228.PubMedGoogle ScholarCrossref
10.
Elshaug  AG, McWilliams  JM, Landon  BE.  The value of low-value lists.  JAMA. 2013;309(8):775-776.PubMedGoogle ScholarCrossref
11.
Colla  CH.  Swimming against the current−what might work to reduce low-value care?  N Engl J Med. 2014;371(14):1280-1283.PubMedGoogle ScholarCrossref
12.
McWilliams  JM, Landon  BE, Chernew  ME, Zaslavsky  AM.  Changes in patients’ experiences in Medicare accountable care organizations.  N Engl J Med. 2014;371(18):1715-1724.PubMedGoogle ScholarCrossref
13.
Song  Z, Rose  S, Safran  DG, Landon  BE, Day  MP, Chernew  ME.  Changes in health care spending and quality 4 years into global payment.  N Engl J Med. 2014;371(18):1704-1714.PubMedGoogle ScholarCrossref
14.
Song  Z, Safran  DG, Landon  BE,  et al.  Health care spending and quality in year 1 of the alternative quality contract.  N Engl J Med. 2011;365(10):909-918.PubMedGoogle ScholarCrossref
15.
McWilliams  JM, Landon  BE, Chernew  ME.  Changes in health care spending and quality for Medicare beneficiaries associated with a commercial ACO contract.  JAMA. 2013;310(8):829-836.PubMedGoogle ScholarCrossref
16.
McWilliams  JM, Chernew  ME, Landon  BE, Schwartz  AL.  Performance differences in year 1 of Pioneer accountable care organizations.  N Engl J Med. 2015;372(20):1927-1936.PubMedGoogle ScholarCrossref
17.
Centers for Medicare & Medicaid Services (CMS), HHS.  Medicare program; Medicare Shared Savings Program: accountable care organizations. Final rule.  Fed Regist. 2011;76(212):67802-67990. PubMedGoogle Scholar
18.
Choosing Wisely. Lists of five things physicians and patients should question. http://www.choosingwisely.org/doctor-patient-lists/. Accessed February 16, 2015.
19.
US Preventive Services Task Force. Published recommendations. http://uspreventiveservicestaskforce.org/BrowseRec/Index/browse-recommendations. Accessed February 16, 2015.
20.
Canadian Agency for Drugs and Technologies in Health. Health technology assessments. https://www.cadth.ca/resources/hta-database-canadian-search-interface. Accessed February 16, 2015.
21.
Elshaug  AG, Watt  AM, Mundy  L, Willis  CD.  Over 150 potentially low-value health care practices: an Australian study.  Med J Aust. 2012;197(10):556-560.PubMedGoogle ScholarCrossref
22.
American Medical Association. CodeManager® Online. https://ocm.ama-assn.org/OCM/mainMenu.do. Accessed November 1, 2014.
23.
Centers for Medicare and Medicaid Services. ICD-9-CM Diagnosis and Procedure Codes: abbreviated and full code titles. https://www.cms.gov/Medicare/Coding/ICD9ProviderDiagnosticCodes/codes.html. Accessed November 1, 2014.
24.
Chronic Conditions Data Warehouse. http://www.ccwdata.org/. Accessed February 16, 2015.
25.
Holley  JL.  Screening, diagnosis, and treatment of cancer in long-term dialysis patients.  Clin J Am Soc Nephrol. 2007;2(3):604-610.PubMedGoogle ScholarCrossref
26.
Vesco  K, Whitlock  E, Eder  M,  et al. Screening for Cervical Cancer: A Systematic Evidence Review for the US Preventive Services Task Force. Rockville, MD: Agency for Healthcare Research and Quality; 2011.
27.
Whitlock  EP, Lin  JS, Liles  E, Beil  TL, Fu  R.  Screening for colorectal cancer: a targeted, updated systematic review for the US Preventive Services Task Force.  Ann Intern Med. 2008;149(9):638-658.
28.
Lin  K, Lipsitz  R, Miller  T, Janakiraman  S; U.S. Preventive Services Task Force.  Benefits and harms of prostate-specific antigen screening for prostate cancer: an evidence update for the U.S. Preventive Services Task Force.  Ann Intern Med. 2008;149(3):192-199.
29.
Bell  KJL, Hayen  A, Macaskill  P,  et al.  Value of routine monitoring of bone mineral density after starting bisphosphonate treatment: secondary analysis of trial data.  BMJ. 2009;338:b2266.
30.
Hillier  TA, Stone  KL, Bauer  DC,  et al.  Evaluating the value of repeat bone mineral density measurement and prediction of fractures in older women: the Study of Osteoporotic Fractures.  Arch Intern Med. 2007;167(2):155-160.
31.
Martí-Carvajal  AJ, Solà  I, Lathyris  D, Karakitsiou  D-E, Simancas-Racines  D.  Homocysteine-lowering interventions for preventing cardiovascular events.  Cochrane Database Syst Rev. 2013;1:CD006612. doi:10.1002/14651858.CD006612.pub3.
32.
Baglin  T, Gray  E, Greaves  M,  et al; British Committee for Standards in Haematology.  Clinical guidelines for testing for heritable thrombophilia.  Br J Haematol. 2010;149(2):209-220.
33.
Levin  A, Bakris  GL, Molitch  M,  et al.  Prevalence of abnormal serum vitamin D, PTH, calcium, and phosphorus in patients with chronic kidney disease: results of the study to evaluate early kidney disease.  Kidney Int. 2007;71(1):31-38.
34.
Palmer  SC, McGregor  DO, Craig  JC, Elder  G, Macaskill  P, Strippoli  GF.  Vitamin D compounds for people with chronic kidney disease not requiring dialysis.  Cochrane Database Syst Rev. 2009;(4):CD008175. doi:10.1002/14651858.CD008175.
35.
Garber  JR, Cobin  RH, Gharib  H,  et al; American Association of Clinical Endocrinologists and American Thyroid Association Taskforce on Hypothyroidism in Adults.  Clinical practice guidelines for hypothyroidism in adults: cosponsored by the American Association of Clinical Endocrinologists and the American Thyroid Association.  Endocr Pract. 2012;18(6):988-1028.
36.
Holick  MF.  Vitamin D deficiency.  N Engl J Med. 2007;357(3):266-281.
37.
Mohammed  T, Kirsch  J, Amorosa  J,  et al.  ACR Appropriateness Criteria Routine Admission and Preoperative Chest Radiography. Reston, VA: American College of Radiology; 2011.
38.
Joo  HS, Wong  J, Naik  VN, Savoldelli  GL.  The value of screening preoperative chest x-rays: a systematic review.  Can J Anaesth. 2005;52(6):568-574.
39.
Douglas  PS, Garcia  MJ, Haines  DE,  et al; American College of Cardiology Foundation Appropriate Use Criteria Task Force; American Society of Echocardiography; American Heart Association; American Society of Nuclear Cardiology; Heart Failure Society of America; Heart Rhythm Society; Society for Cardiovascular Angiography and Interventions; Society of Critical Care Medicine; Society of Cardiovascular Computed Tomography; Society for Cardiovascular Magnetic Resonance.  ACCF/ASE/AHA/ASNC/HFSA/HRS/SCAI/SCCM/SCCT/SCMR 2011 appropriate use criteria for echocardiography. A report of the American College of Cardiology Foundation Appropriate Use Criteria Task Force, American Society of Echocardiography, American Heart Association, American Society of Nuclear Cardiology, Heart Failure Society of America, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Critical Care Medicine, Society of Cardiovascular Computed Tomography, and Society for Cardiovascular Magnetic Resonance endorsed by the American College of Chest Physicians.  J Am Coll Cardiol. 2011;57(9):1126-1166.
40.
Qaseem  A, Snow  V, Fitterman  N,  et al; Clinical Efficacy Assessment Subcommittee of the American College of Physicians.  Risk assessment for and strategies to reduce perioperative pulmonary complications for patients undergoing noncardiothoracic surgery: a guideline from the American College of Physicians.  Ann Intern Med. 2006;144(8):575-580.
41.
Fleisher  LA, Beckman  JA, Brown  KA,  et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Revise the 2002 Guidelines on Perioperative Cardiovascular Evaluation for Noncardiac Surgery); American Society of Echocardiography; American Society of Nuclear Cardiology; Heart Rhythm Society; Society of Cardiovascular Anesthesiologists; Society for Cardiovascular Angiography and Interventions; Society for Vascular Medicine and Biology; Society for Vascular Surgery.  ACC/AHA 2007 guidelines on perioperative cardiovascular evaluation and care for noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Revise the 2002 Guidelines on Perioperative Cardiovascular Evaluation for Noncardiac Surgery): developed in collaboration with the American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society of Cardiovascular Anesthesiologists, Society for Cardiovascular Angiography and Interventions, Society for Vascular Medicine and Biology, and Society for Vascular Surgery.  Circulation. 2007;116(17):e418-e499. doi:10.1161/CIRCULATIONAHA.107.185699.
42.
Cornelius  RS, Martin  J, Wippold  FJ  II,  et al; American College of Radiology.  ACR appropriateness criteria sinonasal disease.  J Am Coll Radiol. 2013;10(4):241-246.
43.
Moya  A, Sutton  R, Ammirati  F,  et al; Task Force for the Diagnosis and Management of Syncope; European Society of Cardiology (ESC); European Heart Rhythm Association (EHRA); Heart Failure Association (HFA); Heart Rhythm Society (HRS).  Guidelines for the diagnosis and management of syncope (version 2009).  Eur Heart J. 2009;30(21):2631-2671.
44.
Jordan  J, Wippold  FI, Cornelius  R,  et al.  ACR Appropriateness Criteria: Headache. Reston, VA: American College of Radiology; 2009.
45.
Gronseth  GS, Greenberg  MK.  The utility of the electroencephalogram in the evaluation of patients presenting with headache: a review of the literature.  Neurology. 1995;45(7):1263-1267.
46.
Chou  R, Fu  R, Carrino  JA, Deyo  RA.  Imaging strategies for low-back pain: systematic review and meta-analysis.  Lancet. 2009;373(9662):463-472.
47.
Wolff  T, Guirguis-Blake  J, Miller  T, Gillespie  M, Harris  R.  Screening for carotid artery stenosis: an update of the evidence for the US Preventive Services Task Force.  Ann Intern Med. 2007;147(12):860-870.
48.
Buchbinder  R.  Plantar fasciitis.  N Engl J Med. 2004;350(21):2159-2166.
49.
Hendel  RC, Berman  DS, Di Carli  MF,  et al; American College of Cardiology Foundation Appropriate Use Criteria Task Force; American Society of Nuclear Cardiology; American College of Radiology; American Heart Association; American Society of Echocardiography; Society of Cardiovascular Computed Tomography; Society for Cardiovascular Magnetic Resonance; Society of Nuclear Medicine.  ACCF/ASNC/ACR/AHA/ASE/SCCT/SCMR/SNM 2009 appropriate use criteria for cardiac radionuclide imaging: a report of the American College of Cardiology Foundation Appropriate Use Criteria Task Force, the American Society of Nuclear Cardiology, the American College of Radiology, the American Heart Association, the American Society of Echocardiography, the Society of Cardiovascular Computed Tomography, the Society for Cardiovascular Magnetic Resonance, and the Society of Nuclear Medicine.  Circulation. 2009;119(22):e561-e587. doi:10.1161/CIRCULATIONAHA.109.192519.
50.
Boden  WE, O’Rourke  RA, Teo  KK,  et al; COURAGE Trial Research Group.  Optimal medical therapy with or without PCI for stable coronary disease.  N Engl J Med. 2007;356(15):1503-1516.
51.
Lin  GA, Dudley  RA, Redberg  RF.  Cardiologists’ use of percutaneous coronary interventions for stable coronary artery disease.  Arch Intern Med. 2007;167(15):1604-1609.
52.
Wheatley  K, Ives  N, Gray  R,  et al; ASTRAL Investigators.  Revascularization versus medical therapy for renal-artery stenosis.  N Engl J Med. 2009;361(20):1953-1962.
53.
Cooper  CJ, Murphy  TP, Cutlip  DE,  et al; CORAL Investigators.  Stenting and medical therapy for atherosclerotic renal-artery stenosis.  N Engl J Med. 2014;370(1):13-22.
54.
Goldstein  LB, Bushnell  CD, Adams  RJ,  et al; American Heart Association Stroke Council; Council on Cardiovascular Nursing; Council on Epidemiology and Prevention; Council for High Blood Pressure Research; Council on Peripheral Vascular Disease; Interdisciplinary Council on Quality of Care and Outcomes Research.  Guidelines for the primary prevention of stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association.  Stroke. 2011;42(2):517-584.
55.
PREPIC Study Group.  Eight-year follow-up of patients with permanent vena cava filters in the prevention of pulmonary embolism: the PREPIC (Prevention du Risque d’Embolie Pulmonaire par Interruption Cave) randomized study.  Circulation. 2005;112(3):416-422.
56.
Sarosiek  S, Crowther  M, Sloan  JM.  Indications, complications, and management of inferior vena cava filters: the experience in 952 patients at an academic hospital with a level I trauma center.  JAMA Intern Med. 2013;173(7):513-517.
57.
Rajaram  SS, Desai  NK, Kalra  A,  et al.  Pulmonary artery catheters for adult patients in intensive care.  Cochrane Database Syst Rev. 2013;2:CD003408. doi:10.1002/14651858.CD003408.pub3.
58.
Kallmes  DF, Comstock  BA, Heagerty  PJ,  et al.  A randomized trial of vertebroplasty for osteoporotic spinal fractures.  N Engl J Med. 2009;361(6):569-579.
59.
Buchbinder  R, Osborne  RH, Ebeling  PR,  et al.  A randomized trial of vertebroplasty for painful osteoporotic vertebral fractures.  N Engl J Med. 2009;361(6):557-568.
60.
Boonen  S, Van Meirhaeghe  J, Bastian  L,  et al.  Balloon kyphoplasty for the treatment of acute vertebral compression fractures: 2-year results from a randomized trial.  J Bone Miner Res. 2011;26(7):1627-1637.
61.
Laupattarakasem  W, Laopaiboon  M, Laupattarakasem  P, Sumananont  C.  Arthroscopic debridement for knee osteoarthritis.  Cochrane Database Syst Rev. 2008;(1):CD005118. doi:10.1002/14651858.CD005118.pub2.
62.
Pinto  RZ, Maher  CG, Ferreira  ML,  et al.  Epidural corticosteroid injections in the management of sciatica: a systematic review and meta-analysis.  Ann Intern Med. 2012;157(12):865-877.
63.
Staal  JB, de Bie  R, de Vet  HC, Hildebrandt  J, Nelemans  P.  Injection therapy for subacute and chronic low-back pain.  Cochrane Database Syst Rev. 2008;(3):CD001824. doi:10.1002/14651858.CD001824.pub3.
64.
Buntin  MB, Zaslavsky  AM.  Too much ado about two-part models and transformation? comparing methods of modeling Medicare expenditures.  J Health Econ. 2004;23(3):525-542.
65.
Dimick  JB, Ryan  AM.  Methods for evaluating changes in health care policy: the difference-in-differences approach.  JAMA. 2014;312(22):2401-2402.
66.
Bertrand  M, Duflo  E, Mullainathan  S.  How much should we trust differences-in-differences estimates?  Q J Econ. 2004;119(1):249-275.
67.
Williams  RL.  A note on robust variance estimation for cluster-correlated data.  Biometrics. 2000;56(2):645-646.
68.
Chandra  A, Cutler  D, Song  Z. Who ordered that? the economics of treatment choices in medical care. In: Pauly MV, McGuire TG, Barros PP, eds. Handbook of Health Economics. Vol. 2. Amsterdam, the Netherlands: Elsevier Science; 2012:397-432.
69.
Smieliauskas  F, Lam  S, Howard  DH.  Impact of negative clinical trial results for vertebroplasty on vertebral augmentation procedure rates.  J Am Coll Surg. 2014;219(3):525-533.e1.
70.
Howard  DH, Shen  Y-C.  Comparative effectiveness research, technological abandonment, and health care spending.  Adv Health Econ Health Serv Res. 2012;23:103-121.
71.
Landon  BE.  Keeping score under a global payment system.  N Engl J Med. 2012;366(5):393-395.
72.
Cutler  DM, Skinner  JS, Stern  AD, Wennberg  DE. Physician Beliefs and Patient Preferences: A New Look at Regional Variation in Health Care Spending. Cambridge, MA: National Bureau of Economic Research; 2013. NBER Working Paper 19320.
73.
Gogineni  K, Shuman  KL, Chinn  D, Gabler  NB, Emanuel  EJ.  Patient demands and requests for cancer tests and treatments.  JAMA Oncol. 2015;1(1):33-39.
74.
McWilliams  JM, Dalton  JB, Landrum  MB, Frakt  AB, Pizer  SD, Keating  NL.  Geographic variation in cancer-related imaging: Veterans Affairs health care system versus Medicare.  Ann Intern Med. 2014;161(11):794-802.