Figure 1.  Flow Diagram of Sample Selection

Q3 indicates third quarter; Q4, fourth quarter.

Figure 2.  Propensity Score–Weighted Risk-Adjusted Rates of Surgical Complications, Mortality, Length of Stay, and Hospital Costs Before and After the 2009 Implementation of the Pay-for-Performance Program Penalizing Hospital-Acquired Conditions

Outcomes for surgical site infection (SSI) (A) and deep vein thrombosis (DVT) (B). The vertical lines indicate the implementation of the Hospital-Acquired Conditions Present on Admission (HAC-POA) program. Although the HAC-POA program was launched in the fourth quarter of 2007 (fiscal year [FY] 2008), the payment implication of the HAC-POA program for the intervention procedures included in this study began in the fourth quarter of 2008 (FY 2009). Intervention surgical procedures included the HAC-POA policy’s targeted procedures: cardiac (implantable electronic devices), orthopedic (spine, neck, shoulder, and elbow), and obesity-related bariatric procedures for SSI; total knee and total hip replacements for DVT; and all of the stated procedures for length of stay, mortality, and hospital costs. Control procedures include laparoscopic appendectomy and laparoscopic cholecystectomy. The error bars indicate 95% CIs. Models were adjusted for patient characteristics (race/ethnicity, sex, age, median household income for patient’s zip code, and Elixhauser comorbidity index), hospital characteristics (bed size, ownership, location and teaching status, and surgical volume), type of admission, procedure type, and year.

Table 1.  Propensity Score–Weighted Characteristics of Hospital Stays for the Intervention and Control Groups
Table 2.  Propensity Score–Weighted Estimates of the Associations Between the HAC-POA Program and Surgical Outcomes
Supplement.

eMethods. Detailed Methods

eTable 1. ICD-9/10-CM Codes for Defining the Sample

eTable 2. Surgical Complication Rates for High-Volume Surgical Procedures in the Healthcare Cost and Utilization Project Data FY 2005-2017

eTable 3. Propensity Score-Unweighted Characteristics of Hospital Stays for the Pre-Policy Intervention Group, the Post-Policy Intervention Group, the Pre-Policy Control Group, and the Post-Policy Control Group

eTable 4. Covariate Unadjusted Propensity Score–Weighted Surgical Complication Rates, Length of Stay, and Hospital Costs Before and After the HAC-POA Program Implementation for Hospital Stays for Intervention Procedures and Control Procedures

eTable 5. Propensity Score–Weighted Estimates of the Association Between the HAC-POA Program and Surgical Site Infection, Deep Vein Thrombosis, and Mortality From the Logistic Regression Model and Hospital Costs From a Generalized Linear Model

eTable 6. Propensity Score–Weighted Estimates of the Association Between the HAC-POA Program and Surgical Outcomes From Difference-in-Differences Models Using a Placebo Implementation Year

eTable 7. Propensity Score–Weighted Estimates of the Association Between the HAC-POA Program and Surgical Outcomes From Model Using Intervention and Control Group-Specific Pre-Intervention Time Trends

eTable 8. Propensity Score–Weighted Estimates of the Association Between the HAC-POA Program and Surgical Outcomes among Patients Who Underwent the Target Surgical Procedures and Patients Who Underwent Carotid Endarterectomy

eTable 9. Propensity Score–Weighted Estimates of the Association Between the HAC-POA Program and Surgical Outcomes in Patients With Surgical Complications vs. Patients Without Surgical Complications

eFigure 1. Estimates of the Association Between the HAC-POA Program and Surgical Site Infections: Comparison Between the Intervention Procedures and the Synthetic Control Procedures

eFigure 2. Estimates of the Association Between the HAC-POA Program and Deep Vein Thrombosis: Comparison Between the Target Procedures and the Synthetic Control Procedures

eFigure 3. Ratio of Post-policy MSPE and Pre-policy MSPE: Surgical Site Infection

eFigure 4. Ratio of Post-policy MSPE and Pre-policy MSPE: Deep Vein Thrombosis

eReferences

1. McDermott KW, Freeman WJ, Elixhauser A. Overview of operating room procedures during inpatient stays in US hospitals, 2014. Agency for Healthcare Research and Quality. December 2017. Accessed January 7, 2018. https://www.hcup-us.ahrq.gov/reports/statbriefs/sb233-Operating-Room-Procedures-United-States-2014.pdf
2. Kaye DR, Luckenbaugh AN, Oerline M, et al. Understanding the costs associated with surgical care delivery in the Medicare population. Ann Surg. 2020;271(1):23-28. doi:10.1097/SLA.0000000000003165
3. Sokol DK, Wilson J. What is a surgical complication? World J Surg. 2008;32(6):942-944. doi:10.1007/s00268-008-9471-6
4. Healy MA, Mullard AJ, Campbell DA Jr, Dimick JB. Hospital and payer costs associated with surgical complications. JAMA Surg. 2016;151(9):823-830. doi:10.1001/jamasurg.2016.0773
5. Ban KA, Minei JP, Laronga C, et al. American College of Surgeons and Surgical Infection Society: surgical site infection guidelines, 2016 update. J Am Coll Surg. 2017;224(1):59-74. doi:10.1016/j.jamcollsurg.2016.10.029
6. Kandilov AM, Coomer NM, Dalton K. The impact of hospital-acquired conditions on Medicare program payments. Medicare Medicaid Res Rev. 2014;4(4):E1-E23. doi:10.5600/mmrr.004.04.a01
7. Kwong JZ, Weng Y, Finnegan M, et al. Effect of Medicare’s nonpayment policy on surgical site infections following orthopedic procedures. Infect Control Hosp Epidemiol. 2017;38(7):817-822. doi:10.1017/ice.2017.86
8. Healy D, Cromwell J. Hospital-acquired conditions–present on admission: examination of spillover effects and unintended consequences. Centers for Medicare and Medicaid Services. September 2012. Accessed April 25, 2017. https://www.cms.gov/medicare/medicare-fee-for-service-payment/hospitalacqcond/downloads/hac-spillovereffects.pdf
9. Kwong W, Tomlinson G, Feig DS. Maternal and neonatal outcomes after bariatric surgery; a systematic review and meta-analysis: do the benefits outweigh the risks? Am J Obstet Gynecol. 2018;218(6):573-580. doi:10.1016/j.ajog.2018.02.003
10. Matthews LJ, McConda DB, Lalli TAJ, Daffner SD. Orthostetrics: management of orthopedic conditions in the pregnant patient. Orthopedics. 2015;38(10):e874-e880. doi:10.3928/01477447-20151002-53
11. Oranges T, Dini V, Romanelli M. Skin physiology of the neonate and infant: clinical implications. Adv Wound Care (New Rochelle). 2015;4(10):587-595. doi:10.1089/wound.2015.0642
12. Weiss BM, von Segesser LK, Alon E, Seifert B, Turina MI. Outcome of cardiovascular surgery and pregnancy: a systematic review of the period 1984-1996. Am J Obstet Gynecol. 1998;179(6 Pt 1):1643-1653. doi:10.1016/S0002-9378(98)70039-0
13. Patel MS, Volpp KG, Small DS, et al. Association of the 2011 ACGME resident duty hour reforms with mortality and readmissions among hospitalized Medicare patients. JAMA. 2014;312(22):2364-2373. doi:10.1001/jama.2014.15273
14. Krumholz HM, Brindis RG, Brush JE, et al; American Heart Association; Quality of Care and Outcomes Research Interdisciplinary Writing Group; Council on Epidemiology and Prevention; Stroke Council; American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association scientific statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council, endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462. doi:10.1161/CIRCULATIONAHA.105.170769
15. Jha AK, Orav EJ, Epstein AM. Low-quality, high-cost hospitals, mainly in South, care for sharply higher shares of elderly Black, Hispanic, and Medicaid patients. Health Aff (Millwood). 2011;30(10):1904-1911. doi:10.1377/hlthaff.2011.0027
16. Centers for Medicare & Medicaid Services. Affected hospitals. September 29, 2014. Accessed November 8, 2018. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/HospitalAcqCond/AffectedHospitals.html
17. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Wadsworth Cengage Learning; 2002.
18. Ibrahim AM, Nathan H, Thumma JR, Dimick JB. Impact of the hospital readmission reduction program on surgical readmissions among Medicare beneficiaries. Ann Surg. 2017;266(4):617-624. doi:10.1097/SLA.0000000000002368
19. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of Premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615. doi:10.1056/NEJMsa1112351
20. Ryan AM, Burgess JF Jr, Tompkins CP, Wallack SS. The relationship between Medicare’s process of care quality measures and mortality. Inquiry. 2009;46(3):274-290. doi:10.5034/inquiryjrnl_46.03.274
21. Shih T, Nicholas LH, Thumma JR, Birkmeyer JD, Dimick JB. Does pay-for-performance improve surgical outcomes? an evaluation of phase 2 of the Premier Hospital Quality Incentive Demonstration. Ann Surg. 2014;259(4):677-681. doi:10.1097/SLA.0000000000000425
22. RAND Corporation. Analysis of hospital pay for performance. Accessed October 21, 2018. https://www.rand.org/pubs/technical_reports/TR562z12/analysis-of-hospital-pay-for-performance.html
23. Bureau of Economic Analysis, US Department of Commerce. National income and product accounts. Accessed December 11, 2018. https://apps.bea.gov/iTable/iTable.cfm?reqid=19&step=3&isuri=1&1921=survey&1903=84
24. Dunn A, Grosse SD, Zuvekas SH. Adjusting health expenditures for inflation: a review of measures for health services research in the United States. Health Serv Res. 2018;53(1):175-196. doi:10.1111/1475-6773.12612
25. Daw JR, Hatfield LA. Matching and regression to the mean in difference-in-differences analysis. Health Serv Res. 2018;53(6):4138-4156. doi:10.1111/1475-6773.12993
26. Wing C, Simon K, Bello-Gomez RA. Designing difference in difference studies: best practices for public health policy research. Annu Rev Public Health. 2018;39(1):453-469. doi:10.1146/annurev-publhealth-040617-013507
27. Stuart EA, Huskamp HA, Duckworth K, et al. Using propensity scores in difference-in-differences models to estimate the effects of a policy change. Health Serv Outcomes Res Methodol. 2014;14(4):166-182. doi:10.1007/s10742-014-0123-z
28. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci. 2010;25(1):1-21. doi:10.1214/09-STS313
29. Cameron AC, Trivedi PK. Microeconometrics Using Stata. Revised ed. Stata Press; 2010.
30. Greene W. The behaviour of the maximum likelihood estimator of limited dependent variable models in the presence of fixed effects. Econometrics J. 2004;7(1):98-119. doi:10.1111/j.1368-423X.2004.00123.x
31. Jones AM. Models for health care. University of York, Centre for Health Economics. January 2010. Accessed March 11, 2017. https://www.york.ac.uk/media/economics/documents/herc/wp/10_01.pdf
32. Mitchell MN. Interpreting and Visualizing Regression Models Using Stata. Stata Press; 2012.
33. Manning WG, Basu A, Mullahy J. Generalized modeling approaches to risk adjustment of skewed outcomes data. J Health Econ. 2005;24(3):465-488. doi:10.1016/j.jhealeco.2004.09.011
34. Manning WG, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20(4):461-494. doi:10.1016/S0167-6296(01)00086-8
35. Basu A, Manning WG. Issues for the next generation of health care cost analyses. Med Care. 2009;47(7)(suppl 1):S109-S114. doi:10.1097/MLR.0b013e31819c94a1
36. Abadie A, Diamond A, Hainmueller J. Synthetic control methods for comparative case studies: estimating the effect of California’s tobacco control program. J Am Stat Assoc. 2010;105(490):493-505. doi:10.1198/jasa.2009.ap08746
37. Ryan AM, Burgess JF Jr, Dimick JB. Why we should not be indifferent to specification choices for difference-in-differences. Health Serv Res. 2015;50(4):1211-1235. doi:10.1111/1475-6773.12270
38. Krell RW, Girotti ME, Dimick JB. Extended length of stay after surgery: complications, inefficient practice, or sick patients? JAMA Surg. 2014;149(8):815-820. doi:10.1001/jamasurg.2014.629
39. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf. 2012;21(11):964-968. doi:10.1136/bmjqs-2012-001081
40. Alteras T, Meyer J, Silow-Carroll S. Hospital quality improvement: strategies and lessons from US hospitals. April 1, 2007. Accessed October 23, 2020. https://www.commonwealthfund.org/publications/fund-reports/2007/apr/hospital-quality-improvement-strategies-and-lessons-us-hospitals
41. Weinick RM, Chien AT, Rosenthal MB, Bristol SJ, Salamon J. Hospital executives’ perspectives on pay-for-performance and racial/ethnic disparities in care. Med Care Res Rev. 2010;67(5):574-589. doi:10.1177/1077558709354522
42. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood). 2016;35(3):401-406. doi:10.1377/hlthaff.2015.1258
43. Brenner MH, Curbow B, Legro MW. The proximal-distal continuum of multiple health outcome measures: the case of cataract surgery. Med Care. 1995;33(4)(suppl):AS236-AS244.
44. Ryan AM, Krinsky S, Maurer KA, Dimick JB. Changes in hospital quality associated with hospital value-based purchasing. N Engl J Med. 2017;376(24):2358-2366. doi:10.1056/NEJMsa1613412
45. Borza T, Oreline MK, Skolarus TA, et al. Association of the hospital readmissions reduction program with surgical readmissions. JAMA Surg. 2018;153(3):243-250. doi:10.1001/jamasurg.2017.4585
46. Ramaswamy A, Marchese M, Cole AP, et al. Comparison of hospital readmission after total hip and total knee arthroplasty vs spinal surgery after implementation of the Hospital Readmissions Reduction Program. JAMA Netw Open. 2019;2(5):e194634. doi:10.1001/jamanetworkopen.2019.4634
47. Scally CP, Thumma JR, Birkmeyer JD, Dimick JB. Impact of surgical quality improvement on payments in Medicare patients. Ann Surg. 2015;262(2):249-252. doi:10.1097/SLA.0000000000001069
48. Bastani H, Goh J, Bayati M. Evidence of upcoding in pay-for-performance programs. Stanford University Graduate School of Business Research Paper No. 15-43. July 13, 2015. doi:10.2139/ssrn.2630454
49. Angrist JD, Pischke J-S. Mastering Metrics: The Path From Cause to Effect. Princeton University Press; 2015.
50. Ma Y, Zhang W, Lyman S, Huang Y. The HCUP SID Imputation Project: improving statistical inferences for health disparities research by imputing missing race data. Health Serv Res. 2018;53(3):1870-1889. doi:10.1111/1475-6773.12704
51. Kim KM, Max W, White JS, Chapman SA, Muench U. Do penalty-based pay-for-performance programs improve surgical care more effectively than other payment strategies? a systematic review. Ann Med Surg (Lond). 2020;60:623-630. doi:10.1016/j.amsu.2020.11.060
52. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. 1979;47:263-291. doi:10.2307/1914185
Original Investigation
Health Policy
August 18, 2021

Evaluation of Clinical and Economic Outcomes Following Implementation of a Medicare Pay-for-Performance Program for Surgical Procedures

Author Affiliations
  • 1Clinical Excellence Research Center, Stanford University School of Medicine, Palo Alto, California
  • 2Department of Social and Behavioral Sciences, University of California School of Nursing, San Francisco
  • 3Philip R. Lee Institute for Health Policy Studies, University of California School of Medicine, San Francisco
  • 4Department of Epidemiology & Biostatistics, University of California School of Medicine, San Francisco
  • 5Institute for Health & Aging, University of California, San Francisco
JAMA Netw Open. 2021;4(8):e2121115. doi:10.1001/jamanetworkopen.2021.21115
Key Points

Question  What is the association between the Centers for Medicare & Medicaid Services’ Hospital-Acquired Conditions Present on Admission pay-for-performance program and surgical care quality and costs?

Findings  In this cross-sectional study, the Hospital-Acquired Conditions Present on Admission program was associated with a decreased incidence of surgical site infection (0.3 percentage points) in the targeted procedures and a reduction in length of stay (0.5 days) and hospital costs (8.1%). Deep vein thrombosis and in-hospital mortality did not improve.

Meaning  The findings of this study suggest that the pay-for-performance program was associated with improvement on several dimensions of surgical care, including small reductions in surgical site infection and length of stay, and moderate reductions in hospital costs.

Abstract

Importance  Surgical complications increase hospital costs by approximately $20 000 per admission and extend hospital stays by 9.7 days. Improving surgical care quality and reducing costs are needed for patients undergoing surgery, health care professionals, hospitals, and payers.

Objective  To evaluate the association of the Hospital-Acquired Conditions Present on Admission (HAC-POA) program, a mandated national pay-for-performance program by the Centers for Medicare & Medicaid Services, with surgical care quality and costs.

Design, Setting, and Participants  A cross-sectional study of Medicare inpatient surgical care stays from October 2004 through September 2017 in the US was conducted. The National Inpatient Sample and a propensity score–weighted difference-in-differences analysis of hospital stays with associated primary surgical procedures were used to compare changes in outcomes for the intervention and control procedures before and after HAC-POA program implementation. The sample consisted of 1 317 262 inpatient surgical episodes representing 1 198 665 stays for targeted procedures and 118 597 stays for nontargeted procedures. Analyses were performed between November 1, 2020, and May 7, 2021.

Exposures  Implementation of the HAC-POA program for the intervention procedures included in this study (fiscal year 2009).

Main Outcomes and Measures  Incidence of surgical site infections and deep vein thrombosis, length of stay, in-hospital mortality, and hospital costs. Analyses were adjusted for patient and hospital characteristics and indicators for procedure type, hospital, and year.

Results  In our propensity score–weighted sample, the intervention procedures group comprised 1 047 351 (88.5%) individuals who were White and 742 734 (60.6%) women; mean (SD) age was 75 (6.9) years. The control procedures group included 94 715 (88.0%) individuals who were White and 65 436 (60.6%) women; mean (SD) age was 75 (7.1) years. After HAC-POA implementation, the incidence of surgical site infections in targeted procedures decreased by 0.3 percentage points (95% CI, −0.5 to −0.1 percentage points; P = .02) compared with nontargeted procedures. The program was associated with a reduction in length of stay by 0.5 days (95% CI, −0.6 to −0.4 days; P < .001) and hospital costs by 8.1% (95% CI, −10.2% to −6.1%; P < .001). No significant changes in deep vein thrombosis incidence and mortality were noted.

Conclusions and Relevance  The findings of this study suggest that the HAC-POA program is associated with small decreases in surgical site infection and length of stay and moderate decreases in hospital costs for patients enrolled in Medicare. Policy makers may consider these findings when evaluating the continuation and expansion of this program for other surgical procedures, and payers may want to consider adopting a similar policy.

Introduction

Surgical care accounts for 30% of hospital admissions,1 50% of overall hospital costs,1 and 50% of all Medicare spending.2 In particular, surgical complications, defined as the adverse and unintended results of surgery,3 increase hospital costs by approximately $20 000 per admission4 and extend hospital stays by 9.7 days.5 Improving surgical care quality and reducing costs are thus necessary for patients, health care professionals, hospitals, and payers.

In 2008, the Centers for Medicare & Medicaid Services (CMS) implemented the Hospital-Acquired Conditions Present on Admission (HAC-POA) program to reduce high-cost and high-volume complications among Medicare patients, and it remains in effect today. This mandatory pay-for-performance (P4P) policy penalizes hospitals by no longer paying for the treatment of preventable complications developed during a patient’s hospitalization. The HAC-POA program targets 14 selected conditions; those directly related to surgery include foreign objects retained after surgery; surgical site infection (SSI) following coronary artery bypass graft, cardiac implantable electronic device procedures, bariatric surgery, and certain orthopedic procedures; and deep vein thrombosis (DVT) or pulmonary embolism following certain orthopedic procedures.

Although it has been more than 10 years since the HAC-POA program was implemented, little is known about whether the program is associated with improved surgical care outcomes. Three studies have examined surgical outcomes associated with the HAC-POA program, but issues with study design, such as the absence of a comparison with the prepolicy period,6 a control group subject to spillover effects,7 and a short study period,8 have limited the evaluation of the HAC-POA program.

The aim of this cross-sectional study was to evaluate the association between the HAC-POA program and surgical care quality and costs using a difference-in-differences method. Specifically, we examined surgical complications (ie, SSI and DVT), length of stay (LOS), in-hospital mortality, and hospital costs among Medicare patients who underwent the HAC-POA–targeted surgical procedures compared with nontargeted procedures.

Methods
Data and Study Population

We used the National Inpatient Sample of the Healthcare Cost and Utilization Project (HCUP) from October 2004 through September 2017 to examine the association between the HAC-POA program and the incidence of SSI and DVT, LOS, in-hospital mortality, and hospital costs. The unit of analysis was an episode of inpatient surgical care, defined as the hospital stay with its accompanying events associated with the primary surgical procedure. We identified the primary surgical procedure using International Classification of Diseases, Ninth Revision (ICD-9) and International Statistical Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes (eTable 1 in the Supplement). The study was exempted by the University of California, San Francisco, Institutional Review Board because it was a secondary analysis of deidentified data. The study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies.

We applied several exclusion criteria: (1) maternal or neonatal inpatient services because the risk of complications and treatment management differs in these populations due to pregnancy-related physiologic changes,9-12 (2) transfers from other facilities,13 (3) observations with surgical complications as the first diagnosis (potentially preexisting conditions present on admission),14 (4) hospitals with fewer than 30 observations for each procedure to avoid unstable estimates due to small sample size,15 (5) hospital stays not paid by Medicare and not subject to the HAC-POA program (eg, hospital stays at critical access hospitals),16 and (6) observations with missing information on key study variables. Our sample consisted of 1 317 262 episodes of Medicare-covered inpatient surgical care, representing 1 198 665 hospital stays for targeted procedures and 118 597 stays for nontargeted procedures. Figure 1 shows the sample selection process.

Targeted vs Nontargeted Procedures and Outcomes

Targeted surgical procedures included in this study were cardiac implantable electronic devices, bariatric surgery, certain orthopedic procedures (ie, spine, shoulder, and elbow), total knee arthroplasty, and total hip arthroplasty. We excluded coronary artery bypass graft from the targeted procedures because it was targeted by other P4P programs (eg, the Premier Hospital Quality Incentive Demonstration), making it difficult to isolate the association with the HAC-POA program. We selected surgical procedures not targeted by the HAC-POA program (henceforth control procedures) from hospitals that performed both targeted and nontargeted procedures. Outcomes in these procedures indicate what would have happened to the targeted procedures (henceforth intervention procedures) without the policy.17 By comparing outcomes between the intervention and control procedures, the influence of potential confounders, such as using electronic health records to enhance the care process and other efforts to improve quality, is removed, as long as the association with targeted and nontargeted procedures is stable over time. We selected laparoscopic cholecystectomy and laparoscopic appendectomy as the control procedures because they have similar average outcome levels (eTable 2 in the Supplement), involve a different surgical specialty (ie, general surgery), and are not affected by care relating to the targeted procedures. In addition, trends in the outcomes (except length of stay) before policy implementation are similar to those in the intervention procedures, and these procedures are high in volume and have a reasonably large number of complications. Additional information on the rationale for procedure selection, including criteria applied and how each criterion was evaluated, is available in eMethods in the Supplement.

The primary outcomes were the incidence of surgical complications (ie, SSI and DVT), measured dichotomously for each episode using ICD-9 and ICD-10-CM codes (eTable 1 in the Supplement). Secondary outcomes were LOS, in-hospital mortality, and hospital costs. Although these outcomes were not the stated aims of the HAC-POA program, we included them because they have been hypothesized to be downstream consequences of pay-for-performance programs.18-22 Length of stay was measured as the number of in-hospital days associated with the procedure. Mortality was defined as in-hospital death recorded for a particular surgical procedure. Hospital costs were measured for an inpatient stay with HCUP’s cost-to-charge ratios at the hospital level. We adjusted for inflation using the personal consumption expenditures health-by-function index,23 which is the most appropriate for health expenditures.24
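The cost construction described above can be sketched as follows. The cost-to-charge ratio and price-index values are made-up illustrations, not actual HCUP or BEA figures.

```python
# Sketch of the cost construction: charges are converted to costs with a
# hospital-level cost-to-charge ratio, then deflated to base-year dollars
# with a price index (eg, the PCE health-by-function index).

def charge_to_cost(total_charges, cost_to_charge_ratio):
    """Approximate the cost of a stay from billed charges."""
    return total_charges * cost_to_charge_ratio

def deflate(cost, index_in_year, index_in_base_year):
    """Express a cost in base-year dollars using a price index."""
    return cost * index_in_base_year / index_in_year

# Hypothetical stay: $50,000 in charges, hospital cost-to-charge ratio of
# 0.4, index of 108.0 in the discharge year vs 100.0 in the base year.
cost = charge_to_cost(50_000, 0.4)          # 20000.0
real_cost = deflate(cost, 108.0, 100.0)     # about 18518.5
```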

Statistical Analysis

To measure the association between the HAC-POA program and surgical patients’ outcomes, we used a propensity score–weighted difference-in-differences analysis, a statistical technique that compares changes in an outcome for intervention and control procedures before and after the policy implementation (eMethods in the Supplement) to estimate the consequences of the program. Difference-in-differences analysis removes bias from observed and time-invariant unobserved confounders and thus enables us to isolate variation due to the policy.25,26 Although the HAC-POA launched in the fourth quarter of 2007 (fiscal year 2008), the payment implication of the HAC-POA program for the intervention procedures included in this study began in the fourth quarter of 2008 (fiscal year 2009).

We began by estimating propensity score weights for 4 groups (prepolicy intervention, postpolicy intervention, prepolicy control, and postpolicy control)27 because of substantial prepolicy imbalances in the unweighted sample (eMethods and eTable 3 in the Supplement). We then compared weighted characteristics of the inpatient stays of 3 groups (postpolicy intervention, prepolicy control, and postpolicy control) with the prepolicy intervention group using standardized differences in means (eMethods in the Supplement).28 We then measured associations of the program with surgical complications using linear probability models because linear models provide unbiased and consistent estimation with fixed effects.29,30 We estimated separate models for each primary outcome (SSI and DVT). To estimate hospital costs, linear regression with a log-transformed cost outcome was used, as in other studies.31,32 Length of stay was analyzed using negative binomial regression to allow for overdispersion. All models controlled for patient and hospital characteristics and indicators for procedure type, year, and hospital. Patient characteristics included race/ethnicity, sex, age, median household income for patient’s zip code by quartile, and 29 indicators from the modified Elixhauser comorbidity index. Race/ethnicity was categorized into 5 groups: White, Black, Hispanic, Asian/Pacific Islander, and other. The Asian and Pacific Islander populations were aggregated into a single category as collected by the HCUP, and individuals categorized as Native American, multiracial, or other were combined into a single category (“other”) because of the small sample size.
Hospital characteristics included bed size (small, medium, and large), ownership (public or private), location and teaching status (rural teaching, rural nonteaching, urban nonteaching, and urban teaching), log-transformed surgical volume for each hospital, and type of admission (elective and nonelective).
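As one illustration of the weighting and balance-checking steps, the sketch below computes a standardized difference in means for a single covariate before and after reweighting the comparison group. The exponential-tilt weight is a stand-in for a fitted propensity score weight, and all data are synthetic.

```python
import numpy as np

def std_diff(x_ref, x_cmp, w_cmp=None):
    """Standardized difference between a reference group and a
    (possibly weighted) comparison group; values near 0 indicate balance."""
    if w_cmp is None:
        w_cmp = np.ones_like(x_cmp)
    m_cmp = np.average(x_cmp, weights=w_cmp)
    v_cmp = np.average((x_cmp - m_cmp) ** 2, weights=w_cmp)
    pooled_sd = np.sqrt((x_ref.var() + v_cmp) / 2)
    return (x_ref.mean() - m_cmp) / pooled_sd

rng = np.random.default_rng(42)
age_ref = rng.normal(75, 7, 5000)   # eg, the prepolicy intervention group
age_cmp = rng.normal(70, 7, 5000)   # a younger comparison group

d_unweighted = std_diff(age_ref, age_cmp)    # large imbalance (~0.7)
# Tilting weight that shifts the comparison mean toward 75 (stand-in for
# a propensity score weight).
w = np.exp((5 / 49) * age_cmp)
d_weighted = std_diff(age_ref, age_cmp, w)   # near 0 after weighting
```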

To determine whether hospitals may have shifted charges to patients without complications to compensate for potential charge penalties in the surgical complication group, we also estimated costs in linear probability models for stays with and without complications using a 3-way interaction (policy indicator × postpolicy indicator × complication indicator) between the difference-in-differences estimator and the indicator for patients with and without complications (eMethods in the Supplement).
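The 3-way interaction described above can be sketched on synthetic data: the coefficient on the triple interaction captures how the policy's association with (log) costs differs between stays with and without a complication. All effect sizes below are invented for illustration.

```python
import numpy as np

# Triple-difference sketch on synthetic log costs.
rng = np.random.default_rng(1)
n = 8000
t = rng.integers(0, 2, n)      # targeted procedure
p = rng.integers(0, 2, n)      # post-policy period
c = rng.integers(0, 2, n)      # complication during the stay
log_cost = (9.0 + 0.2*t + 0.1*p + 0.6*c
            - 0.08*t*p                  # assumed effect, no complication
            - 0.05*t*p*c                # assumed extra effect, complication
            + rng.normal(0, 0.3, n))

# Fully interacted design matrix; beta[7] is the 3-way interaction.
X = np.column_stack([np.ones(n), t, p, c, t*p, t*c, p*c, t*p*c])
beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
triple_diff = beta[7]                   # should be near -0.05
```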

We conducted a series of sensitivity analyses to assess the robustness of results. These analyses included alternative regression specifications, such as using logistic regression to model surgical complications and mortality and a 1-part generalized linear model with γ distribution and log link to model hospital costs.33-35 We also applied several econometric techniques to validate the difference-in-differences approach, including synthetic control methods,36 placebo difference-in-differences models, procedure-specific preintervention indicator variables,37 and the use of a different control procedure to address concerns related to wound class incomparability between groups. In addition, we assessed for potential negative consequences (increases in costs and lower quality of care) in the noncomplication group. Additional details on these analyses are provided in eMethods in the Supplement.

All analyses were estimated with robust SEs clustered by hospital to account for the correlation of patients within the same hospital. We applied survey weights to obtain nationally representative estimates and account for HCUP’s complex survey designs. Statistical significance was assessed with a 2-sided significance level of P < .05. Analyses were performed between November 1, 2020, and May 7, 2021, using Stata MP, version 15.1 (StataCorp).
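One assumption-laden way to convey why clustering by hospital matters is a cluster bootstrap, which resamples whole hospitals rather than individual stays. This is a sketch with invented data, not the Stata clustered-SE procedure the study used:

```python
import random

def cluster_bootstrap_se(clusters, stat, n_boot=2000, seed=0):
    """SE of `stat` under resampling of whole clusters (hospitals) with
    replacement, mimicking inference that respects within-hospital
    correlation of patient outcomes."""
    rng = random.Random(seed)
    ids = list(clusters)
    draws = []
    for _ in range(n_boot):
        # Draw hospitals with replacement, then pool their stays
        sample = [x for cid in rng.choices(ids, k=len(ids)) for x in clusters[cid]]
        draws.append(stat(sample))
    mean = sum(draws) / n_boot
    return (sum((d - mean) ** 2 for d in draws) / (n_boot - 1)) ** 0.5

# Hypothetical per-stay log costs grouped by hospital (invented)
hospitals = {
    "A": [9.8, 10.1, 10.0],
    "B": [10.6, 10.4],
    "C": [9.5, 9.7, 9.6, 9.4],
}
se = cluster_bootstrap_se(hospitals, stat=lambda xs: sum(xs) / len(xs))
```

Because stays within a hospital move together, this SE is typically larger than one that treats every stay as independent, which is the motivation for clustering.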

Results

Our sample included 1 317 262 inpatient surgical episodes representing 1 198 665 stays for intervention procedures and 118 597 stays for control procedures. In our propensity score–weighted sample, the intervention group included 1 047 351 (88.5%) individuals who were White and 742 734 (60.6%) women; the mean (SD) age was 75 (6.9) years. The control group included 94 715 (88.0%) individuals who were White and 65 436 (60.6%) women; the mean (SD) age was 75 (7.1) years. Table 1 reports propensity score–weighted characteristics of hospital stays for the 4 groups, indicating that weighting helped achieve balance in covariates and that the group composition of the intervention and control procedures did not change notably over time.

Table 2 reports covariate-adjusted propensity score–weighted difference-in-differences estimates for each study outcome (propensity score–weighted but unadjusted results are available in eTable 4 in the Supplement). The estimates, which compare changes in the outcomes from the preprogram to postprogram period between the intervention and control procedures, indicated that the HAC-POA program was associated with a significant reduction in SSI incidence of 0.3 percentage points (95% CI, −0.5 to −0.1 percentage points; P = .02). The HAC-POA program was also associated with a significant reduction in LOS of 0.5 days (95% CI, −0.6 to −0.4 days; P < .001) and in hospital costs (−8.1%; 95% CI, −10.2% to −6.1%; P < .001). The HAC-POA program was, however, not associated with a reduction in DVT incidence (0.02 percentage points; 95% CI, −0.1 to 0.2 percentage points; P = .80) or mortality (0.05 percentage points; 95% CI, −0.04 to 0.2 percentage points; P = .30).
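For readers translating the log-cost coefficient into the reported percentage change: in a log-transformed cost model, a coefficient b corresponds to a percentage change of 100 × (e^b − 1). A quick check of the conversion behind the −8.1% figure (the value of b is back-solved here for illustration):

```python
import math

# Back-solve the log-model coefficient implied by a −8.1% cost change...
b = math.log(1 - 0.081)            # ≈ −0.0845

# ...and convert it back to a percentage change
pct_change = 100 * math.expm1(b)   # −8.1
```

The asymmetry between b (−8.45%) and the percentage change (−8.1%) is why coefficients from log models should not be read directly as percentages when they are far from zero.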

To better understand the outcomes of the policy and validate methodologic assumptions, we graphed outcomes over time (Figure 2). Covariate-adjusted propensity score–weighted incidence rates of SSI improved in the intervention procedures a few years after HAC-POA implementation. In the control procedures, the incidence of SSI worsened in 2008, decreased in 2012, and then fluctuated. The incidence of DVT fluctuated in both the intervention and control procedures. The secondary outcomes (LOS, mortality, and hospital costs) worsened in 2008 in both the intervention and control procedures; LOS declined in both groups starting in 2012, but slightly more in the intervention procedures, whereas hospital costs declined in the intervention procedures starting in 2012 but remained flat in the control procedures. The SSI, LOS, and hospital costs graphs show differences in prepolicy trends between the intervention and control procedures. We conducted additional sensitivity analyses to examine this violation of the parallel trends assumption and possible bias in the estimates.

Sensitivity Analyses

Sensitivity analyses showed consistent results across study outcomes, indicating benefits in association with the policy. First, we estimated the likelihood of surgical complications and mortality with average marginal effects using logistic regression and hospital costs with a generalized linear model. The results were nearly identical to those from the linear probability models (eTable 5 in the Supplement). Second, we performed several follow-up analyses to validate the results of the difference-in-differences analyses. To address residual differences in prepolicy trends and assess whether results were associated with other contemporaneous policies, we used synthetic control methods. Results indicated a significant reduction in SSI and no changes in DVT (eFigures 1-4 in the Supplement), consistent with our main results. We also tested for a placebo implementation year of the policy and found no association between the placebo policy variable and surgical outcomes (eTable 6 in the Supplement). To address differences in prepolicy trends, we estimated models with varying prepolicy time trends for procedures and found quantitatively similar results (eTable 7 in the Supplement). Furthermore, using carotid endarterectomy as an alternative control procedure, we observed consistent, but larger, improvements in all outcomes related to the HAC-POA program (eTable 8 in the Supplement). Third, our analysis evaluating potential negative consequences for patients without surgical complications found no significant difference in mortality, LOS, or hospital costs between the intervention and control procedures (eTable 9 in the Supplement).

Discussion

Our evaluation of a mandatory national CMS P4P program shows significant improvements in several dimensions of surgical care. The incidence of SSI decreased significantly, and we observed significant reductions in LOS and hospital costs, likely driven by the decrease in SSI incidence. However, we did not find evidence of an association between the HAC-POA program and in-hospital mortality. We also found no evidence that the HAC-POA program was associated with cost shifting or lower quality among patients without surgical complications.

Our results are consistent with earlier research reporting that the HAC-POA program was associated with a decreased incidence of SSI.8 However, that study also found a statistically significant decrease in DVT. One reason our results differ may be that the previous research adopted a pre-post comparison without a control procedure, which did not allow the authors to account for a counterfactual trend. Their study period ranged from 2008 to 2010, leaving later outcomes of the policy unexamined, whereas we used a difference-in-differences approach with more than 10 years of data.

The magnitude of the effect sizes we found is small. However, the cost implications are important, given the substantial costs associated with complications. The cost reduction of 8.1% that we observed is of interest to hospitals, the CMS, and other payers. Considering that the risk-adjusted mean cost for targeted procedures before program implementation was approximately $22 912 per hospital stay, a cost reduction of 8.1% implies a decrease of $1856 for an admission involving a targeted procedure. Taking into account the approximately 1.2 million targeted procedures included in this study, the annual cost savings for hospitals could be $170 million, similar to the cost estimate of complications in the Medicare program suggested by Kandilov and colleagues in 2014.6 The reduction in costs observed in this study is likely related to the decrease in surgical complications, leading to fewer additional surgeries, less intensive care use, and shorter lengths of stay.38 Other contributing factors might be changes in practice patterns by health care professionals and hospitals in an effort to improve efficiency of care, including collaborative discharge planning and additional patient education. However, the administration and data collection needs of such a large program could be substantial,39 and the costs for hospitals to implement a P4P program should be considered when designing P4P programs.
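The savings arithmetic in this paragraph can be reproduced directly. The division by study years to annualize is our assumption for illustration, using a roughly 13-year span:

```python
# Back-of-the-envelope reproduction of the cost-savings arithmetic
mean_cost = 22_912        # risk-adjusted mean preprogram cost per stay ($)
reduction = 0.081         # observed 8.1% cost reduction

savings_per_stay = mean_cost * reduction          # ≈ $1,856 per admission

procedures_total = 1_200_000                      # targeted stays in the study
years = 13                                        # approximate study span (assumed)

# Annualized savings across all targeted stays
annual_savings = savings_per_stay * procedures_total / years  # ≈ $171 million
```

This makes explicit that the $170 million figure spreads the total implied savings over the study period rather than applying the full 1.2 million stays to a single year.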

To date, research evaluating the long-term outcomes of P4P programs is limited, but evidence suggests it may take years for hospitals to change their practices and observe improvements in quality.19 In our study, the incidence of SSI did not significantly decrease until 2014, 5 years after program implementation. This length of time highlights the importance of evaluating the long-term effects of P4P programs. System and behavioral changes, such as changes to electronic health record infrastructure to improve documentation of presurgical and postsurgical management and optimizing care coordination among surgeons, nurses, and other specialists, take time.40-42 Our outcomes are a more distal measure of care and, as such, likely require more time to change than a process measure, such as preoperative β-blockade therapy, which is a proximal measure of care.43 Also, the incidence of SSI is only approximately 1% in the intervention procedures, which leaves limited room for improvement. Health care professionals and hospitals may disproportionately focus on complications that occur more frequently than SSI. Mortality did not improve even 6 years after HAC-POA program implementation, which is consistent with evidence from the Medicare Premier Hospital Quality Incentive Demonstration19 and the Hospital Value-Based Purchasing program.44

Previous studies evaluating the long-term outcomes of the Hospital Readmissions Reduction Program, another penalty-based P4P policy, also found improvements in surgical care.45,46 Although additional studies are needed, our results provide further support for the promising benefits of penalty-based P4P programs. Expanding the HAC-POA program to include additional procedures might prove beneficial in improving care for a larger number of surgical patients.

One potential unintended consequence of the HAC-POA program is that hospitals might seek to recover shortfalls by increasing costs to patients without surgical complications. We examined whether patients without surgical complications might have experienced increases in costs but found no such evidence. However, our data show that, for Medicare patients undergoing surgeries for nontargeted procedures, the incidence of SSI increased a year before HAC-POA program implementation and remained high. This increase might suggest that hospitals shifted resources to improve the quality of targeted procedures at the expense of nontargeted procedures.45 Alternatively, hospitals and health care professionals may have lacked incentives to improve the quality of nontargeted procedures. Additional research is needed to further explore possible negative consequences of P4P efforts on nontargeted procedures and on specific subgroups of hospitals and patients. For example, hospitals serving a disproportionate share of vulnerable patients may be more likely to be penalized owing to resource limitations. Another challenge for P4P implementation and evaluation is that some hospitals may reject high-risk patients or change coding practices so as not to report complications or claim complications as present-on-admission conditions to avoid penalties.47,48

Limitations

This study has several limitations. First, the observed associations with the implementation of the HAC-POA program may be overestimated owing to undetected or underreported surgical complications. The HCUP data capture only SSIs and DVTs detected during hospitalization, not those occurring after discharge.12 However, if rates of missed complications are similar between targeted and nontargeted procedures, this would not bias the results. Second, preintervention trends differed significantly for LOS. Group composition might also have changed over time (eg, the characteristics of patients who undergo certain procedures or the composition of clinicians providing care). To address this issue, we used propensity score weighting to remove substantial differences in the composition of each group. We also performed a series of sensitivity analyses, including difference-in-differences analysis with group-specific time trends37,49 and analyses using synthetic control methods,36 to address selection bias across time and across groups, and confirmed the robustness of our main results; however, the potential for confounding from unmeasured time-varying changes remains. Third, the difference-in-differences model is susceptible to unobserved time-varying confounding, and we cannot rule out confounding from contemporaneous policies implemented during the study period (eg, the Hospital-Acquired Condition Reduction Program) that affected the surgical procedures and outcome measures under study. Fourth, hospital costs related to the implementation of the P4P intervention, such as hiring and training staff to oversee program implementation, could not be assessed in these data. Hospital payments are also specific to the service and payer; thus, hospital-specific cost-to-charge ratios may not fully capture the true costs of specific services.
Fifth, although the HCUP, a large-scale data set, has been widely used in health care research, it is reported to have a moderate amount of missing data, especially for patient race and ethnicity variables, which may bias the estimates.50 To address concerns about missingness, we used conditional multiple imputation by chained equations50 and found consistent results. Sixth, this study was based on administrative data, which rely on self-reported complications and are thus subject to limitations related to changes in coding practices. To minimize this limitation, we adjusted for time effects in our modeling, but the potential for confounding from coding practice changes remains, and the observed associations might be overestimated.

Conclusions

Our study found evidence suggesting improved surgical care related to the implementation of CMS’s national P4P program using a penalty design. Penalties have recently become popular in P4P programs,51 likely because health care professionals and hospitals are more responsive to losses than gains.52 Although the incidence of surgical complications is low, the costs of complications are high, and our findings suggest that not paying for hospital-acquired infections might successfully encourage hospitals and health care professionals to improve care for surgical patients. Policy makers can use these findings when evaluating the continuation and expansion of this P4P program for the CMS. Other payers also may want to consider implementing similar policies.

Article Information

Accepted for Publication: June 10, 2021.

Published: August 18, 2021. doi:10.1001/jamanetworkopen.2021.21115

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Kim KM et al. JAMA Network Open.

Corresponding Author: Kyung Mi Kim, PhD, RN, Clinical Excellence Research Center, Stanford University School of Medicine, 454 Quarry Rd, MC 5657, Palo Alto, CA 94304 (kyungkim@stanford.edu).

Author Contributions: Dr Kim had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Kim, White, Chapman.

Acquisition, analysis, or interpretation of data: Kim, White, Max, Muench.

Drafting of the manuscript: Kim, Max.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Kim, Max, Muench.

Administrative, technical, or material support: Kim.

Supervision: Kim, Chapman, Muench.

Conflict of Interest Disclosures: None reported.

References
1. McDermott KW, Freeman WJ, Elixhauser A. Overview of operating room procedures during inpatient stays in US hospitals, 2014. Agency for Healthcare Research and Quality. December 2017. Accessed January 7, 2018. https://www.hcup-us.ahrq.gov/reports/statbriefs/sb233-Operating-Room-Procedures-United-States-2014.pdf
2. Kaye DR, Luckenbaugh AN, Oerline M, et al. Understanding the costs associated with surgical care delivery in the Medicare population. Ann Surg. 2020;271(1):23-28. doi:10.1097/SLA.0000000000003165
3. Sokol DK, Wilson J. What is a surgical complication? World J Surg. 2008;32(6):942-944. doi:10.1007/s00268-008-9471-6
4. Healy MA, Mullard AJ, Campbell DA Jr, Dimick JB. Hospital and payer costs associated with surgical complications. JAMA Surg. 2016;151(9):823-830. doi:10.1001/jamasurg.2016.0773
5. Ban KA, Minei JP, Laronga C, et al. American College of Surgeons and Surgical Infection Society: surgical site infection guidelines, 2016 update. J Am Coll Surg. 2017;224(1):59-74. doi:10.1016/j.jamcollsurg.2016.10.029
6. Kandilov AM, Coomer NM, Dalton K. The impact of hospital-acquired conditions on Medicare program payments. Medicare Medicaid Res Rev. 2014;4(4):E1-E23. doi:10.5600/mmrr.004.04.a01
7. Kwong JZ, Weng Y, Finnegan M, et al. Effect of Medicare’s nonpayment policy on surgical site infections following orthopedic procedures. Infect Control Hosp Epidemiol. 2017;38(7):817-822. doi:10.1017/ice.2017.86
8. Healy D, Cromwell J. Hospital-acquired conditions–present on admission: examination of spillover effects and unintended consequences. Centers for Medicare and Medicaid Services. September 2012. Accessed April 25, 2017. https://www.cms.gov/medicare/medicare-fee-for-service-payment/hospitalacqcond/downloads/hac-spillovereffects.pdf
9. Kwong W, Tomlinson G, Feig DS. Maternal and neonatal outcomes after bariatric surgery; a systematic review and meta-analysis: do the benefits outweigh the risks? Am J Obstet Gynecol. 2018;218(6):573-580. doi:10.1016/j.ajog.2018.02.003
10. Matthews LJ, McConda DB, Lalli TAJ, Daffner SD. Orthostetrics: management of orthopedic conditions in the pregnant patient. Orthopedics. 2015;38(10):e874-e880. doi:10.3928/01477447-20151002-53
11. Oranges T, Dini V, Romanelli M. Skin physiology of the neonate and infant: clinical implications. Adv Wound Care (New Rochelle). 2015;4(10):587-595. doi:10.1089/wound.2015.0642
12. Weiss BM, von Segesser LK, Alon E, Seifert B, Turina MI. Outcome of cardiovascular surgery and pregnancy: a systematic review of the period 1984-1996. Am J Obstet Gynecol. 1998;179(6 Pt 1):1643-1653. doi:10.1016/S0002-9378(98)70039-0
13. Patel MS, Volpp KG, Small DS, et al. Association of the 2011 ACGME resident duty hour reforms with mortality and readmissions among hospitalized Medicare patients. JAMA. 2014;312(22):2364-2373. doi:10.1001/jama.2014.15273
14. Krumholz HM, Brindis RG, Brush JE, et al; American Heart Association; Quality of Care and Outcomes Research Interdisciplinary Writing Group; Council on Epidemiology and Prevention; Stroke Council; American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association scientific statement from the quality of care and outcomes research interdisciplinary writing group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462. doi:10.1161/CIRCULATIONAHA.105.170769
15. Jha AK, Orav EJ, Epstein AM. Low-quality, high-cost hospitals, mainly in South, care for sharply higher shares of elderly Black, Hispanic, and Medicaid patients. Health Aff (Millwood). 2011;30(10):1904-1911. doi:10.1377/hlthaff.2011.0027
16. Centers for Medicare & Medicaid Services. Affected hospitals. September 29, 2014. Accessed November 8, 2018. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/HospitalAcqCond/AffectedHospitals.html
17. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Wadsworth Cengage Learning; 2002.
18. Ibrahim AM, Nathan H, Thumma JR, Dimick JB. Impact of the hospital readmission reduction program on surgical readmissions among Medicare beneficiaries. Ann Surg. 2017;266(4):617-624. doi:10.1097/SLA.0000000000002368
19. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615. doi:10.1056/NEJMsa1112351
20. Ryan AM, Burgess JFJ Jr, Tompkins CP, Wallack SS. The relationship between Medicare’s process of care quality measures and mortality. Inquiry. 2009;46(3):274-290. doi:10.5034/inquiryjrnl_46.03.274
21. Shih T, Nicholas LH, Thumma JR, Birkmeyer JD, Dimick JB. Does pay-for-performance improve surgical outcomes? an evaluation of phase 2 of the Premier Hospital Quality Incentive Demonstration. Ann Surg. 2014;259(4):677-681. doi:10.1097/SLA.0000000000000425
22. RAND Corporation. Analysis of hospital pay for performance. Accessed October 21, 2018. https://www.rand.org/pubs/technical_reports/TR562z12/analysis-of-hospital-pay-for-performance.html
23. Bureau of Economic Analysis, US Department of Commerce. National income and product accounts. Accessed December 11, 2018. https://apps.bea.gov/iTable/iTable.cfm?reqid=19&step=3&isuri=1&1921=survey&1903=84
24. Dunn A, Grosse SD, Zuvekas SH. Adjusting health expenditures for inflation: a review of measures for health services research in the United States. Health Serv Res. 2018;53(1):175-196. doi:10.1111/1475-6773.12612
25. Daw JR, Hatfield LA. Matching and regression to the mean in difference-in-differences analysis. Health Serv Res. 2018;53(6):4138-4156. doi:10.1111/1475-6773.12993
26. Wing C, Simon K, Bello-Gomez RA. Designing difference in difference studies: best practices for public health policy research. Annu Rev Public Health. 2018;39(1):453-469. doi:10.1146/annurev-publhealth-040617-013507
27. Stuart EA, Huskamp HA, Duckworth K, et al. Using propensity scores in difference-in-differences models to estimate the effects of a policy change. Health Serv Outcomes Res Methodol. 2014;14(4):166-182. doi:10.1007/s10742-014-0123-z
28. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci. 2010;25(1):1-21. doi:10.1214/09-STS313
29. Cameron AC, Trivedi PK. Microeconometrics Using Stata. Revised ed. Stata Press; 2010.
30. Greene W. The behaviour of the maximum likelihood estimator of limited dependent variable models in the presence of fixed effects. Econometrics J. 2004;7(1):98-119. doi:10.1111/j.1368-423X.2004.00123.x
31. Jones AM. Models for health care. University of York, Centre for Health Economics. January 2010. Accessed March 11, 2017. https://www.york.ac.uk/media/economics/documents/herc/wp/10_01.pdf
32. Mitchell MN. Interpreting and Visualizing Regression Models Using Stata. Stata Press; 2012.
33. Manning WG, Basu A, Mullahy J. Generalized modeling approaches to risk adjustment of skewed outcomes data. J Health Econ. 2005;24(3):465-488. doi:10.1016/j.jhealeco.2004.09.011
34. Manning WG, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20(4):461-494. doi:10.1016/S0167-6296(01)00086-8
35. Basu A, Manning WG. Issues for the next generation of health care cost analyses. Med Care. 2009;47(7)(suppl 1):S109-S114. doi:10.1097/MLR.0b013e31819c94a1
36. Abadie A, Diamond A, Hainmueller J. Synthetic control methods for comparative case studies: estimating the effect of California’s tobacco control program. J Am Stat Assoc. 2010;105(490):493-505. doi:10.1198/jasa.2009.ap08746
37. Ryan AM, Burgess JF Jr, Dimick JB. Why we should not be indifferent to specification choices for difference-in-differences. Health Serv Res. 2015;50(4):1211-1235. doi:10.1111/1475-6773.12270
38. Krell RW, Girotti ME, Dimick JB. Extended length of stay after surgery: complications, inefficient practice, or sick patients? JAMA Surg. 2014;149(8):815-820. doi:10.1001/jamasurg.2014.629
39. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf. 2012;21(11):964-968. doi:10.1136/bmjqs-2012-001081
40. Alteras T, Meyer J, Silow-Carroll S. Hospital quality improvement: strategies and lessons from US hospitals. April 1, 2007. Accessed October 23, 2020. https://www.commonwealthfund.org/publications/fund-reports/2007/apr/hospital-quality-improvement-strategies-and-lessons-us-hospitals
41. Weinick RM, Chien AT, Rosenthal MB, Bristol SJ, Salamon J. Hospital executives’ perspectives on pay-for-performance and racial/ethnic disparities in care. Med Care Res Rev. 2010;67(5):574-589. doi:10.1177/1077558709354522
42. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood). 2016;35(3):401-406. doi:10.1377/hlthaff.2015.1258
43. Brenner MH, Curbow B, Legro MW. The proximal-distal continuum of multiple health outcome measures: the case of cataract surgery. Med Care. 1995;33(4)(suppl):AS236-AS244.
44. Ryan AM, Krinsky S, Maurer KA, Dimick JB. Changes in hospital quality associated with hospital value-based purchasing. N Engl J Med. 2017;376(24):2358-2366. doi:10.1056/NEJMsa1613412
45. Borza T, Oreline MK, Skolarus TA, et al. Association of the hospital readmissions reduction program with surgical readmissions. JAMA Surg. 2018;153(3):243-250. doi:10.1001/jamasurg.2017.4585
46. Ramaswamy A, Marchese M, Cole AP, et al. Comparison of hospital readmission after total hip and total knee arthroplasty vs spinal surgery after implementation of the Hospital Readmissions Reduction Program. JAMA Netw Open. 2019;2(5):e194634. doi:10.1001/jamanetworkopen.2019.4634
47. Scally CP, Thumma JR, Birkmeyer JD, Dimick JB. Impact of surgical quality improvement on payments in Medicare patients. Ann Surg. 2015;262(2):249-252. doi:10.1097/SLA.0000000000001069
48. Bastani H, Goh J, Bayati M. Evidence of upcoding in pay-for-performance programs. Stanford University Graduate School of Business Research Paper No. 15-43. July 13, 2015. doi:10.2139/ssrn.2630454
49. Angrist JD, Pischke JS. Mastering Metrics: The Path From Cause to Effect. Princeton University Press; 2015.
50. Ma Y, Zhang W, Lyman S, Huang Y. The HCUP SID Imputation Project: improving statistical inferences for health disparities research by imputing missing race data. Health Serv Res. 2018;53(3):1870-1889. doi:10.1111/1475-6773.12704
51. Kim KM, Max W, White JS, Chapman SA, Muench U. Do penalty-based pay-for-performance programs improve surgical care more effectively than other payment strategies? a systematic review. Ann Med Surg (Lond). 2020;60:623-630. doi:10.1016/j.amsu.2020.11.060
52. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. 1979;47:263-291. doi:10.2307/1914185