Table 1. Characteristics of Hospitals Participating in ACS NSQIP Compared With Nonparticipating (Control) Hospitals Before and After Propensity Score Matching
Table 2. Patient Characteristics at Hospitals Participating in the ACS NSQIP Compared With Nonparticipating (Control) Hospitals Before and After Propensity Score Matching
Table 3. Risk-Adjusted Patient Outcomes Before vs After Enrolling in ACS NSQIP Compared With Matched Non–ACS NSQIP (Control) Hospitals
Table 4. Relative Risk of Risk-Adjusted Adverse Outcomes Among ACS NSQIP Hospitals and Matched Non–ACS NSQIP (Control) Hospitals in Pre-Post Analysis and Difference-in-Differences Analysis
Table 5. Medicare Payments Before vs After Enrolling in ACS NSQIP Compared With Matched Non–ACS NSQIP (Control) Hospitals
Original Investigation
February 3, 2015

Association of Hospital Participation in a Quality Reporting Program With Surgical Outcomes and Expenditures for Medicare Beneficiaries

Author Affiliations
  • 1Center for Healthcare Outcomes and Policy, University of Michigan, Ann Arbor
  • 2Center for Clinical Management Research, Veterans Affairs Ann Arbor Healthcare System, University of Michigan, Ann Arbor
  • 3Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
  • 4Department of Health Policy and Management, School of Public Health, University of Michigan, Ann Arbor
JAMA. 2015;313(5):496-504. doi:10.1001/jama.2015.25
Abstract

Importance  The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) provides feedback to hospitals on risk-adjusted outcomes. It is not known whether participation in the program improves outcomes and reduces costs relative to nonparticipating hospitals.

Objective  To evaluate the association of enrollment and participation in the ACS NSQIP with outcomes and Medicare payments compared with control hospitals that did not participate in the program.

Design, Setting, and Participants  Quasi-experimental study using national Medicare data (2003-2012) for a total of 1 226 479 patients undergoing general and vascular surgery at 263 hospitals participating in ACS NSQIP and 526 nonparticipating hospitals. A difference-in-differences analytic approach was used to evaluate whether participation in ACS NSQIP was associated with improved outcomes and reduced Medicare payments compared with nonparticipating hospitals that were otherwise similar. Control hospitals were selected using propensity score matching (2 control hospitals for each ACS NSQIP hospital).

Main Outcomes and Measures  Thirty-day mortality, serious complications (eg, pneumonia, myocardial infarction, or acute renal failure and a length of stay >75th percentile), reoperation, and readmission within 30 days. Hospital costs were assessed using price-standardized Medicare payments during hospitalization and 30 days after discharge.

Results  After accounting for patient factors and preexisting time trends toward improved outcomes, there were no statistically significant improvements in outcomes at 1, 2, or 3 years after (vs before) enrollment in ACS NSQIP. For example, in analyses comparing outcomes at 3 years after (vs before) enrollment, there were no statistically significant differences in risk-adjusted 30-day mortality (4.3% after enrollment vs 4.5% before enrollment; relative risk [RR], 0.96 [95% CI, 0.89 to 1.03]), serious complications (11.1% after enrollment vs 11.0% before enrollment; RR, 0.96 [95% CI, 0.91 to 1.00]), reoperations (0.49% after enrollment vs 0.45% before enrollment; RR, 0.97 [95% CI, 0.77 to 1.16]), or readmissions (13.3% after enrollment vs 12.8% before enrollment; RR, 0.99 [95% CI, 0.96 to 1.03]). There were also no differences at 3 years after (vs before) enrollment in mean total Medicare payments ($40 [95% CI, −$268 to $348]), or payments for the index admission (−$11 [95% CI, −$278 to $257]), hospital readmission ($245 [95% CI, −$231 to $721]), or outliers (−$86 [95% CI, −$1666 to $1495]).

Conclusions and Relevance  Over time, hospitals had progressively better surgical outcomes, but enrollment in a national quality reporting program was not associated with improved outcomes or lower Medicare payments among surgical patients. Feedback on outcomes alone may not be sufficient to improve surgical outcomes.

Introduction

Increased scrutiny of hospital performance has led to a proliferation of clinical registries used to benchmark outcomes. One of the most visible national quality reporting programs is the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP).1-3 The cornerstone of this program is an extensive clinical registry, with data abstracted directly from the medical record by trained personnel.1,4,5 The program provides hospitals with reports that include a detailed description of their risk-adjusted outcomes (eg, mortality, specific complications, and length of stay). These reports allow hospitals to benchmark their performance relative to all other ACS NSQIP hospitals. Participating hospitals are encouraged to focus improvement efforts on areas in which they perform poorly.

The extent to which participation in ACS NSQIP improves outcomes is unclear. Several single-center studies from participating hospitals report improvement in outcomes after targeting an area of poor performance with a quality improvement intervention.6,7 However, it is uncertain whether these changes represent salutary effects of the ACS NSQIP program, improvement that would have occurred without enrollment in the program, or simply regression to the mean. The only study evaluating all participating hospitals in the ACS NSQIP demonstrated that the majority of hospitals improved their outcomes over time.8 This study did not compare ACS NSQIP hospitals with a control group, making it difficult to conclude whether improvements in outcomes were truly associated with participation in this program, or simply represent background trends toward improved outcomes at all hospitals.

The objective of this study was to evaluate the association of participation in the ACS NSQIP with outcomes and payments among Medicare patients compared with control hospitals that did not participate in the program over the same period.

Methods
Data Source and Study Population

Data from the Medicare Provider Analysis and Review (MEDPAR) files for 2003-2012 were used to create the main analysis data sets. These files contain hospital discharge abstracts for all fee-for-service acute care hospitalizations of US Medicare beneficiaries, which account for approximately 70% of such admissions in the Medicare population. The Medicare denominator file was used to assess patient vital status at 30 days. The study was reviewed and approved by the University of Michigan institutional review board and was deemed exempt due to the use of secondary data.

Using procedure codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), all patients aged 65 to 99 years undergoing any of 11 high-risk general and vascular surgical procedures were identified: esophagectomy, pancreatic resection, colon resection, gastrectomy, liver resection, ventral hernia repair, cholecystectomy, appendectomy, abdominal aortic aneurysm repair, lower extremity bypass, and carotid endarterectomy (eAppendix 1 in the Supplement contains a complete list of ICD-9-CM codes). These procedures were chosen because they are common, high-risk general and vascular surgical procedures included in the ACS NSQIP registry. Because they account for a disproportionate share of morbidity and mortality in ACS NSQIP, they are the highest priority for quality improvement.9,10 To enhance the homogeneity of hospital case mix, small patient subgroups with much higher baseline risks were excluded. Also excluded were patients with procedure codes indicating that other operations were performed simultaneously (eg, coronary artery bypass and carotid endarterectomy) or under extremely high-risk conditions (eg, ruptured abdominal aortic aneurysm).11-13 Data were missing for only 0.3% of the race variable and 0.3% of hospital characteristics; because this represented less than 1% of observations, the affected patients were excluded from the analyses.14
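
As a rough illustration of this cohort definition, the sketch below applies the age and procedure-code criteria to a claims table. The code sets and column names are hypothetical stand-ins; the actual ICD-9-CM lists are in eAppendix 1 of the Supplement and are not reproduced here.

```python
import pandas as pd

# Hypothetical ICD-9-CM code sets; the real lists are in eAppendix 1.
INDEX_PROCEDURE_CODES = {"45.7", "38.44"}  # eg, colon resection, AAA repair
EXCLUSION_CODES = {"36.1"}                 # eg, concurrent coronary artery bypass

def select_cohort(claims: pd.DataFrame) -> pd.DataFrame:
    """Apply the age and procedure-code criteria described in the text.

    Assumes a hypothetical `proc_codes` column holding each admission's
    list of ICD-9-CM procedure codes and an `age` column.
    """
    in_age = claims["age"].between(65, 99)
    has_index = claims["proc_codes"].map(
        lambda codes: any(c in INDEX_PROCEDURE_CODES for c in codes))
    excluded = claims["proc_codes"].map(
        lambda codes: any(c in EXCLUSION_CODES for c in codes))
    return claims[in_age & has_index & ~excluded]
```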

Outcome Variables

Mortality, serious complications, reoperation, and readmission were assessed to determine whether enrollment in ACS NSQIP was associated with improved outcomes. Mortality was assessed as death within 30 days of the index surgical procedure, which was ascertained from the Medicare beneficiary denominator file. Complications were ascertained from primary and secondary ICD-9-CM diagnostic and procedure codes from the index hospitalization. A subset of codes that have been used in several prior studies of surgical outcomes (eAppendix 2 in the Supplement) was chosen; these codes have been demonstrated to have high sensitivity and specificity in surgical populations.15-18 For this analysis, serious complications were defined as the presence of a coded complication and an extended length of stay (>75th percentile for each procedure). Because most patients without complications are discharged earlier, the addition of the extended length of stay criterion was intended to increase the specificity of the outcome variable.19,20 Reoperations were ascertained using ICD-9-CM procedure codes indicating secondary procedures during the index hospitalization (eAppendix 3 in the Supplement). Reoperations are relatively common in many of these high-risk procedures and are accurately captured using ICD-9-CM procedure billing codes in administrative data sets.21 Readmissions were defined as an admission to any hospital within 30 days after discharge from the index procedure, using standard methods.22
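
To make the composite outcome concrete, here is a minimal sketch of the serious-complication flag as defined above (coded complication plus procedure-specific extended length of stay), assuming a patient-level table with hypothetical column names:

```python
import pandas as pd

def flag_serious_complications(df: pd.DataFrame) -> pd.Series:
    """Serious complication = coded complication AND length of stay
    above the procedure-specific 75th percentile. Column names
    (procedure, los_days, has_complication_code) are hypothetical."""
    # Procedure-specific 75th percentile of length of stay.
    los_p75 = df.groupby("procedure")["los_days"].transform(
        lambda s: s.quantile(0.75))
    return df["has_complication_code"] & (df["los_days"] > los_p75)

# Example:
# df["serious_complication"] = flag_serious_complications(df)
```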

Medicare Payments

The association between enrollment in ACS NSQIP and reduced Medicare payments was also assessed. Quality improvement efforts can potentially decrease costs of care by preventing complications and lowering the intensity of resource use, which would be reflected in lower Medicare payments. Medicare facility payments were therefore used to explore the association of ACS NSQIP participation with lower resource use among Medicare beneficiaries.23-25 For this study, Medicare facility payments from the Medicare Provider Analysis and Review files were used, which include all payments related to the index hospitalization, readmissions, and high-cost outliers. Because Medicare payments vary across hospitals (eg, payments for a disproportionate share of low-income patients and for graduate medical education) and geographic regions (eg, payments are indexed to reflect differences in wages), a previously described method to “price-standardize” Medicare payments was used.25,26 In these analyses, all payments were adjusted for the year of operation by standardizing prices to the most recent year of data available.
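
The published price-standardization method is described in references 25 and 26; the sketch below is only schematic, with hypothetical inputs standing in for the wage-index, disproportionate-share, and teaching add-on corrections:

```python
# Schematic price standardization; not the published algorithm.
# All inputs besides `payment` are hypothetical stand-ins for the
# adjustments described in references 25 and 26.
def standardize_payment(payment: float,
                        dsh_gme_addon: float,
                        wage_index: float,
                        deflator_to_final_year: float) -> float:
    # Remove hospital-specific add-ons and local wage differences.
    base = (payment - dsh_gme_addon) / wage_index
    # Express the result in most-recent-year dollars.
    return base * deflator_to_final_year
```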

Statistical Analysis

The goal of this analysis was to examine whether enrollment in the ACS NSQIP program was associated with improved outcomes for Medicare patients compared with similar hospitals that did not participate during the same period. A difference-in-differences approach was used, which is an econometric method for evaluating changes in outcomes occurring after implementation of a policy.27-30 This approach isolates the improvement in outcomes related to an intervention (ie, enrollment in ACS NSQIP) that exceeds changes over the same period in a control group that was not exposed to the intervention. Enrollment in the ACS NSQIP was ascertained from the program’s semiannual reports. Because the University of Michigan was a participating site throughout the study period, regular semiannual reports were received, which include a list of all currently participating hospitals. Hospitals were assigned an enrollment date based on when they first appeared in the semiannual report. These enrollment dates were verified using publicly available archival copies of the ACS NSQIP website from the Internet Archive.31 However, because the hospital list on the website may not be current, this source was used only to confirm (and not rule out) hospital participation.
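
In its simplest 2-period form, the difference-in-differences estimate contrasts the pre-post change at participating hospitals with the pre-post change at control hospitals:

\[
\hat{\delta}_{\text{DiD}} = \left(\bar{Y}^{\,\text{NSQIP}}_{\text{post}} - \bar{Y}^{\,\text{NSQIP}}_{\text{pre}}\right) - \left(\bar{Y}^{\,\text{control}}_{\text{post}} - \bar{Y}^{\,\text{control}}_{\text{pre}}\right)
\]

The regression models described below generalize this contrast to 3 postenrollment years and add patient-level risk adjustment.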

Because the ACS NSQIP hospitals may not be representative of all hospitals, 2 separate strategies were employed to adjust for potential differences between ACS NSQIP and control hospitals. First, propensity scores were used to match ACS NSQIP hospitals and control hospitals on baseline outcomes, surgical volume, and preenrollment trends in outcomes. To ensure that hospitals in the study and control groups were on the same trajectory for postoperative outcomes before ACS NSQIP enrollment of the study hospitals, they were matched on risk-adjusted mortality at 1 year and 2 years prior to ACS NSQIP enrollment. This matching ensured that the control hospitals and ACS NSQIP hospitals had “parallel trends” in the preenrollment period, which is one of the key assumptions of a difference-in-differences methodology.32,33 Second, multivariable adjustment was used to account for all observable hospital and patient characteristics that were not included in the propensity score model.

To create propensity scores for hospital matching, a logistic regression model was fit with ACS NSQIP participation (vs nonparticipation) as the dependent variable. Annual surgical volume, baseline risk-adjusted outcomes, and preenrollment trends in risk-adjusted outcomes were included as independent variables. This matching created a cohort of ACS NSQIP and control hospitals with parallel trends in the 3 years prior to enrollment. Preenrollment trends in outcomes, within and between ACS NSQIP hospitals and control hospitals, were compared using univariate statistics, and no significant difference in mortality was noted over the preenrollment period. The C statistic for the propensity score model was 0.85, indicating excellent discrimination.

Matching was performed without replacement using a caliper width of 0.2 times the standard deviation of the propensity score,34 which yielded a 100% match of all ACS NSQIP hospitals with excellent reduction in bias (eAppendix 4 in the Supplement). A sensitivity analysis narrowing the caliper width to 0.1 demonstrated similar findings but excluded 60 ACS NSQIP hospitals, so the 0.2 caliper width was retained. Although there was a very large pool of potential control hospitals (ie, hospitals not participating in ACS NSQIP), 1:2 matching (1 ACS NSQIP hospital to 2 control hospitals) was determined to be optimal based on the degree of bias reduction and the percentage of ACS NSQIP hospitals matched.34 Further attempts to improve the bias reduction yielded fewer matched hospitals (fewer than all 263 participating ACS NSQIP hospitals could be matched to control hospitals). Covariate imbalance before and after matching was checked with t tests for equality of means, with the standardized percentage bias before and after matching (summarized as the achieved percentage reduction in absolute bias), and with the pseudo-R2. This propensity score matching resulted in an overall 98.4% reduction in bias and excellent overlap in propensity scores for the included variables (eAppendix 4 and eFigure in the Supplement); that is, among the 3 variables included in the propensity model, 98% of the imbalance in covariates was removed after matching. The pseudo-R2 was .01 following matching. There were no significant differences in preenrollment trends in outcomes between matched ACS NSQIP and control hospitals, thereby satisfying the “parallel trends” assumption.28,33
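
A minimal sketch of this 1:2 caliper matching, assuming a hospital-level data frame with hypothetical columns (nsqip indicator, volume, baseline_mortality, preenrollment_trend). The published analysis was conducted in Stata, so this Python version is illustrative only:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_hospitals(df: pd.DataFrame, caliper_sd: float = 0.2):
    """1:2 nearest-neighbor matching on the propensity score, without
    replacement, within a caliper of `caliper_sd` propensity-score SDs."""
    X = df[["volume", "baseline_mortality", "preenrollment_trend"]]
    model = LogisticRegression().fit(X, df["nsqip"])
    df = df.assign(ps=model.predict_proba(X)[:, 1])
    caliper = caliper_sd * df["ps"].std()

    controls = df[df["nsqip"] == 0].copy()
    pairs = []
    for _, h in df[df["nsqip"] == 1].iterrows():
        dist = (controls["ps"] - h["ps"]).abs()
        nearest = dist[dist <= caliper].nsmallest(2)  # 2 controls per case
        pairs.append((h.name, list(nearest.index)))
        controls = controls.drop(nearest.index)       # without replacement
    return pairs
```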

To perform the difference-in-differences analysis, regression models were used to evaluate the relationship between each dependent variable (mortality, serious complications, reoperations, readmissions, and Medicare payments) and enrollment in ACS NSQIP. Nonparticipating control hospitals were assigned the same enrollment year as their corresponding matched ACS NSQIP hospitals. For the dichotomous outcome variables, logistic regression was used; for the continuous Medicare payment variables, generalized linear models with a log link were used. A dummy variable was included indicating whether the patient had surgery before or after enrollment in ACS NSQIP, defined at year 1, year 2, and year 3 after enrollment. To adjust for linear time trends, a yearly time variable was included. Finally, 3 interaction terms of the ACS NSQIP (vs non–ACS NSQIP [control]) variable and the before/after enrollment variable (ACS NSQIP × year 1 after enrollment, ACS NSQIP × year 2 after enrollment, and ACS NSQIP × year 3 after enrollment) were added. The coefficients of these interaction terms (ie, the difference-in-differences estimators) can be interpreted as the independent relationship between enrollment in ACS NSQIP and outcomes for Medicare patients at those periods.29,35,36 In all models evaluating outcomes and Medicare payments, patient characteristics were adjusted for by entering the 29 Elixhauser comorbid diseases as individual covariates, a widely used and previously validated approach for risk adjustment in administrative data.37,38 These comorbidities were obtained from the ICD-9-CM coding during the same hospital admission. All models were adjusted for the type of surgery by including a categorical variable for each procedure. The difference-in-differences analyses were performed adjusting for all hospital covariates not included in the propensity score model (for-profit status, geographic region, bed size, teaching hospital status, and urban location). Additionally, all difference-in-differences analyses accounted for clustering at the hospital level with robust standard errors.
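
A minimal sketch of one such model (30-day mortality), assuming a patient-level data frame df with hypothetical 0/1 columns nsqip and post1-post3 (years 1-3 after the assigned enrollment date), a linear year term, and a hospital_id for clustered standard errors. The full models also enter the 29 Elixhauser comorbidity indicators and the hospital covariates, abbreviated here:

```python
import statsmodels.formula.api as smf

# nsqip * (post1 + post2 + post3) expands to the main effects plus the
# three nsqip:postK interaction terms, which are the
# difference-in-differences estimators at years 1-3 after enrollment.
did = smf.logit(
    "death30 ~ nsqip * (post1 + post2 + post3) + year + C(procedure)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hospital_id"]})

print(did.params.filter(like="nsqip:"))  # the DiD coefficients
```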

In addition to the main analysis, several sensitivity analyses were performed. First, to assess the effect of including the highest-risk patients, a difference-in-differences analysis including the previously excluded high-risk patient subgroups (eg, emergency surgery and ruptured abdominal aortic aneurysm repair) was performed. Second, to assess the effect of using hierarchical modeling rather than robust standard errors to account for clustering of similar patients within hospitals, a sensitivity analysis was performed using a multilevel model with hospital-level random effects.

All odds ratios were converted to relative risks because the former may not be an accurate representation of the risk ratio when an outcome is relatively common,39 and all 95% CIs were calculated using robust variance estimates. A P value less than .05 was used as the threshold for statistical significance, and all reported P values were 2-sided. Model fit was assessed using goodness-of-fit statistics, and model discrimination was assessed using receiver operating characteristic (ROC) curves. All statistical analyses were conducted using Stata (StataCorp), version 12.0.
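
The conversion cited in reference 39 (Zhang and Yu) requires only the odds ratio and the outcome rate in the reference group; a minimal sketch:

```python
def odds_ratio_to_relative_risk(odds_ratio: float, p0: float) -> float:
    """Zhang-Yu approximation (reference 39): convert an adjusted odds
    ratio to a relative risk, where p0 is the outcome rate in the
    unexposed (reference) group."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# Example: an OR of 0.94 with a 4.5% baseline mortality rate.
print(odds_ratio_to_relative_risk(0.94, 0.045))  # ~0.94
```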

Results

A total of 294 hospitals enrolled in the ACS NSQIP during the study period. Of these, 20 hospitals were excluded because they performed only pediatric surgery or had no Medicare identifier. Among the 274 remaining hospitals, 5 hospitals were excluded due to incomplete participation (ie, hospitals that joined and dropped ACS NSQIP during the study period) and 6 hospitals were excluded because they only had 1 year of participation during our study period. In total, 263 ACS NSQIP hospitals were each matched with 2 control hospitals to yield 526 non–ACS NSQIP (control) hospitals. The 263 ACS NSQIP hospitals had a median follow-up after enrollment in the program of 3.8 years and a minimum of 2 years. Table 1 shows the hospital characteristics before and after propensity score matching for participating and nonparticipating hospitals. The ACS NSQIP hospitals and control hospitals were well matched for the variables used in the propensity score matching, including surgical volume and baseline outcomes for the 2 years prior to NSQIP enrollment. The baseline (during the year prior to enrollment) 30-day mortality was 4.9% for ACS NSQIP hospitals vs 5.0% for control hospitals (P = .55), serious complications were 11.3% for ACS NSQIP hospitals vs 10.2% for control hospitals (P <.001), reoperation was 0.5% for ACS NSQIP hospitals vs 0.5% for control hospitals (P = .66), and readmission was 13.0% for ACS NSQIP hospitals vs 12.6% for control hospitals (P = .28) (Table 1). Although many other hospital characteristics were clinically similar (nurse:patient ratio, percentage of Medicaid patients, and urban location), even after propensity matching, the ACS NSQIP hospitals were slightly larger with more admissions, higher total surgical operations, more employees, more operating rooms, and were more likely to have nonprofit status and be teaching hospitals (Table 1).

Patient characteristics were generally clinically similar at ACS NSQIP and control hospitals despite statistically significant differences (Table 2). Patients were clinically similar with respect to average age (75.7 years for ACS NSQIP hospitals vs 76.1 years for control hospitals, P < .001) and the proportion that were women (49.0% for ACS NSQIP hospitals vs 50.0% for control hospitals, P <.001) and nonwhite race (11.5% for ACS NSQIP hospitals vs 9.2% for control hospitals, P <.001). Approximately two-thirds of the included surgical cases at both ACS NSQIP hospitals (65.9%) and control hospitals (64.6%) were general surgery with the remaining cases representing major vascular procedures (Table 2). The procedure mix was comparable for both general and vascular surgery cases, although ACS NSQIP hospitals tended to perform more complex gastrointestinal cancer resections than control hospitals (esophagectomy, 1.4% for ACS NSQIP hospitals vs 0.6% for control hospitals, P <.001; pancreatectomy, 2.0% for ACS NSQIP hospitals vs 0.8% for control hospitals, P <.001; gastrectomy, 2.6% for ACS NSQIP hospitals vs 1.7% for control hospitals, P <.001). Although statistically significant differences were noted, patients at participating and nonparticipating hospitals were generally similar in terms of comorbid diseases with no clinically important differences apparent (Table 2).

Although there were slight trends toward improved outcomes in ACS NSQIP hospitals after (vs before) enrollment (at year 1, year 2, and year 3), there were similar trends in control hospitals (Table 3). For example, 30-day mortality among ACS NSQIP hospitals declined from 4.6% (95% CI, 4.6%-4.7%) to 4.2% (95% CI, 4.2%-4.3%) during the study period (P <.001), compared with a decline from 4.9% (95% CI, 4.8%-4.9%) to 4.6% (95% CI, 4.5%-4.6%) among control hospitals (P <.001). In difference-in-differences analyses, there was no statistically significant reduction in any measured outcome after enrollment in ACS NSQIP (Table 4). For example, there was no significant difference in risk-adjusted 30-day mortality in the 3 years following enrollment: year 1 after enrollment (relative risk [RR], 0.96 [95% CI, 0.90-1.02]); year 2 after enrollment (RR, 0.94 [95% CI, 0.88-1.00]); year 3 after enrollment (RR, 0.96 [95% CI, 0.89-1.03]) (Table 4). Even at 3 years following enrollment, there remained no significant differences in the rates of serious complications (11.1% after enrollment vs 11.0% before enrollment; RR, 0.96 [95% CI, 0.91-1.00]), reoperations (0.49% after enrollment vs 0.45% before enrollment; RR, 0.97 [95% CI, 0.77-1.16]), and readmissions (13.3% after enrollment vs 12.8% before enrollment; RR, 0.99 [95% CI, 0.96-1.03]) (Table 4).

There were no statistically significant differences in 30-day Medicare payments in difference-in-differences analyses, even when facility payments were separated into payments for the index hospital stay, payments for readmissions, and payments for outliers (Table 5). For example, at 3 years following enrollment, there were no significant differences in mean total Medicare payments ($40 [95% CI, −$268 to $348]), payments for index admission (−$11 [95% CI, −$278 to $257]), payments for readmission ($245 [95% CI, −$231 to $721]), or payments for outliers (−$86 [95% CI, −$1666 to $1495]).

In a sensitivity analysis including patients from the previously excluded high-risk subgroups, there were no significant differences in the rates of 30-day mortality, serious complications, reoperations, or readmissions following enrollment in the ACS NSQIP. In a second sensitivity analysis using hierarchical modeling, the results were similarly unchanged, with no significant differences in any of these outcomes.

Discussion

In this study, there was a slight time trend toward improved surgical outcomes in both ACS NSQIP and control hospitals. To evaluate the extent to which these improved outcomes were independently associated with enrollment in ACS NSQIP, we matched each ACS NSQIP hospital with 2 control hospitals that had similar trends in outcomes before enrollment, as well as similar baseline outcomes and surgical volumes. In a comparison between ACS NSQIP and matched control hospitals, there was no independent association of hospital enrollment in this quality reporting program with improved outcomes or decreased Medicare payments at year 1, year 2, or year 3. Because of this control group of hospitals, the independent association of enrollment in ACS NSQIP with adverse outcomes and Medicare payments could be isolated from any confounding background trends toward improved outcomes. These findings imply that participation in hospital quality reporting programs, such as ACS NSQIP, may not be sufficient to improve outcomes.

Prior studies have reported a salutary effect of participation in ACS NSQIP. Several single-center studies have reported improvements in specific complications after a local quality improvement intervention.6,40,41 Many of these interventions were initiated because the hospital was identified as a poorly performing “outlier” on its ACS NSQIP report. After implementing best practices at their institutions, most studies report improvement in risk-adjusted outcomes.6,40,41 However, because these studies lack a control group, it is difficult to know whether such changes represent true improvements in outcomes or simply reflect regression to the mean. Regression to the mean is observed when individuals with an extreme value on a measure spontaneously move back toward the average. Distinguishing true improvement in outcomes from regression to the mean in quality improvement research is difficult and requires a control group, which was achieved in this study by matching ACS NSQIP hospitals to a larger cohort of nonparticipating hospitals. Using the ACS NSQIP clinical registry, Hall and colleagues8 found that the majority of hospitals improved their risk-adjusted outcomes after enrolling in the program during 2005-2007. However, that study lacked a control group, and it is not known whether improved outcomes would have occurred in the absence of ACS NSQIP enrollment. By using a control group, the present study found secular improvements in mortality that were independent of ACS NSQIP enrollment.

This study has certain limitations. One is the use of administrative data rather than a clinical registry. Clinical registries may have more detailed information on patient risk factors and outcomes. Nonetheless, no other data source could be used to address this important question because no registry collects data from both participating and nonparticipating hospitals. Medicare data provide the most comprehensive means available to capture outcomes and payments at both participating and nonparticipating hospitals. Furthermore, this study was specifically designed to take advantage of the strengths and to minimize the weaknesses of administrative data. The first weakness of administrative data is the assessment of patient comorbidities and severity of illness, which are needed for risk adjustment. In this analysis, the best available comorbid disease index was used for risk adjustment. Moreover, the difference-in-differences design also mitigates this limitation by adjusting for any unobserved differences in patient case mix that do not change over time.19,30,35 There is no reason to believe that changes in patient case mix differed between participating and nonparticipating hospitals during the study period. Another limitation of administrative data is the identification of patient outcomes. This was addressed by assessing outcomes reliably coded in billing records, including mortality, reoperations, and readmissions. Identification of complications, which relies on ICD coding, was optimized by using only complication codes known to have high sensitivity and specificity in surgical patients.15,16 An extended length of stay criterion was added to the assessment of complications (ie, patients had to have both an ICD-9-CM code and a prolonged length of stay) to improve specificity, because patients with a prolonged length of stay have likely had complications.20

Another potential limitation relates to our ability to only evaluate the association of outcomes and participation in the ACS NSQIP through 2012. Because there may be a lag between ACS NSQIP enrollment and improved outcomes, we thought it was important to have at least 2 years of data to evaluate outcomes of participating hospitals. Consequently, all hospitals that enrolled in ACS NSQIP through 2010 were included and outcomes for at least the following 2 years were assessed. During the study period, outcomes feedback was the primary means by which ACS NSQIP affected surgical outcomes. Our findings suggest that this was not effective.

There are several potential reasons why improved outcomes among participating hospitals were not found. Conceivably, participating hospitals may not have initiated quality improvement efforts after receiving ACS NSQIP reports. The ACS NSQIP provides performance feedback that is not publicly reported, which may not adequately motivate participating hospitals to make changes. Other strategies have much stronger incentives. For example, the accountability of public reporting of hospital performance can motivate improvement.42,43 Other strategies, such as value-based purchasing, including pay-for-performance and nonpayment for adverse events, directly incentivize hospitals financially.44,45 It is also possible that hospitals participating in ACS NSQIP implemented quality improvement efforts but that these efforts did not improve outcomes.46 Clinical quality improvement is challenging for hospitals. Changing physician practice requires complex, sustained, multifaceted interventions, and most hospitals may not have the expertise or resources to launch effective quality improvement interventions.

Conclusions

Enrollment in a national surgical quality reporting program was not associated with improved outcomes or lower payments among Medicare patients. Feedback of outcomes alone may not be sufficient to improve surgical outcomes.

Article Information

Corresponding Author: Nicholas H. Osborne, MD, MS, Section of Vascular Surgery, Cardiovascular Center 5168, 1500 E Medical Center Dr, SPC 5867, Ann Arbor, MI 48109-5867 (nichosbo@umich.edu).

Author Contributions: Dr Osborne and Ms Thumma had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Osborne, Nicholas, Ryan, Dimick.

Acquisition, analysis, or interpretation of data: Osborne, Nicholas, Thumma, Dimick.

Drafting of the manuscript: Osborne, Thumma, Dimick.

Critical revision of the manuscript for important intellectual content: Osborne, Nicholas, Ryan, Dimick.

Statistical analysis: Osborne, Nicholas, Ryan, Thumma, Dimick.

Obtained funding: Dimick.

Administrative, technical, or material support: Nicholas, Dimick.

Study supervision: Dimick.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Dimick reports being a consultant for and having an equity interest in ArborMetrix, which provides software and analytics for measuring hospital quality and efficiency, and receiving personal fees from the University of Virginia, University of Maryland, University of South Florida, University of Cincinnati, and Vanderbilt University. No other disclosures were reported.

Funding/Support: This study was supported by grant R01AG039434 from the National Institute on Aging (Drs Dimick, Osborne, and Nicholas).

Role of the Sponsors: The National Institute on Aging had no role in the design and conduct of this study; collection, management, analysis, and interpretation of the data; or preparation of the manuscript.

Disclaimer: The views expressed herein do not necessarily represent the views of the United States government.

References
1. Ingraham AM, Richards KE, Hall BL, Ko CY. Quality improvement in surgery. Adv Surg. 2010;44:251-267.
2. Fink AS, Campbell DA Jr, Mentzer RM Jr, et al. The National Surgical Quality Improvement Program in non–Veterans Administration hospitals. Ann Surg. 2002;236(3):344-353; discussion 344-353.
3. Khuri SF, Henderson WG, Daley J, et al. Successful implementation of the Department of Veterans Affairs’ National Surgical Quality Improvement Program in the private sector. Ann Surg. 2008;248(2):329-336.
4. Best WR, Khuri SF, Phelan M, et al. Identifying patient preoperative risk factors and postoperative adverse events in administrative databases. J Am Coll Surg. 2002;194(3):257-266.
5. Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry vs administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256(6):973-981.
6. Ellner SJ. Hospital puts ACS NSQIP to the test and improves patient safety. Bull Am Coll Surg. 2011;96(9):9-11.
7. Glickson J. ACS NSQIP National Conference. Bull Am Coll Surg. 2013;98(10):66-71.
8. Hall BL, Hamilton BH, Richards K, Bilimoria KY, Cohen ME, Ko CY. Does surgical quality improve in the American College of Surgeons National Surgical Quality Improvement Program? Ann Surg. 2009;250(3):363-376.
9. Schilling PL, Dimick JB, Birkmeyer JD. Prioritizing quality improvement in general surgery. J Am Coll Surg. 2008;207(5):698-704.
10. Schilling PL, Dimick JB, Birkmeyer JD. Prioritizing quality improvement in vascular surgery. Surg Innov. 2010;17(2):127-131.
11. Finks JF, Osborne NH, Birkmeyer JD. Trends in hospital volume and operative mortality for high-risk surgery. N Engl J Med. 2011;364(22):2128-2137.
12. Ghaferi AA, Birkmeyer JD, Dimick JB. Variation in hospital mortality associated with inpatient surgery. N Engl J Med. 2009;361(14):1368-1375.
13. Birkmeyer JD, Stukel TA, Siewers AE, Goodney PP, Wennberg DE, Lucas FL. Surgeon volume and operative mortality in the United States. N Engl J Med. 2003;349(22):2117-2127.
14. Little RJA, Rubin DB. Statistical Analysis With Missing Data. 2nd ed. Hoboken, NJ: Wiley; 2002.
15. Iezzoni LI, Daley J, Heeren T, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31(1):40-55.
16. Weingart SN, Iezzoni LI, Davis RB, et al. Use of administrative data to find substandard care. Med Care. 2000;38(8):796-806.
17. Iezzoni LI, Daley J, Heeren T, et al. Identifying complications of care using administrative data. Med Care. 1994;32(7):700-715.
18. Lawthers AG, McCarthy EP, Davis RB, Peterson LE, Palmer RH, Iezzoni LI. Identification of in-hospital complications from claims data. Med Care. 2000;38(8):785-795.
19. Dimick JB, Nicholas LH, Ryan AM, Thumma JR, Birkmeyer JD. Bariatric surgery complications before vs after implementation of a national policy restricting coverage to centers of excellence. JAMA. 2013;309(8):792-799.
20. Livingston EH. Procedure incidence and in-hospital complication rates of bariatric surgery in the United States. Am J Surg. 2004;188(2):105-110.
21. Morris AM, Baldwin LM, Matthews B, et al. Reoperation as a quality indicator in colorectal surgery. Ann Surg. 2007;245(1):73-79.
22. Tsai TC, Joynt KE, Orav EJ, Gawande AA, Jha AK. Variation in surgical-readmission rates and quality of hospital care. N Engl J Med. 2013;369(12):1134-1142.
23. Birkmeyer JD, Gust C, Baser O, Dimick JB, Sutherland JM, Skinner JS. Medicare payments for common inpatient procedures. Health Serv Res. 2010;45(6 Pt 1):1783-1795.
24. Dimick JB, Weeks WB, Karia RJ, Das S, Campbell DA Jr. Who pays for poor surgical quality? J Am Coll Surg. 2006;202(6):933-937.
25. Miller DC, Gust C, Dimick JB, Birkmeyer N, Skinner J, Birkmeyer JD. Large variations in Medicare payments for surgery highlight savings potential from bundled payment programs. Health Aff (Millwood). 2011;30(11):2107-2115.
26. Gottlieb DJ, Zhou W, Song Y, Andrews KG, Skinner JS, Sutherland JM. Prices don’t drive regional Medicare spending variations. Health Aff (Millwood). 2010;29(3):537-543.
27. Colla CH, Wennberg DE, Meara E, et al. Spending differences associated with the Medicare Physician Group Practice Demonstration. JAMA. 2012;308(10):1015-1023.
28. Dimick JB, Ryan AM. Methods for evaluating changes in health care policy. JAMA. 2014;312(22):2401-2402.
29. Volpp KG, Rosen AK, Rosenbaum PR, et al. Mortality among hospitalized Medicare beneficiaries in the first 2 years following ACGME resident duty hour reform. JAMA. 2007;298(9):975-983.
30. Wooldridge JM. Introductory Econometrics: A Modern Approach. 4th ed. Mason, OH: South-Western Cengage Learning; 2009.
31. Internet Archive. Wayback Machine. http://www.archive.org/web/. Accessed January 7, 2015.
32. Bertrand M, Duflo E, Mullainathan S. How much should we trust differences-in-differences estimates? Q J Econ. 2004;119(1):249-275. doi:10.1162/003355304772839588
33. Ryan AM, Burgess JF Jr, Dimick JB. Why we shouldn’t be indifferent to specification in difference-in-differences analysis [published online December 9, 2014]. Health Serv Res. doi:10.1111/1475-6773.12270
34. Austin PC. Optimal caliper widths for propensity-score matching when estimating differences in means and differences in proportions in observational studies. Pharm Stat. 2011;10(2):150-161.
35. Donald SG, Lang K. Inference with difference-in-differences and other panel data. Rev Econ Stat. 2007;89(2):221-233. doi:10.1162/rest.89.2.221
36. Ryan AM. Effects of the Premier Hospital Quality Incentive Demonstration on Medicare patient mortality and cost. Health Serv Res. 2009;44(3):821-842.
37. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
38. Southern DA, Quan H, Ghali WA. Comparison of the Elixhauser and Charlson/Deyo methods of comorbidity measurement in administrative data. Med Care. 2004;42(4):355-360.
39. Zhang J, Yu KF. What’s the relative risk? JAMA. 1998;280(19):1690-1691.
40. Neumayer L, Mastin M, Vanderhoof L, Hinson D. Using the Veterans Administration National Surgical Quality Improvement Program to improve patient outcomes. J Surg Res. 2000;88(1):58-61.
41. Rowell KS, Turrentine FE, Hutter MM, Khuri SF, Henderson WG. Use of National Surgical Quality Improvement Program data as a catalyst for quality improvement. J Am Coll Surg. 2007;204(6):1293-1300.
42. Hannan EL, Sarrazin MS, Doran DR, Rosenthal GE. Provider profiling and quality improvement efforts in coronary artery bypass graft surgery. Med Care. 2003;41(10):1164-1172.
43. Hibbard JH, Stockard J, Tusler M. Hospital performance reports. Health Aff (Millwood). 2005;24(4):1150-1160.
44. Rosenthal MB. Nonpayment for performance? N Engl J Med. 2007;357(16):1573-1575.
45. Rosenthal MB. Beyond pay for performance—emerging models of provider-payment reform. N Engl J Med. 2008;359(12):1197-1200.
46. Walshe K, Freeman T. Effectiveness of quality improvement. Qual Saf Health Care. 2002;11(1):85-87.