Figure 1.  Flow of Hospitalization Cohorts

NSQIP, National Surgical Quality Improvement Program; UHC, University HealthSystem Consortium.

aNumber of hospitals was constant in all subcohorts in the study. Four hospitals in the UHC did not contribute hospitalizations meeting the inclusion criteria.

Figure 2.  Adjusted Rates of Complications, Serious Complications, and Mortality by Hospital NSQIP Participation and Year

NSQIP, National Surgical Quality Improvement Program. Error bars indicate 95% CIs. Adjusted for patient comorbidity, operation type, age, and sex.

Table 1.  Characteristics of Hospitalizations by Procedure Typea
Table 2.  Characteristics of NSQIP and Non-NSQIP Hospitals and Patients
Table 3.  Hospitalizations by Hospital Participation in NSQIP and Yeara
Table 4.  Patient Outcomes in NSQIP Hospitals vs Non-NSQIP Hospitals
Original Investigation
February 3, 2015

Association of Hospital Participation in a Surgical Outcomes Monitoring Program With Inpatient Complications and Mortality

Author Affiliations
  • 1 Department of Surgery, Mayo Clinic Arizona, Phoenix
  • 2 Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Surgical Outcomes Division, Phoenix, Arizona
  • 3 Department of Surgery, Mayo Clinic Rochester, Rochester, Minnesota
  • 4 University HealthSystem Consortium, Chicago, Illinois
  • 5 Department of Health Systems Management, Rush University, Chicago, Illinois
JAMA. 2015;313(5):505-511. doi:10.1001/jama.2015.90
Abstract

Importance  Programs that analyze and report rates of surgical complications are an increasing focus of quality improvement efforts. The most comprehensive tool currently used for outcomes monitoring in the United States is the American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP).

Objective  To compare surgical outcomes experienced by patients treated at hospitals that did vs did not participate in the NSQIP.

Design, Setting, and Participants  Data from the University HealthSystem Consortium from January 2009 to July 2013 were used to identify elective hospitalizations representing a broad spectrum of elective general/vascular operations in the United States. Data on hospital participation in the NSQIP were obtained through review of semiannual reports published by the ACS. Hospitalizations at any hospital that discontinued or initiated participation in the NSQIP during the study period were excluded after the date on which that hospital’s status changed. A difference-in-differences approach was used to model the association between hospital-based participation in NSQIP and changes in rates of postoperative outcomes over time.

Exposure  Hospital participation in the NSQIP.

Main Outcomes and Measures  Risk-adjusted rates of any complications, serious complications, and mortality during a hospitalization for elective general/vascular surgery.

Results  The cohort included 345 357 hospitalizations occurring in 113 different academic hospitals; 172 882 (50.1%) hospitalizations were in NSQIP hospitals. Hospitalized patients were predominantly female (61.5%), with a mean age of 55.7 years. The types of procedures performed most commonly in the analyzed hospitalizations were hernia repairs (15.7%), bariatric (10.5%), mastectomy (9.7%), and cholecystectomy (9.0%). After accounting for patient risk, procedure type, underlying hospital performance, and temporal trends, the difference-in-differences model demonstrated no statistically significant differences over time between NSQIP and non-NSQIP hospitals in terms of likelihood of complications (adjusted odds ratio, 1.00; 95% CI, 0.97-1.03), serious complications (adjusted odds ratio, 0.98; 95% CI, 0.94-1.03), or mortality (adjusted odds ratio, 1.04; 95% CI, 0.94-1.14).

Conclusions and Relevance  No association was found between hospital-based participation in the NSQIP and improvements in postoperative outcomes over time within a large cohort of patients undergoing elective general/vascular operations at academic hospitals in the United States. These findings suggest that a surgical outcomes reporting system does not provide a clear mechanism for quality improvement.

Introduction

An estimated 27% of all inpatient hospital care involves surgical treatment.1 Because health care systems seek optimal outcomes for these patients, surgical results have become an increasing focus of quality improvement efforts. The most prominent surgical quality improvement effort is the National Surgical Quality Improvement Program (NSQIP), administered through the American College of Surgeons (ACS). Through the NSQIP, participating hospitals voluntarily contribute data pertaining to a sample of their surgical procedures. In return, each hospital receives periodic reports comparing the risk-adjusted outcomes achieved by that hospital with those of other participating hospitals. The NSQIP has expanded significantly since its inception in 2004 and currently includes 445 hospitals.2

Participation in the NSQIP may identify needs for quality improvement efforts. Using outcomes reports, hospitals are better able to identify areas in need of improvement and appropriately direct resources to enhance outcomes. A longitudinal analysis of surgical outcomes from the NSQIP demonstrated a significant reduction in the rates of complications that occurred at hospitals participating in the NSQIP.3 However, longitudinal analyses do not account for changes occurring with time that may not be related to a specific intervention. To demonstrate the effect of an intervention, outcomes must be assessed before and after the intervention and compared with a control group not exposed to the intervention. This sort of analysis has not been performed to assess the efficacy of NSQIP. We examined the surgical outcomes experienced by patients treated at hospitals that participated in NSQIP and compared them with a control group of patients treated at non-NSQIP hospitals. A difference-in-differences design was used to characterize and quantify the extent to which participation in the NSQIP was associated with a measurable improvement in surgical outcomes.

Methods
Data Sources

Data regarding hospitalizations for elective surgery were obtained through the University HealthSystem Consortium (UHC). The UHC is a consortium of 117 academic hospitals, with each member hospital contributing administrative data regarding its hospitalizations to a centralized data repository. These data include elements commonly found within discharge data sets, including the type of admission (elective, urgent, or emergency), dates of treatment, demographics, procedure codes, and diagnosis codes. Based on its internal processes, the UHC ascertains the presence (vs absence) of 29 different comorbidities and 14 different complications (eTables 1 and 2 in the Supplement). Each unit of observation, therefore, was a hospitalization within which an elective surgical procedure was performed.

The identity (including name and location) of the hospital where each hospitalization occurred was obtained using a Medicare identification number present for each observation in the UHC data set. Information regarding each hospital’s participation in the NSQIP was ascertained through review of the semiannual reports published as part of the NSQIP. Hospital characteristics were obtained through UHC internal data sources. The study was approved by the institutional review board of the Mayo Clinic College of Medicine, including a waiver of consent for patients included in the study.

Case Selection

As a program, the NSQIP abstracts information regarding a sample of general surgery (including pancreas, colorectal, hernia, bariatric, liver, thyroid, esophageal, and appendix) and vascular operations. Therefore, the relationship between hospital-based participation in NSQIP and the outcomes of these operations was the primary focus of this study. Hospitalizations for analysis were selected on the basis of specific International Classification of Diseases, Ninth Revision (ICD-9) procedure codes corresponding to commonly performed inpatient general/vascular surgical operations (eTable 3 in the Supplement).

Inclusion/Exclusion Criteria

This study used a subset of UHC data pertaining to hospitalizations for inpatient surgical procedures performed between January 1, 2009, and July 1, 2013 (Figure 1). Although data prior to 2009 were available, these records were excluded because of important changes in data reporting requirements as stipulated by the Centers for Medicare & Medicaid Services.4 Since 2009, all ICD-9 diagnosis codes within the UHC data repository have been coded using a “present on admission” specification. This specification is critical to the accurate categorization of diagnoses as being comorbidities (present on admission) vs complications (not present on admission). The analytic cohort was restricted to include only elective admissions, based on a discrete variable (admission status) within the UHC data set. This restriction was intended to minimize problems associated with modeling the acuity of admission (eg, elective vs emergency department admission vs trauma). Any hospitalization in which multiple procedures were performed on the day of admission (eg, laparoscopic colectomy plus ventral hernia repair) was also excluded. Hospitalizations for procedures that were performed infrequently in the data set (<2000 observations) were excluded to allow for a more stable estimate of procedural risk.

Data Classification

Data regarding postoperative outcomes occurring during the index hospitalization were analyzed based on discharge data from the UHC. The UHC reviews discharge data and reports risk-adjusted metrics across a broad range of postoperative outcomes (eTable 2 in the Supplement). Based on these UHC-defined outcomes, 3 types of postoperative outcomes were analyzed. First, a composite variable registering any complication (other than mortality) was generated using the entire set of UHC-defined postoperative outcomes (eTable 2). Second, a composite variable denoting a serious complication (other than mortality) was constructed using a subset of the overall set of complications (eTable 2). From within the UHC-defined set of postoperative complications, those that were associated with a likelihood of mortality of greater than 10% (within the UHC data set) were designated serious complications. For this reason, aspiration pneumonia (associated mortality risk = 14.6%) was considered a serious complication, whereas nosocomial pneumonia (associated mortality risk = 3.8%) was not. Third, mortality was assessed based on discharge status, a discrete variable within the UHC data set.
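The serious-complication designation described above is simply a threshold rule applied to complication-specific mortality rates. A minimal sketch follows; the two pneumonia rates come from the text, while the remaining entries are hypothetical placeholders (the actual UHC complication set is in eTable 2):

```python
# Designate a complication "serious" when its associated inpatient mortality
# risk exceeds 10%, per the rule described in the text.
mortality_risk = {
    "aspiration pneumonia": 0.146,     # from the text: serious
    "nosocomial pneumonia": 0.038,     # from the text: not serious
    "postoperative sepsis": 0.120,     # hypothetical placeholder
    "urinary tract infection": 0.010,  # hypothetical placeholder
}

SERIOUS_THRESHOLD = 0.10
serious = {c for c, risk in mortality_risk.items() if risk > SERIOUS_THRESHOLD}

print(sorted(serious))
```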

Statistical Analysis

The approach to analyzing the association between NSQIP participation and outcomes was based on an econometric technique termed the difference-in-differences approach. With this approach, the longitudinal association between exposures and outcomes can be studied within a population, with appropriate consideration of underlying temporal trends.5,6 Using a difference-in-differences model, 3 inpatient outcomes (complications, serious complications, and mortality) were analyzed, and the approach to modeling each of these outcomes was similar.
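The intuition behind a difference-in-differences estimate can be sketched with a toy 2 × 2 comparison on the log-odds scale (the scale of the logistic models used here). The rates below are hypothetical, not the study's figures:

```python
import math

def log_odds(p):
    """Logit transform: the scale on which the logistic DiD model operates."""
    return math.log(p / (1.0 - p))

# Hypothetical complication rates (not the study's actual figures)
nsqip_pre, nsqip_post = 0.055, 0.049       # NSQIP hospitals, early vs late period
control_pre, control_post = 0.056, 0.050   # non-NSQIP hospitals, same periods

# Difference-in-differences: the change at NSQIP hospitals over and above
# the shared temporal trend observed at non-NSQIP hospitals.
did = (log_odds(nsqip_post) - log_odds(nsqip_pre)) - (
    log_odds(control_post) - log_odds(control_pre)
)

print(f"DiD on log-odds scale: {did:+.4f} (odds ratio {math.exp(did):.3f})")
```

With both groups improving at nearly the same pace, the interaction term is close to zero and its exponentiated value close to an odds ratio of 1, mirroring the null pattern the study reports.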

The initial step in this approach was to group related ICD-9 codes into “types” of procedures (eg, laparoscopic colectomy, thyroidectomy), and generate type-specific models to estimate the risk of each outcome. This level of procedural grouping, however, combined procedures with a heterogeneous level of risk (eg, right colectomy vs left colectomy, thyroid lobectomy vs complete thyroidectomy). Therefore, an estimation of risk inherent to each procedure was preserved by including a dummy variable corresponding to each procedure subtype (single ICD-9 code). To yield the most economical set of predictor variables, forward stepwise logistic regression (inclusion criterion, P < .10) was used to identify which of the 29 different comorbidities were relevant within each type-specific regression model. Other demographic variables including age and sex were forced into each model. To minimize the statistical effect of overfitting on the risk estimation models, any comorbidity present in less than 0.5% of the population (within each procedure type) was excluded from modeling. Also, procedure subtypes in which the outcome of interest occurred in less than 0.25% were excluded. These type-specific models yielded a risk score (predicted likelihood) for each of the 3 outcomes of interest for each observation in the cohort.
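The forward stepwise selection described above can be sketched as follows. This is a minimal illustration on synthetic data, using a likelihood-ratio test for the P < .10 inclusion criterion; it stands in for, and is not, the study's actual software implementation, and the covariate names are hypothetical:

```python
import math
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson logistic regression; returns the maximized log-likelihood."""
    Xd = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        w = p * (1.0 - p) + 1e-9                 # IRLS weights
        beta += np.linalg.solve(Xd.T @ (Xd * w[:, None]), Xd.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))
    return float(np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))

def chi2_sf_1df(x):
    """P(chi-square with 1 df > x), for the likelihood-ratio test."""
    return math.erfc(math.sqrt(max(x, 0.0) / 2.0))

def forward_stepwise(X, y, names, p_enter=0.10):
    """Greedily add the covariate with the smallest LR-test P value
    while that P value stays below the inclusion criterion p_enter."""
    selected, remaining = [], list(range(X.shape[1]))
    ll_current = fit_logistic(X[:, selected], y)   # intercept-only model
    while remaining:
        candidates = []
        for j in remaining:
            ll_j = fit_logistic(X[:, selected + [j]], y)
            candidates.append((chi2_sf_1df(2.0 * (ll_j - ll_current)), j, ll_j))
        p_best, j_best, ll_best = min(candidates)
        if p_best >= p_enter:
            break
        selected.append(j_best)
        remaining.remove(j_best)
        ll_current = ll_best
    return [names[j] for j in selected]

# Synthetic cohort: 3 candidate comorbidities, only the first truly predictive
rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))
true_logit = -2.0 + 1.2 * X[:, 0]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
print(forward_stepwise(X, y, ["diabetes", "copd", "chf"]))
```

The truly predictive covariate enters first with an overwhelmingly small P value; pure-noise covariates are usually, though not always, excluded at the P < .10 threshold.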

These risk scores were used as covariates in the hierarchical models as a designation of patient-specific risk. To construct the difference-in-differences model, an explicit consideration of temporal trends was included. Within the cohort, linear reductions in observed:expected risk occurred for each of the 3 outcomes of interest during the period of the study; therefore, year of operation was included as a linear covariate. An interaction term combining NSQIP participation with year of surgery was added. The magnitude and direction of this interaction term is the output of the difference-in-differences analysis and can be interpreted as estimating the relationship between NSQIP participation and outcomes accounting for underlying temporal trends.

In constructing this model, hospitals that changed NSQIP participation status (initiated or discontinued participation during the study period) posed a potential problem. The difference-in-differences model is intended to analyze over time the effect of an intervention that occurs at a specific point in time. To provide for the most straightforward analysis, hospitalizations occurring in any hospital that discontinued or initiated participation in the NSQIP during the period of the study were excluded after the date at which that hospital’s status changed.

We also sought to account for the fact that the hospitals in the analysis may have had different underlying rates of postoperative outcomes. To do this, a hospital-specific random effect (PROC GLIMMIX, SAS software version 9.3, SAS Institute Inc) was applied. This approach assumes that each hospital has a different underlying level of performance and accounts for this effect. Combined with the difference-in-differences technique, this model highlights changes in outcomes over time and obviates the need to specifically model hospital characteristics such as volume, academic orientation, bed size, and admissions. A Bonferroni correction for 3 comparisons was applied to the output of the difference-in-differences analysis to account for the assessment of NSQIP participation in 3 different models (analysis of complications, serious complications, and mortality). The same correction was applied to estimates of unadjusted risk differences comparing the rates of each of the 3 outcomes between NSQIP and non-NSQIP hospitals.
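A Bonferroni correction for 3 comparisons amounts to judging each test at α/3 (equivalently, widening each CI). A sketch, with hypothetical P values:

```python
from statistics import NormalDist

alpha, n_tests = 0.05, 3
alpha_adj = alpha / n_tests                       # Bonferroni: judge each test at alpha/k

# Equivalent CI multiplier: intervals widen from the usual 1.96 SEs
z_adj = NormalDist().inv_cdf(1 - alpha_adj / 2)   # roughly 2.39

# Hypothetical two-sided P values for the 3 difference-in-differences terms
p_values = {"complications": 0.92, "serious complications": 0.45, "mortality": 0.41}
for outcome, p in p_values.items():
    print(f"{outcome}: P = {p:.2f}, "
          f"significant at adjusted alpha {alpha_adj:.4f}: {p < alpha_adj}")
```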

Results

Four hospitals in the UHC did not contribute hospitalizations meeting the inclusion criteria. Of the 429 214 hospitalizations available for analysis, 27 933 were excluded because of discontinuous NSQIP participation and 55 924 were excluded because the procedure subtype was performed fewer than 2000 times in the cohort (Figure 1).

Overall Results

The characteristics of the hospitalizations included in the analysis are described in Table 1. Patients were predominantly female (61.5%) with a mean age of 55.7 years. Complications, serious complications, and inpatient mortality occurred in 4.9%, 2.0%, and 0.8% of the study population, respectively. Of the hospitalizations in the cohort, 51.8% occurred at a NSQIP hospital. The type-specific models used to generate patient-level estimates of the risks of complications, serious complications, and mortality had weighted C statistics of 0.69, 0.72, and 0.79, respectively (eTables 4-6 in the Supplement).

Over the course of the study period, risk-adjusted rates of postoperative complications, serious complications, and mortality decreased for hospitalizations at both NSQIP and non-NSQIP hospitals (Figure 2). Overall rates of complications and serious complications were similar between NSQIP and non-NSQIP hospitals during the study period. Risk-adjusted mortality rates were lower at NSQIP hospitals throughout the study period.

Characteristics of Hospitals

Hospitalizations within a total of 113 UHC hospitals were included in the analysis. Of the 113 hospitals, 44 (39%) participated in NSQIP at some point during the period of the study, and a comparison of these hospitals vs non-NSQIP hospitals is shown in Table 2. The hospitals that participated in the NSQIP were larger and had higher numbers of annual inpatient operations/discharges. A greater proportion had a transplant program (84% vs 46%). Mean age and sex proportion of hospitalized patients were similar between the 2 types of hospitals. The Medicare Case Mix Index7 was higher in the NSQIP hospitals relative to the non-NSQIP hospitals (2.02 vs 1.79). The proportions of hospitalizations occurring at NSQIP vs non-NSQIP hospitals are shown in Table 3, categorized by year.

Patient Outcomes in NSQIP Hospitals vs Non-NSQIP Hospitals

A comparison of cumulative unadjusted postoperative outcomes by hospital-based NSQIP participation shows similar rates of complications (4.8% for NSQIP vs 5.0% for non-NSQIP hospitals; unadjusted risk difference, 0.16%; 95% CI, −0.02% to 0.34%). Unadjusted rates of serious complications (2.0% for NSQIP vs 2.1% for non-NSQIP hospitals; unadjusted risk difference, 0.14%; 95% CI, 0.02%-0.26%) and mortality (0.7% for NSQIP vs 1.0% for non-NSQIP hospitals; unadjusted risk difference, 0.29%; 95% CI, 0.19%-0.39%) were lower for patients treated at NSQIP hospitals than non-NSQIP hospitals. After accounting for patient risk, procedure type, underlying hospital performance, and temporal trends, the difference-in-differences model demonstrated no statistically significant differences over time between NSQIP and non-NSQIP hospitals in terms of likelihood of complications (adjusted odds ratio, 1.00; 95% CI, 0.97-1.03), serious complications (adjusted odds ratio, 0.98; 95% CI, 0.94-1.03), or mortality (adjusted odds ratio, 1.04; 95% CI, 0.94-1.14) (Table 4).
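The unadjusted risk differences reported above are two-proportion comparisons. A sketch of the calculation with a Wald-type CI follows, using hypothetical counts chosen only to approximate the reported mortality rates (the study's exact group denominators are not given here, and the study's CIs also carried a Bonferroni adjustment):

```python
import math
from statistics import NormalDist

def risk_difference(events_a, n_a, events_b, n_b, alpha=0.05):
    """Risk difference (group b minus group a) with a Wald-type (1 - alpha) CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_b - p_a
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical counts approximating the reported mortality rates
# (0.7% at NSQIP hospitals vs 1.0% at non-NSQIP hospitals)
rd, (lo, hi) = risk_difference(1210, 172882, 1725, 172475)
print(f"risk difference {rd:.4%} (95% CI {lo:.4%} to {hi:.4%})")
```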

Discussion

In this study, we analyzed approximately 345 000 hospitalizations for major elective general/vascular surgical procedures in academic hospitals. Of these hospitalizations, approximately half were performed in hospitals that participated in a risk-adjusted surgical outcomes reporting program (NSQIP), and our methods were focused on understanding the association between participation in the NSQIP and changes in rates of surgical outcomes over time.

The hospitals that participated in the NSQIP were different from non-NSQIP hospitals in several aspects. NSQIP hospitals were larger, had a higher number of annual inpatient discharges, and were more likely to have an active transplant program. Also, NSQIP hospitals provided care to sicker patients than did non-NSQIP hospitals, represented in a higher Medicare Case Mix Index. Additionally, we found preliminary evidence that, while rates of complications and serious complications were similar between NSQIP and non-NSQIP hospitals, rates of mortality were lower at NSQIP hospitals throughout the study period.

Our study, however, was not intended to compare postoperative outcomes at NSQIP vs non-NSQIP hospitals. The analytic method applied in this study, a difference-in-differences model, assessed whether hospital-level participation in the NSQIP was associated with improved outcomes over time in a way that is different than in non-NSQIP hospitals. To appropriately adjust for potential underlying differences in hospital performance and patient mix, a hierarchical model was used that specifically accounted for a broad range of patient comorbidities as well as overall hospital-based rates of postoperative outcomes. Our study found that rates of inpatient complications, serious complications, and mortality did not improve differently over time based on hospital participation in NSQIP.

These findings differ from earlier work investigating the association between participation in NSQIP and hospital complication rates. Hall et al3 found that participating hospitals demonstrated a year-over-year improvement in risk-adjusted rates of complications and mortality during the period from 2005 to 2007.

Several differences between the current study and the study by Hall et al may explain the different results. The study by Hall et al did not account for the possibility that non-NSQIP hospitals had improvements in outcomes during the same period. The difference-in-differences model used in our study explicitly accounts for underlying temporal trends in surgical outcomes within the period of analysis.

This study was limited by factors inherent to administrative data. Hospitalizations were selected based on ICD-9 codes and may represent a somewhat different spectrum of cases than those in the NSQIP, which identifies operations based on Current Procedural Terminology coding. Also, our analysis was restricted to include only elective operations and complications that occurred within the index hospitalization. The rates of postoperative outcomes may therefore not be readily compared with other estimates of mortality that include urgent/emergency operations or that capture postdischarge events. We do not believe, however, that these differences in our ascertainment of complications are an important source of bias.

More important, complications may not have been coded according to rigorous standards in the administrative data used for this study. The accuracy of these types of data sources has been measured against prospective clinical registries with mixed results.8,9 Despite the potential inaccuracy of the outcomes data in this study, these same data form the basis for public reporting and pay for performance and therefore have an important level of face validity. Of greater concern would be the presence of bias, whereby hospital behavior with regard to accurate vs inaccurate administrative coding of comorbidities or complications would change as a result of participation in NSQIP. The extent to which this may have occurred is impossible for us to ascertain.

Sample size may also have limited the analysis. Even though our study encompassed the care provided to 345 357 patients, the methods used were designed to adjust for hospital-level variation. Therefore, the relatively small number of hospitals (113 hospitals) contributed to relatively wide confidence intervals around the point estimates for the outcomes we examined. However, this was the most appropriate approach to account for the fact that UHC hospitals that chose to participate in the NSQIP may have been fundamentally different from those that did not. Also, hospitals that participate in the UHC are disproportionately academic and may not be representative of domestic hospitals or NSQIP hospitals. It is possible that the quality improvement efforts in place in UHC hospitals may make the influence of NSQIP participation different than the influence it would have in other hospitals without similar efforts.

This study is not the first to find the absence of a relationship between participation in a surgical outcomes monitoring system and improvement in outcomes over time.10,11 Shih et al12 recently analyzed the performance of hospitals taking part in a Medicare pay-for-performance program, phase 2 of the Premier Hospital Quality Incentive Demonstration. In their analysis of 365 hospitals and more than 860 000 patients, they could detect no discernible relationship between participation in such a program and outcomes in a cohort of patients undergoing major cardiovascular or orthopedic procedures.

The failure of this and other studies to demonstrate an association between outcomes-oriented reporting systems and improved surgical outcomes may be related to difficulties translating outcomes reports into evidence-based approaches to quality improvement. The Surgical Care Improvement Project (SCIP) is an effort advanced by the Centers for Medicare & Medicaid Services and a spectrum of institutional partners to identify discrete processes of care (eg, appropriate timing and selection of antibiotic prophylaxis) that have an evidence-based link to surgical outcomes. Several studies in different contexts have failed to show a link between compliance with SCIP measures and outcomes, however.13-17 While checklist-based approaches are attractive, studies have not shown these to be of distinct benefit in modern health care delivery systems.18,19

This study has implications for hospitals and health care systems considering the role of programs that monitor surgical outcomes. Among hospitals providing care to patients undergoing general and vascular surgical procedures, our findings suggest that a surgical outcomes reporting system does not provide a clear mechanism for quality improvement.

Conclusions

This study found no association between hospital-based participation in the NSQIP and improvements in postoperative outcomes over time within a large cohort of patients undergoing elective general/vascular operations at academic hospitals in the United States. These findings suggest that a surgical outcomes reporting system does not provide a clear mechanism for quality improvement.

Article Information

Corresponding Author: David A. Etzioni, MD, MSHS, Department of Surgery, Mayo Clinic Arizona, 5777 E Mayo Blvd, Phoenix, AZ 85054 (etzioni.david@mayo.edu).

Author Contributions: Dr Etzioni had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Etzioni, Cima, Hohmann, Naessens, Habermann.

Acquisition, analysis, or interpretation of data: Etzioni, Wasif, Dueck, Hohmann, Naessens, Mathur, Habermann.

Drafting of the manuscript: Etzioni, Mathur.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Etzioni, Dueck, Naessens, Mathur.

Administrative, technical, or material support: Etzioni, Wasif, Cima, Naessens.

Study supervision: Etzioni.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.

References
1.
Health Care Cost Institute. 2013 Health Care Cost and Utilization Report. October 2014. http://www.healthcostinstitute.org/files/2013%20HCCUR%2012-17-14.pdf. Accessed December 23, 2014.
2.
American College of Surgeons National Surgical Quality Improvement Program. ACS NSQIP Semiannual Report, July 16, 2014. Chicago, IL: American College of Surgeons; 2014.
3.
Hall BL, Hamilton BH, Richards K, Bilimoria KY, Cohen ME, Ko CY. Does surgical quality improve in the American College of Surgeons National Surgical Quality Improvement Program? an evaluation of all participating hospitals. Ann Surg. 2009;250(3):363-376.
4.
Centers for Medicare & Medicaid Services. Hospital-Acquired Conditions (Present on Admission Indicator). http://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/HospitalAcqCond/index.html?redirect=/HospitalAcqCond. Accessed August 5, 2014.
5.
Abadie A. Semiparametric difference-in-differences estimators. Rev Econ Stud. 2005;72(1):1-19. doi:10.1111/0034-6527.00321.
6.
Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA. 2014;312(22):2401-2402.
7.
Fetter RB, Shin Y, Freeman JL, Averill RF, Thompson JD. Case mix definition by diagnosis-related groups. Med Care. 1980;18(2)(suppl):1-53.
8.
Davenport DL, Holsapple CW, Conigliaro J. Assessing surgical quality using administrative and clinical data sets: a direct comparison of the University HealthSystem Consortium Clinical Database and the National Surgical Quality Improvement Program data set. Am J Med Qual. 2009;24(5):395-402.
9.
Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry vs administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256(6):973-981.
10.
Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615.
11.
Ryan AM. Effects of the Premier Hospital Quality Incentive Demonstration on Medicare patient mortality and cost. Health Serv Res. 2009;44(3):821-842.
12.
Shih T, Nicholas LH, Thumma JR, Birkmeyer JD, Dimick JB. Does pay-for-performance improve surgical outcomes? an evaluation of phase 2 of the Premier Hospital Quality Incentive Demonstration. Ann Surg. 2014;259(4):677-681.
13.
Altom LK, Deierhoi RJ, Grams J, et al. Association between Surgical Care Improvement Program venous thromboembolism measures and postoperative events. Am J Surg. 2012;204(5):591-597.
14.
Hawn MT, Vick CC, Richman J, et al. Surgical site infection prevention: time to move beyond the Surgical Care Improvement Program. Ann Surg. 2011;254(3):494-499.
15.
Ingraham AM, Cohen ME, Bilimoria KY, et al. Association of surgical care improvement project infection-related process measure compliance with risk-adjusted outcomes: implications for quality measurement. J Am Coll Surg. 2010;211(6):705-714.
16.
Nicholas LH, Osborne NH, Birkmeyer JD, Dimick JB. Hospital process compliance and surgical outcomes in Medicare beneficiaries. Arch Surg. 2010;145(10):999-1004.
17.
Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303(24):2479-2485.
18.
Urbach DR, Govindarajan A, Saskin R, Wilton AS, Baxter NN. Introduction of surgical safety checklists in Ontario, Canada. N Engl J Med. 2014;370(11):1029-1038.
19.
Haynes AB, Weiser TG, Berry WR, et al; Safe Surgery Saves Lives Study Group. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360(5):491-499.