Importance
In fiscal year (FY) 2015, the Centers for Medicare & Medicaid Services (CMS) instituted the Hospital-Acquired Condition (HAC) Reduction Program, which reduces payments to the lowest-performing hospitals. However, it is uncertain whether this program accurately measures quality and fairly penalizes hospitals.
Objective
To examine the characteristics of hospitals penalized by the HAC Reduction Program and to evaluate the association of a summary score of hospital characteristics related to quality with penalization in the HAC program.
Design, Setting, and Participants
Data for hospitals participating in the FY2015 HAC Reduction Program were obtained from CMS’ Hospital Compare and merged with the 2014 American Hospital Association Annual Survey and FY2015 Medicare Impact File. Logistic regression models were developed to examine the association between hospital characteristics and HAC program penalization. An 8-point hospital quality summary score was created using hospital characteristics related to volume, accreditations, and offering of advanced care services. The relationship between the hospital quality summary score and HAC program penalization was examined. Publicly reported process-of-care and outcome measures were examined from 4 clinical areas (surgery, acute myocardial infarction, heart failure, pneumonia), and their association with the hospital quality summary score was evaluated.
Exposures
Penalization in the HAC Reduction Program.
Main Outcomes and Measures
Hospital characteristics associated with penalization.
Results
Of the 3284 hospitals participating in the HAC program, 721 (22.0%) were penalized. Hospitals were more likely to be penalized if they were accredited by the Joint Commission (24.0% accredited, 14.4% not accredited; odds ratio [OR], 1.33; 95% CI, 1.04-1.70); they were major teaching hospitals (42.3%; OR, 1.58; 95% CI, 1.09-2.29) or very major teaching hospitals (62.2%; OR, 2.61; 95% CI, 1.55-4.39; vs nonteaching hospitals, 17.0%); they cared for more complex patient populations based on case mix index (quartile 4 vs quartile 1: 32.8% vs 12.1%; OR, 1.98; 95% CI, 1.44-2.71); or they were safety-net hospitals vs non–safety-net hospitals (28.3% vs 19.9%; OR, 1.36; 95% CI, 1.11-1.68). Hospitals with higher hospital quality summary scores had significantly better performance on 9 of 10 publicly reported process and outcomes measures compared with hospitals that had lower quality scores (all P ≤ .01 for trend). However, hospitals with the highest quality score of 8 were penalized significantly more frequently than hospitals with the lowest quality score of 0 (67.3% [37/55] vs 12.6% [53/422]; P < .001 for trend).
Conclusions and Relevance
Among hospitals participating in the HAC Reduction Program, hospitals that were penalized more frequently had more quality accreditations, offered advanced services, were major teaching institutions, and had better performance on other process and outcome measures. These paradoxical findings suggest that the approach for assessing hospital penalties in the HAC Reduction Program merits reconsideration to ensure it is achieving the intended goals.
The Affordable Care Act (ACA) established the Hospital-Acquired Condition (HAC) Reduction Program in an effort to reduce the incidence of preventable adverse events that occur during hospitalizations in the United States.1 Hospitals in this program are evaluated according to 2 domains.2,3 Domain 1 constitutes 35% of the total score and is based solely on the Agency for Healthcare Research and Quality's (AHRQ) Patient Safety for Selected Indicators composite measure (PSI-90). Domain 2 accounts for the remaining 65% of the total score and consists of the average of 2 intensive care unit–based nosocomial infection measures: central line–associated bloodstream infections (CLABSI) and catheter-associated urinary tract infections (CAUTI). Beginning October 1, 2014 (fiscal year [FY] 2015), hospitals scoring in the worst quartile had their Centers for Medicare & Medicaid Services (CMS) payments reduced by 1%, totaling approximately $373 million nationally.4,5
However, there are concerns that the HAC program may be unfairly penalizing hospitals because its component metrics have measurement issues.6 First, the HAC program’s component measures may be subject to ascertainment bias, with reported HAC rates affected by variations in clinical practice, documentation, or local data abstraction procedures.7-9 Second, they may inadequately risk-adjust for differences in case mix, patient comorbidities, and patient sociodemographic characteristics.10,11 There may also be issues related to hospital-to-hospital differences in interpretation of the coding rules.12
We sought to evaluate the characteristics and performance of hospitals penalized in this program. The objectives of this study were (1) to compare the characteristics of hospitals penalized in the HAC Reduction Program with those not penalized and (2) to determine the association between a composite measure of hospital quality and penalization in the HAC program.
Methods
Hospitals receiving Medicare Part A payments through the inpatient prospective payment system are included in the HAC Reduction Program.5 Hospitals located in Maryland are excluded under a federal waiver because they have an alternative payment program.5 Participating hospitals are assigned an overall score ranging from 1 to 10, where higher scores reflect worse performance.3 Scores are based on 3 component measures: PSI-90 (Domain 1, 35% of the total score) and CLABSI and CAUTI (Domain 2, 65%). Whereas the PSI-90 measure is based on administrative claims data, the CLABSI and CAUTI measures come from the Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN).13
Data for NHSN measures are accessible to CMS only for hospitals participating in the Hospital Inpatient Quality Reporting Program.2 Hospitals with insufficient data for 1 NHSN measure (CLABSI or CAUTI) have a Domain 2 score based solely on the measure with sufficient data.2 In addition, hospitals may have no Domain 2 score if they have insufficient data for both CLABSI and CAUTI or if they do not participate in the Hospital Inpatient Quality Reporting Program and receive a waiver exemption. In these instances, the total HAC score is based only on Domain 1 (PSI-90).2 Hospitals that do not participate in the Inpatient Quality Reporting Program and fail to apply for a waiver receive a maximum score of 10 (worst performance) for both CLABSI and CAUTI.2 For PSI-90, if the number of eligible discharges for any individual component measure is fewer than 3, the national risk-adjusted rate is substituted for that component. If there are insufficient data overall to calculate a PSI-90 score, there is no Domain 1 score and the total HAC score is based only on Domain 2.2
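To make these scoring rules concrete, the following minimal sketch (in Python; the study's analyses used SAS and Stata) assembles a total HAC score under the rules just described. The 35%/65% weights, the averaging of available NHSN measures, and the maximum-score rule come from the text; the function and variable names are hypothetical and are not CMS's implementation.

```python
# Illustrative sketch of FY2015 HAC total-score assembly (names hypothetical).

def total_hac_score(psi90, clabsi, cauti, in_iqr_program=True, has_waiver=False):
    """Return the total HAC score (higher = worse performance)."""
    # Nonparticipants in the Inpatient Quality Reporting Program without a
    # waiver receive the maximum score of 10 for both NHSN measures.
    if not in_iqr_program and not has_waiver:
        clabsi, cauti = 10.0, 10.0

    # Domain 2 is the average of whichever NHSN measures have sufficient data.
    nhsn = [s for s in (clabsi, cauti) if s is not None]
    domain2 = sum(nhsn) / len(nhsn) if nhsn else None

    if domain2 is None:   # no Domain 2: total score rests on PSI-90 alone
        return psi90
    if psi90 is None:     # no Domain 1: total score rests on Domain 2 alone
        return domain2
    return 0.35 * psi90 + 0.65 * domain2

# A hospital with sufficient data for PSI-90 and CAUTI only:
score = total_hac_score(psi90=6.0, clabsi=None, cauti=9.0)  # 0.35*6 + 0.65*9
```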
Scores for all participating hospitals were obtained from CMS’ Hospital Compare website.14 For FY2015, CMS identified hospitals with a total score greater than 7.0 as comprising the worst-performing quartile of hospitals.15 Hospitals in the HAC Reduction Program were categorized as penalized or not penalized on the basis of this score. In addition, using the individual scores for each HAC component (CLABSI, CAUTI, PSI-90), the worst-performing hospital quartile for each of these individual measures was examined. This study was reviewed and deemed nonhuman subjects research by the Northwestern University institutional review board.
To ascertain hospital characteristics, we merged the HAC Reduction Program’s list of participating hospitals with the 2014 American Hospital Association (AHA) Annual Survey and the September 2014 update of the FY2015 CMS Payment Update Impact File using each hospital’s unique CMS Certification Number. American Hospital Association data provided several structural characteristics, including bed size, admission volume, Joint Commission accreditation, American College of Surgeons Commission on Cancer accreditation, provision of transplant services, level I trauma center status, inpatient surgeries per bed, nurse-to-bed ratio, and Council of Teaching Hospitals (COTH) membership. Of the 3284 hospitals included in the HAC Reduction Program, 31 (0.9%) were not included in the 2014 AHA data and were omitted from analyses requiring these data.
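As a rough illustration of this linkage step, a pandas sketch follows; the file names and the `ccn` column name are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical data-linkage sketch: HAC program hospitals joined to AHA survey
# and CMS Impact File records on the CMS Certification Number ("ccn" here).
hac = pd.read_csv("hac_fy2015.csv", dtype={"ccn": str})
aha = pd.read_csv("aha_annual_survey_2014.csv", dtype={"ccn": str})
impact = pd.read_csv("cms_impact_fy2015.csv", dtype={"ccn": str})

merged = hac.merge(aha, on="ccn", how="left").merge(impact, on="ccn", how="left")
# Hospitals absent from the AHA file (31 of 3284) carry missing AHA fields and
# are omitted from analyses requiring those fields.
```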
The CMS Impact File was used to identify each hospital’s percentage of disproportionate share patients and case mix index (CMI). Similar to other studies, safety-net hospitals were defined as those in the highest quartile of disproportionate share patients.16 The CMI is calculated by summing all diagnosis-related group weights and dividing the sum by the total number of Medicare patient discharges for a specific time period.17 A higher CMI indicates more complex admissions and potentially sicker patient populations. In addition, following the approach used by previous studies, the resident-to-bed ratio from the CMS Impact File was used to categorize hospitals into the following groups: nonteaching (0.000); very minor teaching (0.001-0.049); minor teaching (0.050-0.249); major teaching (0.250-0.599); and very major teaching (≥0.600).18
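Written as a formula, with \(w_d\) the diagnosis-related group weight of Medicare discharge \(d\) and \(N\) the total number of Medicare discharges in the period:

\[
\mathrm{CMI} = \frac{1}{N}\sum_{d=1}^{N} w_d
\]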
Because accurate abstraction and ascertainment of outcomes are thought to be a concern for the validity of the HAC program, we sought to include a measure reflecting a hospital's adoption of rigorous data abstraction procedures. Participation in a systematic clinical database registry for general surgery is a measure used in CMS' Hospital Inpatient Quality Reporting Program and publicly reported on its Hospital Compare website.19 Participating hospitals must collect data in a standardized fashion and are audited to ensure adherence to data collection practices. Moreover, many HAC component measures assess surgical care. Hospitals were deemed to participate in a clinical surgical registry if enrolled in the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), the Michigan Surgical Quality Collaborative (MSQC), or the Surgical Care and Outcomes Assessment Program (SCOAP). A list of all adult hospitals within the United States participating in the ACS NSQIP was obtained from the January 2015 ACS NSQIP semiannual report.20 Hospitals participating in the MSQC and SCOAP were identified from each program's website.21,22
Hospital Quality Summary Score
To evaluate the association between hospital characteristics and penalties in the HAC Reduction Program, we developed a hospital quality summary score composed of several characteristics.8 Variables were chosen a priori for inclusion on the basis of frequently examined indicators of hospital quality, resources, and services, such as hospital volume, various accreditations, and complexity of services offered.8,23-30 The hospital quality summary score consisted of the following variables: (1) highest quartile of inpatient admission volume; (2) Joint Commission accreditation; (3) Commission on Cancer accreditation; (4) provision of transplant services; (5) status as a level I trauma center; (6) highest quartile nurse-to-bed ratio; (7) member of Council of Teaching Hospitals; and (8) participation in a clinical surgical registry. One point was awarded for each component measure present; the overall hospital quality summary score ranged from 0 to 8, with 8 representing the highest score possible. We tested the robustness of our findings by performing several sensitivity analyses that varied the hospital characteristics included in our hospital quality summary score.
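A minimal sketch of the score's construction follows (in Python; the component names below are hypothetical shorthand for the 8 variables listed above).

```python
# One point per characteristic present; the total ranges from 0 to 8.
QUALITY_COMPONENTS = [
    "top_quartile_admission_volume",
    "joint_commission_accredited",
    "commission_on_cancer_accredited",
    "transplant_services",
    "level1_trauma_center",
    "top_quartile_nurse_bed_ratio",
    "coth_member",
    "surgical_registry_participant",
]

def quality_summary_score(hospital: dict) -> int:
    """Sum of 8 binary indicators; 8 is the highest possible score."""
    return sum(int(bool(hospital.get(c, False))) for c in QUALITY_COMPONENTS)
```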
Publicly Reported Process-of-Care and Outcome Measures
We also sought to determine whether the relationship between hospital quality summary scores (0-8 scale) and penalty in the HAC program was consistent with other publicly reported, well-established measures of hospital quality. Hospital performance on process-of-care and outcome measures was therefore obtained from the December 2014 release of Hospital Compare data.31 Sixteen inpatient process-of-care measures from 4 clinical areas (acute myocardial infarction [AMI], heart failure, pneumonia, and surgery) were considered. These clinical areas were chosen because they are also a focus of the FY2015 Hospital Value-Based Purchasing Program.32 Measures with a national mean score of 99% or greater were considered "topped out" and excluded (n = 6) because of insufficient variation in performance between hospitals. Measures were also excluded (n = 3) if fewer than 50% of HAC Reduction Program hospitals reported data for that measure. The following 7 measures were therefore included in our study: (1) AMI-10: patient with AMI prescribed statin at discharge; (2) HF-1: patient with heart failure received discharge instructions; (3) HF-3: patient with heart failure with left ventricular systolic dysfunction received an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker; (4) PN-6: patient with pneumonia given appropriate initial antibiotic; (5) SCIP-CARD-2: preoperative β-blocker continued perioperatively; (6) SCIP-INF-3: prophylactic antibiotics discontinued within 24 hours after surgery; and (7) SCIP-INF-9: urinary catheter removed in surgical patients within 2 days postoperatively. Outcome measures examined were the risk-standardized 30-day mortality rates in use by the FY2015 Hospital Value-Based Purchasing Program: AMI, heart failure, and pneumonia.32 Several sensitivity analyses were also performed to evaluate associations between these publicly reported measures and several alternative hospital quality summary scores.
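The exclusion rules reduce to a simple filter; a sketch under the thresholds stated above (the field names are hypothetical):

```python
def keep_measure(national_mean: float, pct_reporting: float) -> bool:
    """Exclude 'topped out' measures (national mean >= 99%) and measures
    reported by fewer than 50% of HAC program hospitals."""
    return national_mean < 0.99 and pct_reporting >= 0.50

# Example: a measure averaging 99.2% nationally is topped out and excluded.
assert keep_measure(national_mean=0.992, pct_reporting=0.80) is False
```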
Statistical Analysis
Bivariate relationships between penalization in the HAC Reduction Program and hospital characteristics were assessed with χ2 tests. Multivariable logistic regression models were also developed to examine hospital characteristics associated with penalization in the HAC program. Hospital characteristics were included in adjusted models if they were examined in prior studies evaluating CMS pay-for-performance programs, reflected hospital quality, or assessed the case mix or complexity of care provided.16,23,24,26-30 The following characteristics were included in the models: hospital bed size, Joint Commission accreditation, Commission on Cancer accreditation, status as a level I trauma center, highest-quartile nurse-to-bed ratio, resident-to-bed ratio, safety-net hospital status, participation in a clinical surgical registry, and CMI. All analyses were performed at the hospital level.
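A hedged sketch of such a hospital-level model follows (in Python with statsmodels rather than the SAS/Stata used in the study; the data file and column names are hypothetical placeholders).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hospital-level logistic regression of penalization on structural
# characteristics; covariates mirror the list above, names are placeholders.
df = pd.read_csv("merged_hospitals.csv")

fit = smf.logit(
    "penalized ~ C(bed_size_cat) + joint_commission + cancer_accredited"
    " + level1_trauma + top_quartile_nurse_bed + C(teaching_status)"
    " + safety_net + surgical_registry + C(cmi_quartile)",
    data=df,
).fit()

print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs on the odds ratio scale
```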
Mean process-of-care and outcome measure performance rates across hospital quality summary score categories were assessed using the Cuzick extension of the Wilcoxon rank-sum test for trends. To evaluate the association between HAC penalization and hospital quality summary scores, a Cochran-Armitage test for trends was used. As a sensitivity analysis, HAC penalization was also evaluated across hospital quality summary scores for those hospitals that submitted a score for all 3 individual HAC components: PSI-90, CLABSI, and CAUTI. Also, the proportion of hospitals in the worst-performing quartile for each individual HAC component was evaluated across hospital quality summary scores using the Cochran-Armitage test for trends. All P values reported are 2-sided with statistical significance set at .05. Statistical analyses were performed in SAS version 9.3 (SAS Institute) and Stata version 13.1 (StataCorp).
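For reference, the Cochran-Armitage statistic can be computed directly; the following is a standard construction of the test (illustrative only, not the study's SAS/Stata code).

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in proportions
    across ordered categories (e.g., penalization across quality scores)."""
    r = np.asarray(successes, dtype=float)   # events per category
    n = np.asarray(totals, dtype=float)      # hospitals per category
    s = np.arange(len(n)) if scores is None else np.asarray(scores, dtype=float)
    p = r.sum() / n.sum()                    # pooled proportion
    t = np.sum(s * (r - n * p))              # score-weighted deviation
    var = p * (1 - p) * (np.sum(n * s**2) - np.sum(n * s) ** 2 / n.sum())
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Illustration with only the two extreme categories reported in the Results
# (quality score 0: 53/422 penalized; quality score 8: 37/55 penalized):
z, p_value = cochran_armitage(successes=[53, 37], totals=[422, 55])
```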
Results
Hospital Factors Associated With HAC Program Penalization
Of the 3284 hospitals included in the HAC Reduction Program, 721 (22.0%) were penalized. Hospitals were penalized more frequently if they were larger (≥400 beds vs <100 beds; 38.7% vs 13.9%; P < .001) (Table 1), had more hospital admissions (highest quartile vs lowest quartile; 35.4% vs 13.5%; P < .001), were accredited by the Joint Commission vs nonaccredited (24.0% vs 14.4%; P < .001), were a level I trauma center vs non–level I trauma center (47.4% vs 19.1%; P < .001), had a higher nurse-to-bed ratio (highest quartile vs lowest quartile; 29.3% vs 17.4%; P < .001), or were clinical surgical registry participants vs nonparticipants (37.5% vs 19.3%; P < .001). Although 17.0% of nonteaching hospitals were penalized, there was a stepwise increase in penalization rates as the level of teaching hospital intensity increased, with 42.3% of major and 62.2% of very major teaching hospitals penalized (P < .001) (Table 1). In addition, hospitals with a higher CMI (quartile 4 vs quartile 1; 32.8% vs 12.1%; P < .001) were also penalized more often, as were safety-net hospitals (vs non–safety-net hospitals; 28.3% vs 19.9%; P < .001).
Of the 3253 hospitals with AHA data available, all had a Domain 1 (PSI-90) score. However, 650 (20.0%) did not have a Domain 2 score, and their total HAC score was based solely on Domain 1 (PSI-90). Of these 650 hospitals, 581 (89.4%) were small hospitals with fewer than 100 beds. Whereas 13.7% of hospitals without a Domain 2 score were penalized, 24.0% of those with a Domain 2 score were penalized (P < .001).
In multivariable analyses, factors associated with a higher likelihood of penalization included accreditation by the Joint Commission (odds ratio [OR], 1.33; 95% CI, 1.04-1.70), both major teaching hospitals (OR, 1.58; 95% CI, 1.09-2.29) and very major teaching hospitals (OR, 2.61; 95% CI, 1.55-4.39) compared with nonteaching hospitals, safety-net hospitals (OR, 1.36; 95% CI, 1.11-1.68), level I trauma center status (OR, 1.80; 95% CI, 1.33-2.43), and hospitals participating in a clinical surgical registry (OR, 1.31; 95% CI, 1.02-1.68) (Table 2 and eTable 1 in the Supplement). In addition, when compared with hospitals caring for the least complex patients, there was a stepwise increase in the likelihood of penalization with increasing quartile of hospital CMI. Hospitals in the quartile caring for the most complex patients (ie, highest CMI) had nearly 2-fold higher odds of being penalized than hospitals caring for the least complex patients (OR, 1.98; 95% CI, 1.44-2.71).
Performance by Hospital Quality Summary Score
Mean process-of-care measure adherence rates and risk-standardized 30-day mortality rates were assessed across hospital quality summary scores (Figure 1 and eTable 2 and eTable 3 in the Supplement). The mean hospital quality summary score for the entire cohort was 2.1 (SD, 1.8; median, 2.0; skewness, 1.3; range, 0-8). When evaluating the overall trend of process-of-care and outcome measure performance across ordered hospital quality summary scores, hospitals with a higher quality summary score performed significantly better on all measures than those with a lower hospital quality summary score (P < .001 for trend for all measures except HF-3, for which P = .01), with the exception of discontinuing postoperative antibiotics within 24 hours of surgery (SCIP-INF-3; P = .36 for trend). These findings were similar in sensitivity analyses when different hospital characteristics were used to constitute the hospital quality summary score (Figure 1 footnote).
Although hospitals with higher hospital quality summary scores had better performance on 9 of 10 publicly reported process and outcome measures evaluated, they were penalized significantly more frequently in the HAC Reduction Program than those with lower hospital quality summary scores (P < .001 for trend) (Figure 2 and eTable 4 in the Supplement). Of the 422 hospitals with a hospital quality summary score of 0, 53 hospitals (12.6%; 95% CI, 9.4%-15.7%) were penalized. In contrast, 37 of the 55 hospitals with a quality score of 8 were penalized (67.3%; 95% CI, 54.9%-79.7%), representing a more than 5-fold higher penalty rate (P < .001). These findings were similar in sensitivity analyses when different hospital characteristics were used to constitute the hospital quality summary score or when only hospitals (n = 2242) that submitted scores for all 3 individual HAC components were considered (eTable 4 and eTable 5 in the Supplement).
Additional analyses were performed to evaluate hospital performance on each HAC component across hospital quality summary scores. Hospitals with higher hospital quality summary scores were less likely to be in the bottom quartile on the CLABSI component than those with lower hospital quality summary scores (P < .001) (Figure 3 and eTable 6 in the Supplement). However, hospitals with higher hospital quality summary scores were more likely to be in the bottom quartile on both the PSI-90 and CAUTI components than those with lower scores (P < .001 for both). For PSI-90, 38 of 422 hospitals (9.0%) with a hospital quality summary score of 0 were in the lowest-performing quartile, whereas 33 of 55 hospitals (60.0%) with a hospital quality summary score of 8 were in the lowest-performing quartile.
Discussion
In this study, we found that hospitals with more structural characteristics reflecting volume, accreditations, and the offering of advanced services had better performance on publicly reported process-of-care and outcome metrics but were penalized significantly more frequently in the HAC Reduction Program. For example, hospitals with the highest quality summary score were penalized more than 5-fold more frequently than hospitals with the lowest quality summary score (67.3% vs 12.6%). These findings suggest that penalization in the HAC program may reflect measurement and validity problems in its component measures rather than poor quality of care.
One explanation for these findings may be that the component measures are affected by surveillance bias, in which differences in clinical practice result in varying rates of identifying an adverse outcome.7 For example, identification of postoperative venous thromboembolism (VTE) events depends on how frequently imaging studies are ordered to look for VTE.8 Hospitals with higher rates of VTE imaging have higher VTE event rates despite better VTE prophylaxis adherence. Factors associated with high hospital VTE imaging rates include accreditation by the Joint Commission, a high resident-to-bed ratio, and case severity.33 Our study found these same factors were associated with penalization in the HAC Reduction Program. These similarities may be related to the use of PSI-90 in the HAC program, in which VTE is the second highest-weighted component. Thus, hospitals that look harder for adverse events identify more of them and incorrectly appear to perform worse.
Hospital-to-hospital differences in information technology may also result in differences in the detection of adverse events. For example, electronic surveillance systems often assist hospital infection preventionists in their identification of hospital-acquired infections. However, Stone et al34 reported that only 34.3% of NHSN facilities use an electronic surveillance system. In the absence of these systems, identification of hospital-acquired infections is done manually and is largely effort-dependent.35 Our study’s findings that larger hospitals with multiple accreditations were more frequently penalized may reflect hospital differences with respect to surveillance systems as well as the background and number of data abstractors. Furthermore, as we found a significant association between clinical registry participation and HAC program penalization, it may be that hospitals participating in clinical registries have standardized procedures and data infrastructures that facilitate better adverse event identification. Again, hospitals that are more thorough in identifying events may be more susceptible to penalties in the HAC program.
Inadequate risk adjustment may also explain why hospitals with seemingly higher levels of quality are penalized in the HAC program. Previous studies have found that hospitals serving vulnerable or medically complex patient populations may be penalized more often in CMS pay-for-performance programs.16,36 Many pay-for-performance measures, including PSI-90, are derived from administrative data. Although valuable for multiple purposes, these data are captured primarily for billing, and they have recognized limitations in risk-adjusting for clinically important variables.37 Thus, interhospital comparisons in initiatives like the HAC Reduction Program that use PSI-90 may be influenced by patient-level comorbidities, operative complexity, and other factors that are not accurately captured or available in administrative data.11 This may explain why teaching, safety-net, and higher-CMI hospitals are penalized more frequently in the HAC program. Moreover, when evaluating each HAC component individually, we found PSI-90 performance markedly worsened across hospital quality summary scores, suggesting that problems with this measure may underlie inappropriate penalization. The CLABSI and CAUTI NHSN measures used in the HAC program, although clinically collected, also raise risk-adjustment concerns: for both measures, risk adjustment uses only 3 variables (type of patient care location, hospital affiliation with a medical school, and bed size).38
The discrepancy in penalization rates between smaller and larger hospitals may also have been affected by whether they reported data for each HAC domain. Approximately one-fifth of all HAC program hospitals had their total score determined solely by PSI-90. These were primarily small hospitals that likely benefited from having their total HAC score based only on PSI-90 for 2 reasons. First, PSI-90 is reliability adjusted, which results in shrinkage of estimates toward the overall average, particularly in those hospitals with smaller sample sizes.39 Second, hospitals with fewer than 3 discharges for any component of PSI-90 have the national rate substituted for the hospital rate of that component when calculating the overall composite score. Both of these methodological factors make it less likely that smaller hospitals will have a PSI-90 score that identifies them as a poorly performing outlier. Thus, penalization rates in smaller hospitals, particularly in those without a Domain 2 score (CLABSI/CAUTI), may be lower as a consequence of these methodological considerations. Nonetheless, in evaluating only hospitals with complete data for all 3 components, the paradoxical association between hospital quality summary scores and HAC program penalization persisted.
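The shrinkage at work can be made explicit with the standard empirical-Bayes form of reliability adjustment (an illustrative formulation, not the exact PSI-90 specification):

\[
\hat{\theta}_j = \lambda_j \bar{y}_j + (1 - \lambda_j)\,\mu,
\qquad
\lambda_j = \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{between}} + \sigma^2_j},
\]

where \(\bar{y}_j\) is hospital \(j\)'s observed rate, \(\mu\) is the overall mean rate, and \(\sigma^2_j\) is the hospital's sampling variance, which grows as caseload shrinks. Small hospitals therefore have small \(\lambda_j\) and estimates pulled strongly toward the average, making classification as a poorly performing outlier less likely.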
Several limitations should be noted. First, the results presented pertain only to the current (FY2015) HAC Reduction Program. On October 1, 2015 (FY2016), the program will incorporate surgical site infections and reweight the included domains. However, the NHSN's surgical site infection measure may experience similar issues related to ascertainment bias, risk adjustment, and coding guideline interpretation.40 Second, in formulating our hospital quality summary score, we chose to include several characteristics that are commonly used in research and health policy evaluations. Varying the metrics included in this quality score did not substantively change the results of our analysis. Nevertheless, there are undoubtedly other important measures of hospital quality that we were unable to assess in our composite measure. For example, metrics that may be important in the identification and reporting of adverse events include the number of infection preventionists employed, the background and experience of data abstractors, and the availability of electronic health records. Third, we validated our hospital quality summary score using publicly reported, established process and outcome measures. However, hospital quality is likely multifaceted, and hospitals that perform well on outcomes such as mortality may not necessarily perform well on the component measures included in the HAC program. Thus, although we suspect methodological issues may be responsible for the contradictory findings in hospital performance, we were unable to test this association directly.
Our study can be interpreted in 2 ways. First, traditional quality metrics (eg, accreditations, process measures, mortality) may be flawed and thus conflict with hospital HAC measure performance. Alternatively, the HAC Reduction Program may not accurately measure hospital quality. The ACA established this program with the commendable purpose of motivating hospitals to reduce HACs. However, the value of the HAC program depends on the validity and acceptance of its component measures as indicators of preventable patient harm. There are numerous concerns about the validity of PSI-90,11 and consideration should be given to excluding this measure from pay-for-performance initiatives such as the HAC Reduction Program. In addition, in the absence of a national mechanism to accurately measure and compare outcomes, assessing process measures may be a more appropriate approach for averting preventable harm. The use of well-defined process measures also obviates much of the need for detailed risk adjustment. Furthermore, the current "bottom-quartile" approach to penalization in the HAC program produces 2 concerning results: (1) some hospitals performing statistically "as expected" are penalized, and (2) regardless of whether hospitals collectively demonstrate improvement, an appreciable proportion will be subjected to financial penalty. Rather than uniformly penalizing the worst-performing quartile of hospitals, penalization should be based on statistically significant higher-than-expected adverse event rates, as in other CMS pay-for-performance programs.41
Conclusions
Among hospitals participating in the HAC Reduction Program, hospitals that were penalized more frequently had more quality accreditations, offered advanced services, were major teaching institutions, and had better performance on other process and outcome measures. These paradoxical findings suggest that the approach for assessing hospital penalties in the HAC Reduction Program merits reconsideration to ensure it is achieving the intended goals.
Corresponding Author: Karl Y. Bilimoria, MD, MS, Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery and Center for Healthcare Studies, Feinberg School of Medicine, Northwestern University, 633 N St Clair St, 20th Floor, Chicago, IL 60611 (k-bilimoria@northwestern.edu).
Author Contributions: Dr Rajaram had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition, analysis, or interpretation of data: Rajaram, Chung, Mohanty, Bilimoria.
Drafting of the manuscript: Rajaram, Bilimoria.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Rajaram, Chung, Mohanty, Pavey, Bilimoria.
Obtained funding: Bilimoria.
Administrative, technical, or material support: Rajaram, Kinnier, Barnard, McHugh, Bilimoria.
Study supervision: Bilimoria.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Bilimoria reported having received support from the National Institutes of Health, Agency for Healthcare Research and Quality, American Board of Surgery, American College of Surgeons, Accreditation Council for Graduate Medical Education, National Comprehensive Cancer Network, American Cancer Society, Health Care Services Corporation, California Health Care Foundation, Northwestern University, the Robert H. Lurie Comprehensive Cancer Center, Northwestern Memorial Foundation, and Northwestern Memorial Hospital and honoraria from hospitals and professional societies for clinical care and quality improvement research presentations. No other disclosures were reported.
Funding/Support: Dr Rajaram is supported by the Agency for Healthcare Research and Quality (T32HS000078), the American College of Surgeons Clinical Scholars in Residence Program, and an unrestricted educational grant from Merck.
Role of the Funder/Sponsor: The funding sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. The sponsors had no access to the data and did not perform any of the study analysis.
References
1. Patient Protection and Affordable Care Act, 42 USC §18001 et seq (2010).
8. Bilimoria KY, Chung J, Ju MH, et al. Evaluation of surveillance bias and the validity of the venous thromboembolism quality measure. JAMA. 2013;310(14):1482-1489.
9. Dixon-Woods M, Leslie M, Bion J, Tarrant C. What counts? An ethnographic study of infection data reported to a patient safety program. Milbank Q. 2012;90(3):548-591.
10. McGregor JC, Harris AD. The need for advancements in the field of risk adjustment for healthcare-associated infections. Infect Control Hosp Epidemiol. 2014;35(1):8-9.
11. Rajaram R, Barnard C, Bilimoria KY. Concerns about using the Patient Safety Indicator-90 composite in pay-for-performance programs. JAMA. 2015;313(9):897-898.
12. Trick WE. Decision making during healthcare-associated infection surveillance: a rationale for automation. Clin Infect Dis. 2013;57(3):434-440.
16. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
18. Rajaram R, Chung JW, Jones AT, et al. Association of the 2011 ACGME resident duty hour reform with general surgery patient outcomes and with resident examination performance. JAMA. 2014;312(22):2374-2384.
24. Bilimoria KY, Bentrem DJ, Stewart AK, Winchester DP, Ko CY. Comparison of Commission on Cancer-approved and -nonapproved hospitals in the United States: implications for studies that use the National Cancer Data Base. J Clin Oncol. 2009;27(25):4177-4181.
25. Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002;346(15):1128-1137.
26. Demetriades D, Martin M, Salim A, Rhee P, Brown C, Chan L. The effect of trauma center designation and trauma volume on outcome in specific severe injuries. Ann Surg. 2005;242(4):512-517.
28. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346(22):1715-1722.
29. Schmaltz SP, Williams SC, Chassin MR, Loeb JM, Wachter RM. Hospital performance trends on national quality measures and the association with Joint Commission accreditation. J Hosp Med. 2011;6(8):454-461.
30. Cohen ME, Liu Y, Ko CY, Hall BL. Improved surgical outcomes for ACS NSQIP hospitals over time: evaluation of hospital cohorts with up to 8 years of participation [published online February 26, 2015]. Ann Surg. doi:10.1097/SLA.0000000000001192
33. Chung JW, Ju MH, Kinnier CV, Haut ER, Baker DW, Bilimoria KY. Evaluation of hospital factors associated with hospital postoperative venous thromboembolism imaging utilisation practices. BMJ Qual Saf. 2014;23(11):947-956.
34. Stone PW, Pogorzelska-Maziarz M, Herzig CT, et al. State of infection prevention in US hospitals enrolled in the National Health and Safety Network. Am J Infect Control. 2014;42(2):94-99.
35. McBryde ES, Brett J, Russo PL, Worth LJ, Bull AL, Richards MJ. Validation of statewide surveillance system data on central line-associated bloodstream infection in intensive care units in Australia. Infect Control Hosp Epidemiol. 2009;30(11):1045-1049.
36. Gilman M, Adams EK, Hockenberry JM, Milstein AS, Wilson IB, Becker ER. Safety-net hospitals more likely than other hospitals to fare poorly under Medicare's value-based purchasing. Health Aff (Millwood). 2015;34(3):398-405.
39. Dimick JB, Ghaferi AA, Osborne NH, Ko CY, Hall BL. Reliability adjustment for reporting hospital outcomes with surgery. Ann Surg. 2012;255(4):703-707.
40. Ju MH, Ko CY, Hall BL, Bosk CL, Bilimoria KY, Wick EC. A comparison of 2 surgical site infection monitoring systems. JAMA Surg. 2015;150(1):51-57.