The hospital quality summary score includes the following variables: (1) inpatient admission volume, (2) Joint Commission accreditation, (3) Commission on Cancer accreditation, (4) presence of transplant services, (5) level I trauma center status, (6) nurse-to-bed ratio, (7) Council of Teaching Hospitals membership, and (8) clinical surgical registry participation. SIR indicates standardized infection ratio (the risk-adjusted ratio of observed to expected hospital-acquired infections; a score of <1 indicates fewer infections than expected, a score of 1 indicates the expected number of infections, and a score of >1 indicates more infections than expected). Error bars indicate 95% CIs.
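The SIR arithmetic described in the legend can be sketched minimally as follows. This is illustrative only; the CDC's actual procedure derives the expected count from a risk model over patient and procedure factors, which is not reproduced here.

```python
def standardized_infection_ratio(observed: int, expected: float) -> float:
    """Ratio of observed infections to the risk-adjusted expected count.

    SIR < 1: fewer infections than expected; SIR > 1: more than expected.
    """
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return observed / expected

# A hypothetical hospital with 8 observed SSIs against 10.0 expected
# performs better than predicted; 12 against 10.0 performs worse.
print(standardized_infection_ratio(8, 10.0))   # 0.8
print(standardized_infection_ratio(12, 10.0))  # 1.2
```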
aDetermined by use of the Cochran-Armitage test for trends.
Minami CA, Dahlke AR, Barnard C, et al. Association Between Hospital Characteristics and Performance on the New Hospital-Acquired Condition Reduction Program's Surgical Site Infection Measures. JAMA Surg. 2016;151(8):777-779. doi:10.1001/jamasurg.2016.0408
The Centers for Medicare and Medicaid Services Hospital-Acquired Condition (HAC) Reduction Program implemented financial penalties for poorly performing hospitals in federal fiscal year 2015. However, higher-quality hospitals appear to be penalized significantly more often in the HAC program than lower-quality hospitals.1 In federal fiscal year 2016, surgical site infection (SSI) outcome measures for colon surgery and abdominal hysterectomy will be incorporated into the HAC program. Our objective was to evaluate the association between hospital characteristics and SSI measures.
The SSI scores from Centers for Medicare and Medicaid Services Hospital Compare were merged with the fiscal year 2015 Medicare Impact File and the 2014 American Hospital Association Annual Survey data. The association between hospital quality summary score1 (hospital characteristics including size, accreditations, and advanced services) and SSI outcomes was assessed using a Cochran-Armitage test for trends. Logistic regression models were developed to assess the association between hospital characteristics and SSI measures for colectomy and hysterectomy separately. This study was deemed nonhuman subjects research by the Northwestern University institutional review board.
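The Cochran-Armitage test for trends used in the analysis has a closed-form statistic: a score-weighted sum of each group's deviation from the pooled event rate, scaled by its variance. A minimal sketch follows; the counts below are hypothetical, not the study's merged CMS/AHA data.

```python
from math import erf, sqrt

def cochran_armitage_trend(events, totals, scores=None):
    """Cochran-Armitage test for a linear trend in proportions.

    events[i] / totals[i] is the event rate in ordered group i
    (e.g. poor-performer counts by hospital quality-score group).
    Returns (Z statistic, two-sided p-value).
    """
    k = len(events)
    scores = scores or list(range(1, k + 1))  # default: equally spaced
    n_total = sum(totals)
    p_bar = sum(events) / n_total  # pooled event rate
    # Trend statistic: score-weighted deviations from the pooled rate.
    t = sum(s * (r - n * p_bar) for s, r, n in zip(scores, events, totals))
    var = p_bar * (1 - p_bar) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(n * s for s, n in zip(scores, totals)) ** 2 / n_total
    )
    z = t / sqrt(var)
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical rising poor-performer counts across three ordered groups:
z, p = cochran_armitage_trend(events=[10, 20, 30], totals=[100, 100, 100])
```

With these illustrative counts the statistic detects the upward trend (Z ≈ 3.54, P < .001), mirroring the direction of association reported in the study.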
Of 3283 hospitals participating in the HAC program, 2029 reported colorectal SSI performance and 828 reported hysterectomy SSI performance. Hospitals with higher hospital quality summary scores were more frequently poor performers (bottom quartile) for SSI (colon: P = .003; hysterectomy: P < .001) (Figure, A and B) and had higher standardized infection ratios (P < .001 for colon and hysterectomy) (Figure, C and D). Hospitals were more likely to be poor performers for colon SSI and hysterectomy SSI if they were a teaching hospital, safety-net hospital, or level I trauma center (Table). In multivariable analyses, teaching hospitals were more likely to be poor performers for colorectal SSI (very major teaching: odds ratio, 2.41 [95% CI, 1.40-4.14]), but the association was not as consistent for hysterectomy (Table).
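The odds ratios in the Table come from the logistic regression models, where an odds ratio and its Wald 95% CI are obtained by exponentiating the fitted coefficient and its interval. A sketch of that arithmetic is below; the beta and se values are hypothetical placeholders, not the study's fitted estimates.

```python
from math import exp

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Hypothetical coefficient for a binary hospital-characteristic
# indicator (e.g. teaching-hospital status):
or_, lo, hi = odds_ratio_ci(beta=0.88, se=0.28)
```

Because the coefficient scale is log-odds, the CI is symmetric around beta but asymmetric around the odds ratio after exponentiation, which is why published intervals such as 2.41 [1.40-4.14] are wider above the point estimate than below it.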
We found that hospitals with higher structural quality summary scores paradoxically performed worse on both SSI measures when compared with their counterparts with lower-quality summary scores. Thus, higher SSI scores may not truly reflect poor quality of care and may instead be indicative of the measurement bias and validity issues that have been previously suggested for other HAC program component measures.1
There are several possible explanations. First, surveillance bias can result from variability in institutional clinical practices regarding the threshold at which potentially infected wounds are opened or cultured.2 Second, the different data collection methods used by hospitals reporting to the Centers for Disease Control and Prevention National Healthcare Safety Network may lead to hospital-level variation in event capture. Hospitals using an electronic surveillance system, which triggers review by an infection preventionist when a possible infection is detected, may uncover more events than would a manual review. These differences may be compounded by infection preventionists’ variable experience, workload, and auditing practices.3 Moreover, 38% of National Healthcare Safety Network–enrolled facilities do not have an infection preventionist, which may lead to hospital-level variation in detection.4 Hospitals may also fail to report events in a standardized fashion with strict adherence to the abstraction guidelines.5 Increased documentation when trainees are involved may also partly explain the higher rates of identifying events among teaching hospitals.6
Third, although these SSI measures are risk-adjusted for limited patient and procedure characteristics, the adjustment is likely inadequate, potentially resulting in hospitals that treat sicker patients being incorrectly categorized as poor performers.
Implementing more rigorous surveillance and data abstraction standards and requiring formal audit programs to ensure standardized coding may help to address some of these factors. Furthermore, the measures' risk adjustment should be expanded to include hospital case mix, patient comorbidities, and other relevant procedural factors. It is possible that our findings reflect a lack of a conceptual link between the Centers for Medicare and Medicaid Services' definition of high-quality care and the hospital characteristics that affect performance on various measures. Still, the paradoxical relationship between SSI measure performance and hospital quality warrants reconsideration of the addition of the current SSI measures to the HAC Reduction Program in federal fiscal year 2016.
Corresponding Author: Karl Y. Bilimoria, MD, MS, Surgical Outcomes and Quality Improvement Center, Department of Surgery, Feinberg School of Medicine, Northwestern University, 633 N St Clair, 20th Floor, Chicago, IL 60611 (email@example.com).
Published Online: April 6, 2016. doi:10.1001/jamasurg.2016.0408.
Author Contributions: Drs Bilimoria and Minami had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Minami, Barnard, Kinnier, Rajaram, Bilimoria.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Minami, Bilimoria.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Minami, Dahlke, Rajaram, Bilimoria.
Obtained funding: Bilimoria.
Administrative, technical, or material support: Minami, Dahlke, Barnard, Kinnier, Noskin.
Study supervision: Rajaram, Noskin, Bilimoria.
Conflict of Interest Disclosures: Dr Bilimoria received honoraria from hospitals and professional societies for clinical care and quality improvement research presentations. No other disclosures are reported.
Funding/Support: Dr Bilimoria has received support from the National Institutes of Health, the Agency for Healthcare Research and Quality, the American Board of Surgery, the American College of Surgeons, the Accreditation Council for Graduate Medical Education, the National Comprehensive Cancer Network, the American Cancer Society, the Health Care Services Corporation, the California Health Care Foundation, Northwestern University, the Robert H. Lurie Comprehensive Cancer Center, the Northwestern Memorial Foundation, and Northwestern Memorial Hospital.
Role of the Funder/Sponsor: These funding sources had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.