ACS NSQIP indicates American College of Surgeons National Surgical Quality Improvement Program; NHSN, National Healthcare Safety Network; and O:E Ratio, observed to expected ratio.
Ju MH, Ko CY, Hall BL, Bosk CL, Bilimoria KY, Wick EC. A Comparison of 2 Surgical Site Infection Monitoring Systems. JAMA Surg. 2015;150(1):51-57. doi:10.1001/jamasurg.2014.2891
Importance
Surgical site infection (SSI) has emerged as the leading publicly reported surgical outcome and is tied to payment determinations. Many hospitals monitor SSIs using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), in addition to mandatory participation (in most states) in the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN), which has resulted in duplication of effort and incongruent data.
Objective
To identify discrepancies in the implementation of the NHSN and the ACS NSQIP at hospitals that may be affecting the respective SSI rates.
Design, Setting, and Participants
A pilot sample of hospitals that participate in both the NHSN and the ACS NSQIP.
For each hospital, observed rates and risk-adjusted observed to expected ratios for year 2012 colon SSIs were collected from both programs. The implementation methods of both programs were identified, including telephone interviews with infection preventionists who collect data for the NHSN at each hospital.
Main Outcomes and Measures
Collection methods and colon SSI rates for the NHSN at each hospital were compared with those of the ACS NSQIP.
Results
Of 16 hospitals, 11 were teaching hospitals with at least 500 beds. The mean observed colon SSI rates were dissimilar between the 2 programs: 5.7% (range, 2.0%-14.5%) for the NHSN vs 13.5% (range, 4.6%-26.7%) for the ACS NSQIP. The mean difference between the NHSN and the ACS NSQIP was 8.3% (range, 1.6%-18.8%), with the ACS NSQIP rate always higher. The correlation between the observed to expected ratios for the 2 programs was nonsignificant (Pearson product moment correlation, ρ = 0.4465; P = .08). The NHSN collection methods were dissimilar among interviewed hospitals. An SSI managed as an outpatient case would usually be missed under the current NHSN practices.
Conclusions and Relevance
Colon SSI rates from the NHSN and the ACS NSQIP cannot be used interchangeably to evaluate hospital performance and determine reimbursement. Hospitals should not use the ACS NSQIP colon SSI rates for the NHSN reports because that would likely result in the hospital being an outlier for performance. It is imperative to reconcile SSI monitoring, develop consistent definitions, and establish one reliable method. The current state hinders hospital improvement efforts by adding unnecessary confusion to the already complex arena of perioperative improvement.
Surgical site infection (SSI) is one of the most commonly reported hospital-related infections and is associated with increased morbidity, length of hospital stay, and overall cost.1,2 The need for infection surveillance was recognized more than 40 years ago by the Centers for Disease Control and Prevention (CDC)3 and the Joint Commission on Accreditation of Hospitals4 and has been shown to be effective in the prevention of SSIs.5 Since the establishment in 2005 of the CDC’s National Healthcare Safety Network (NHSN),6,7 SSI has emerged as the leading outcome measure of surgical quality. The NHSN’s data for SSIs occurring after colon surgery or abdominal hysterectomy are used in the National Quality Forum SSI measure (No. 0753), which was incorporated into Medicare’s Hospital Inpatient Quality Reporting program in the fiscal year 2011 final rule, has been publicly reported on the Hospital Compare website since December 2012, and has been tied to payment determinations since February 2013 in the Medicare Hospital Value-Based Purchasing program.8
However, recent investigations have questioned the accuracy of the NHSN data and the sufficiency of its risk-adjustment methods.9 The New York State Department of Health audited the NHSN SSI data for colon procedures and found a 10.9% false-positive rate and a 39.6% false-negative rate.10 In an institutional study11 examining SSI surveillance after congenital cardiac surgery, the annual SSI rate differed by 17% to 71% between the NHSN surveillance data and data from a national surgical registry.
In addition to legally mandated participation in the NHSN in most states, many hospitals monitor SSIs using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), which has been shown to be superior to administrative claims data for identifying many postoperative complications.12 The ACS NSQIP uses the same verbal description definition for SSI as the NHSN. However, despite congruent definitions, hospitals report that their SSI rates and performance evaluations differ considerably between the 2 programs. Therefore, the objective of this study was to identify discrepancies in the implementation of the NHSN and the ACS NSQIP at hospitals that may be affecting the respective SSI rates and subsequent performance evaluations.
The Northwestern University Institutional Review Board reviewed and approved the study. The study was deemed exempt, and informed consent was not obtained. Sixteen hospitals had noticed inconsistent data between the NHSN and the ACS NSQIP and volunteered to participate in this pilot study. Characteristics of each hospital were obtained from the American Hospital Association 2010 annual survey data.13 We focused on colon surgery because of its status as a current hospital performance measure.8 To understand the NHSN implementation at each hospital, 30-minute telephone interviews were conducted with the infection preventionists who perform colon surgery SSI surveillance. Interviews with the ACS NSQIP data abstractors at each hospital were not necessary because the ACS NSQIP is conducted according to standardized processes and is audited to minimize variation in data abstraction. The ACS NSQIP clinical support team, which routinely clarifies data collection methods for hospitals with questions about their data abstraction, was also consulted for this study. Collection methods for the NHSN at each hospital were compared with one another and with the ACS NSQIP methods.
At each hospital, observed rates and risk-adjusted observed to expected (O:E) ratios for year 2012 colon SSIs according to the NHSN and the ACS NSQIP programs were collected and compared with each other. The current ACS NSQIP hospital evaluations report performance in terms of odds ratios; however, these were converted to O:E ratios for direct comparison with the NHSN O:E ratios. Pearson product moment correlation was used to compare the NHSN and the ACS NSQIP risk-adjusted colon SSI O:E ratios.
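To make the comparison described above concrete, the following is a minimal sketch of the O:E ratio and the Pearson product moment correlation between two sets of hospital-level risk-adjusted ratios. The O:E values shown are hypothetical placeholders, not the study's data.

```python
import math

def oe_ratio(observed, expected):
    """Observed-to-expected ratio; values greater than 1.0 mean more
    SSIs were observed than the risk-adjustment model predicted."""
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return observed / expected

def pearson_r(xs, ys):
    """Pearson product moment correlation between two equal-length
    sequences, eg, hospital-level risk-adjusted O:E ratios."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical O:E ratios for the same hospitals under the 2 programs
nhsn_oe = [0.8, 1.1, 0.9, 1.4, 0.7, 1.2]
nsqip_oe = [1.0, 1.3, 0.8, 1.2, 1.1, 0.9]
r = pearson_r(nhsn_oe, nsqip_oe)
```

A correlation near 1.0 would indicate that the 2 programs rank hospitals similarly; the nonsignificant correlation reported in this study suggests they do not.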
Statistical analyses were performed with SAS software (release 9.3; SAS Institute Inc). Statistical significance was set at P < .05.
Of 16 hospitals, 2 had fewer than 300 beds, 3 had 300 to 500 beds, and 11 had at least 500 beds (Table 1). All hospitals were nongovernmental and not for profit. Eleven hospitals had residency training programs approved by the Accreditation Council for Graduate Medical Education, 15 were accredited by the Joint Commission on Accreditation of Hospitals, 12 were accredited by the ACS Commission on Cancer, and 9 were designated as level I trauma centers.
The NHSN defines the numerator and the denominator for SSI surveillance.14 The NHSN identifies colon cases using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes (Table 2). All colon cases, except those with an American Society of Anesthesiologists score of 6, are eligible for SSI surveillance (the denominator). Because of the lack of granularity of ICD-9-CM procedure codes, the NHSN codes will identify some proctectomy cases (eg, total proctocolectomy with ileal pouch anal anastomosis, proctectomy with ileal pouch anal anastomosis, and low anterior resections). In the ACS NSQIP, these procedures would be included in the proctectomy targeted procedure group based on American Medical Association Current Procedural Terminology code designation. At some hospitals, infection control departments may eliminate proctectomies with ileal pouch anal anastomosis or low anterior resections after discussion with surgeons and recoding of the procedure by the hospital. The NHSN uses the CDC’s definition for SSI, including superficial incisional, deep incisional, and organ-space infections.15 Any SSIs that occur within 30 days of eligible colon cases, except stoma site infections, are included as events in the numerator. If multiple procedures are performed through the same incision (eg, colon and small-bowel resection) and an SSI occurs, the SSI event is assigned only to the ICD-9-CM code with the highest risk. In previous years, small-bowel cases had a higher risk assignment than colon cases. Since January 2013, colon surgery has been assigned the highest risk among abdominal operations (second only to liver transplantation).14
The NHSN provides some short online modules for infection preventionists on how to collect data for colon SSI surveillance, but these are not intended to be specific guidelines.14 As a result, the NHSN collection methods were found to vary among the participating hospitals. Although at least 6 of 16 hospitals interviewed used some form of electronic system (commercial or developed by the institution) to trigger cases to be reviewed by an infection preventionist, these trigger rules were not standardized among the hospitals. Readmission to the same hospital was the most commonly used trigger rule: if a patient who had undergone an eligible colon case was readmitted within the follow-up period, then an electronic trigger would flag that case to be reviewed by the infection preventionist. Some institutions added further criteria, such as the initiation of antibiotics within 24 hours of readmission, the collection of a wound culture, a surgical debridement procedure, or a diagnostic code. Only one hospital included outpatient prescriptions for antibiotics as a trigger rule. For most of the institutions interviewed, the infection preventionists reviewed only cases that were identified by the electronic trigger system. The following SSIs would not usually trigger a review: SSI diagnosed during an index hospitalization, SSI managed as an outpatient case, or SSI treated at a hospital outside of the institutional system. The electronic surveillance system is run daily at some institutions and monthly at others.
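A trigger rule of the kind described above can be sketched as follows. This is a hypothetical composite of the rules reported by the interviewed hospitals, not any hospital's actual system; it illustrates why an SSI managed entirely as an outpatient case never reaches review.

```python
from datetime import date

FOLLOW_UP_DAYS = 30  # follow-up window for colon SSI surveillance

def flag_for_review(surgery_date, readmission_date=None,
                    antibiotics_within_24h=False, wound_culture=False,
                    debridement=False):
    """Return True if the case should be routed to an infection
    preventionist. Cases never readmitted to the same hospital
    (eg, SSIs managed as outpatient cases or treated at an outside
    hospital) are never flagged, which is the blind spot discussed
    in the text."""
    if readmission_date is None:
        return False
    within_window = (readmission_date - surgery_date).days <= FOLLOW_UP_DAYS
    supporting = antibiotics_within_24h or wound_culture or debridement
    return within_window and supporting
```

Under this sketch, a superficial SSI treated in clinic with oral antibiotics and no readmission is invisible to the surveillance system.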
Some institutions do not have an electronic trigger system. In this case, a data analyst would use ICD-9-CM codes to identify all colon cases to be reviewed by the infection preventionists. Then, the infection preventionists would review daily microbiology results for positive cultures after colon surgical procedures or hospital readmission records related to surgical wound infections. However, an SSI managed on an outpatient basis (the most common setting for the management of superficial SSIs) would again be missed.
At some institutions, each infection preventionist was assigned to certain procedure groups or floor units. Others had only a few infection preventionists with no assigned roles. Data auditing is also dependent on the institution. Some managers for infection prevention or quality improvement conduct audits of every case of SSI, and some managers perform spot checks. Most of the institutions reported having an inadequate workforce to conduct detailed data audits.
Since 2009, the NHSN has moved to a standardized infection ratio model.16 The standardized infection ratio is calculated by dividing the number of observed infections by the number of expected infections. The number of expected infections is estimated from multivariable logistic regression models using the NHSN baseline data (2006-2008), which represent a standard population’s SSI experience (adjusting for age, anesthesia type, American Society of Anesthesiologists class, duration of surgery, medical school affiliation, bed size, and wound classification).14 The NHSN provides hospitals with monthly reports.
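The standardized infection ratio calculation described above can be sketched in a few lines. The logistic-model form is standard, but any coefficients supplied to it here would be illustrative placeholders, not the NHSN baseline (2006-2008) model.

```python
import math

def predicted_risk(intercept, coefficients, covariates):
    """Per-case SSI probability from a logistic regression model:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))), where the x_i are
    case characteristics (eg, age, ASA class, wound class)."""
    z = intercept + sum(b * x for b, x in zip(coefficients, covariates))
    return 1.0 / (1.0 + math.exp(-z))

def standardized_infection_ratio(observed_infections, per_case_risks):
    """SIR = observed infections / expected infections, where the
    expected count is the sum of model-predicted per-case risks."""
    expected = sum(per_case_risks)
    return observed_infections / expected
```

A ratio of 1.0 means the hospital's observed infections match what the baseline population predicts for its case mix; values above 1.0 indicate more infections than expected.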
To join the ACS NSQIP, a hospital must demonstrate that it has adequate resources and staffing for data collection and analysis. Dedicated surgical clinical reviewers, the data abstractors for the ACS NSQIP at each hospital, receive formal training and continuous support from the ACS NSQIP clinical support team.17 Protocols for data acquisition and transmission are defined in the ACS NSQIP operating manual to optimize accuracy.17 The surgical clinical reviewers make telephone calls, send out letters, and conduct public record searches in addition to reviewing all inpatient, outpatient, and available outside facility records to obtain complete 30-day follow-up data. To further ensure that the data are rigorously collected, members of the clinical support team visit samples of participating hospitals and perform audits. Hospitals that fail an audit receive further education and are removed from the reporting process until the hospital passes remediation.
The numerator and denominator for SSI events and other perioperative variables are defined in the operating manual.17 The ACS NSQIP identifies colon cases using Current Procedural Terminology codes (Table 2). Unless a hospital participates in the colectomy procedure targeted program option (in which all colectomy cases are reviewed), cases to be reviewed by the surgical clinical reviewers are selected as a systematic sample of general and vascular or multispecialty cases performed in the hospital. The ACS NSQIP uses the CDC’s verbal description definition for SSI. If the skin was left open, then a superficial incisional SSI cannot be assigned, but deep incisional and organ-space SSIs can be. If multiple procedures were performed through the same incision and an SSI occurred after surgery, the SSI would be assigned to the designated principal procedure. In addition, an SSI cannot be assigned to a case if a wound infection was present at the time of surgery.
The ACS NSQIP uses hierarchical multivariable logistic regression modeling for hospital performance risk adjustment.18 This statistical approach accounts for the clustering of patients within hospitals, reduces false-positive rates due to multiple sampling, and adjusts for hospitals with small numbers of cases (Bayesian shrinkage or reliability adjustment). For each model, a forward selection process is first used to select a set of strong predictor variables from more than 30 available patient variables. C statistics and Brier scores are used to evaluate the models; the Brier score reflects both discrimination and calibration, ranging from 0.0 to 1.0, with 0.0 indicating perfect prediction. This approach levels the playing field between hospitals with large proportions of high-risk patients and those without, and between high-volume and low-volume hospitals. Every 3 months, the ACS NSQIP provides hospitals with risk-adjusted reports that allow participating hospitals to compare their risk-adjusted outcomes with those of other hospitals. In addition, the ACS NSQIP provides continuous real-time, risk-adjusted estimates for 6 measure models, including one for colon surgery death or serious morbidity.
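The Brier score mentioned above can be made concrete with a short sketch; the predictions and outcomes below are hypothetical, shown only to illustrate the 0.0-to-1.0 range and why 0.0 is perfect prediction.

```python
def brier_score(predicted_probs, outcomes):
    """Mean squared difference between model-predicted SSI
    probabilities and the binary outcomes (1 = SSI occurred,
    0 = no SSI). 0.0 indicates perfect prediction; 1.0 indicates
    confident predictions that were always wrong."""
    pairs = list(zip(predicted_probs, outcomes))
    return sum((p - y) ** 2 for p, y in pairs) / len(pairs)

# A model that assigns probability 1.0 to every actual SSI and 0.0
# otherwise scores 0.0; an uninformative model predicting 0.5 for
# every case scores 0.25.
perfect = brier_score([1.0, 0.0, 0.0], [1, 0, 0])
uninformative = brier_score([0.5, 0.5, 0.5], [1, 0, 0])
```

Lower scores thus reward models that are both well calibrated and discriminating, which is why the ACS NSQIP tracks the Brier score alongside the C statistic.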
The mean observed colon SSI rates were dissimilar between the 2 programs at 5.7% (range, 2.0%-14.5%) for the NHSN vs 13.5% (range, 4.6%-26.7%) for the ACS NSQIP (Table 3). The mean difference between the NHSN and the ACS NSQIP was 8.3% (range, 1.6%-18.8%), with the ACS NSQIP rate always higher. The risk-adjusted O:E ratios for the 2 programs were also different (Figure). The correlation between the risk-adjusted O:E ratios for the 2 programs was nonsignificant (Pearson product moment correlation, ρ = 0.4465; P = .08). One hospital declined to share its NHSN observed rate.
After examining 16 hospitals that collect data on SSIs after colon surgery for both the NHSN and the ACS NSQIP, we found considerable differences in implementation between the 2 programs, with marked variation in that of the NHSN. None of the hospitals interviewed had the same NHSN program implementation as the other hospitals in the group. Some hospitals use electronic trigger rules (not standardized among the hospitals) to select cases for review by infection preventionists, while infection preventionists at other hospitals review daily microbiology results or hospital readmission records for potential cases. Most of the time, an SSI managed on an outpatient basis (the most common setting for the management of superficial SSIs) would be missed under the current NHSN practices. In contrast, the ACS NSQIP abstractors reviewed all available inpatient and outpatient records of sampled colon cases for 30 days after surgery. In addition, follow-up letters were sent and telephone calls were made to reduce the chance of missing outside hospital or clinic care or the outpatient diagnosis and management of infections. With such variation in program implementation, the resulting data on SSI rates and hospital performance differed between the NHSN and the ACS NSQIP, with an 8.3% mean difference in infection rates. The ACS NSQIP rates were also always higher. These findings were similar to those in the study by Atchley et al11 comparing the NHSN data with The Society of Thoracic Surgeons Congenital Heart Surgery Database. In addition, the correlation between risk-adjusted hospital performance on colon SSIs according to the 2 programs was not statistically significant. A hospital performing well in one program might be performing poorly according to the other.
Although a consensus panel report with infrastructure requirements for infection control and prevention was published in 1998,19 the document did not give detailed instructions on how to carry out those requirements, which might have contributed to the large variation in NHSN program implementation across hospitals. In a study of approximately 1000 acute care hospitals, Stone et al20 found large variation in the organization and structure of infection control and prevention programs. Interpretations of hospital-associated infection definitions were also shown to be heterogeneous in a survey of more than 100 infection preventionists and hospital epidemiologists.21 Many of the infection preventionists interviewed for the present study stated that the lack of detailed protocols on how to obtain and interpret data is one of the greatest weaknesses of the NHSN.
The recent study by Stone et al20 found that only 34% of acute care hospitals use electronic surveillance systems. Some might argue that there would not be as much variation among hospitals if all hospitals used electronic systems for the NHSN case findings. In a single-institution study,22 however, an electronic surveillance system did not identify a large proportion of SSIs and had different risk factor and cost estimates compared with the ACS NSQIP data abstracted by the surgical clinical reviewers. In addition, we found that trigger rules and data sources are not consistent across the hospitals that use electronic surveillance systems.
With such inconsistencies in program implementation and data interpretation across the NHSN hospitals, it is not surprising that the resulting SSI rates were incongruent and differed greatly from clinical registry data, as demonstrated from previous state10 and institutional11 studies. In contrast, Shiloach et al23 showed that the ACS NSQIP collected robust data through its training and audit procedure and found that interrater reliability audits had improved over the years. Another study12 also showed the superiority of the ACS NSQIP over administrative claims data for identifying postoperative complications. Our comparisons of colon SSI rates revealed that SSI rates from the NHSN were lower than those from the ACS NSQIP by a mean difference of 8.3% (range, 1.6%-18.8%).
Multiple patient comorbidities, such as diabetes mellitus, malnutrition, anemia, and tobacco use, have been shown to be significant risk factors for SSIs24 and have been used in risk adjustment of hospital performance on colon SSIs to account for hospitals with greater proportions of high-risk patients. The NHSN reports a standardized infection ratio, which is analogous to an O:E ratio calculated using multivariable logistic regression with a fixed set of predictors. In comparison, the ACS NSQIP uses hierarchical multivariable logistic regression, which not only adjusts for different patient characteristics but also accounts both for the clustering of patients and for hospitals with smaller numbers of patients during the current collection period.18 With different rates and risk-adjustment methods, it is not surprising that there were no statistically significant correlations between the NHSN and the ACS NSQIP risk-adjusted O:E ratios.
These discrepancies come at a time when hospitals are experiencing intense pressure to save money and improve the quality of care. Multiple programs at the federal, state, and local levels have engaged hospitals in initiatives to improve care but require significant hospital resource commitments. Duplicative data abstraction has diluted these efforts and led health care providers to focus on data discrepancies rather than on improving care. An opportunity exists to standardize definitions, to improve the consistency of collection methods, and to ensure the rigor of data by implementing formal audits.
This study should be considered in light of its limitations. A sampling bias might exist because we interviewed a pilot sample of 16 hospitals that may not represent all hospitals in the United States. These hospitals had noticed inconsistent data between the NHSN and the ACS NSQIP and were willing to participate in the study. It is possible that the differences we found between the NHSN and the ACS NSQIP might be smaller at other hospitals. Other aspects of hospital infrastructure might also affect the reliability of the NHSN and ACS NSQIP data, including the presence of inpatient or outpatient electronic medical records, hospital-employed vs independent surgeons, and the locus of the ACS NSQIP and NHSN data abstraction in a hospital’s organizational structure. Regardless, the inconsistencies that we describe question the validity of the NHSN data and suggest that the NHSN and ACS NSQIP data are not interchangeable. In addition, while the current ACS NSQIP reporting uses odds ratios in hospital performance evaluations, this work used O:E ratios to match those of the NHSN. However, this conversion could have introduced only a small element of drift into these comparisons.
Colon SSI rates from the NHSN and the ACS NSQIP should not be used interchangeably to evaluate hospital performance for the purposes of quality improvement, public reporting, or pay for performance at this time. Similarly, hospitals should not use the ACS NSQIP colon SSI rates for NHSN reporting because that practice would likely result in false comparisons and apparent poorer performance status. Great variation exists among hospitals in data collection methods within the NHSN. It is imperative to establish one reliable method for SSI monitoring. The current state is likely hindering hospital improvement efforts by adding unnecessary confusion to the already complex task of measuring perioperative performance. Hospitals are potentially spending unnecessary time and resources collecting duplicative data and being distracted by discrepancies in reports.
Accepted for Publication: August 4, 2014.
Corresponding Author: Elizabeth C. Wick, MD, Department of Surgery, The Johns Hopkins University, 600 N Wolfe St, Blalock Room 658, Baltimore, MD 21287 (email@example.com).
Published Online: November 26, 2014. doi:10.1001/jamasurg.2014.2891.
Author Contributions: Dr Wick had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition, analysis, or interpretation of data: Ju, Ko, Hall, Bilimoria, Wick.
Drafting of the manuscript: Ju, Wick.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Ju.
Administrative, technical, or material support: Ko, Hall.
Study supervision: Wick.
Conflict of Interest Disclosures: Dr Ju reported receiving a stipend that is partially supported by grant 5T32HL094293 from the National Institutes of Health and the American College of Surgeons Clinical Scholars in Residence program. Dr Hall reported being a paid consultant for the American College of Surgeons. No other disclosures were reported.