Figure 1. As an example of data feedback received by participating hospitals, this Pareto chart compares surgical site infection (SSI) rates for clean wound elective inpatient cases among Michigan Surgical Quality Collaborative (MSQC) hospitals. Gray bars on the left represent high outliers, or poor performers. In our quality improvement efforts, we focus on the right side of the figure, the best performers. We directly identify these hospitals and share best practices.
Figure 2. Morbidity odds ratios (ORs) comparing period 1 (T1) with period 2 (T2) in Michigan Surgical Quality Collaborative (MSQC) and non-Michigan American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) hospitals. In MSQC hospitals, the OR for morbidity was significantly lower in T2 vs T1 for all cases, elective inpatient cases, and emergency cases. In addition, OR changes for morbidity were compared between MSQC and non-Michigan ACS NSQIP hospitals.
Figure 3. Trends in the ratios of observed to expected morbidity between Michigan Surgical Quality Collaborative (MSQC) and non-Michigan American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) hospitals over time (January 2005 through November 2007). There was a significant difference in the slopes of the lines (P = .001), with a significant trend toward lower ratios of observed to expected morbidity in MSQC hospitals (slope, −0.0008; P = .008) vs non-Michigan ACS NSQIP hospitals (slope, 0.001; P = .11).
Figure 4. Shewhart control chart demonstrating changes in the Michigan Surgical Quality Collaborative (MSQC) ratios of observed to expected morbidity over time (circle represents a process change in which a run of ≥8 consecutive points falls on one side of the center line).
Figure 5. Changes in rates of major postoperative complications between Michigan Surgical Quality Collaborative (MSQC) period 1 (T1) and period 2 (T2). *P < .05.
Campbell DA, Englesbe MJ, Kubus JJ, et al. Accelerating the Pace of Surgical Quality Improvement: The Power of Hospital Collaboration. Arch Surg. 2010;145(10):985–991. doi:10.1001/archsurg.2010.220
Hypothesis
A regional collaborative approach is an efficient platform for surgical quality improvement.
Design
Retrospective cohort study.
Setting and Patients
Patients undergoing general and vascular surgical procedures in 16 hospitals of the Michigan Surgical Quality Collaborative (MSQC); hospital representatives met quarterly to discuss surgical quality, identify best practices, and assess problems with process implementation.
Main Outcome Measures
Results among MSQC patients were compared with those among 126 non-Michigan hospitals participating in the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) over the same interval.
Results
A total of 315 699 patients were included in the analysis. To assess improvement, patients were stratified into 2 periods (T1 and T2). The 35 422 MSQC patients (10.7% morbidity in T1 vs 9.7% in T2 [9.0% reduction], P = .002) showed improvement, while 280 277 non-Michigan ACS NSQIP patients did not (12.4% morbidity in T1 and T2, P = .49). No improvements in mortality rates were noted in either group. Overall, the odds of experiencing a complication in T2 compared with T1 were significantly less in the MSQC group (odds ratio, 0.898) than in the non-Michigan ACS NSQIP group (odds ratio, 1.000) (P = .004).
Conclusion
A statewide surgical quality improvement collaborative supported by a third-party payer showed significant improvement in quality and high levels of participant satisfaction.
A need for improved quality and reduced cost underlies current visions for health care reform. That the 2 subjects are related is nowhere more apparent than in the field of surgery, where a potentially preventable complication might result in weeks of added hospitalization and in tens of thousands of dollars in incremental costs. For example, it has been estimated that the occurrence of ventilator-associated pneumonia following a surgical procedure adds more than $50 000 to the base cost of $5000 for uncomplicated care, a 10-fold difference.1,2 Taking this example a step further, there are effective and inexpensive techniques for the prevention of ventilator-associated pneumonia, the broad implementation of which could save many millions of dollars in health care costs nationwide.3,4 Accelerating the pace of discovery and implementing effective surgical quality improvement strategies fit squarely in the middle of President Barack Obama's health care agenda.
However, there is no consensus on the best way to discover new and more effective quality improvement strategies. Individual surgeons write of anecdotal experience, but such reports are based on small numbers and lack insight about portability to other institutions. Although important, large national initiatives in quality, such as the Surgical Care Improvement Program,5 are at the opposite extreme, being so broad in design that detail about implementation is lost. There is little opportunity to generate new knowledge about the link between processes and outcomes of care.
A different path to quality surgery involves the regional organization of hospitals into collaboratives focused exclusively on quality improvement. This approach provides suitable numbers of patients for study and relies on comprehensive data involving patient risk, processes, and outcomes of care; as a result, important opportunities are yielded to generate new knowledge about what works in quality improvement. Such collaboratives—most prominently the Northern New England Cardiovascular Disease Study Group,6 the Blue Cross Blue Shield of Michigan Cardiovascular Consortium Angioplasty collaborative quality improvement project,7,8 and the Keystone Intensive Care Unit Project in Michigan9,10—have achieved marked and rapid improvement in quality using a regional process that emphasizes the sharing of best practices and the reliable implementation of strategies already known to be effective.
We describe herein an effective quality improvement collaborative involving 34 mostly community hospitals in Michigan that use the principles of rapid discovery and distribution of information about best practices.11 The backbone of the collaborative is a pay-for-participation approach that is supported by the dominant third-party payer in the state, Blue Cross Blue Shield of Michigan/Blue Care Network (BCBSM/BCN). After 4 years of experience, we report that the group functions well using this structure and that surgical quality has improved significantly.
The Michigan Surgical Quality Collaborative (MSQC) is a group of 34 Michigan hospitals performing general and vascular surgical procedures. Most are community-based institutions: 21 of 34 hospitals (61.8%) do not have teaching programs. All MSQC hospitals use a common quality reporting infrastructure, the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), the data from which support the identification of common quality improvement opportunities. The details of quality reporting using the ACS NSQIP system have been previously described.12-14 The multiple advantages of the system include the use of precise definitions for surgical complications, a defined end point for postoperative mortality and morbidity (30 days), a 97% 30-day follow-up rate, a validated risk-adjustment method, a regular (yearly) interrater reliability assessment, and an external auditing function.15 Failure of a surgical clinical nurse reviewer (SCNR) to achieve a defined score on interrater reliability testing leads to retraining, and failure to achieve a defined score on the yearly audit leads to exclusion of the center's data from the ACS NSQIP database.
Representatives from each hospital, typically an SCNR, a defined surgeon champion, and other quality improvement specialists, meet quarterly to review aggregate and hospital-specific data produced by the ACS NSQIP. In this setting, best performers are identified in various areas, and members of the best-performing institutions are invited to share insights that they believe have contributed to their success. For example, Figure 1 shows a sample handout to participants at quarterly meetings in which hospitals are ranked according to the incidence of surgical site infection for clean cases. The MSQC meeting format uses an electronic audience response system that engages the audience in group discussion about quality-related questions, which are the focus of attention at collaborative meetings. Results and presentations from the quarterly meetings are available at an MSQC Web site (http://www.msqc.org), as are various recommended quality improvement strategies. Collaboration is also enhanced by the use of a hard-copy newsletter and, recently, by the production of quality improvement strategies and a description of various best practices on the MSQC YouTube channel (http://www.youtube.com/msqc1).
A single data set (provided by the ACS NSQIP) containing MSQC and non-Michigan ACS NSQIP hospital information was used for all analyses. Well-established ACS NSQIP case collection techniques were used at all hospitals and for all patients.16 Over the defined observation interval, numerous hospitals enrolled. Therefore, to prevent potential bias introduced by new hospital enrollment, only MSQC and non-Michigan ACS NSQIP hospitals that were enrolled over the entire observation period were included in the statistical analysis. As a result, patients from 16 MSQC hospitals and 126 non-Michigan ACS NSQIP hospitals were included in the analysis.
Cases were evaluated in the following categories: elective inpatient, emergency, outpatient, and all cases. Period 1 (T1) was defined as April 1, 2005, through March 31, 2007, and period 2 (T2) was defined as April 1, 2007, through December 31, 2007. These periods were determined by BCBSM/BCN prospectively to evaluate the effectiveness of the program. Presumably, the starting time for T2 was determined to correlate with the expected effects of focused quality improvement efforts by the collaborative.
Unadjusted differences between MSQC and non-Michigan ACS NSQIP patients by period are reported as the interaction between these effects using univariate analysis. Cross-tabulation with the χ2 test was used to compare changes in morbidity between the 2 periods. We also calculated the odds that a patient would experience a complication in T2 compared with T1 for each group. We used the Tarone17 test to compare the odds ratios (ORs) between the MSQC and non-Michigan ACS NSQIP groups for these periods.
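As an illustration of the unadjusted OR calculation, the sketch below reconstructs approximate event counts from the rates reported in the Results (10.7% of 21 212 MSQC cases in T1, 9.7% of 14 210 in T2). Exact event counts were not published, so these counts are assumptions for illustration only.

```python
def odds_ratio(events_t1, n_t1, events_t2, n_t2):
    """Unadjusted odds ratio of a complication in T2 relative to T1."""
    odds_t1 = events_t1 / (n_t1 - events_t1)
    odds_t2 = events_t2 / (n_t2 - events_t2)
    return odds_t2 / odds_t1

# Event counts reconstructed from the reported MSQC rates; the paper
# reports only percentages, so these are approximations.
events_t1 = round(21212 * 0.107)   # ~2270 complications in T1
events_t2 = round(14210 * 0.097)   # ~1378 complications in T2

or_msqc = odds_ratio(events_t1, 21212, events_t2, 14210)
```

The unadjusted value comes out near 0.90, consistent with the risk-adjusted OR of 0.898 reported in the Results.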
To adjust for patient and case mix factors between the groups and time, we used multivariate logistic regression models to compare differences between results in T1 and T2. Models were created at the patient level, and all patients who underwent operation in MSQC hospitals (n = 35 422) or non-Michigan ACS NSQIP hospitals (n = 280 277) were entered into the models. The primary outcome measures were mortality and morbidity. Differences between T1 and T2 were reported as an OR (95% confidence interval). In addition to the group and time effects, we included patient comorbidities and intraoperative factors in the model to determine if the group and period affected morbidity.12,15 A significant interaction in outcomes between the 2 periods and the 2 groups would be a strong indication of differences between the collaborative and national initiatives.
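Reporting an OR with a 95% confidence interval follows from exponentiating the fitted logistic coefficient for the period indicator. A minimal sketch, using a hypothetical coefficient and standard error chosen so the OR matches the reported MSQC estimate of 0.898 (the paper does not report these intermediate values):

```python
from math import exp, log

def or_with_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient for the T2-vs-T1 indicator
    into an odds ratio with a Wald 95% confidence interval."""
    return exp(beta), (exp(beta - z * se), exp(beta + z * se))

beta = log(0.898)   # hypothetical fitted coefficient (implies OR = 0.898)
se = 0.035          # hypothetical standard error, for illustration only

or_est, (ci_lo, ci_hi) = or_with_ci(beta, se)
```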
We used a process control chart to assess temporal trends in quality improvement in the MSQC. Standard statistical process control rules used in the analysis flagged observed-expected trends (ie, outliers and increasing or decreasing trends) or changes in the outcomes as the MSQC program progressed. Using defined rule violations, we identified special-cause variation (indicating a statistically significant process change).18 All analyses were performed using commercially available software (PASW, version 17; SPSS Inc, Chicago, Illinois).
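The run rule applied later in the analysis (≥8 consecutive points on one side of the center line signals special-cause variation) can be sketched as follows; the ratios shown are synthetic, not MSQC data.

```python
def special_cause_run(points, center, run_length=8):
    """Flag the Shewhart run rule: run_length or more consecutive points on
    the same side of the center line indicate special-cause variation
    (a sustained process shift) rather than random noise."""
    run, prev_side = 0, 0
    for x in points:
        side = (x > center) - (x < center)   # +1 above, -1 below, 0 on line
        run = run + 1 if (side == prev_side and side != 0) else (1 if side else 0)
        prev_side = side
        if run >= run_length:
            return True
    return False

# Illustrative monthly observed/expected morbidity ratios: the final 8
# points all sit below a center line of 1.0, triggering the rule.
ratios = [1.02, 0.99, 1.01, 1.03, 0.98, 1.01,
          0.97, 0.96, 0.95, 0.97, 0.96, 0.94, 0.95, 0.96]
flag = special_cause_run(ratios, center=1.0)
```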
All surgeon champions and SCNRs (n = 54) who attended the March 2009 quarterly meeting of the MSQC participated in a survey. The survey was conducted via a real-time audience response system. In addition, surgeon champion and SCNR attendance at all MSQC quarterly meetings was recorded.
The mean relative value unit (measure of case complexity) among MSQC hospitals was 13.9 in T1 vs 15.3 in T2 (P < .001). Similarly, the mean relative value unit among non-Michigan ACS NSQIP hospitals was 18.5 in T1 vs 20.2 in T2 (P < .001).
The unadjusted mortality rate among patients who underwent surgery in MSQC hospitals was 1.7% in T1 and T2. The unadjusted mortality rate among patients who underwent surgery in non-Michigan ACS NSQIP hospitals was 1.8% in T1 and T2. After risk adjustment, no significant differences in mortality rates were noted between T1 and T2 or between MSQC and non-Michigan ACS NSQIP hospitals.
In contrast to mortality, there was a significant reduction in the unadjusted morbidity rate among patients who underwent surgery in MSQC hospitals; specifically, the unadjusted morbidity rate for all cases was 10.7% in T1 (n = 21 212) but was 9.7% in T2 (n = 14 210), a relative reduction of 9.0% (P = .002). No significant difference in morbidity was noted in T1 (n = 168 166) vs T2 (n = 112 111) among the non-Michigan ACS NSQIP cohort (12.4% vs 12.4%, P = .49). Overall, the odds of experiencing a complication in T2 compared with T1 were significantly less in the MSQC group (OR, 0.898) than in the non-Michigan ACS NSQIP group (OR, 1.000) (P = .004) (Figure 2). The C index for the logistic regression model was 0.82.
We noted a similar reduction in morbidity for the MSQC cohort when only elective (nonemergency) inpatient cases were considered. Among such patients, the unadjusted morbidity rate was 13.8% in T1 (n = 10 528) vs 12.4% in T2 (n = 7210), a relative reduction of 10.1% (P = .006). In contrast, non-Michigan ACS NSQIP hospitals demonstrated no significant difference in morbidity for elective inpatient operations between T1 (n = 88 973) and T2 (n = 60 926) (15.7% vs 15.4%, P = .11). The odds of experiencing a complication in T2 compared with T1 for elective inpatients were significantly less for the MSQC group (OR, 0.891) compared with the non-Michigan ACS NSQIP group (OR, 0.982) (P = .04) (Figure 2).
Similar observations were noted for emergency cases. Among patients having surgery in MSQC hospitals, the unadjusted morbidity rate for emergency cases during T1 (n = 2542) was 25.4%, but this figure dropped to 21.6% during T2 (n = 1805), a relative reduction of 15.0% (P < .001). In contrast, no significant difference was noted in morbidity for emergency cases in non-Michigan ACS NSQIP hospitals during T1 (n = 23 269) vs T2 (n = 14 993) (24.3% vs 24.4%, P = .46). The odds of experiencing a complication in T2 compared with T1 for emergency patients were significantly less in the MSQC group (OR, 0.759) compared with the non-Michigan ACS NSQIP group (OR, 1.010) (P < .001) (Figure 2).
No significant differences in morbidity rates were noted for outpatient operations between T1 and T2 in the MSQC cohort (n = 13 690) or in the non-Michigan ACS NSQIP cohort (n = 96 240). Specifically, the unadjusted morbidity rate for MSQC outpatient cases was 2.1% during T1 vs 2.1% during T2 (P = .51). Similar observations were noted in the non-Michigan ACS NSQIP cohort; the unadjusted morbidity rate for outpatient cases was 2.3% during T1 vs 2.4% during T2 (P = .13). For outpatient procedures, the odds of experiencing a complication in T2 compared with T1 were not significantly different between the MSQC group (OR, 0.995) vs the non-Michigan ACS NSQIP group (OR, 1.051) (P = .67) (Figure 2).
We were interested in comparing the rates of improvement in MSQC and non-Michigan ACS NSQIP hospitals by fitting the slope of a line describing the ratio of observed to expected morbidity over time. As time from enrollment in the MSQC progressed, there was a significant trend toward a lower ratio of observed to expected morbidity (slope, −0.0008 per month; P = .008) (Figure 3). The non-Michigan ACS NSQIP hospitals started at a ratio of observed to expected morbidity just below 1, and no trend toward a lower ratio over time was noted (slope, 0.001; P = .11). Examination of the scatterplot of standardized residuals against the standardized predicted value showed no evidence that these variables had different variances (heteroscedasticity). The Durbin-Watson statistic was indeterminate for autocorrelation.
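The slope comparison amounts to an ordinary least-squares fit of the monthly observed/expected ratio against time in each group. A minimal sketch with a synthetic series (not the MSQC data) drifting downward at the reported rate:

```python
def ols_slope(y):
    """Ordinary least-squares slope of a series against time (0, 1, 2, ...),
    the quantity compared between the two hospital groups in Figure 3."""
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (yi - y_mean) for x, yi in zip(xs, y))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Synthetic monthly O/E ratios over the 35 months from January 2005
# through November 2007, constructed to drift down by 0.0008 per month.
series = [1.00 - 0.0008 * m for m in range(35)]
slope = ols_slope(series)
```

For this perfectly linear synthetic series the fit recovers −0.0008 per month exactly; the published MSQC series is noisy, which is why the significance tests on the slopes matter.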
To obtain information about when quality improvement became most prominent in the course of the initiative, we constructed a statistical control chart using ratios of observed to expected morbidity for both groups. The resulting pattern is shown in Figure 4. The expected morbidity was calculated based on risk-adjustment models that included all (Michigan and non-Michigan) patients participating in the ACS NSQIP. Toward the end of the evaluation period for the MSQC, a rule violation occurred that indicated a special-cause variation had occurred rather than a common cause or expected variation in results. The rule violation reflected 8 or more consecutive points below the overall process mean, indicating an improvement in the morbidity rate.
Certain specific ACS NSQIP–defined complications showed statistically significant improvement between T1 and T2 in the MSQC cohort. We noted significant reduction in the incidence of sepsis, pneumonia, septic shock, cardiac arrest, and need for prolonged mechanical ventilation (Figure 5).
One hundred percent of surgeon champions and SCNRs reported high levels of collegiality in the MSQC. One hundred percent of respondents also reported that BCBSM/BCN had been a reliable partner in the overall effort. However, most respondents (77.8%) reported that financial support from BCBSM/BCN was necessary to ensure their participation in the MSQC. Few participants (16.7%) thought that hospitals were using MSQC data for competitive advantage. Only 1.9% of respondents reported that they were reluctant to discuss quality problems at MSQC meetings. The mean attendance in 2007 and 2008 for SCNRs was 87.2% and for surgeon champions was 56.4%.
Society demands improved quality in health care, with reduced cost. Various strategies have been proposed as a means to this end. These range from an incentive approach, as in various pay-for-performance efforts,19-21 to a more punitive approach, as in the nonpayment policy for hospital-acquired conditions recently enacted by the Centers for Medicare and Medicaid Services.22 There is no consensus on the most effective approach as a foundation for quality improvement or as a means to cost reduction. Indeed, some worry that the nonpayment policy by the Centers for Medicare and Medicaid Services might decrease the pace of quality improvement by driving underground the reporting of quality results.23
We describe herein a different pay-for-participation approach. In this model, a third-party payer has fully supported hospital participation in a surgical quality collaborative.
We believe that the pay-for-participation approach is central to the success of our collaborative quality improvement efforts. In such an environment, participants are likely to be more forthcoming about problem areas and more willing to share new ideas about what works because individual success or failure is not linked to judgment or reimbursement. Indeed, outcomes results in the MSQC are unavailable to BCBSM/BCN by mutual agreement. In contrast, the pay-for-performance environment would seem to foster competitiveness, billboards touting individual excellence, and a lack of willingness to share good ideas with competitor hospitals. We demonstrate herein that MSQC participants rank the experience as highly positive; after 4 years, 100.0% of respondents rated collegiality with the group as high, and 100.0% believed that BCBSM/BCN had been a reliable partner in the effort. The latter finding is remarkable in that statewide physician and hospital groups are often at odds with third-party payers over financial matters.
Why results improved dramatically in the MSQC needs further study. Improvement in MSQC hospitals was seen entirely among the inpatient (not outpatient) population and was prominent among emergency patients. We potentially need to refocus efforts on outpatient surgery or consider alternative outcome measures for outpatient surgical procedures, such as unplanned admission. The pattern of improvement was most prominent in the areas of sepsis, septic shock, pneumonia, prolonged ventilator duration, and cardiac arrest. Over the course of the initiative, we suggested that hospitals adopt certain evidence-based processes, but we do not know the degree to which they were implemented. However, the hospitals in the MSQC that showed the most dramatic improvement are clearly known and can form the basis for a much more rigorous evaluation in which institutional, process-specific, and surgeon-specific factors accounting for good results can be identified. We maintain that this is more effective in a regional collaboration in which surgeons and nurses know each other and the environment is such that participants are willing to be honest and open about results. In the pay-for-participation model described herein, in which individual hospital results were not reported to BCBSM/BCN, only 1.9% of respondents indicated a reluctance to discuss poor results in the open MSQC forum. We doubt that this would occur in a pay-for-performance setting.
We have shown that a pay-for-participation approach underwritten by a third-party payer can bring hospitals together in a functional collaborative, is associated with improved surgical quality, and has large potential for cost savings. The state of Michigan has one dominant third-party payer, rendering organizational and financial issues underlying a collaborative simpler to navigate, but the concept seems portable. Inspired by the MSQC, surgeons in Tennessee have linked the Tennessee Hospital Association, Blue Cross Blue Shield of Tennessee, and 11 hospitals to form a collaborative based on the ACS NSQIP, as has upper New York State. Similar collaboratives are developing in Pennsylvania, Virginia, and Illinois. In other areas, competition between regional health plans may be intense, which could limit the spread of these collaborative groups. Therefore, we suggest that the federal government might incentivize regional third-party payers to join in pay-for-participation–oriented quality collaboratives. Alternatively, regional health care coalitions can convene and catalyze stakeholders to create collaborative quality improvement projects in the absence of a dominant third-party payer. Whatever the organization, it seems from our experience that the value of a quality collaborative will be far more than the sum of its individual parts.
Correspondence: Darrell A. Campbell Jr, MD, Department of Surgery, University of Michigan, 1500 E Medical Center Dr, Ann Arbor, MI 48109-0331 (firstname.lastname@example.org).
Accepted for Publication: August 18, 2009.
Author Contributions: Drs Campbell and Englesbe had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Campbell, Englesbe, Phillips, Velanovich, Lloyd, Hutton, and Share. Acquisition of data: Campbell, Englesbe, Kubus, Hutton, and Arneson. Analysis and interpretation of data: Campbell, Englesbe, Kubus, and Shanley. Drafting of the manuscript: Campbell, Englesbe, Kubus, Phillips, and Lloyd. Critical revision of the manuscript for important intellectual content: Campbell, Englesbe, Shanley, Velanovich, Hutton, Arneson, and Share. Statistical analysis: Englesbe and Kubus. Obtained funding: Campbell and Share. Administrative, technical, and material support: Campbell, Englesbe, Phillips, and Velanovich. Study supervision: Campbell, Shanley, Hutton, and Arneson.
Financial Disclosure: Dr Campbell is project director of a grant titled “The Michigan Surgical Quality Collaborative,” for which he receives 20% salary support from Blue Cross Blue Shield of Michigan/Blue Care Network. Dr Share is an employee of Blue Cross Blue Shield of Michigan.
Funding/Support: This study was supported by grants from Blue Cross Blue Shield of Michigan (Dr Campbell) and from the American Surgical Association Foundation (Dr Englesbe).