Figure 2 legend: The graphs show the trends in the adjusted mortality rates for all patients (A), patients with blunt or penetrating trauma (B), low-risk patients (predicted probability of death <5%) (C), high-risk patients (predicted probability of death ≥5%) (D), and hospitals stratified by performance strata, adjusting for patient risk factors (E). Nonpublic reporting was initiated in 2008. ICD-9 indicates International Classification of Diseases, Ninth Revision; MARC, empirical injury severities.
eAppendix. Sample Report Card
eTable 1. Summary Statistics by Year
eTable 2. Hospital Characteristics
Glance LG, Osler TM, Mukamel DB, Meredith JW, Dick AW. Effectiveness of Nonpublic Report Cards for Reducing Trauma Mortality. JAMA Surg. 2014;149(2):137–143. doi:10.1001/jamasurg.2013.3977
Importance
An Institute of Medicine report on patient safety that cited medical errors as the 8th leading cause of death fueled demand to use quality measurement as a catalyst for improving health care quality.
Objective
To determine whether providing hospitals with benchmarking information on their risk-adjusted trauma mortality outcomes would decrease mortality in trauma patients.
Design, Setting, and Participants
Hospitals were provided confidential reports of their trauma risk–adjusted mortality rates using data from the National Trauma Data Bank. Regression discontinuity modeling was used to examine the impact of nonpublic reporting on in-hospital mortality in a cohort of 326 206 trauma patients admitted to 44 hospitals, controlling for injury severity, patient case mix, hospital effects, and preexisting time trends.
Main Outcomes and Measures
In-hospital mortality rates.
Results
Performance benchmarking was not significantly associated with lower in-hospital mortality (adjusted odds ratio [AOR], 0.89; 95% CI, 0.68-1.16; P = .39). Similar results were obtained in secondary analyses after stratifying patients by mechanism of trauma: blunt trauma (AOR, 0.91; 95% CI, 0.69-1.20; P = .51) and penetrating trauma (AOR, 0.75; 95% CI, 0.44-1.28; P = .29). We also did not find a significant association between nonpublic reporting and in-hospital mortality in either low-risk (AOR, 0.84; 95% CI, 0.57-1.25; P = .40) or high-risk (AOR, 0.88; 95% CI, 0.67-1.17; P = .38) patients.
Conclusions and Relevance
Nonpublic reporting of hospital risk-adjusted mortality rates does not lead to improved trauma mortality outcomes. The findings of this study may prove useful to the American College of Surgeons as it moves ahead to further develop and expand its national trauma benchmarking program.
The release of the influential Institute of Medicine report on patient safety,1 citing medical errors as the 8th leading cause of death, has fueled the demand to use quality measurement as a catalyst for improving health care quality. Efforts by the Veterans Administration (VA),2 New York state,3 and hospitals in Northern New England4 showed that nonpublic reporting was associated with significant reductions in mortality and morbidity in patients undergoing cardiac and noncardiac surgery. More recently, the American College of Surgeons5 and the Society of Thoracic Surgeons6 have spearheaded national efforts to improve patient outcomes using performance benchmarking. The Centers for Medicare and Medicaid Services publicly reports mortality rates for patients hospitalized with acute myocardial infarctions, heart failure, and pneumonia,7 and it is expanding these efforts to include many other areas of health care.8 The need to control runaway health care spending has further magnified the need for quality measurement to ensure that health care quality is not sacrificed to save health care dollars.
Twenty-five years ago, the American College of Surgeons (ACS) created the National Trauma Data Bank (NTDB) as a “foundation for evidence-based practice, performance improvement, and research.”9 At its inception, this registry was not used to provide participating hospitals with information on their risk-adjusted outcomes. There is now mounting evidence that patient outcomes following traumatic injury are determined not only by the dose of trauma but also by the hospital where the patient is treated.10-12 There is remarkable variability in trauma mortality outcomes across hospitals, with up to 4-fold differences in risk-adjusted mortality rates between the best- and worst-performing hospitals.10,13 This quality gap presents an opportunity to improve trauma outcomes by using performance feedback to bridge the divide between lower-performance and higher-performance hospitals.
With funding from the Agency for Healthcare Research and Quality, and in collaboration with the ACS, we conducted a prospective trial to test whether nonpublic reporting leads to lower trauma mortality. Participating hospitals were provided with detailed reports of their risk-adjusted mortality outcomes. We designed this study to examine the feasibility and impact of using the data infrastructure in the NTDB to improve trauma outcomes using nonpublic report cards. In our analysis examining the impact of nonpublic reporting, we controlled for temporal trends and hospital effects, in addition to controlling for patient case mix and injury severity. The objective of this article was to report the findings of this trial.
The University of Rochester School of Medicine institutional review board approved this study; the need for written informed patient consent was waived. This study was designed to determine whether nonpublic reporting leads to a reduction in in-hospital mortality in injured patients using data from the NTDB. The NTDB was created by the ACS to serve as a national repository for trauma center data.14 The NTDB includes the following data elements: patient demographics, hospital demographics, International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic and injury codes, encrypted hospital identifiers, physiology values, and in-hospital mortality.
National Trauma Data Bank coding practices dictate how missing, invalid, and inconsistent data are handled once the data have been transmitted to the NTDB. Data reports submitted by individual hospitals are checked by the NTDB using software edit tools.15 Internal consistency is assessed by comparing the values for related data elements; for example, the intensive care unit length of stay must be less than the total hospital stay. Each hospital submitting data to the NTDB is given a screening report and has the opportunity to resubmit its data after correcting any errors.16
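The internal-consistency check described above can be sketched as a simple validation pass. This is an illustrative sketch only; the field names (`icu_los`, `hospital_los`) are hypothetical and are not the actual NTDB data-element names.

```python
def check_consistency(record):
    """Flag records whose related data elements are mutually inconsistent.

    Field names are hypothetical, not actual NTDB data-element names.
    """
    errors = []
    # Rule from the text: ICU length of stay must be less than the
    # total hospital stay.
    if record["icu_los"] >= record["hospital_los"]:
        errors.append("icu_los must be less than hospital_los")
    return errors

# A 3-day ICU stay inside a 10-day admission passes; a 12-day ICU stay
# inside a 10-day admission is flagged for resubmission.
print(check_consistency({"icu_los": 3, "hospital_los": 10}))   # []
print(check_consistency({"icu_los": 12, "hospital_los": 10}))  # ['icu_los must be less than hospital_los']
```

A real screening report would apply many such rules and return the full list of violations to the submitting hospital.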
At the inception of this trial, we identified 2 separate hospital cohorts depending on whether hospitals were primarily using either ICD-9-CM codes or Abbreviated Injury Scale (AIS) codes to code patient injuries.10 We developed and validated 2 versions of the Trauma Mortality Prediction Model (TMPM)—one based on AIS injury codes17 and the other based on ICD-9-CM codes18—whose statistical performance was superior to existing standard injury models, Injury Severity Score (ISS) and ICISS (an International Classification of Diseases, Ninth Revision–based injury severity score). In 2008, the ACS National Trauma Data Standard was revised to mandate the use of ICD-9-CM codes to characterize injury severity and made AIS injury codes optional.19 In 2008, we sent hospitals report cards based on either TMPM–ICD-9 or TMPM-AIS (using 2006 data), depending on whether they coded injury data using either ICD-9-CM or AIS codes. Starting in 2009 and then in 2010, participating hospitals received annual report cards based on TMPM–ICD-9 because of the mandated change in coding practices. Because TMPM–ICD-9 and TMPM-AIS are based on 2 completely different sets of injury codes—ICD-9-CM and AIS codes—we limited our analysis to those hospitals that coded injuries using ICD-9-CM codes and received report cards based on TMPM–ICD-9 in 2008-2010 to avoid confounding the intervention effect (nonpublic reporting starting in 2008) with the changeover in injury coding (starting in 2007). Each hospital was provided with benchmarking information on their risk-adjusted in-hospital mortality for their entire patient cohort, as well as separate reports stratified by mechanism of trauma (blunt, gunshot wound, motor vehicle crash, pedestrian, and high risk).10 A sample report card is shown in the eAppendix (Supplement).
After excluding patients with burns, unspecified injuries, nontraumatic injuries, or missing mechanisms of injury, as well as patients who were dead on admission or transferred out to another hospital, our study sample included 330 700 patients in 44 hospitals (Figure 1). We excluded patients with missing demographic data, invalid ICD-9-CM codes, missing empirical injury severities (MARC values), and missing ICD-9-CM codes. The final analytic sample consisted of 326 206 patients admitted to 44 hospitals (Figure 1).
The aim of this analysis was to determine whether the introduction of nonpublic reporting was associated with lower in-hospital mortality after controlling for patient and hospital factors, as well as controlling for possible temporal trends toward lower mortality rates during the study. This analysis was performed using regression discontinuity modeling,20,21 which is an econometric technique that identifies the effect of an intervention (introduction of nonpublic reporting) in a prestudy and poststudy by estimating the intercept shift, controlling for patient and hospital factors and for preexisting temporal trends.
To perform regression discontinuity analysis, we estimated a patient-level logistic regression model to examine the association between in-hospital mortality and the initiation of nonpublic reporting. A dummy variable was used to indicate whether a patient was admitted before or after nonpublic reporting was initiated. We controlled for secular trends by including year of admission as a categorical variable, omitting data from 2008 (when report cards were initiated). We also included an interaction term between report card and year to examine whether the slope of the time trend changed after initiation of reporting. We controlled for patient risk factors using an enhanced version of TMPM–ICD-9: patient age, sex, injury severity of the 5 most severe injuries, transfer status, mechanism of injury (ie, blunt, gunshot wound, motor vehicle crash, stab injury, pedestrian, and low fall), motor component of the Glasgow Coma Scale, and systolic blood pressure.18 We also controlled for hospital-fixed effects by including a separate indicator variable for each hospital. By including hospital-fixed effects, we were able to identify whether nonpublic reporting led to a mortality reduction within hospitals by allowing each hospital to act as its own control. Missing values of the motor component of the Glasgow Coma Scale and systolic blood pressure were imputed using the Stata implementation of the multiple imputation by chained equations method described by van Buuren et al.22 Fractional polynomial analysis was used to determine the optimal specification of age.23
Several sensitivity analyses were performed to examine the robustness of the analysis of the impact of nonpublic reporting. We performed stratified analyses in which we limited the patient cohort to patients with either blunt or penetrating trauma and to either low-risk (predicted probability of death <5%) or high-risk (predicted probability of death ≥5%) patients. We also performed stratified analyses in which we included only low-performance, average-performance, or high-performance hospitals. Hospital performance was estimated using 2006 data with hierarchical logistic regression, based on TMPM–ICD-9, in which hospitals were specified as a random effect. The empirical Bayes estimate of the hospital effect was exponentiated to yield an adjusted odds ratio (AOR).24 Hospitals whose AORs were significantly lower than 1 were classified as high-performance outliers, whereas hospitals with AORs significantly greater than 1 were classified as low-performance outliers.
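The outlier classification described above can be sketched as follows, assuming the empirical Bayes hospital effects (on the log-odds scale) and their standard errors have already been estimated from the hierarchical model; all numeric values shown are hypothetical:

```python
import numpy as np

def classify_hospitals(effects, ses, z=1.96):
    """Classify hospitals from empirical Bayes log-odds effects.

    AOR = exp(effect). A hospital whose 95% CI for the AOR lies
    entirely below 1 is a high-performance outlier, entirely above 1
    a low-performance outlier, and otherwise average.
    """
    labels = []
    for eff, se in zip(effects, ses):
        lo, hi = np.exp(eff - z * se), np.exp(eff + z * se)
        if hi < 1:
            labels.append("high-performance")
        elif lo > 1:
            labels.append("low-performance")
        else:
            labels.append("average")
    return labels

# Hypothetical hospital effects (log-odds scale) and standard errors:
print(classify_hospitals([-0.5, 0.0, 0.6], [0.1, 0.2, 0.1]))
# ['high-performance', 'average', 'low-performance']
```

This mirrors the significance-based classification in the text: only hospitals whose confidence interval excludes an AOR of 1 are labeled outliers.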
Data management and statistical analyses were performed using Stata SE/MP version 11.0 (StataCorp). Robust variance estimators were used because patient observations were clustered by hospital.25 All statistical tests were 2-tailed and P values less than .05 were considered significant.
Patient characteristics before (2006-2007) and after (2008-2010) initiation of nonpublic reporting are shown in eTable 1 in the Supplement. Overall, the median age was 40 years, most patients were male (66%), 24% were transferred in from other hospitals, and most patients sustained injuries from either blunt trauma (42%) or motor vehicle crashes (23%). The observed mortality rate was 4.19%. No clinically significant changes in patient case mix were detected over the 5-year study. Hospital characteristics are shown in eTable 2 in the Supplement. Most hospitals were either level I (43%) or level II (43%) trauma centers, nearly half were university hospitals, and nearly all were nonprofit. Most hospitals had between 200 and 400 beds (39%) or greater than 400 beds (54%). All geographic regions in the United States were well represented: Northeast (16%), South (39%), Midwest (16%), and West (27%).
After controlling for patient characteristics, hospital factors, and preexisting time trends, nonpublic reporting did not have a significant impact on in-hospital mortality (AOR, 0.89; 95% CI, 0.68-1.16; P = .39) (Table 1 and Figure 2A). When we stratified patients by mechanism of trauma, the findings were similar: blunt trauma (AOR, 0.91; 95% CI, 0.69-1.20; P = .51) and penetrating trauma (AOR, 0.75; 95% CI, 0.44-1.28; P = .29) (Table 1 and Figure 2B). We also did not find a significant association between nonpublic reporting and in-hospital mortality in either low-risk (AOR, 0.84; 95% CI, 0.57-1.25; P = .40) or high-risk (AOR, 0.88; 95% CI, 0.67-1.17; P = .38) patients (Table 2 and Figure 2C and D).
We conducted a final sensitivity analysis in which we stratified our sample according to whether patients were treated at low-, average-, or high-performance hospitals. As in all the other analyses, we found that nonpublic reporting was not associated with improved outcomes in patients treated in low- (AOR, 1.17; 95% CI, 0.65-1.23; P = .61), average- (AOR, 0.89; 95% CI, 0.65-1.23; P = .49), or high-performance (AOR, 0.74; 95% CI, 0.34-1.57; P = .43) hospitals (Table 3 and Figure 2E).
We did not find significant evidence that providing hospitals with nonpublic reports of their risk-adjusted trauma mortality rates was associated with improvement in trauma mortality, even after controlling for patient case mix and preexisting time trends. The findings were similar when we limited our analysis to patients with blunt trauma, patients with penetrating trauma, and patients at either low or high risk for death. We also did not find evidence that nonpublic reporting had a differential effect depending on whether a hospital was a low-, average-, or high-performance hospital.
There are historical precedents to suggest that nonpublic reporting could be expected to lead to improved outcomes. The VA National Surgical Quality Improvement Program (NSQIP) was established by congressional mandate in response to concerns about the quality of surgical care in VA hospitals.26 The VA prospectively collected clinical data on patient risk and outcomes and provided VA hospitals with confidential risk-adjusted comparative outcomes information. Between 1994 and 2004, unadjusted surgical mortality rates decreased by 37% and complication rates by 42%.26 Under the leadership of the ACS, the NSQIP was expanded to include hospitals outside of the VA and currently includes more than 400 non-VA hospital sites.27 Hall and colleagues5 reported that adjusted mortality rates for a constant patient population hypothetically undergoing surgery in ACS NSQIP hospitals in 2005, 2006, and 2007 improved by 26% over a 2-year period and that complications decreased by 13% over the same period.
More recent evidence suggests that hospital benchmarking may be less effective than was originally believed. Medicare’s Hospital Compare, the largest and most ambitious public reporting initiative to date, has publicly reported risk-adjusted mortality for all Medicare patients with acute myocardial infarction, heart failure, and pneumonia since 2005. Controlling for preexisting time trends in mortality, there is no evidence that this public reporting initiative resulted in a mortality reduction for acute myocardial infarction and pneumonia, and only a modest 3% relative risk reduction for heart failure.7
The question arises as to why we found no association between the SMARTT (Survival Measurement and Reporting Trial for Trauma) reporting initiative and mortality. It is possible that performance benchmarking is not as effective as once believed. Although the early results from the VA NSQIP were very impressive, the effect of the ACS NSQIP on mortality outcomes was less dramatic, and those analyses did not account for the possibility that the benefit observed with nonpublic reporting was owing to preexisting temporal trends rather than to performance feedback. Furthermore, the negative findings of the more methodologically rigorous and larger Hospital Compare study in nonsurgical patients stand in sharp contrast to the earlier NSQIP studies.
It is also possible that benchmarking alone is not sufficient and that reporting initiatives need to be tied to financial incentives. However, evaluations of the Premier and Centers for Medicare and Medicaid Services Hospital Quality Incentive Demonstration program did not find that the program was associated with decreases in the mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia.28,29 Finally, it is also possible that providing hospitals with benchmarking information, with or without financial incentives, is not enough to improve outcomes without also providing hospitals with information on how to improve patient outcomes. In the VA, hospitals have the option of inviting the NSQIP to conduct consultative structured site visits, and best practices from high-performance hospitals are disseminated to all hospitals in the VA and ACS NSQIP.5,30 However, the evidence that improved adherence to best practices is associated with better outcomes is relatively modest in both surgical and nonsurgical patients.31,32
Our study has several potential limitations. First, we did not know to what extent participating hospitals actually used the benchmarking information to guide their quality improvement efforts. Second, our hospital study cohort was limited to a subset of hospitals in the NTDB with a low incidence of missing data and to hospitals that used ICD-9-CM codes throughout the study period; therefore, it is not necessarily representative of all hospitals caring for trauma patients. Although we found no evidence of important and sustained effects, a larger hospital sample would have provided greater statistical precision. Third, we used ICD-9-CM diagnostic codes as the basis for injury coding, as opposed to clinical AIS injury codes, because of the change in the national standard for trauma coding that occurred during the study. However, the ICD-9–based trauma mortality prediction model used to produce the quality reports in our analyses has been previously validated and shown to have excellent statistical properties.18 Fourth, because the NTDB does not have reliable comorbidity data, we did not include comorbidities in our benchmarking reports, nor did we include them in our analyses. However, we have previously shown, using all-payer administrative data, that omitting comorbidity data from risk adjustment does not have a substantial impact on hospital quality measurement in trauma.13 Finally, our analysis was limited to in-hospital mortality and did not examine complications, functional outcomes, readmissions, or long-term outcomes because of limitations of the NTDB.
Our findings have potentially important implications for the ACS as it continues to expand the ACS Trauma Quality Improvement Program (TQIP). This national trauma benchmarking program grew out of a pilot study of 23 centers in 2008 to include more than 160 trauma centers.33,34 The TQIP is modeled after the ACS NSQIP and assumes that TQIP benchmarking reports will serve as a catalyst for performance improvement and lead to better patient outcomes. Our findings point to the limitations of performance feedback as a mechanism for improving trauma outcomes. However, current efforts to shift financial risk to hospital and physician groups under the Affordable Care Act35 may provide very strong incentives for hospitals to decrease costly complications and improve patient outcomes. As we move toward a future health care model where hospitals and physicians are increasingly held accountable for patient and financial outcomes, detailed benchmarking reports from the ACS TQIP may provide critical information that trauma centers can use to identify and more effectively target quality problems.
In summary, our study suggests that nonpublic reporting of hospital risk-adjusted mortality rates does not lead to improved trauma mortality outcomes. It is possible that adding other interventions to performance feedback, such as structured site visits, efforts to identify and disseminate best practices, and meaningful financial incentives, will lead to improved trauma outcomes. The findings of this study may prove useful to the ACS leadership as it moves ahead to further develop and expand its national trauma benchmarking program. It may be possible for the ACS TQIP, using a much larger hospital sample, to reexamine the value of nonpublic reporting, possibly in combination with other quality-improvement strategies.
Corresponding Author: Laurent G. Glance, MD, Department of Anesthesiology, University of Rochester Medical Center, 601 Elmwood Ave, PO Box 604, Rochester, NY 14642 (email@example.com).
Accepted for Publication: April 19, 2013.
Published Online: December 11, 2013. doi:10.1001/jamasurg.2013.3977.
Author Contributions: Dr Glance had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition of data: Glance.
Analysis and interpretation of data: Glance, Osler, Mukamel, Dick.
Drafting of the manuscript: Glance.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Glance, Osler, Mukamel, Dick.
Obtained funding: Glance, Mukamel.
Administrative, technical, or material support: Glance, Meredith.
Study supervision: Glance.
Conflict of Interest Disclosures: None reported.
Funding/Support: This project was supported by grant R01 HS 16737 from the Agency for Healthcare Research and Quality.
Role of the Sponsor: The Agency for Healthcare Research and Quality had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Disclaimer: The views presented in this manuscript are those of the authors and may not reflect those of the Agency for Healthcare Research and Quality. These data were obtained from the American College of Surgeons National Trauma Data Bank, which is not responsible for any analyses, interpretations, or conclusions.