Mean risk-standardized mortality rates were 18.8% (SD, 2.1%; range, 10.4%-27.5%) in 1995 and 15.8% (SD, 1.7%; range, 10.6%-21.6%) in 2006. The size of each bin reflects the number of hospitals that fell within a particular interval of risk-standardized mortality rate, and the histograms show the distributions (ranges) of rates in 1995 and 2006. Because the number of bins in each year is the same (n = 35), the 1995 bins are wider than the 2006 bins, reflecting the wider distribution of risk-standardized mortality rates in 1995.
Krumholz HM, Wang Y, Chen J, Drye EE, Spertus JA, Ross JS, Curtis JP, Nallamothu BK, Lichtman JH, Havranek EP, Masoudi FA, Radford MJ, Han LF, Rapp MT, Straube BM, Normand ST. Reduction in Acute Myocardial Infarction Mortality in the United States: Risk-Standardized Mortality Rates From 1995-2006. JAMA. 2009;302(7):767-773. doi:10.1001/jama.2009.1178
Author Affiliations: Section of Cardiovascular Medicine (Drs Krumholz, Wang, Chen, Drye, and Curtis), Robert Wood Johnson Clinical Scholars Program (Dr Krumholz), Section of Health Policy and Administration, School of Public Health (Dr Krumholz), and Section of Chronic Disease Epidemiology, School of Public Health (Dr Lichtman), Yale University School of Medicine, New Haven, Connecticut; Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, New Haven (Drs Krumholz and Wang); University of Missouri at Kansas City School of Medicine and Mid America Heart Institute, Kansas City (Dr Spertus); Department of Geriatrics and Adult Development, Mount Sinai School of Medicine, New York, New York (Dr Ross); Health Services Research Enhancement Award Program and Geriatrics Research, Education, and Clinical Center, James J. Peters VA Medical Center, Bronx, New York (Dr Ross); Health Services Research and Development Center of Excellence, Ann Arbor VA Medical Center, and Division of Cardiovascular Disease, Department of Internal Medicine, University of Michigan Medical School, Ann Arbor (Dr Nallamothu); Denver Health Medical Center and the University of Colorado at Denver and Health Sciences Center, Denver (Drs Havranek and Masoudi); New York University School of Medicine, New York, New York (Dr Radford); Centers for Medicare & Medicaid Services, Baltimore, Maryland (Drs Han, Rapp, and Straube); Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts (Dr Normand); and Department of Biostatistics, Harvard School of Public Health, Boston (Dr Normand).
Context During the last 2 decades, health care professional, consumer, and payer organizations have sought to improve outcomes for patients hospitalized with acute myocardial infarction (AMI). However, little has been reported about improvements in hospital short-term mortality rates or reductions in between-hospital variation in short-term mortality rates.
Objective To estimate hospital-level 30-day risk-standardized mortality rates (RSMRs) for patients discharged with AMI.
Design, Setting, and Patients Observational study using administrative data and a validated risk model to evaluate 3 195 672 discharges in 2 755 370 patients discharged from nonfederal acute care hospitals in the United States between January 1, 1995, and December 31, 2006. Patients were 65 years or older (mean, 78 years) and had at least a 12-month history of fee-for-service enrollment prior to the index hospitalization. Patients discharged alive within 1 day of admission, and not against medical advice, were excluded because it is unlikely that these patients had sustained an AMI.
Main Outcome Measure Hospital-specific 30-day all-cause RSMR.
Results At the patient level, the odds of dying within 30 days of admission if treated at a hospital 1 SD above the national average relative to that if treated at a hospital 1 SD below the national average were 1.63 (95% CI, 1.60-1.65) in 1995 and 1.56 (95% CI, 1.53-1.60) in 2006. In terms of hospital-specific RSMRs, a decrease from 18.8% in 1995 to 15.8% in 2006 was observed (odds ratio, 0.76; 95% CI, 0.75-0.77). A reduction in between-hospital heterogeneity in the RSMRs was also observed: the coefficient of variation decreased from 11.2% in 1995 to 10.8% in 2006, the interquartile range from 2.8% to 2.1%, and the between-hospital variance from 4.4% to 2.9%.
Conclusion Between 1995 and 2006, the risk-standardized hospital mortality rate for Medicare patients discharged with AMI showed a significant decrease, as did between-hospital variation.
Over the last 2 decades, health care professional, consumer, and payer organizations have sought to improve outcomes for patients hospitalized with acute myocardial infarction (AMI). In 1990, the American College of Cardiology and the American Heart Association jointly published the first in a series of clinical practice guidelines for AMI that include a significant emphasis on evidence-based hospital care.1 The focus on quality was augmented by initiatives supported by other public and private groups that targeted aspects of care including time to reperfusion for ST-segment elevation myocardial infarction and the use of evidence-based therapies during hospitalization, discharge, and postdischarge care.2- 6 During this period of marked emphasis on improving the quality of hospital care for patients with AMI, the use of evidence-based medications increased7 and overall coronary heart disease deaths decreased.8
In 1992, the Health Care Financing Administration, now the Centers for Medicare & Medicaid Services (CMS), initiated an ongoing national effort, aided by quality improvement organizations, to measure and improve hospital care for patients with AMI.9 The intent of the initiative was “to move from dealing with individual clinical errors to helping providers to improve the mainstream of care.” CMS sought to focus attention on “persistent differences between the observed and the achievable in both care and outcomes and less on occasional, unusual deficiencies of care.” Thus, the goal of these efforts was to improve the performance of all hospitals, not just those that performed poorly.
For this period, however, little is known about whether hospitals were achieving better short-term mortality rates for AMI or whether there was a reduction in between-hospital variation in short-term mortality rates. Prior studies of AMI have primarily focused on registry populations or community cohorts, with particular attention to patient-level analyses.6,10,11 Whether hospital results across the United States, as measured by short-term mortality rates, have changed favorably over time, and whether improvements in outcomes have occurred across the spectrum of institutions, remain unknown.
To investigate these issues, we examined hospital-level 30-day risk-standardized mortality rates (RSMRs) after hospitalization for AMI in all nonfederal acute care hospitals in the United States between 1995 and 2006, using a validated model that standardizes for differences in patient risk.2 To determine how the experience with AMI compared with the experience with other conditions, we also compared trends for AMI with results for non-AMI hospitalizations.
The study sample was drawn from CMS Medicare Provider Analysis and Review (MEDPAR) files. The MEDPAR files have beneficiary-specific information on each hospitalization for fee-for-service Medicare enrollees and include demographics, principal and secondary diagnosis codes, and procedure codes. Patients discharged with a principal diagnosis of AMI (International Classification of Diseases, Ninth Revision, Clinical Modification codes 410.xx [except 410.x2]) from acute care hospitals from January 1, 1995, to December 31, 2006, were included in the initial sample. We restricted the sample to include patients 65 years or older who had at least a 12-month history of fee-for-service enrollment prior to the index hospitalization; we excluded patients discharged alive within 1 day of admission, and not against medical advice, because it is unlikely that these patients had sustained an AMI. Patients transferred from one acute care hospital to another were required to have had a principal discharge diagnosis of AMI at both hospitals (more than 79% of all transferred patients or 97% of all hospitalizations met this requirement, based on 2000 data). We then linked hospitalizations into an episode of care and attributed the outcome to the “index” admitting hospital.2 To compare with other conditions, we created a cohort of non-AMI hospitalizations in MEDPAR that included patients 65 years or older who were discharged from acute care hospitals and who did not have a principal diagnosis of AMI.
The primary patient end point was 30-day all-cause mortality, defined as death from any cause 30 days following the index admission date. A standardized period of follow-up, rather than in-hospital mortality, was used to ensure that secular trends in length of stay would not influence the assessment. The primary hospital end point was 30-day all-cause RSMR. Mortality information was obtained from the Medicare Enrollment files by linking unique patient identifiers. Secondary outcomes included in-hospital mortality (defined as death during the index hospitalization), length of stay (defined as the difference between the discharge date and admission date plus 1 day), and discharge disposition.
We created independent variables from those contained in the validated CMS AMI hospital-specific 30-day all-cause mortality measure.2 A total of 27 variables are derived from inpatient and outpatient administrative claims, including 2 demographic variables (age as a continuous variable; male sex), 2 AMI location variables, 8 cardiovascular history variables, and 15 other variables that identify additional coexisting illnesses. The coexisting illness variables are identified from claims submitted in the year before the index hospitalization and from the index admission for those conditions that could not represent a complication of the admission.
We conducted bivariate analyses to describe the changes in patient characteristics and discharge status across 2-year periods using the Mantel-Haenszel χ2 test for categorical variables and the Cuzick nonparametric test for continuous variables.12 We used 2-year periods for descriptive purposes only.
Using the previously described validated method,2 we estimated a risk model relating the log-odds of mortality from any cause 30 days from admission to patient risk factors for the AMI cohort. The model provides information to compute standardized hospital-specific estimates as well as quantitative summaries of between-hospital variation after adjusting for case mix. Using the regression coefficients from the risk models, we calculated an RSMR for each hospital that treated at least 1 patient for AMI during each year. To correct for within-hospital clustering of patients and for varying hospital volumes, we used an adjusted number of observed deaths, rather than the raw observed number, in the numerator of the ratio. We also used this model to calculate an RSMR for all other conditions as a contrast with the AMI result.
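The ratio-based construction described above can be sketched in a few lines. This is only an illustration of the general form of such a standardized rate (the actual CMS measure estimates its components from a hierarchical logistic model); the function name and all inputs are hypothetical.

```python
# Illustrative sketch of a risk-standardized mortality rate (RSMR) of the
# general form described above: a hospital's adjusted number of observed
# deaths, divided by the deaths expected for its case mix, scaled by the
# national crude 30-day mortality rate. Inputs are hypothetical, not study data.
def rsmr(adjusted_observed_deaths: float, expected_deaths: float,
         national_rate: float) -> float:
    return adjusted_observed_deaths / expected_deaths * national_rate

# A hospital whose adjusted deaths equal expectation receives the national rate:
print(rsmr(50.0, 50.0, 0.188))   # → 0.188
# One with 20% more deaths than expected is standardized upward:
print(rsmr(60.0, 50.0, 0.188))
```

The scaling by the national rate keeps each hospital's result on the familiar mortality-rate scale while the ratio itself carries the case-mix adjustment.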
We used between-hospital variation in the RSMRs to quantify changes in hospital quality of care for AMI. First, we computed the annual between-hospital variation in 30-day mortality risk, after accounting for patient risk factors, expressed as an odds ratio. This odds ratio reflected the odds of dying if treated at a hospital 1 SD above the national average relative to the odds of dying if treated at a hospital 1 SD below the national average. Second, we estimated the standard deviation and interquartile range of the RSMRs for each year. Third, because the mean mortality rates change annually, we calculated annual coefficients of variation of the RSMRs. Decreases in each of these quantities over time imply less heterogeneity in hospital mortality rates.
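As a concrete illustration of these three summaries, the sketch below computes them on simulated hospital RSMRs. The simulated distribution (loosely matching the 1995 figures) and the random-effect SD `tau` are assumptions chosen for illustration, not values taken from the study.

```python
import math
import random
import statistics

# Simulated hospital RSMRs (%), loosely matching the reported 1995 distribution
# (mean 18.8, SD 2.1); purely illustrative, not the study data.
random.seed(0)
rsmrs = [random.gauss(18.8, 2.1) for _ in range(5000)]

# (1) Odds of dying at a hospital 1 SD above vs. 1 SD below the national
# average: the two log-odds differ by 2*tau, where tau is the SD of the
# hospital random effects on the log-odds scale, so the odds ratio is
# exp(2*tau). The value of tau here is assumed.
tau = 0.245
or_1sd = math.exp(2 * tau)            # approximately 1.63

# (2) SD and interquartile range of the RSMRs.
sd = statistics.stdev(rsmrs)
q1, _, q3 = statistics.quantiles(rsmrs, n=4)
iqr = q3 - q1

# (3) Coefficient of variation: SD as a percentage of the mean, allowing
# comparison across years whose mean rates differ.
cv = sd / statistics.mean(rsmrs) * 100
```

Shrinking values of `or_1sd`, `sd`, `iqr`, and `cv` from one year to the next would all indicate that hospital mortality rates are converging.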
We classified hospitals into Census Bureau regions and divisions based on their locations, then calculated the division-specific aggregated weighted RSMRs for 1995 and 2006. We identified 3054 hospitals in operation from 1995 to 2006 that had at least 1 patient with AMI each year during that period and compared the change in RSMR for these hospitals.
All statistical testing was 2-sided, at a significance level of .05. We conducted all analyses using SAS version 9.1.3 (SAS Institute Inc, Cary, North Carolina) and Stata 9.0 (StataCorp, College Station, Texas) and estimated the hierarchical models using the GLIMMIX procedure in SAS. The Yale University institutional review board determined that the study did not require approval or waiver.
The final study sample included 3 195 672 discharges in 2 755 370 patients with a mean age of 78.1 (SD, 7.7) years; 51.0% were women, 89.3% white, 6.7% black, and 4.0% other race.
Table 1 summarizes how patient characteristics changed over time. The number of hospitalizations each year ranged from 282 354 in 1995 to 223 172 in 2006. The mean age increased from 77.0 (SD, 7.2) in 1995 to 79.1 (SD, 8.0) in 2006, while the proportion of women changed only slightly. The prevalence of coexisting illnesses including hypertension, diabetes, renal disease, and chronic obstructive pulmonary disease increased over time, as did the likelihood of patients having a history of AMI, percutaneous coronary intervention, and coronary artery bypass graft surgery.
There were marked changes in discharge disposition and length of stay for patients with AMI. From 1995-1996 to 2005-2006, there was a relative 87.1% increase (from 9.3% to 17.4%) in the likelihood of patients being discharged to skilled nursing or intermediate care facilities. In contrast, the percentage of patients discharged to home decreased by a relative 8.1% (from 45.7% to 42.0%) (Table 1). The mean length of stay decreased from 7.9 (SD, 6.3) days in 1995 to 7.0 (SD, 6.0) days in 2006 (Table 2).
The study sample included more than 4000 hospitals in every year, with the number decreasing from 5010 in 1995 to 4357 in 2006; the median volume of AMI discharges decreased from 33 to 22 over that period (Table 2). The median licensed bed size of the hospitals ranged from 108 in 1995 to 110 in 2006. The percentage of teaching hospitals ranged from 17.6% in 1995 to 19.3% in 2006. The percentage of hospitals that had bypass surgery facilities ranged from 17.4% in 1995 to 22.2% in 2006.
The observed 30-day all-cause and in-hospital mortality rates decreased over the study period (Table 2). The 30-day mortality rate decreased from 18.9% in 1995 to 16.1% in 2006, and in-hospital mortality decreased from 14.6% to 10.1%. In contrast, the 30-day mortality rate for all other conditions was 9.0% in 1995 and 8.6% in 2006.
The odds of dying if treated at a hospital 1 SD above the national average relative to the odds of dying if treated at a hospital 1 SD below the national average were 1.63 (95% CI, 1.60-1.65) in 1995 and 1.56 (95% CI, 1.53-1.60) in 2006. The RSMR decreased from 18.8% in 1995 to 15.8% in 2006 (odds ratio, 0.76; 95% confidence interval [CI], 0.75-0.77). The Figure shows the distribution of RSMRs among the hospitals in 1995 compared with 2006. The distributions of RSMRs and ranges in each year are shown in Table 2. The change in 30-day RSMRs for all other conditions was from 9.2% in 1995 to 8.8% in 2006.
A reduction in between-hospital heterogeneity in mortality was also observed: the coefficient of variation decreased from 11.2% in 1995 to 10.8% in 2006, the interquartile range from 2.8% to 2.1%, and the variance from 4.4% to 2.9%.
Nationally, we found the West South Central region to have the largest absolute reduction in RSMR (3.6% [from 19.8% in 1995 to 16.2% in 2006]) and the Pacific region to have the smallest (1.9% [from 17.8% in 1995 to 15.9% in 2006]) (Table 3).
Among the 3054 hospitals in operation throughout the study period, the change in RSMR was similar to that observed for all hospitals, with a reduction from 19.0% to 16.1%.
Our study reveals a marked reduction in hospital-level 30-day RSMRs in the United States from 1995 through 2006. In this period, the average hospital-specific 30-day RSMR decreased by approximately 3 percentage points, a nearly one-sixth relative reduction in short-term mortality. Moreover, we observed a reduction in crude mortality that occurred during a period in which the AMI population increased in age and comorbid conditions. Among Medicare beneficiaries, for every 33 patients admitted in 2006 compared with 1995, there was 1 additional patient alive at 30 days. Moreover, the variability of hospitals' 30-day mortality rates was reduced, while 30-day mortality for non-AMI admissions did not change substantially.
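The "1 in 33" figure follows directly from the absolute decline in the mean RSMR; a quick arithmetic check:

```python
# The 3.0-percentage-point absolute reduction in the mean RSMR implies
# roughly 1/0.030, or about 33, admissions per additional 30-day survivor.
rsmr_1995 = 0.188
rsmr_2006 = 0.158
absolute_reduction = rsmr_1995 - rsmr_2006        # 0.030
patients_per_extra_survivor = 1 / absolute_reduction
print(round(patients_per_extra_survivor))         # → 33
```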
The reduction in hospital-level heterogeneity in 30-day mortality is consistent with the hypothesis that quality improvement efforts contributed to this reduction. In the mid 1990s there were 39 hospitals at the high end of the mortality distribution, with RSMRs exceeding 24% (99th percentile) in the poorest-performing hospitals. By 2006 there was no hospital in this group, and the worst 1% of hospitals had a 30-day RSMR of 19.5%. The change resulted from a shift in the entire spectrum of performance among hospitals and a decrease in the variation in performance.
In 1997, Braunwald13 described 2 distinct eras of innovation that reduced mortality for patients with AMI. The first era, beginning in the 1960s, was initiated by the introduction of cardiac care units and defibrillation. The second era, starting in the 1980s, began with the publication of studies revealing that interventional and pharmacological strategies could markedly reduce the risk of AMI.
In 1992, Jencks and Wilensky9 heralded a new era of quality improvement within the Health Care Financing Administration, in which efforts to improve care for Medicare beneficiaries shifted from a focus on individual errors to strong support for improving mainstream care by monitoring performance and instituting systems to elevate practice, with particular emphasis on initiatives targeting high-impact conditions such as AMI. This most recent era also shows a marked improvement in outcomes and a decrease in variation across the spectrum of institutions. Mortality rates following AMI have decreased concurrently with major efforts to improve the quality of AMI care. Such improvement efforts have identified gaps in treatment and are aimed at better realizing the benefits of medical advances by delivering the appropriate treatments to eligible patients.
Our findings are consistent with some other, more limited reports of trends in outcomes of patients with AMI, although their focus is on patient outcomes and none includes a hospital-level analysis. Masoudi et al,14 based on data from 4 states, reported that compared with results in 1992-1993, patients in 2000-2001 had a 13% lower adjusted risk of 1-year mortality. Rosamond et al,15 using data from the Atherosclerosis Risk in Communities study, reported that from 1987 through 1994, the 4 communities in the study experienced an annual reduction in mortality of approximately 5%. Investigators studying Worcester, Massachusetts, reported a decrease in in-hospital mortality from 1975-1995.16 Industry-sponsored registries also have noted reductions in mortality rates, although they reflect only the sites that enrolled and the cases those sites provided6,10,11 and as such are not necessarily representative of practices at all institutions. However, to our knowledge, our report is the first to provide a national perspective on hospital performance over this recent period.
A limitation of this study is that it cannot prove what caused the observed changes. Through statistical adjustment, we can reduce the possibility that changes in the patient population accounted for the change. The shift in performance and the narrowing of variability are consistent with a role for quality, but other factors may have contributed. Moreover, many of the quality improvement efforts focused on process measures rather than on overall short-term outcomes, though many such improvements would be expected to affect 30-day mortality. For example, in 1992-1993, only about 60% of Medicare beneficiaries with AMI received aspirin within 2 days of admission,17 whereas the rate now exceeds 90%. In addition, the period was associated with notable technological advances, including the introduction of new medications and a marked increase in the use of procedures.
The changing definition of AMI also could have influenced the pattern of hospital mortality. Troponin testing came into widespread use at the end of the 1990s. Most studies suggest that the introduction of troponin assays has increased the number of cases qualifying for this diagnosis and that the added cases have a relatively higher risk of adverse outcomes. An analysis of a Medicare cohort indicated that older persons with AMI diagnosed using only troponin assay have a similar or slightly higher risk for short-term mortality compared with those diagnosed using creatine kinase assay.18 That study is consistent with our finding of increased mortality rates around the time that troponin tests were introduced, followed by gradual decreases. An Atherosclerosis Risk in Communities (ARIC) study, based on medical chart review of AMI cases between 1987 and 2002, found that severity decreased over time, but the severity measures in that study, such as shock during the admission, could have been influenced by quality of care.19 Roger et al20 found that the new AMI criteria, when applied in chart review to patients presenting with an elevated troponin level, would increase the number of AMIs and reduce risk, but interestingly, physicians considered only half of the patients who met only the troponin criterion to have had an AMI, perhaps resulting in higher risk among those coded as having an AMI. The application of clinical judgment to the diagnosis may account for the absence of an increase in hospitalization rates after the new definition was introduced.
During the study period, the enrollment in Medicare managed care varied within hospitals and regions. The effect of this variation is difficult to gauge. Our focus on patients admitted with AMI, controlling for age and comorbidity, likely mitigates bias introduced by variation in managed care populations and differences in the baseline health of these populations.
Another limitation is the use of administrative claims data. As such, there is no information about medications or process measures. However, the Medicare database is the only truly national source of information that can be used to address trends in mortality with a standardized period of follow-up. A recent administrative claims-based model produced estimates of hospital-level RSMRs in close approximation to estimates that would have been produced with a medical record–based model,2 and the predictive values of the variables in the models have remained relatively constant over time. Also, the codes for AMI have a high sensitivity and specificity for the identification of patients admitted with this condition.21 The use of the Medicare administrative data also allowed a focus on 30-day outcomes rather than in-hospital events, which would have overestimated the improvement in hospital performance over time.
A change in coding practices for the concomitant conditions also could have affected the results. However, the observed mortality demonstrated a reduction over time, suggesting that the RSMR finding was not simply a result of risk adjustment. In addition, chart review has revealed increases in age and comorbidity over time in AMI cohorts, suggesting that this trend does not merely reflect changes in coding.14 Lastly, the association of the comorbid conditions with 30-day mortality has not changed over the period, suggesting that the same types of conditions are being coded.2
Finally, this study is limited to Medicare patients and cannot represent trends in younger populations of patients with AMI. Unfortunately, we lack national data on younger patients that would include 30 days of follow-up. Nevertheless, Medicare patients represent more than half of all patients with AMI.
Between 1995 and 2006, the RSMR for patients admitted with AMI showed a marked and significant decrease, as did between-hospital variation. Although the cause of the reduction cannot be determined with certainty, this finding may reflect the success of the many individuals and organizations dedicated to improving care during this period.
Corresponding Author: Harlan M. Krumholz, MD, SM, Section of Cardiovascular Medicine, Yale University School of Medicine, 1 Church St, Ste 200, New Haven, CT 06510 (email@example.com).
Author Contributions: Drs Wang and Normand had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Krumholz, Drye, Straube, Normand.
Acquisition of data: Krumholz, Han, Straube.
Analysis and interpretation of data: Krumholz, Wang, Chen, Drye, Spertus, Ross, Curtis, Nallamothu, Lichtman, Havranek, Masoudi, Radford, Rapp, Normand.
Drafting of the manuscript: Krumholz, Normand.
Critical revision of the manuscript for important intellectual content: Krumholz, Wang, Chen, Drye, Spertus, Ross, Curtis, Nallamothu, Lichtman, Havranek, Masoudi, Radford, Han, Rapp, Straube, Normand.
Statistical analysis: Wang, Normand.
Obtained funding: Krumholz, Drye, Han, Normand.
Administrative, technical, or material support: Rapp, Straube.
Study supervision: Krumholz, Han, Straube.
Financial Disclosures: Drs Krumholz, Wang, Chen, Drye, Curtis, and Normand reported that they developed and maintain risk-standardized mortality rates for acute myocardial infarction under contract with the Colorado Foundation for Medical Care. Dr Krumholz reported that he chairs a scientific advisory board for UnitedHealthcare. Dr Masoudi reported that he has contracts with the American College of Cardiology, the Colorado Foundation for Medical Care, and the Oklahoma Foundation for Medical Quality, and that he has served on an advisory board for UnitedHealthcare.
Funding/Support: The analyses on which this article is based were performed under contract HHSM-500-2005-CO001C, “Utilization and Quality Control Quality Improvement Organization for the State (commonwealth) of Colorado,” funded by the Centers for Medicare & Medicaid Services (CMS), an agency of the US Department of Health and Human Services.
Role of the Sponsor: The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript but had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation of the manuscript.
Disclaimer: The content of this article does not necessarily reflect the views or policies of the US Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government.