Figure 1 legend: Blue dots represent hospital performance on 30-day AMI mortality for baseline poor performers; orange dots, other hospitals; corresponding fitted lines show trends.
Figure 2 legend: Blue dots represent hospital performance on 30-day HF mortality for baseline poor performers; orange dots, other hospitals; corresponding fitted lines show trends.
eTable. Predictors of Improvement
eFigure 1. Baseline Poor Performers vs Baseline Top Performers (Quartiles), Mortality for AMI
eFigure 2. Baseline Poor Performers vs Baseline Top Performers (Quartiles), Mortality for HF
eFigure 3. Baseline Poor Performers vs Baseline Top Performers (Deciles), Mortality for AMI
eFigure 4. Baseline Poor Performers vs Baseline Top Performers (Deciles), Mortality for HF
eFigure 5. Dual Poor Performers, AMI Mortality
eFigure 6. Dual Poor Performers, HF Mortality
Chatterjee P, Joynt Maddox KE. US National Trends in Mortality From Acute Myocardial Infarction and Heart Failure: Policy Success or Failure? JAMA Cardiol. 2018;3(4):336–340. doi:10.1001/jamacardio.2018.0218
Question
How have mortality rates at baseline poor-performing hospitals for acute myocardial infarction and heart failure changed in response to policies of the past decade, including public reporting and value-based payment programs?
Findings
In this cross-sectional study, for acute myocardial infarction, 30-day mortality among baseline poor performers was higher at baseline but improved more over time compared with other hospitals (18.6% in 2009 to 14.6% in 2015 vs 15.7% to 14.0%). In contrast, for heart failure, baseline poor performers improved over time (13.5% to 13.0%), but mortality among all other heart failure hospitals increased (10.9% to 12.0%).
Meaning
Despite being subject to identical policy pressures, mortality trends for acute myocardial infarction and heart failure differed markedly between 2009 and 2015.
Importance
Hospitals in the United States have been subject to mandatory public reporting of mortality rates for acute myocardial infarction (AMI) and heart failure (HF) since 2007 and to value-based payment programs for these conditions since 2011. However, whether hospitals with initially poor baseline performance have improved relative to other hospitals under these programs, and whether patterns of improvement differ by condition, is unknown. Understanding trends within public reporting and value-based payment can inform future efforts in these areas.
Objective
To examine patterns in 30-day mortality from AMI and HF and determine whether they differ for baseline poor performers (worst quartile in 2009 and 2010 in public reporting, prior to value-based payment) compared with other hospitals.
Design, Setting, and Participants
Retrospective cross-sectional study of US acute care hospitals from 2009 to 2015, including 2751 hospitals with publicly reported mortality data for AMI and 3796 with such data for HF.
Exposures
Public reporting and value-based purchasing.
Main Outcomes and Measures
Hospital-level risk-adjusted 30-day mortality rates.
Results
We identified 422 and 600 baseline poor-performing hospitals for AMI and HF, respectively. Baseline poor performers for AMI were more often public and for-profit and less often teaching hospitals. Baseline poor performers for HF were less often large hospitals. For AMI, 30-day mortality among baseline poor performers was higher at baseline but improved more over time compared with other hospitals (18.6% in 2009 to 14.6% in 2015; −0.74% per year; P < .001 vs 15.7% in 2009 to 14.0% in 2015; −0.26% per year; P < .001; P for interaction <.001). In contrast, for HF, baseline poor performers improved over time (13.5% in 2009 to 13.0% in 2015; −0.12% per year; P < .001), but mean mortality among all other HF hospitals increased during the study period (10.9% to 12.0%; 0.17% per year; P < .001; P for interaction <.001).
Conclusions and Relevance
Despite being subject to identical policy pressures, mortality trends for AMI and HF differed markedly between 2009 and 2015.
Hospitals in the United States have been subject to a number of major policy initiatives focused on acute myocardial infarction (AMI) and heart failure (HF) during the past decade. Quality of care for these conditions has been publicly reported since 2004, and 30-day mortality rates have been reported since 2007.1 Penalties for 30-day readmissions have been levied under the Hospital Readmissions Reduction Program since 2012,2 and penalties or bonuses for performance on a broad set of metrics have been paid under the Hospital Value-Based Purchasing Program since 2012.3 The intent of these programs was to improve quality and outcomes, particularly for hospitals with poor performance at baseline.
However, there has been substantial controversy regarding whether public reporting and value-based payment programs have collectively had positive or negative effects on the most important outcome for these cardiovascular conditions, 30-day mortality.4-7 Further, it is unknown whether these efforts have been associated with improvement among baseline poor performers, who were subject to the initial negative public reports and penalties and thus had the strongest incentive to improve. Such information is critical as policy makers and clinical leaders identify strategies to drive continued improvement in cardiovascular care.
We obtained publicly available risk-adjusted 30-day mortality rates for AMI and HF for US hospitals from 2009 to 2015 using the Centers for Medicare and Medicaid Services’ Hospital Compare.1 We defined baseline poor-performing hospitals as those in the worst quartile of mortality in both 2009 and 2010 for AMI or HF, respectively. We compared hospital characteristics between baseline poor performers and other hospitals using the American Hospital Association’s Annual Survey and the Impact File. We created scatterplots with fitted trend lines of risk-adjusted mortality rates and calculated P values for trend using bivariate regression models. We then performed F tests to compare measures of variance during the study period and created logistic models to determine predictors for improvement using covariates of hospital size, profit status, region, rurality, membership in a system, presence of a medical or surgical intensive care unit (ICU), presence of a cardiac ICU, presence of a cardiac catheterization laboratory, and the proportion of Medicare and Medicaid patients split into quartiles. We included disproportionate share hospital index as a descriptive variable but did not include it as a covariate owing to collinearity with the proportion of Medicaid patients.
Finally, we performed several sensitivity analyses. We repeated analyses after defining baseline poor performers by the bottom decile of performance instead of bottom quartile. We also repeated analyses comparing the bottom quartile of hospitals with the top quartile of hospitals for both conditions. Finally, we examined trends for hospitals that performed poorly on both AMI and HF at baseline and compared dual poor performers with other hospitals.
All models were weighted by denominator values for the number of cases of AMI or HF, respectively, and standard errors were clustered at the hospital level. Given multiple comparisons, we considered 2-sided P values less than .01 statistically significant. This study was considered exempt by the University of Pennsylvania’s institutional review board. Patient consent was waived owing to the use of hospital-level data.
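The trend analyses described above can be sketched as a weighted least-squares slope, with each hospital-year weighted by its case count. The sketch below is illustrative only, not the authors' code: the yearly means are synthetic values loosely echoing the reported AMI figures, the weights are set equal for simplicity, and clustered standard errors and significance testing are omitted.

```python
# Illustrative sketch (not the authors' analysis): a case-weighted
# least-squares slope, analogous to the bivariate regression for trend.

def wls_slope(years, rates, weights):
    """Weighted least-squares slope of mortality rate (%) on calendar year."""
    wsum = sum(weights)
    xbar = sum(w * x for w, x in zip(weights, years)) / wsum
    ybar = sum(w * y for w, y in zip(weights, rates)) / wsum
    num = sum(w * (x - xbar) * (y - ybar)
              for w, x, y in zip(weights, years, rates))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(weights, years))
    return num / den

years = list(range(2009, 2016))                     # 2009-2015
poor = [18.6, 17.9, 17.2, 16.5, 15.9, 15.2, 14.6]   # hypothetical yearly means
other = [15.7, 15.4, 15.1, 14.8, 14.6, 14.3, 14.0]
cases = [100] * len(years)                          # equal case weights here

print(round(wls_slope(years, poor, cases), 2))   # about -0.67 %/year
print(round(wls_slope(years, other, cases), 2))  # about -0.28 %/year
```

The gap between the two slopes corresponds to the interaction term the authors test; in practice this would be fit as a single regression with a group-by-year interaction and hospital-clustered standard errors.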
We identified 2751 and 3796 hospitals with publicly reported mortality data for AMI and HF, respectively. Of these, 422 and 600 hospitals were identified as baseline poor performers (Table). Poor performers for AMI were more often public and for-profit and less often teaching hospitals. Poor performers for HF were less often large hospitals.
Mean AMI mortality among baseline poor performers was higher at baseline than that of all other hospitals, at 18.6%, but declined significantly to 14.6% in 2015 (−0.74% per year; P < .001; Figure 1A). Reductions in mortality among other hospitals were smaller during the same period but still significant (15.7% to 14.0%; −0.26% per year; P < .001; P for interaction <.001; Figure 1B), and consequently, the overall variance among the broad group of hospitals declined (SD, 1.69% to 1.25%; P < .001; Figure 1C).
Patterns differed for HF. Mortality among baseline poor performers decreased from 13.5% to 13.0% (−0.12% per year; P < .001; Figure 2A), but during the same period, mortality among all other hospitals increased from 10.9% to 12.0% (0.17% per year; P < .001; P for interaction <.001; Figure 2B). Overall variance across the sample decreased minimally (SD, 1.52% to 1.47%; P < .001; Figure 2C).
Of the characteristics listed in the Table, the presence of an ICU was associated with improvement among baseline poor performers for HF (odds ratio, 1.47; 95% CI, 1.08-2.00; P = .01; eTable in the Supplement), but no other measured characteristics were significantly associated with improvement over time.
These results were consistent across sensitivity analyses. When we limited the sample to top and bottom quartiles of performers, we found similar results (eFigures 1-2 in the Supplement). Defining baseline poor performers by deciles instead of quartiles also yielded similar findings (eFigures 3-4 in the Supplement). Finally, when we examined hospitals that performed poorly on both AMI and HF at baseline, we found the same trends, with dual poor performers demonstrating more improvement in both domains, although more so in AMI than HF (eFigures 5-6 in the Supplement).
Despite being subject to identical policy pressures, mortality trends for AMI and HF differed markedly between 2009 and 2015. Acute myocardial infarction mortality among both baseline poor performers and other hospitals fell significantly, whereas a small improvement among baseline poor performers in HF was offset by an increase in mortality in the remainder of hospitals.
It remains unclear why these trends should differ when both conditions were the focus of the same policies. One possibility is that care improvements spurred by these policies have had a differential effect on the 2 conditions. For example, the growth of hospital efforts to adopt defined clinical pathways may have driven gains in AMI mortality more than HF mortality. Heart failure outcomes may be less sensitive to such pathway-based care, particularly for patients with preserved ejection fraction, for whom mortality-reducing interventions remain elusive.8 Patients with HF also tend to be older and more medically complex, such that their mortality is less often cardiovascular in nature. As such, HF mortality may be more sensitive to the quality of longitudinal, multidisciplinary outpatient care than to changes in inpatient treatment.
Another possibility is that trends in 30-day mortality during the past decade have not been the result of policy interventions aimed at inpatient care at all but are rather reflective of differential quality improvement in the outpatient setting. Because more HF care can be delivered to outpatients, it is feasible that patients admitted with HF have grown progressively sicker over the past decade. Alternatively, an increasing number of AMI survivors may go on to have severe forms of ischemic cardiomyopathy, potentially contributing to rising HF mortality.9
Our study has limitations. First, during the study period, the Centers for Medicare and Medicaid Services made several changes to the methods used to calculate 30-day mortality. However, we would expect all hospitals to have been affected equally by such methodological changes, and our inclusion of all participating US hospitals lessens the likelihood of selection effects. Second, we are unable to assess the contribution of regression to the mean to these overall trends.
In conclusion, we found divergent national mortality trends for AMI and HF despite similar policy efforts directed at both conditions. There is much more to be learned about contributors to the patterns we have demonstrated, but such an understanding is critical to efforts to use policy to drive further improvements in outcomes and may warrant new focus from researchers and policy makers.
Corresponding Author: Paula Chatterjee, MD, MPH, University of Pennsylvania, 423 Guardian Dr, Room 1318, Philadelphia, PA 19104 (firstname.lastname@example.org).
Accepted for Publication: January 26, 2018.
Published Online: March 14, 2018. doi:10.1001/jamacardio.2018.0218
Author Contributions: Dr Chatterjee had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Both authors.
Acquisition, analysis, or interpretation of data: Both authors.
Drafting of the manuscript: Both authors.
Critical revision of the manuscript for important intellectual content: Both authors.
Statistical analysis: Chatterjee.
Supervision: Joynt Maddox.
Conflict of Interest Disclosures: Both authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Joynt Maddox does intermittent contract work for the Office of the Assistant Secretary for Planning and Evaluation of the US Department of Health and Human Services.
Funding/Support: Dr Joynt Maddox is supported by National Heart, Lung, and Blood Institute grant K23-HL109177-03.
Role of the Funder/Sponsor: The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: Wharton Research Data Services, a university-based platform that allows faculty centralized access to databases and software tools, was used in preparing this manuscript. No compensation was received for these contributions.