RSMR indicates risk-standardized mortality rate; RSRR, risk-standardized readmission rate. Blue lines are the cubic spline smooth regression lines with RSRR as the dependent variable and RSMR as the independent variable. Tinted areas around the cubic spline regression lines indicate 95% confidence bands. The Pearson correlation coefficient for acute myocardial infarction (n = 4506) is 0.03 (95% CI, −0.002 to 0.06); for heart failure (n = 4767), −0.17 (95% CI, −0.20 to −0.14); and for pneumonia (n = 4811), 0.002 (95% CI, −0.03 to 0.03).
Krumholz HM, Lin Z, Keenan PS, Chen J, Ross JS, Drye EE, Bernheim SM, Wang Y, Bradley EH, Han LF, Normand ST. Relationship Between Hospital Readmission and Mortality Rates for Patients Hospitalized With Acute Myocardial Infarction, Heart Failure, or Pneumonia. JAMA. 2013;309(6):587-593. doi:10.1001/jama.2013.333
Author Affiliations: Section of Cardiovascular Medicine (Drs Krumholz and Drye), Robert Wood Johnson Clinical Scholars Program (Drs Krumholz and Ross), and Section of General Internal Medicine (Drs Ross and Bernheim), Department of Internal Medicine, Yale University School of Medicine; Center for Outcomes Research and Evaluation, Yale-New Haven Hospital (Drs Krumholz, Lin, Ross, Drye, Bernheim, and Wang), and Section of Health Policy and Administration, Yale School of Public Health (Drs Krumholz and Bradley), New Haven, Connecticut; Department of Biostatistics, Harvard School of Public Health (Drs Wang and Normand) and Department of Health Care Policy, Harvard Medical School (Dr Normand), Boston, Massachusetts; and Centers for Medicare & Medicaid Services, Baltimore, Maryland (Dr Han). Drs Keenan and Chen were affiliated with Yale University (Section of Health Policy and Administration, School of Public Health and Section of Cardiovascular Medicine, Department of Medicine, respectively) during the time the work was conducted. Dr Keenan is now at the Centers for Medicare & Medicaid Services and Dr Chen is now at the Mid-Atlantic Permanente Research Institute, Rockville, Maryland.
Importance The Centers for Medicare & Medicaid Services publicly reports hospital 30-day, all-cause, risk-standardized mortality rates (RSMRs) and 30-day, all-cause, risk-standardized readmission rates (RSRRs) for acute myocardial infarction, heart failure, and pneumonia. The relationship between hospital performance on these 2 measures has not been well characterized.
Objective To determine the relationship between hospital RSMRs and RSRRs overall and within subgroups defined by hospital characteristics.
Design, Setting, and Participants We studied Medicare fee-for-service beneficiaries discharged with acute myocardial infarction, heart failure, or pneumonia between July 1, 2005, and June 30, 2008 (4506 hospitals for acute myocardial infarction, 4767 hospitals for heart failure, and 4811 hospitals for pneumonia). We quantified the correlation between hospital RSMRs and RSRRs using weighted linear correlation; evaluated correlations in groups defined by hospital characteristics; and determined the proportion of hospitals with better and worse performance on both measures.
Main Outcome Measures Hospital 30-day RSMRs and RSRRs.
Results Mean RSMRs and RSRRs, respectively, were 16.60% and 19.94% for acute myocardial infarction, 11.17% and 24.56% for heart failure, and 11.64% and 18.22% for pneumonia. The correlations between RSMRs and RSRRs were 0.03 (95% CI, −0.002 to 0.06) for acute myocardial infarction, −0.17 (95% CI, −0.20 to −0.14) for heart failure, and 0.002 (95% CI, −0.03 to 0.03) for pneumonia. The results were similar for subgroups defined by hospital characteristics. Although there was a significant negative linear relationship between RSMRs and RSRRs for heart failure, the shared variance between them was only 2.9% (r2 = 0.029), with the correlation most prominent for hospitals with RSMR <11%.
Conclusions and Relevance Risk-standardized mortality rates and readmission rates were not associated for patients admitted with an acute myocardial infarction or pneumonia and were only weakly associated, within a certain range, for patients admitted with heart failure.
Measuring and improving hospital quality of care, particularly outcomes of care, is an important focus for clinicians and policy makers. The Centers for Medicare & Medicaid Services (CMS) began publicly reporting hospital 30-day, all-cause, risk-standardized mortality rates (RSMRs) for patients with acute myocardial infarction (AMI) and heart failure (HF) in June 2007 and for pneumonia in 2008. In June 2009, the CMS expanded public reporting to include hospital 30-day, all-cause, risk-standardized readmission rates (RSRRs) for patients hospitalized with these 3 conditions.1-8 The National Quality Forum approved these measures and an independent committee of statisticians nominated by the Committee of Presidents of Statistical Societies endorsed the validity of the methods.9 The mortality and readmission measures have been proposed for use in federal programs to modify hospital payments based on performance.10,11
Some researchers have raised concerns that hospital mortality rates and readmission rates might have an inverse relationship, such that hospitals with lower mortality rates are more likely to have higher readmission rates.12,13 Interventions that improve mortality might also increase readmission rates by resulting in a higher-risk group being discharged from the hospital. Conversely, the 2 measures could provide redundant information. If these measures have a strong positive association, then inference could be made that they reflect similar processes and it may not be necessary to measure both. Limited information exists about this relationship, an understanding of which is critical to measurement of quality,12 and yet questions surrounding an inverse relationship have led to public concerns about the measures.14
In this study, we investigated the association between hospital-level 30-day RSMRs and RSRRs for Medicare fee-for-service beneficiaries admitted with AMI, HF, or pneumonia, which are the measures that are publicly reported. We further determined the relationships between these measures for subgroups of hospitals to evaluate if the relationships varied systematically. We also used highest and lowest performance quartiles to examine the percentage of hospitals that had similar performance on both measures for each condition. We hypothesized that these measures convey distinct information and are not strongly correlated and that many hospitals perform better on both measures and worse on both measures, indicating that performance on one measure does not dictate performance on the other.
The study cohorts included hospitalizations of Medicare beneficiaries aged 65 years or older with a principal discharge diagnosis of AMI, HF, or pneumonia as potential index hospitalizations. We used International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes to identify AMI, HF, and pneumonia discharges between July 1, 2005, and June 30, 2008.15 We used Medicare hospital inpatient, outpatient, and physician Standard Analytical Files to identify admissions, readmissions, and inpatient and outpatient diagnosis codes and assigned each hospitalization to a disease cohort based on the principal discharge diagnosis. We determined mortality and enrollment status from the Medicare Enrollment Database.
We defined the study samples consistent with CMS methods.2,4-8 We restricted the samples to patients enrolled in fee-for-service Medicare Parts A and B for 12 months before their index hospitalizations to maximize our ability to risk adjust. We excluded patients who left the hospital against medical advice and those who had a length of stay of more than 1 year. The mortality cohorts included 1 randomly selected admission per patient annually. In the mortality cohorts, patients transferred to another acute care hospital were excluded if their principal discharge diagnosis was not the same at both hospitals, as were admissions of individuals enrolled in hospice at admission or at any time in the previous 12 months.
To construct the cohort for the analyses of RSRRs, we included hospitalizations for patients who were discharged alive and who continued in fee-for-service plans for at least 30 days following discharge. Any readmission before a patient's death was counted. Multiple index hospitalizations per patient were included if another index hospitalization occurred at least 30 days after discharge from the prior index hospitalization. The readmission and mortality samples thus, by design, include different, partially overlapping subsets of Medicare patients.
We obtained institutional review board approval through the Yale University Human Investigation Committee.
We estimated hospital 30-day, all-cause RSMRs and RSRRs for Medicare patients hospitalized with AMI, HF, and pneumonia at all nonfederal acute care hospitals during 2005-2008 using methods endorsed by the National Quality Forum and used by the CMS in public reporting. We defined 30-day mortality as death due to any cause within 30 days of the date of admission and readmission as the occurrence of 1 or more hospitalizations in any acute care hospital in the United States that participated in fee-for-service Medicare for any cause within 30 days of discharge from an index hospitalization. In the mortality analysis, we linked transfers into a single episode of care with outcomes attributed to the first (transfer-out) hospital. In the readmission analysis, we attributed readmissions to the hospital that discharged the patient to a nonacute setting.
To examine whether the relationship between RSMR and RSRR was consistent among subgroups of hospitals, we stratified the sample by hospital region, safety-net status, and urban or rural status. We used annual survey data from the American Hospital Association to categorize public or private hospitals as safety-net hospitals if their Medicaid caseload was greater than 1 SD above the mean Medicaid caseload in their respective state, as done in previous analyses of access and quality in safety-net hospitals.16,17 We used hospital zip codes to classify hospitals as urban or rural.18,19
We used hierarchical logistic regression models to estimate RSMRs and RSRRs for each hospital. The RSMR models were estimated using a logit link, with the first level adjusted for age, sex, and 25 clinical covariates for AMI, 21 clinical covariates for HF, and 28 clinical covariates for pneumonia. In a similar procedure, the RSRR models were adjusted at the first level for age, sex, and 29 clinical covariates for AMI, 35 clinical covariates for HF, and 38 clinical covariates for pneumonia. We coded covariates from inpatient and outpatient claims during the 12 months before the index admission. The second level of the mortality and readmission models included hospital-level random intercepts to capture hospital-specific effects and to account for clustering of patients within the same hospital.20 With this approach, we separated within-hospital from between-hospital variation after adjusting for patient characteristics.
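The two-level structure described above can be illustrated with a small simulation. This is a hedged sketch, not the CMS production models: every parameter value (the intercept mean and SD, the single covariate effect, and the cohort sizes) is an invented assumption chosen only to show how hospital-level random intercepts sit above patient-level risk adjustment.

```python
import math
import random

# Illustrative two-level logistic model (NOT the CMS models): each hospital
# draws a random intercept from a shared normal distribution, and each
# patient's 30-day outcome depends on that intercept plus a patient-level
# risk adjuster. All numeric values below are assumptions.

random.seed(42)

N_HOSPITALS = 50
PATIENTS_PER_HOSPITAL = 100
MU, TAU = -2.0, 0.4   # assumed mean and SD of hospital intercepts (logit scale)
BETA_AGE = 0.03       # assumed effect of one patient-level covariate

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Level 2: hospital-specific random intercepts.
intercepts = [random.gauss(MU, TAU) for _ in range(N_HOSPITALS)]

# Level 1: patient outcomes within each hospital.
hospital_rates = []
for a_j in intercepts:
    deaths = 0
    for _ in range(PATIENTS_PER_HOSPITAL):
        age_centered = random.gauss(0, 10)          # age minus cohort mean
        p = sigmoid(a_j + BETA_AGE * age_centered)  # patient-level risk
        deaths += random.random() < p
    hospital_rates.append(deaths / PATIENTS_PER_HOSPITAL)

print(min(hospital_rates), max(hospital_rates))
```

In the real models, the variation of the fitted intercepts (after shrinkage) is what separates between-hospital from within-hospital variation; the simulation simply makes that separation of levels concrete.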
We calculated means and distributions of hospital RSMRs and RSRRs. We quantified the linear and nonlinear relationships between the 2 estimators. To do so, we determined the Pearson correlation between the estimated RSMRs and RSRRs, weighted by each hospital's average case volume across the RSMR and RSRR cohorts. The estimators were weighted because each has its own measure of uncertainty, even after shrinkage, which reflects the observed number of cases on which the estimate is based as well as how much within-hospital clustering exists. To identify potential nonlinear relationships between RSMRs and RSRRs for the 3 conditions, we also fitted generalized additive models using RSRR as the dependent variable and a cubic spline smoother of RSMR as the independent variable. We also stratified correlations by the hospital characteristics described.
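A volume-weighted Pearson correlation of the kind described above can be sketched as follows. The exact CMS weighting details are not given here, so this is an assumed, simplified version; the hospital rates and volumes are invented toy data.

```python
import math

# Sketch of a weighted Pearson correlation (weights standing in for hospital
# case volumes). Simplified illustration, not the CMS production computation.
def weighted_pearson(x, y, w):
    """Pearson correlation of x and y with observation weights w."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# Toy data: RSMR (%), RSRR (%), and average case volume for 5 hypothetical hospitals.
rsmr = [10.9, 11.2, 11.6, 12.0, 12.4]
rsrr = [25.1, 24.8, 24.5, 24.4, 24.0]
vol  = [120, 300, 450, 200, 80]
print(round(weighted_pearson(rsmr, rsrr, vol), 3))
```

Weighting means a high-volume hospital, whose rates are estimated more precisely, pulls the correlation more than a small hospital with noisy estimates.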
For each condition, we also classified all hospitals by their quartile placement on both RSMR and RSRR. We considered hospitals to be higher performers if they were in the lowest quartile for both RSMR and RSRR and lower performers if they were in the highest quartile for both.
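The quartile-based classification can be sketched as below. The hospital names and rates are invented, and the quartile cut-point method (`statistics.quantiles` with its default exclusive method) is an assumption; the paper does not specify one.

```python
# Hypothetical sketch of the quartile-based performance classification:
# lower rates are better, so "higher performers" sit in the lowest quartile
# on BOTH measures. All data below are invented.
from statistics import quantiles

hospitals = {
    "A": (10.9, 24.0), "B": (11.1, 25.2), "C": (11.3, 23.8),
    "D": (11.8, 24.9), "E": (12.2, 26.1), "F": (12.6, 25.8),
    "G": (10.7, 23.5), "H": (13.0, 26.4),
}  # name -> (RSMR %, RSRR %)

rsmrs = [v[0] for v in hospitals.values()]
rsrrs = [v[1] for v in hospitals.values()]
# quantiles(..., n=4) returns the three cut points Q1, Q2, Q3.
rsmr_q1, _, rsmr_q3 = quantiles(rsmrs, n=4)
rsrr_q1, _, rsrr_q3 = quantiles(rsrrs, n=4)

# Higher performers: lowest quartile on both measures.
higher = [h for h, (m, r) in hospitals.items() if m <= rsmr_q1 and r <= rsrr_q1]
# Lower performers: highest quartile on both measures.
lower = [h for h, (m, r) in hospitals.items() if m >= rsmr_q3 and r >= rsrr_q3]
print(higher, lower)
```

Note that a hospital in the best quartile on one measure but not the other falls into neither group, which is exactly why the counts in the Results (5%-9% per cell) are informative about how often performance travels together.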
We conducted correlation analyses and calculated means and performance categories using SAS software, version 9.1 (SAS Institute Inc). We used the mgcv package in R to fit generalized additive models.
For AMI, the sample for final analysis consisted of 4506 hospitals with 590 809 admissions for mortality and 586 027 readmissions; for HF, 4767 hospitals with 1 161 179 admissions for mortality and 1 430 030 readmissions; and for pneumonia, 4811 hospitals with 1 225 366 admissions for mortality and 1 297 031 readmissions (Table 1).
The median RSMR was 16.57% for AMI, 11.06% for HF, and 11.46% for pneumonia. The RSMRs ranged from 10.90% to 24.90% for AMI, from 6.60% to 19.85% for HF, and from 6.36% to 21.58% for pneumonia (Table 1). The widths of the interquartile ranges were 1.69% for AMI, 1.70% for HF, and 2.29% for pneumonia. The median RSRR was 19.87% for AMI, 24.42% for HF, and 18.09% for pneumonia. The RSRRs ranged from 15.26% to 29.40% for AMI, from 15.94% to 34.35% for HF, and from 13.05% to 27.57% for pneumonia. The widths of the interquartile ranges were 0.92% for AMI, 2.25% for HF, and 1.98% for pneumonia.
The Pearson correlation between RSMRs and RSRRs was 0.03 (95% CI, −0.002 to 0.06) for AMI, −0.17 (95% CI, −0.20 to −0.14) for HF, and 0.002 (95% CI, −0.03 to 0.03) for pneumonia (Figure and Table 2). The linear association was statistically significant only for HF. Results from generalized additive models were consistent with these findings, with no apparent relationship between RSMRs and RSRRs for AMI and pneumonia. Although we observed a significant negative linear relationship between RSMRs and RSRRs for HF, the shared variance between RSMRs and RSRRs was only 2.9% (r2 = 0.029). For HF, the relationship was most prominent in the lower range of the RSMR (ie, hospitals with an RSMR <11%).
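The shared-variance figure quoted above follows directly from the correlation coefficient, since the coefficient of determination is its square; a quick arithmetic check:

```python
# Shared variance (coefficient of determination) is the square of the
# Pearson correlation coefficient reported for heart failure.
r_hf = -0.17
shared_variance = r_hf ** 2      # 0.0289
print(f"{shared_variance:.1%}")  # -> 2.9%
```

So even the one statistically significant correlation leaves about 97% of the variance in RSRRs unexplained by RSMRs.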
In subgroup analyses, the correlations between RSMRs and RSRRs did not differ substantially across any of the hospital subgroups, including those defined by region, safety-net status, and urban or rural status (Table 2).
For AMI, 381 hospitals (8.5%) were in the top-performing quartile of both measures, with lower RSMRs and RSRRs; for HF, 259 hospitals (5.4%) were in the top-performing quartile; and for pneumonia, 307 hospitals (6.4%) were in the top-performing quartile for RSMRs and RSRRs. For AMI, 302 hospitals (6.7%) were in the bottom-performing quartile of both measures, with higher RSMRs and RSRRs; for HF, 252 hospitals (5.3%) were in the bottom-performing quartile; and for pneumonia, 344 hospitals (7.2%) were in the bottom-performing quartile for RSMRs and for RSRRs (Table 3).
In a national study of the CMS publicly reported outcomes measures, we failed to find evidence that a hospital's performance on the measure for 30-day RSMR is strongly associated with performance on 30-day RSRR. These findings should allay concerns that institutions with good performance on RSMRs will necessarily be identified as poor performers on their RSRRs. For AMI and pneumonia, there was no discernible relationship, and for HF, the relationship was only modest and not throughout the entire range of performance. At all levels of performance on the mortality measures, we found both high and low performers on the readmission measures.
This study represents the first comprehensive examination of the relationship between these measures within hospitals. A letter to the editor in a major medical journal identified a potential concern about the relationship between the 2 measures for patients with HF.12 Our analysis, which is consistent with that report, markedly extends its content and puts it in perspective with the other measures. The association between the mortality and readmission rates was present only for the HF measure and, even for that condition, was quite modest and confined to a limited range of the measure. Moreover, we show that hospitals can do well on both measures, with many hospitals having low RSMRs and RSRRs.
Studies that have produced findings that might be interpreted as suggesting that there should be an inverse relationship between the measures are not truly discordant with our results. For example, Heidenreich et al,21 in a study of hospitals within the Veterans Affairs health care system, reported that at the patient level, mortality after an admission for HF declined from 2002 to 2006 while readmission increased. They did not, however, examine changes in individual hospital performance, nor did they investigate the relationship between RSMRs and RSRRs at the hospital level or for other conditions.
As a rationale for our study, there are several plausible reasons to think that there might be a relationship between the measures. Hospitals with lower mortality rates may have been discharging patients who had a greater severity of illness, compared with hospitals with higher mortality rates, in ways that are not accounted for in the risk models. Also, hospitals with higher mortality rates might have had patients die before they could be readmitted, such that high mortality rates caused lower readmission rates. However, our empirical analysis failed to validate this concern. If higher mortality rates did lead to a healthier cohort of survivors and a lower risk of readmission, we would expect to have seen a strong inverse relationship between 30-day RSMR and RSRR across the 3 conditions, as the effect would not have been related to a single diagnosis.
Among the measures, HF alone had an association between mortality and readmission, but the shared variance was small and many hospitals performed well for both HF measures. Many others performed poorly on both measures. Moreover, the relationship was most pronounced among the hospitals at the lower range of the RSMR. The observation that any relationship was noted only for 1 condition suggests that this is not a robust finding that could be applicable across conditions.
The findings are consistent across types of hospitals. For AMI and pneumonia, there is no evidence of a relationship across categories of hospitals defined by teaching status, rural or urban location, or for-profit status. For HF, there is also general consistency, though the inverse relationship is stronger for teaching, for-profit, and urban hospitals.
These findings also suggest that mortality and readmission measures convey distinct information. This observation has face validity because the factors that may be important in mortality, including rapid triage as well as early intervention and coordination in the hospital, may not be those that dominantly affect readmission risk. For readmission, factors related to the stress of the hospitalization, transition from inpatient to outpatient care, patient education and support, the availability of outpatient support, and the admission thresholds might play a more important role. In addition, the periods for the 2 measures are different, which may contribute to their differences. Although both measures cover 30 days, the starting times of the outcome periods are different. The period for the mortality measure begins at admission and more than half of the outcomes occur before discharge. The period for the readmission measure begins at hospital discharge and all the events occur after the index hospitalization.
Our results are also consistent with research on predictors of mortality and readmission. Factors that are strong predictors of mortality tend to be weak predictors of readmission, if there is any relationship at all.2,4-8,22,23 Mortality risk models with medical record information or claims data have good discrimination and indicate that, in total, clinical factors have a dominant influence on mortality risk. For readmission, models using medical record information or administrative claims have much weaker predictive ability and discrimination, suggesting that readmission risk is not simply the inverse of mortality risk. Some of those unmeasured factors may relate to quality of care. We note that the discrimination and predictive measures characterize model performance at the patient level, whereas our findings are focused on the hospital level—the correlation of the estimated hospital-specific RSMRs and RSRRs. The patient-level risk of readmission is higher than that for mortality but the interquartile ranges are similar in size.
This study has several limitations. First, we assessed overall patterns and cannot exclude the possibility that in some hospitals, performance on one of the measures influences performance on the other. Second, there may be a concern that hierarchical modeling obscures a relationship because many hospitals have low volumes. Despite the use of hierarchical models, which reduce the risk of spurious results, the spread in rates among the hospitals was substantial, indicating that our findings are not the result of minimal variation in performance rates. Moreover, our findings were consistent in large as well as small hospitals as designated by bed size. In addition, we sought to evaluate the measures in use to address a policy-relevant question. Third, our study did not investigate the validity of the measures. The mortality and readmission measures in this study were designed to be measures of quality; the National Quality Forum, which has a rigorous and thorough vetting process with many levels of evaluation, approved them for that purpose; the CMS publicly reports them as quality measures; and the Affordable Care Act incorporates them into incentive programs as quality measures. Nevertheless, some critics may not consider the measures to reflect quality of care, and our study was designed to determine the relationship between the mortality and readmission measures, not to further evaluate their validity.
From a policy perspective, the independence of the measures is important. A strong inverse relationship might have implied that institutions would need to choose which measure to address. Our findings indicate that many institutions do well on mortality and readmission and that performance on one does not dictate performance on the other.
Corresponding Author: Harlan M. Krumholz, MD, SM, Department of Internal Medicine/Section of Cardiovascular Medicine, Yale University School of Medicine, 1 Church St, Ste 200, New Haven, CT 06510 (email@example.com).
Author Contributions: Dr Lin had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Krumholz.
Acquisition of data: Krumholz.
Analysis and interpretation of data: Krumholz, Lin, Keenan, Chen, Ross, Drye, Bernheim, Wang, Bradley, Han, Normand.
Drafting of the manuscript: Krumholz.
Critical revision of the manuscript for important intellectual content: Krumholz, Lin, Keenan, Chen, Ross, Drye, Bernheim, Wang, Bradley, Han, Normand.
Statistical analysis: Lin, Normand.
Obtained funding: Krumholz.
Administrative, technical, or material support: Krumholz.
Study supervision: Krumholz.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Drs Krumholz and Ross report that they are recipients of a research grant from Medtronic through Yale University. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Dr Ross is a member of a scientific advisory board for FAIR Health. Dr Normand reports being a member of the Board of Directors of Frontier Science & Technology Research Foundation. Drs Drye, Krumholz, Lin, Bernheim, Wang, and Normand report that they receive contract funding from CMS to develop and maintain quality measures. No other disclosures were reported.
Funding/Support: Dr Krumholz is supported by grant U01 HL105270-03 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Chen is supported by Career Development Award K08 HS018781-03 from the Agency for Healthcare Research and Quality. Dr Ross is supported by grant K08 AG032886-05 from the National Institute on Aging and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. The analyses on which this publication is based were performed under contract HHSM-500-2008-0025I/HHSM-500-T0001, Modification No. 000007, entitled “Measure Instrument Development and Support,” funded by CMS, an agency of the US Department of Health and Human Services.
Role of the Sponsors: The funding sponsors had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or preparation of the manuscript. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript.
Disclaimer: The content of this publication does not necessarily reflect the views or policies of the US Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.