High-quality health care is an important goal for all societies. Most people would agree that societies should reward health care providers or organizations that provide high-quality care, while those whose care could improve should be actively encouraged to do so. Rewarding excellence and encouraging improvement both require the ability to measure health care quality.
Two important attributes of a quality indicator are its ability to be measured accurately (reliability) and its ability to actually measure health care quality (validity). At first glance, early unplanned hospital readmissions would seem to meet both of these criteria. They can be measured accurately using routinely collected health administrative data, and most people would assume that an early unplanned readmission must reflect, in some way, a deficit of care during the index hospitalization. It is therefore not surprising that unplanned early hospital readmission is a commonly used health quality indicator.
However, a closer examination of early unplanned hospital readmissions reveals some deficiencies that undermine its utility as a health quality indicator. Many patient and hospitalization characteristics influence the likelihood of early unplanned readmission.1-5 Because these characteristics can vary extensively between health care providers, readmission rates must be adjusted for such characteristics in order to fairly compare early unplanned readmission rates between providers. This adjustment is done using multivariable models with data from health administrative databases.
Unfortunately, health administrative databases frequently lack the information necessary for complete adjustment. They accurately capture some important factors (eg, patient age and index hospitalization length of stay). Acute diagnoses are captured using administrative codes (with varying degrees of accuracy), but there is no measure of disease severity. Patient physiological reserve is usually represented by comorbidities, again captured using administrative codes that can be inaccurate.6 Factors one would expect to be strongly associated with readmission risk, such as patient functional status or social supports at home, are absent from these administrative database models. It is not surprising, therefore, that these models typically have only moderate ability to predict readmission risk, with C statistics topping out at around 0.7.7 This means that if one randomly chose one patient with and one patient without an early urgent readmission, the predicted risk of readmission would be higher in the readmitted patient only 70% of the time.
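This pairwise interpretation of the C statistic can be illustrated with a short simulation (a minimal sketch in Python; the cohort size and risk distribution are invented for illustration and are not drawn from the cited studies):

```python
import random

random.seed(42)

# Hypothetical cohort: each patient gets a predicted readmission risk,
# and the actual outcome is drawn with that probability (toy data only).
n = 5000
patients = []
for _ in range(n):
    risk = random.betavariate(2, 8)          # skewed toward low risk
    readmitted = random.random() < risk
    patients.append((risk, readmitted))

cases = [r for r, y in patients if y]        # readmitted patients
controls = [r for r, y in patients if not y] # non-readmitted patients

# C statistic: the probability that a randomly chosen readmitted patient
# has a higher predicted risk than a randomly chosen non-readmitted one
# (ties count as half a concordant pair).
concordant = ties = 0
for rc in cases:
    for rn in controls:
        if rc > rn:
            concordant += 1
        elif rc == rn:
            ties += 1
c_stat = (concordant + 0.5 * ties) / (len(cases) * len(controls))
print(f"C statistic: {c_stat:.2f}")
```

Because the outcomes here are generated directly from the predicted risks, the computed value reflects the model's discrimination: 0.5 would mean the model ranks pairs no better than chance, and 1.0 would mean perfect separation of readmitted from non-readmitted patients.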
Imperfect predictive models can have unexpected effects when they are used for health performance measurement. If the distribution of important factors missing from the model is independent of the model's predicted risk ("nondifferential misclassification"), differences in adjusted rates remain valid. Barnett and colleagues8 illustrate the impact that "differential misclassification" can have when comparing interhospital readmission rates adjusted using a relatively weak model. They identified 8067 admissions in 3470 Medicare-eligible people participating in the community-based Health and Retirement Study who were hospitalized around the time of the survey. They categorized the 1896 hospitals into quintiles based on publicly reported hospital-wide readmission rates. Of 22 factors found to be significantly and independently associated with unplanned readmission risk, 17 were distributed significantly differently between hospitals in the lowest vs the highest readmission quintiles; for 16 of these, the higher-risk values were concentrated in the hospitals with higher readmission rates. This differential misclassification meant that hospitals with higher adjusted readmission rates were treating patients who were sicker, poorer, and less educated. After adjusting for these additional patient factors associated with readmission risk, the difference between lowest- and highest-quintile hospitals decreased from a statistically significant 4.41% to a statistically nonsignificant 2.29%. These data show how incomplete adjustment and differential misclassification can lead to unreliable comparisons.
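The mechanism at work here can be sketched with a toy simulation: two hypothetical hospitals deliver identical care, but an unmeasured high-severity factor is more common at one of them, so any comparison that cannot adjust for severity makes that hospital look worse. All rates and proportions below are invented for illustration and are not taken from the cited study.

```python
import random

random.seed(1)

BASE_RISK = 0.10        # readmission risk for low-severity patients
SEVERITY_EXTRA = 0.15   # added risk for high-severity patients
N = 20000               # patients per hospital

def simulate(p_severe):
    """Simulate one hospital; care quality is IDENTICAL in both,
    only the unmeasured severity mix differs."""
    counts = {True: [0, 0], False: [0, 0]}   # severity -> [n, readmits]
    for _ in range(N):
        severe = random.random() < p_severe
        risk = BASE_RISK + (SEVERITY_EXTRA if severe else 0.0)
        counts[severe][0] += 1
        counts[severe][1] += random.random() < risk
    return counts

a = simulate(0.20)   # hospital A: 20% high-severity patients
b = simulate(0.60)   # hospital B: 60% high-severity patients

def crude(c):
    n = c[True][0] + c[False][0]
    y = c[True][1] + c[False][1]
    return y / n

print(f"crude rates: A={crude(a):.3f}  B={crude(b):.3f}")  # B looks worse
for sev in (False, True):
    ra = a[sev][1] / a[sev][0]
    rb = b[sev][1] / b[sev][0]
    print(f"high severity={sev}: A={ra:.3f}  B={rb:.3f}")  # nearly equal
```

The crude rates differ substantially, yet within each severity stratum the two hospitals perform almost identically: a model that omits the severity factor would wrongly penalize hospital B, which is the essence of differential misclassification.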
The second limitation of using unplanned readmissions as a health indicator concerns their capacity to actually measure quality of care. If every event were due to poor-quality care, unplanned early readmissions would be excellent health quality indicators. However, the median proportion of early unplanned readmissions classified as potentially avoidable is only around 25%.9 This means that most readmissions are due to events beyond a clinician's control, such as the natural history of the patient's disease or social factors (eg, housing, social supports, lack of access to medications). Given this, all-cause unplanned early readmissions (what we can actually measure) are surrogate markers for avoidable hospital readmissions (the quality indicator we actually want to measure). The loose association between all-cause unplanned early readmissions and avoidable readmissions indicates that much of the variation in measured readmission rates is statistical "noise" arising from random events and environmental factors. This likely explains why hospital readmission rankings can vary extensively with small changes to calculation methodologies.10
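The surrogate-marker problem can also be sketched numerically. In the toy simulation below, hospitals differ only in their avoidable readmission rate, unavoidable readmissions are identical everywhere, and all rates and sample sizes are invented; even under these generous assumptions, ranking hospitals by their observed all-cause rate agrees only imperfectly with ranking them by the avoidable rate we actually care about.

```python
import random

random.seed(7)

N_HOSPITALS = 50
N_PATIENTS = 1000
UNAVOIDABLE = 0.12   # same unavoidable readmission rate everywhere

true_quality = []    # avoidable readmission rate (lower = better care)
measured = []        # observed all-cause readmission rate
for _ in range(N_HOSPITALS):
    avoidable = random.uniform(0.01, 0.07)   # roughly 25% of the total
    readmits = sum(random.random() < UNAVOIDABLE + avoidable
                   for _ in range(N_PATIENTS))
    true_quality.append(avoidable)
    measured.append(readmits / N_PATIENTS)

# For a random pair of hospitals, how often does ranking by the observed
# all-cause rate agree with ranking by the true avoidable rate?
pairs = concordant = 0
for i in range(N_HOSPITALS):
    for j in range(i + 1, N_HOSPITALS):
        pairs += 1
        concordant += (true_quality[i] - true_quality[j]) \
            * (measured[i] - measured[j]) > 0
print(f"pairwise ranking agreement: {concordant / pairs:.2f}")
```

Because unavoidable events and sampling noise dominate the observed rate, a nontrivial fraction of hospital pairs end up ranked in the wrong order, consistent with the instability of published readmission rankings noted above.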
These issues highlight the challenge of finding accurate and measurable health care quality indicators. In particular, we must be able to account for potential confounders when these indicators are measured. This can be gauged by assessing a model's discrimination and calibration, with highly accurate models providing more reliable provider comparisons. The analysis by Barnett et al8 illustrates that differential misclassification of important covariates can threaten the results of health provider comparisons made using weaker statistical models. In addition, we must have some evidence that the measured outcome truly reflects health care quality. Problems with either of these criteria should increase our skepticism of the health care quality indicator in question and of any analyses based on it.
Corresponding Author: Carl van Walraven, MD, MSc, Department of Medicine and Epidemiology & Community Medicine, University of Ottawa, Senior Scientist, Ottawa Hospital Research Institute, ASB1-003 1053 Carling Ave, Ottawa, ON K1Y 4E9, Canada (firstname.lastname@example.org).
Published Online: September 14, 2015. doi:10.1001/jamainternmed.2015.4727.
Conflict of Interest Disclosures: None reported.
van Walraven C. The Utility of Unplanned Early Hospital Readmissions as a Health Care Quality Indicator. JAMA Intern Med. 2015;175(11):1812–1814. doi:10.1001/jamainternmed.2015.4727