Invited Commentary
Statistics and Research Methods
July 17, 2019

Exact Science and the Art of Approximating Quality in Hospital Performance Metrics

Author Affiliations
  • 1Department of Internal Medicine, University of Iowa, Iowa City
JAMA Netw Open. 2019;2(7):e197321. doi:10.1001/jamanetworkopen.2019.7321

All exact science is dominated by the idea of approximation.

Bertrand Russell

Measurement of hospital performance is a central feature of US health care policy, designed to promote quality improvement through public reporting and financial penalties. The Hospital Compare feature serves as the US Centers for Medicare & Medicaid Services (CMS) platform for disseminating performance metrics as part of the CMS Hospital Inpatient Quality Reporting program.1 CMS began publicly reporting 30-day risk-standardized mortality rates (RSMRs) for acute myocardial infarction and heart failure for the nation’s short-term acute care and critical access hospitals in 2007, adding reports for pneumonia, chronic obstructive pulmonary disease, ischemic stroke, and coronary artery bypass graft surgery during the next 8 years.2 Similar measures are used for the CMS Hospital Value-Based Purchasing program, which imposes financial penalties on underperforming hospitals.3 Likewise, risk-standardized readmission rates measure hospital-level rates of excess readmissions for several medical conditions.4 The Hospital Readmissions Reduction Program, another CMS value-based purchasing program, reduces payments to hospitals with excess readmissions.

To account for differences in patient mix among hospitals, risk models are used to statistically adjust for patient factors that are clinically relevant and related to patient outcomes (eg, age, sex, comorbid diseases). Current risk-adjustment models used by CMS incorporate diagnoses and demographic characteristics from inpatient, outpatient, and physician Medicare claims incurred during the 12 months prior to and including the index admission. The risk-adjustment model is evaluated annually to incorporate advances in statistical methods, changes in clinical practice, or updates in medical coding. During the past decade, refinements to the estimation of RSMRs include the use of additional checks for data reliability, modification of patient inclusion and exclusion criteria (eg, excluding patients receiving palliative care), and refinements to patient risk assessment. Similar statistical strategies are used to calculate hospital risk-standardized readmission rates.
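To make the standardization concrete: CMS-style risk-standardized rates take the form of a ratio of a hospital's predicted events (incorporating a hospital-specific effect) to its expected events (what the national model alone would predict for those same patients), multiplied by the national observed rate. The sketch below illustrates only this arithmetic; the actual CMS measures derive the per-patient probabilities from hierarchical logistic regression models fit to Medicare claims, and the numbers here are invented for illustration.

```python
def rsmr(predicted_probs, expected_probs, national_rate):
    """Risk-standardized mortality rate for one hospital.

    predicted_probs: per-patient death probabilities including the
        hospital-specific effect (numerator of the standardized ratio).
    expected_probs: per-patient probabilities from the national model
        alone, i.e. what an average hospital would be expected to see
        for the same patients (denominator).
    national_rate: the national observed mortality rate used to place
        the ratio back on the rate scale.
    """
    predicted = sum(predicted_probs)
    expected = sum(expected_probs)
    return (predicted / expected) * national_rate

# A hospital whose patients fare worse than the national model expects
# (illustrative probabilities only):
pred = [0.20, 0.15, 0.30, 0.25]   # with hospital effect
expc = [0.15, 0.12, 0.25, 0.20]   # national model only
print(round(rsmr(pred, expc, national_rate=0.12), 4))  # → 0.15
```

Because the ratio here exceeds 1 (0.90 predicted vs 0.72 expected deaths), the hospital's standardized rate (15%) lands above the national rate (12%), which is how better- and worse-than-expected performance is expressed on a common scale.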

Krumholz et al5 demonstrated a 3-pronged approach to further improve risk-adjustment models for RSMR estimation, including: (1) using the present on admission indicator mandated for Medicare claims since 2014, (2) distinguishing diagnoses incurred during the 12 months prior to admission from those recorded as present at the time of the admission, and (3) using individual International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes for risk adjustment rather than codes aggregated to clinical condition categories. The first modification addresses the threat of misclassifying a complication as a preexisting condition (ie, only comorbidities that represent the patient’s health status prior to or at the time of admission and not complications that arise during the course of the hospitalization should be included in risk adjustment), the second modification reflects the recency with which patients have sought treatment for specific diagnoses, and the third provides greater detail about specific patient diagnoses than previous models. These modifications add to the influential body of literature generated by investigators at the Yale–New Haven Health Services Corporation/Center for Outcomes Research & Evaluation, who have overseen the calculation of hospital RSMRs for the Hospital Inpatient Quality Reporting program since its inception.

Overall, Krumholz et al5 found that each modification incrementally improved model fit, measured as the C statistic, while the use of all 3 modifications provided the best discrimination and calibration. The use of all 3 modifications also provided greater variation in hospital RSMRs and identified more hospitals as underperforming or overperforming than current methods.5 This is critically important because the lack of variation in RSMRs has been noted as a limitation of current risk-adjustment models.6 Importantly, the modifications Krumholz et al5 suggested do not impose any additional cost on the estimation of patient risk beyond current efforts and are relatively easy to implement.
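The C statistic used to compare model fit above has a simple interpretation: it is the probability that, for a randomly chosen pair of patients with different outcomes, the model assigns the higher predicted risk to the patient who died. A minimal sketch of that pairwise-concordance definition (with invented probabilities, not output from any CMS model) follows:

```python
def c_statistic(probs, outcomes):
    """Concordance (C) statistic: the fraction of (died, survived)
    patient pairs in which the patient who died was assigned the
    higher predicted risk; tied predictions count one half."""
    died = [p for p, y in zip(probs, outcomes) if y == 1]
    lived = [p for p, y in zip(probs, outcomes) if y == 0]
    concordant = 0.0
    for d in died:
        for s in lived:
            if d > s:
                concordant += 1.0
            elif d == s:
                concordant += 0.5
    return concordant / (len(died) * len(lived))

# Five patients: predicted risks and observed 30-day mortality (1 = died)
probs = [0.9, 0.8, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 1, 0]
print(c_statistic(probs, outcomes))  # → 0.8333...
```

A value of 0.5 indicates discrimination no better than chance and 1.0 indicates perfect discrimination, which is why even the incremental gains reported by Krumholz et al5 are meaningful at the scale of thousands of hospitals.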

The continued diligence to refine risk-adjustment methods for assessing hospital performance has no doubt improved the ability to assess true differences between hospitals. Nevertheless, limitations remain. In the case of RSMRs, data on patient stability at the time of hospital arrival (eg, blood pressure, respiratory rate) or functional status are not readily available, and RSMR estimates may be influenced to the extent that such factors vary systematically between hospitals. While incorporating measures of admission vital signs or physical function for the more than 4000 hospitals currently included in Hospital Compare is not feasible at the present time, the continued movement toward centralized data research networks that incorporate clinical detail from electronic medical records may make this a reality in our lifetime. Until then, an important implication of the article by Krumholz et al5 is that there may yet be untapped potential to capture severity in claims data. The disaggregation of diagnoses to provide greater specificity and the incorporation of comorbid diagnosis recency used by Krumholz et al5 are just 2 such approaches.

Other factors that have been considered for risk adjustment include patient socioeconomic status and patient health behaviors (eg, adherence to discharge instructions, social support systems). Considerable debate surrounds whether these measures are appropriate for risk adjustment. Socially disadvantaged patients do have worse outcomes than other patients, and failure to account for the uneven distribution of disadvantaged patients across hospitals may unfairly penalize hospitals that care for large proportions of such patients; even so, it does not necessarily follow that risk models should control for social disadvantage. Doing so effectively gives hospitals a pass for providing low-quality care to disadvantaged patients. Determining whether social factors should be adjusted for depends, in part, on the pathway by which the social factor affects outcomes. In the case of patient health behaviors, there is disagreement on the extent to which hospitals should be held responsible for patient behavior after discharge. Regardless, data on patient health behaviors are not readily available.

The article by Krumholz et al5 is also a timely reminder that hospital performance measures are, at best, approximations of underlying quality, despite scientific and statistical rigor. During the past decade, the use of hospital performance metrics to encourage quality improvement has found its way into mainstream health care policy. The Hospital Inpatient Quality Reporting program began as voluntary in 2002 and evolved into a mandatory program with financial penalties for nonparticipation through the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 and the Deficit Reduction Act of 2005. Although the evidence that public reporting has influenced hospital selection is inconsistent,7 performance metrics that influence reimbursement through the CMS Hospital Value-Based Purchasing program or Hospital Readmissions Reduction Program have clear financial implications. While RSMRs are only one of several factors that influence reimbursement in the CMS Hospital Value-Based Purchasing program (measures such as patient satisfaction and processes of care are also considered), inadequate control of differences in patient risk across hospitals may unfairly punish hospitals designated as poor performers on the basis of RSMRs. Thus, the importance of fairly estimating hospital RSMRs (and companion risk-standardized readmission rates) cannot be overstated. Moreover, payment models similar to the Hospital Value-Based Purchasing program are being considered for payments to physicians, nursing homes, and home health agencies.8 As a result, there are increasing incentives for hospitals and patients to take hospital performance metrics seriously and increasing pressure on policy makers to find ever more accurate methods for assessing hospital quality.

Article Information

Published: July 17, 2019. doi:10.1001/jamanetworkopen.2019.7321

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Vaughan Sarrazin MS et al. JAMA Network Open.

Corresponding Author: Mary S. Vaughan Sarrazin, PhD, Department of Internal Medicine, University of Iowa, 200 Hawkins Dr, C44-GH, Iowa City, IA 52242 (mary-vaughan-sarrazin@uiowa.edu).

Conflict of Interest Disclosures: None reported.

References
1. US Centers for Medicare & Medicaid Services. Hospital Compare. https://www.medicare.gov/hospitalcompare/search.html. Accessed June 6, 2019.
2. Yale–New Haven Health Services Corporation/Center for Outcomes Research & Evaluation (YNHHSC/CORE). Overview: mortality measures. http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1163010398556. Accessed May 12, 2019.
3. US Centers for Medicare & Medicaid Services. Hospital Value-Based Purchasing. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Hospital-Value-Based-Purchasing-.html. Accessed June 6, 2019.
4. Yale–New Haven Health Services Corporation/Center for Outcomes Research & Evaluation (YNHHSC/CORE). Overview: readmission measures. http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1219069855273. Accessed June 6, 2019.
5. Krumholz HM, Coppi AC, Warner F, et al. Comparative effectiveness of new approaches to improve mortality risk models from Medicare claims data. JAMA Netw Open. 2019;2(7):e197314. doi:10.1001/jamanetworkopen.2019.7314
6. Mukamel DB, Glance LG, Dick AW, Osler TM. Measuring quality for public reporting of health provider quality: making it meaningful to patients. Am J Public Health. 2010;100(2):264-269. doi:10.2105/AJPH.2008.153759
7. Metcalfe D, Rios Diaz AJ, Olufajo OA, et al. Impact of public release of performance data on the behaviour of healthcare consumers and providers. Cochrane Database Syst Rev. 2018;9:CD004538. doi:10.1002/14651858.CD004538.pub3
8. US Centers for Medicare & Medicaid Services. What are the value-based programs? https://cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/Value-Based-Programs.html. Accessed December 2, 2018.