Hospital-acquired conditions (HACs) are pervasive and expensive, and they cause unnecessary morbidity and mortality. As of 2017, 9 HACs still occurred for every 100 discharges.1 The Hospital-Acquired Conditions Reduction Program (HACRP) was created to reduce this rate.
Despite the critical need to improve safety, research indicates the HACRP has not been effective. While improvement on claims data–based measures accelerated after HACRP implementation, the program did not improve patient outcomes.2 Risk adjustment is inadequate, leading to disproportionate penalties for teaching hospitals and hospitals caring for more patients with socioeconomic disadvantages.3 Penalization has not improved safety.4 In addition, the HACRP has not been associated with improvement on included measures based on data other than claims5 or high-quality registry data.6
These dismal results highlight 2 HACRP problems: (1) inaccurate, unreliable HAC assessment7 and (2) penalties disproportionately and unfairly affecting certain hospitals. However, crucial modifications could be made through rulemaking, the process by which the executive branch specifies the details of policies enacted by Congress. Proposed rules are published in the Federal Register, revised after public comment, and then issued as a final rule that creates binding regulations. For Medicare policies, the Department of Health and Human Services (DHHS) plays a leading role.
The HACRP legislation (Affordable Care Act section 3008)8 requires that hospitals in the worst-performing quartile by HAC rate receive a 1% Medicare inpatient payment penalty. However, specific measures, performance determination, auditing, and other details are not described. Given this lack of specificity, DHHS has considerable degrees of freedom to improve the HACRP through rulemaking.
First, it is necessary to modify measures. A leading strategy to reduce central line–associated bloodstream infections and catheter-associated urinary tract infections is reducing central line or catheter exposure. However, the catheter-associated urinary tract infection measure, for instance, considers only the ratio of infections to catheter days. Because hospitals already have experience acquiring catheterization duration data, rewarding decreased catheter exposure would reduce HACs without requiring many documentation changes. Moreover, some outcomes (eg, pneumothorax) are especially preventable and uncommon; because these events are so rare, it is impossible to reliably distinguish hospitals on these measures, and they should be eliminated.
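The arithmetic behind rewarding reduced exposure can be sketched as follows. All hospitals, counts, and rates here are hypothetical, and this is an illustration of the measurement logic rather than the program's actual specification:

```python
# Hypothetical illustration: a measure rewarding reduced catheter exposure
# could track the device utilization ratio (catheter days / patient days)
# alongside the infection rate per catheter day. All numbers are invented.

def utilization_ratio(catheter_days: int, patient_days: int) -> float:
    """Share of patient days on which a urinary catheter was in place."""
    return catheter_days / patient_days

def infections_per_1000_catheter_days(infections: int, catheter_days: int) -> float:
    return 1000 * infections / catheter_days

# Two hypothetical hospitals with identical infection rates per catheter day;
# hospital A exposes far fewer patients to catheters than hospital B.
a_ratio = utilization_ratio(catheter_days=2000, patient_days=10000)  # 0.2
b_ratio = utilization_ratio(catheter_days=5000, patient_days=10000)  # 0.5
a_rate = infections_per_1000_catheter_days(4, 2000)    # 2.0
b_rate = infections_per_1000_catheter_days(10, 5000)   # 2.0

# The ratio-only measure scores A and B identically; a utilization-based
# component would credit A's lower exposure, which drives fewer total infections.
```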
Second, the government should improve risk adjustment. Risk adjustment variables are used to calculate hospitals’ expected number of HACs (to compare with the observed number), affecting hospital performance comparisons. Yet the current methodology is insufficient to address the large heterogeneity across patients and hospitals. Compared with other programs, the HACRP measures incorporate wide-ranging diagnoses, increasing heterogeneity in patient characteristics across hospitals. For instance, hospitals performing more surgeries face increased risk for additional HACs, because patients undergoing surgery are eligible for more HACRP measures.
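The role of the expected count can be made concrete with a small sketch, in the spirit of a standardized ratio of observed to expected events (as in the Centers for Disease Control and Prevention's standardized infection ratio). The hospitals and numbers below are invented for illustration:

```python
# Hypothetical illustration of how risk adjustment enters performance scoring:
# each hospital's observed HAC count is divided by the count expected given
# its patient mix. Numbers are invented; this is not the program's actual data.

def standardized_ratio(observed: int, expected: float) -> float:
    """Observed HACs relative to the risk-adjusted expectation (1.0 = as expected)."""
    return observed / expected

# Hospital A: many complex surgical patients, so its expected count is higher.
# Hospital B: fewer surgical patients, so its expected count is lower.
a = standardized_ratio(observed=12, expected=10.0)  # 1.2
b = standardized_ratio(observed=6, expected=4.0)    # 1.5

# Despite more observed HACs, A scores better than B once case mix is accounted
# for. Omitting real risk factors from the expected count (eg, emergency vs
# elective surgery) systematically inflates the ratios of hospitals that treat
# higher-risk patients.
```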
Additionally, certain measures omit key patient characteristics that increase risk of HACs. For example, the risk adjustment for surgical site infections neglects (1) preoperative diagnosis, (2) whether the case is elective or emergency, and (3) patient immunosuppression status—despite the association of these variables with risk and their variations across hospitals.
Although risk adjustment can never fully address such differences, the current methodology can be improved. One approach is enhancing risk adjustment variables to better account for the patient characteristics and medical complexities that increase HAC risk. Another, following the Hospital Readmissions Reduction Program and the Bundled Payments for Care Improvement Advanced model, is setting penalty thresholds differently for hospital peer groups to further address systematic differences. Despite statutory language that a “hospital is in the top quartile of all subsection (d) hospitals [general, short-term hospitals providing acute care], relative to the national average,”8 we believe the provision for the government to “establish and apply an appropriate risk adjustment methodology”8 allows a penalty threshold based on the national mean of similar hospitals.
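How peer-group thresholds change who is penalized can be sketched with invented scores. The peer groups, score values, and quartile cutoff below are hypothetical and purely illustrative:

```python
# Sketch of peer-group penalty thresholds with hypothetical data: instead of
# one national worst-quartile cutoff, the 75th-percentile cutoff is computed
# separately within each peer group (eg, teaching vs nonteaching hospitals).
from statistics import quantiles

scores = {
    "teaching":    [0.8, 1.1, 1.3, 1.5, 1.7, 1.9, 2.0, 2.2],
    "nonteaching": [0.4, 0.5, 0.7, 0.8, 0.9, 1.0, 1.2, 1.4],
}

def penalty_threshold(group_scores):
    # 75th percentile: hospitals scoring above it fall in the worst quartile.
    return quantiles(group_scores, n=4)[2]

thresholds = {g: penalty_threshold(s) for g, s in scores.items()}
penalized = {g: [x for x in s if x > thresholds[g]] for g, s in scores.items()}
# Within-group cutoffs penalize the worst 2 of 8 hospitals in EACH group.

national_cutoff = penalty_threshold(scores["teaching"] + scores["nonteaching"])
# With these invented scores, a single national cutoff penalizes only
# teaching hospitals (4 of them) and no nonteaching hospitals.
```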
With such changes, hospitals that are disproportionately at risk for HACs, given their patient compositions and procedures, would be less disadvantaged in performance comparisons. Once established, such changes require little ongoing effort.
Third, the government should address the small n problem. To receive a given measure score, a hospital needs 3 or more eligible discharges or 1 or more expected HACs (depending on the measure), leading some hospitals to receive scores based on small sample sizes. This has an outsized association with performance comparisons if those hospitals lack other scores. Although minimum sample sizes needed for moderate reliability have been established for these measures, hospitals frequently receive scores based on smaller caseloads.
In response, additional years of data could be pooled to increase sample sizes, more hospitals could be excluded from receiving measure scores because of unreliable data, or bayesian shrinkage strategies could be used. There is already precedent for these approaches (eg, transplant center outcomes, the Hospital Readmissions Reduction Program).
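A minimal sketch of the shrinkage idea, using a beta-binomial model with a hypothetical prior strength (this is an illustration of the general technique, not the methodology used in any of the cited programs):

```python
# Illustrative empirical-Bayes shrinkage: small-sample hospital rates are
# pulled toward the national rate, so a single HAC at a tiny hospital does not
# dominate its score. The prior strength (prior_n) and rates are hypothetical.

def shrunken_rate(events: int, n: int, national_rate: float, prior_n: float = 50.0) -> float:
    """Posterior mean HAC rate: observed events combined with a prior worth
    `prior_n` cases at the national rate (beta-binomial model)."""
    alpha = national_rate * prior_n        # prior "events"
    beta = (1 - national_rate) * prior_n   # prior "non-events"
    return (events + alpha) / (n + alpha + beta)

national = 0.02  # hypothetical 2% national HAC rate

# Both hospitals have a raw rate of 10%, but with very different caseloads.
small = shrunken_rate(events=1, n=10, national_rate=national)
large = shrunken_rate(events=100, n=1000, national_rate=national)

# The small hospital's noisy 10% estimate is pulled most of the way back
# toward 2%; the large hospital's well-supported estimate barely moves.
```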
Finally, it is necessary to establish clear guidelines and enhance auditing. Wide variability exists in surveillance, testing, and potentially reporting practices for known HACs, and hospitals currently rely on their own resources to determine these procedures. Clinicians can also reach different diagnoses on clinical grounds alone, underscoring the importance of conclusive testing. In addition, the Government Accountability Office found that the Centers for Medicare & Medicaid Services’ practices for selecting hospitals to validate quality-measure reporting decreased its capacity to identify gaming strategies; moreover, hospitals with aberrant data patterns were not targeted for additional scrutiny. Auditing that is insufficiently stringent and incomplete, combined with limited consequences for failing validation, undermines incentives to document HACs accurately and discourages hospitals from surveying for, testing for, and reporting HACs fairly. Additionally, hospitals with more resources can hire consultants to determine optimal procedures to identify and report HACs, which may exacerbate penalty disparities.
Together, these circumstances necessitate clearer guidelines, initial technical assistance, enhanced auditing, and larger consequences for failing validation. While enhanced auditing could initially require more time and resources than other changes, strengthening this and adding consequences for failing would reduce data concerns in future years and improve HAC tracking and reduction.
The premise underlying the HACRP—that hospitals accurately collect data on patient safety events that will then be used to penalize them—creates an inherent conflict that is hard to overcome. Nonetheless, it is imperative to improve the HACRP through rulemaking. Although such changes require DHHS investments to develop, finalize, and implement, they require less effort in future years and enhance the program. While not a panacea, these changes make penalties fairer and strengthen reliability and validity of safety measurement. The coronavirus disease 2019 pandemic further highlights measurement limitations and the dire need to prevent infections. Instead of discouraging preventive actions or encouraging efforts to reduce reportable HACs without truly improving quality, these changes better incentivize actions to reduce actual HACs. This in turn could make the HACRP a program that saves lives and money.
Open Access: This is an open access article distributed under the terms of the CC-BY License.
Corresponding Author: Andrew M. Ryan, PhD, University of Michigan School of Public Health, M3124 SPH II, 1415 Washington Heights, Ann Arbor, MI 48109 (amryan@umich.edu).
Conflict of Interest Disclosures: Ms Lawton and Dr Ryan acknowledge funding from the Agency for Healthcare Research and Quality (grant 5R01HS026244). No other disclosures were reported.
2. Arntson E, Dimick JB, Nuliyalu U, Errickson J, Engler TA, Ryan AM. Changes in hospital-acquired conditions and mortality associated with the Hospital-Acquired Condition Reduction Program. Ann Surg. Published online October 16, 2019. doi:10.1097/SLA.0000000000003641
3. Rajaram R, Chung JW, Kinnier CV, et al. Hospital characteristics associated with penalties in the Centers for Medicare & Medicaid Services Hospital-Acquired Condition Reduction Program. JAMA. 2015;314(4):375-383. doi:10.1001/jama.2015.8609
4. Sankaran R, Sukul D, Nuliyalu U, et al. Changes in hospital safety following penalties in the US Hospital Acquired Condition Reduction Program: retrospective cohort study. BMJ. 2019;366:l4109. doi:10.1136/bmj.l4109
7. Sheetz KH, Ryan A. Accuracy of quality measurement for the Hospital Acquired Conditions Reduction Program. BMJ Qual Saf. Published online December 20, 2019. doi:10.1136/bmjqs-2019-009747
8. US Congress. Affordable Care Act (HR 3590): title III, improving the quality and efficiency of health care; subtitle A, transforming the health care delivery system; section 3008, payment adjustment for conditions acquired in hospitals. Published December 24, 2009. Accessed May 21, 2020. https://www.congress.gov/111/plaws/publ148/PLAW-111publ148.pdf