Hospital-acquired conditions (HACs) are pervasive and expensive, and they cause unnecessary morbidity and mortality. As of 2017, 9 HACs still occurred for every 100 discharges.1 The Hospital-Acquired Conditions Reduction Program (HACRP) was created to reduce this rate.
Despite the critical need to improve safety, research indicates the HACRP has not been effective. While improvement on claims data–based measures accelerated after HACRP implementation, the program did not improve patient outcomes.2 Risk adjustment is inadequate, leading to disproportionate penalties for teaching hospitals and hospitals caring for more patients with socioeconomic disadvantages.3 Penalization has not improved safety.4 In addition, the HACRP has not been associated with improvement on included measures based on data other than claims5 or high-quality registry data.6
These dismal results highlight 2 HACRP problems: (1) inaccurate, unreliable HAC assessment7 and (2) penalties that disproportionately and unfairly affect certain hospitals. However, crucial modifications could be made through rulemaking, the process by which the executive branch specifies the details of policies and regulations enacted by Congress. Proposed rules are published in the Federal Register, revised after public comment, and then published as a final rule creating binding regulations. For Medicare policies, the Department of Health and Human Services (DHHS) plays a leading role.
The HACRP legislation (Affordable Care Act section 3008)8 requires that hospitals in the worst-performing quartile by HAC rate receive a 1% Medicare inpatient payment penalty. However, specific measures, performance determination, auditing, and other details are not described. Given this lack of specificity, DHHS has considerable degrees of freedom to improve the HACRP through rulemaking.
First, it is necessary to modify measures. A leading strategy to reduce central line–associated bloodstream infections and catheter-associated urinary tract infections is reducing central line or catheter exposure. However, the catheter-associated urinary tract infections measure, for instance, considers only the ratio of infections to catheter days. Because hospitals already have experience collecting catheterization duration data, rewarding decreased catheter exposure would reduce HACs without requiring many documentation changes. Moreover, some outcomes (eg, pneumothorax) are especially preventable and uncommon, making it impossible to reliably distinguish hospitals on these measures; they should be eliminated.
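To make the exposure point concrete, consider a stylized, entirely hypothetical example: a hospital that halves catheter days prevents infections overall, yet a measure defined only as infections per catheter day can fail to register, or even penalize, the improvement. All numbers and function names below are invented for illustration and are not the HACRP's actual calculations.

```python
# Hypothetical illustration (all numbers invented): reducing catheter
# exposure prevents infections, but a measure conditioned on catheter days
# can miss the improvement.

def cauti_rate_per_1000(infections: int, catheter_days: int) -> float:
    """Infections per 1000 catheter days (the exposure-conditioned rate)."""
    return 1000 * infections / catheter_days

def utilization_ratio(catheter_days: int, patient_days: int) -> float:
    """Catheter days per patient day (the exposure itself)."""
    return catheter_days / patient_days

patient_days = 10_000

# Baseline: broad catheter use.
before = {"catheter_days": 2_000, "infections": 10}
# After an exposure-reduction program: fewer, sicker patients remain
# catheterized, so total infections fall but the per-catheter-day
# rate actually worsens (5.0 -> 6.0 per 1000 catheter days).
after = {"catheter_days": 1_000, "infections": 6}

for label, d in (("before", before), ("after", after)):
    print(label,
          round(cauti_rate_per_1000(d["infections"], d["catheter_days"]), 1),
          round(utilization_ratio(d["catheter_days"], patient_days), 2))
```

A measure that also credited the drop in the utilization ratio (0.20 to 0.10 in this sketch) would reward the reduced exposure that the rate alone obscures.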
Second, the government should improve risk adjustment. Risk adjustment variables are used to calculate hospitals’ expected number of HACs (to compare with the observed number), affecting hospital performance comparisons. Yet the current methodology is insufficient to address the large heterogeneity across patients and hospitals. Compared with other programs, the HACRP measures incorporate wide-ranging diagnoses, increasing heterogeneity in patient characteristics across hospitals. For instance, hospitals performing more surgeries face increased risk for additional HACs, because patients undergoing surgery are eligible for more HACRP measures.
Additionally, certain measures omit key patient characteristics that increase risk of HACs. For example, the risk adjustment for surgical site infections neglects (1) preoperative diagnosis, (2) whether the case is elective or emergency, and (3) patient immunosuppression status—despite the association of these variables with risk and their variations across hospitals.
Although risk adjustment can never fully address such differences, the current methodology can be improved. One approach is enhancing risk adjustment variables to better account for patient characteristics and medical complexities that increase HAC risk. Second, following the Hospital Readmissions Reduction Program and Bundled Payments for Care Improvement Advanced, penalty thresholds could be set differently for hospital peer groups to further address systematic differences. Despite statutory language that a “hospital is in the top quartile of all subsection (d) hospitals [general, short-term hospitals providing acute care], relative to the national average,”8 we believe the provision for the government to “establish and apply an appropriate risk adjustment methodology”8 allows a penalty threshold based on the national mean of similar hospitals.
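As a rough sketch of how peer-group thresholds could work, the snippet below applies the worst-quartile rule within hospital peer groups rather than nationally. The scores, group labels, and cutoff function are all hypothetical, not the HACRP's actual scoring methodology.

```python
# Hypothetical sketch: the statute's worst-quartile rule applied within
# peer groups of similar hospitals. All scores and labels are invented.
import statistics

def worst_quartile_cutoff(scores):
    """Score above which a hospital falls in the worst-performing quartile
    (here, higher score = more HACs relative to expected)."""
    return statistics.quantiles(scores, n=4)[2]  # 75th percentile

hospitals = [
    ("A", "teaching", 0.9), ("B", "teaching", 1.4), ("C", "teaching", 1.1),
    ("D", "teaching", 1.6), ("E", "community", 0.6), ("F", "community", 0.8),
    ("G", "community", 0.7), ("H", "community", 1.0),
]

# A single national threshold penalizes mostly teaching hospitals...
national_cut = worst_quartile_cutoff([s for _, _, s in hospitals])
print("national cutoff:", round(national_cut, 3))

# ...whereas per-group thresholds penalize the worst quartile of each group.
for group in ("teaching", "community"):
    group_scores = [s for _, g, s in hospitals if g == group]
    cut = worst_quartile_cutoff(group_scores)
    penalized = [name for name, g, s in hospitals if g == group and s > cut]
    print(group, round(cut, 2), penalized)
```

In this invented example, the national cutoff penalizes only teaching hospitals, while peer-group cutoffs compare each hospital with similar peers.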
With such changes, hospitals that are disproportionately at risk for HACs, given their patient compositions and procedures, would be less disadvantaged in performance comparisons. Once established, such changes would require little ongoing effort.
Third, the government should address the small n problem. To receive a score on a given measure, a hospital needs 3 or more eligible discharges or 1 or more expected HACs (depending on the measure), so some hospitals receive scores based on small sample sizes. These scores can have an outsized effect on performance comparisons when those hospitals lack scores on other measures. Although minimum sample sizes needed for moderately reliable measurement have been established, hospitals frequently receive scores based on smaller caseloads.
In response, more years of data could be used to increase sample sizes, more hospitals could be excluded from receiving measure scores because their data are unreliable, or bayesian shrinkage strategies could be applied. There is already precedent for these approaches (eg, transplant center outcomes, the Hospital Readmissions Reduction Program).
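One way bayesian shrinkage could stabilize small-sample scores is sketched below: an empirical Bayes–style estimate pulls each hospital's observed rate toward the national rate, with the pull weakening as caseload grows. The prior strength k and all rates are assumed values for illustration, not the methodology of any existing program.

```python
# Minimal sketch of one possible shrinkage approach (assumed values
# throughout): a precision-weighted average of the hospital's own rate
# and the national rate.

def shrunken_rate(events: int, n: int, national_rate: float,
                  k: float = 50.0) -> float:
    """Pull a hospital's observed HAC rate toward the national rate.

    With small n the weight on the hospital's own (noisy) rate is small,
    so a single unlucky event cannot drive an extreme score. The prior
    strength k is a tuning value assumed here for illustration.
    """
    w = n / (n + k)  # weight on the hospital's own data
    return w * (events / n) + (1 - w) * national_rate

national = 0.02  # assumed national HAC rate: 2 per 100 discharges

# Small hospital: 1 event in 20 discharges -> raw rate 5%, shrunk toward 2%.
small = shrunken_rate(1, 20, national)
# Large hospital: 50 events in 1000 discharges -> raw 5%, stays near 5%.
large = shrunken_rate(50, 1000, national)
print(round(small, 4), round(large, 4))
```

The same raw rate of 5% yields very different stabilized estimates (about 2.9% vs 4.9% here) because the large hospital's data are far more reliable.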
Finally, it is necessary to establish clear guidelines and enhance auditing. Wide variability exists in surveillance, testing, and potentially reporting practices for known HACs. Hospitals currently rely on their own resources to determine these procedures. Clinicians can also reach different diagnoses on clinical grounds alone, underscoring the importance of conclusive testing. In addition, the Government Accountability Office found that the Centers for Medicare & Medicaid Services’ practices for selecting hospitals for quality-measure reporting validation decreased its capacity to identify gaming strategies. Moreover, hospitals with aberrant data patterns were not selected for additional scrutiny. Auditing that is insufficiently stringent and incomprehensive, combined with limited consequences for failing validation, undermines incentives to document HACs accurately and discourages fair surveillance, testing, and reporting of HACs. Additionally, hospitals with more resources can hire consultants to determine optimal procedures to identify and report HACs, which may exacerbate penalty disparities.
Together, these circumstances necessitate clearer guidelines, initial technical assistance, enhanced auditing, and larger consequences for failing validation. While enhanced auditing could initially require more time and resources than other changes, strengthening this and adding consequences for failing would reduce data concerns in future years and improve HAC tracking and reduction.
The premise underlying the HACRP—that hospitals accurately collect data on patient safety events that will then be used to penalize them—creates an inherent conflict that is hard to overcome. Nonetheless, it is imperative to improve the HACRP through rulemaking. Although such changes require DHHS investments to develop, finalize, and implement, they require less effort in future years and enhance the program. While not a panacea, these changes make penalties fairer and strengthen reliability and validity of safety measurement. The coronavirus disease 2019 pandemic further highlights measurement limitations and the dire need to prevent infections. Instead of discouraging preventive actions or encouraging efforts to reduce reportable HACs without truly improving quality, these changes better incentivize actions to reduce actual HACs. This in turn could make the HACRP a program that saves lives and money.
Corresponding Author: Andrew M. Ryan, PhD, University of Michigan School of Public Health, M3124 SPH II, 1415 Washington Heights, Ann Arbor, MI 48109 (firstname.lastname@example.org).
Conflict of Interest Disclosures: Ms Lawton and Dr Ryan acknowledge funding from the Agency for Healthcare Research and Quality (grant 5R01HS026244). No other disclosures were reported.