eTable 1. Data Sources Used
eTable 2. HVBP Outcomes Detailed
eTable 3. Number of Hospitals Attesting to Each Meaningful Use Performance Measure by Year
eTable 4. Errors Fixed
eTable 5. Mean Differences (P Value) in MU Measures Between Hospitals Included in HAI Models vs Hospitals Excluded Due to Not Submitting Data Using 2-Sample t Test
eTable 6. Adjusted Quantile Regression Results for HVBP Engagement (Patient Satisfaction) Outcomes at 0.1, 0.5, and 0.9 Quantiles
eTable 7. Adjusted Quantile Regression Results for Medicare Spending per Beneficiary (MSPB) and Hospital-Acquired Infection (HAI) Outcomes at 0.1, 0.5, and 0.9 Quantiles
eMethods. Data Considerations
eFigure. Hospital Value-Based Purchasing Program (HVBP) Domain Component Performance Periods by Fiscal Year
Murphy ZR, Wang J, Boland MV. Association of Electronic Health Record Use Above Meaningful Use Thresholds With Hospital Quality and Safety Outcomes. JAMA Netw Open. 2020;3(9):e2012529. doi:10.1001/jamanetworkopen.2020.12529
Is electronic health record implementation beyond meaningful use thresholds associated with changes in hospital measures of patient satisfaction, spending, and safety?
In this cross-sectional analysis of 2362 hospitals using data from 2016, associations between meaningful use performance measures and Hospital Value-Based Purchasing Program measures of patient satisfaction, spending, and safety were evaluated. Mixed associations were found that varied depending on whether the hospital was in the lower, middle, or upper quantiles of the Hospital Value-Based Purchasing Program outcome.
These findings suggest that advanced levels of electronic health record implementation are not consistently associated with patient satisfaction, spending, and safety, and in some cases depend on the outcome quantile.
By 2018, Medicare had spent more than $30 billion to incentivize the adoption of electronic health records (EHRs), based partly on the belief that EHRs would improve health care quality and safety. At a time when most hospitals are well past minimum meaningful use (MU) requirements, examining whether EHR implementation beyond the minimum threshold is associated with improved quality and safety may guide the future focus of EHR development and incentive structures.
To determine whether EHR implementation above MU performance thresholds is associated with changes in hospital patient satisfaction, efficiency, and safety.
Design, Setting, and Participants
This quantile regression analysis of cross-sectional data used publicly available data sets from 2362 acute care hospitals in the United States participating in both the MU and Hospital Value-Based Purchasing (HVBP) programs from January 1 to December 31, 2016. Data were analyzed from August 1, 2019, to May 22, 2020.
Exposures
Seven MU program performance measures, including medication and laboratory orders placed through the EHR, online health information availability and access rates, medication reconciliation through the EHR, patient-specific educational resources, and electronic health information exchange.
Main Outcomes and Measures
The HVBP outcomes included patient satisfaction survey dimensions, Medicare spending per beneficiary, and 5 types of hospital-acquired infections.
Results
Among the 2362 participating hospitals, mixed associations were found between MU measures and HVBP outcomes, all varying by outcome quantile and in some cases by interaction with EHR vendor. Computerized provider order entry (CPOE) for laboratory orders was associated with decreased ratings of every patient satisfaction outcome at middle quantiles (τ = 0.5; communication with nurses: β = −0.33 [P = .04]; communication with physicians: β = −0.50 [P < .001]; responsiveness of hospital staff: β = −0.57 [P = .03]; care transition performance: β = −0.66 [P < .001]; communication about medicines: β = −0.52 [P = .002]; cleanliness and quietness: β = −0.58 [P = .007]; discharge information: β = −0.48 [P < .001]; and overall rating: β = −0.95 [P < .001]). However, at middle quantiles, CPOE for medication orders was associated with increased ratings for communication with physicians (τ = 0.5; β = 0.54; P = .009), care transition (τ = 0.5; β = 1.24; P < .001), discharge information (τ = 0.5; β = 0.41; P = .01), and overall hospital ratings (τ = 0.5; β = 0.97; P = .02). At high quantiles, electronic health information exchange was associated with improved ratings of communication with nurses (τ = 0.9; β = 0.23; P = .03). Medication reconciliation was associated with increased communication with nursing at low quantiles (τ = 0.1; β = 0.60; P < .001), increased discharge information at middle quantiles (τ = 0.5; β = 0.28; P = .03), and responsiveness of hospital staff at middle (τ = 0.5; β = 0.77; P = .001) and high (τ = 0.9; β = 0.84; P = .001) quantiles. Patients accessing their health information online was not associated with any outcomes. Increased use of patient-specific educational resources identified through the EHR was associated with increased ratings of communication with physicians at high quantiles (τ = 0.9; β = 0.20; P = .02) and with decreased spending at low-spending hospitals (τ = 0.1; β = −0.40; P = .008).
Conclusions and Relevance
Increasing EHR implementation, as measured by MU criteria, was not straightforwardly associated with increased HVBP measures of patient satisfaction, spending, and safety in this study. These results call for a critical evaluation of the criteria by which EHR implementation is measured and increased attention to how different EHR products may lead to differential outcomes.
The HITECH (Health Information Technology for Economic and Clinical Health) Act of 2009 was motivated by the belief that electronic health records (EHRs) would improve health care quality and safety.1 The HITECH Act created financial incentives for hospitals to demonstrate “meaningful use” (MU) of EHRs by meeting minimum implementation and performance thresholds across an array of EHR functions.
With more than $30 billion spent on the MU program (renamed Promoting Interoperability) by 2018,2 many studies have investigated whether the promise that EHRs would improve hospital quality and safety has been realized. This research has largely focused on comparisons between hospitals that attained the MU threshold and those that did not, which has revealed a divide between large, urban, academic hospitals that tended to achieve MU early and small, rural, nonacademic hospitals that lagged behind.3-5 Studies of patient satisfaction and the EHR in the inpatient setting have shown contradictory findings.6-14 Regarding cost control, attaining MU has not been found to affect expenditures per patient or hospital operating margins.15,16 The evidence that EHRs improve safety is stronger,17 but existing studies largely compared hospitals with full EHRs and hospitals with minimal or no EHRs.
Because EHR implementation has typically been treated as a dichotomy (MU attained or not), little research has investigated differences between hospitals that barely pass the minimum thresholds to meet MU and those that far exceed them. Existing studies showed that hospitals successfully attesting nearer to the minimum thresholds tended to be small, rural, nonacademic hospitals, whereas those at the top of the performance measures tended to be large, urban, academic medical centers.18 Furthermore, among hospitals attesting to MU, the EHR vendor had mixed associations with 6 MU performance measures.19
The heterogeneity of health systems, EHR products, and other factors contributing to the health care environment means that we must continually consider whether we are incentivizing the proper metrics to fully realize EHRs as a driver of quality and safety. At a time when most hospitals have EHR capabilities above the MU minimum thresholds, examining the association between EHR implementation above MU thresholds and quality and safety outcomes may provide insight into whether these MU metrics are still meeting their intended goal.
This study used publicly available data sets and therefore did not fit the Health and Human Services criteria for human subjects research and did not require approval by an institutional review board or informed consent. eTable 1 in the Supplement contains links to the data sources used. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
We created a cross-sectional sample of acute care hospitals that attested to participation in the MU program and also participated in the Hospital Value-Based Purchasing Program (HVBP) from January 1 to December 31, 2016, then constructed quantile regression models to examine associations between MU performance measures and 14 outcomes of the HVBP covering patient satisfaction, spending, and safety domains. Although the data represent 2016, this analysis was performed from August 1, 2019, to May 22, 2020.
As measures of hospital patient satisfaction, efficiency, and safety, we used HVBP domain components. The HVBP is a Centers for Medicare & Medicaid Services (CMS) program that awards or penalizes acute care hospitals for safety and quality outcomes using payment adjustments,20 and data are publicly available through the Hospital Compare website.21,22 Hospitals receive domain scores, each composed of 1 to several components, some of which have changed over time. We have included brief descriptions of the components herein, and eTable 2 in the Supplement contains detailed descriptions.
The engagement domain reflects patient satisfaction and is derived from the Hospital Consumer Assessment of Healthcare Providers and Systems survey, which is sent to a subset of inpatients after hospital discharge to assess dimensions of satisfaction, including ratings of communication with nurses, communication with physicians, responsiveness of hospital staff, care transition, communication about medicines, cleanliness and quietness, discharge information, and the hospital overall. Each dimension is reported as the percentage of respondents selecting the best possible response for the relevant questions, adjusted for patient-level characteristics. Higher scores indicate better satisfaction.
The efficiency domain consists of a single measure, Medicare spending per beneficiary, which is reported as a ratio between the hospital’s mean price-standardized risk-adjusted spending per care episode divided by the national median of spending per episode. Lower scores indicate better efficiency.
The safety domain consists of several measures of in-hospital infections, accidents, and injuries. Among the components, the health care–associated infection measures are amenable to modeling. These reflect risk-adjusted standardized infection ratios for central line–associated bloodstream infections, catheter-associated urinary tract infections, surgical site infections (SSIs) after colon surgery, SSIs after abdominal hysterectomy, methicillin-resistant Staphylococcus aureus bacteremia, and Clostridioides difficile infection. These data are reported as ratios between observed and estimated infection rates. Lower scores indicate better safety.
The clinical care domain reflects 30-day all-cause mortality for 3 admission diagnoses. Although this domain is an important measure of hospital quality and safety, data for this domain have not been released for 2016, and thus we were unable to include it. The eMethods, eTable 3, and the eFigure in the Supplement contain a detailed discussion of our choice of outcomes that was constrained by frequent changes in both MU and outcome measures over time.
Each stage of MU has a set of EHR performance measures for which hospitals must submit data. However, these requirements changed over time, resulting in 4 overlapping sets of measures. We used CMS documentation to link identical measures across data sets and determined that 2016 was the most suitable year to analyze.23-26 The eMethods in the Supplement includes details. This process yielded 9 MU measures included as potential factors used to estimate outcomes (Table 1).
The MU attestation data are available through CMS public use files.27 The MU program also maintains a public file of the EHR products used by each hospital.28 Because some EHR vendors split software packages into separate products while others offer a unified product, we examined EHR use at the level of EHR vendor. We used this in conjunction with the Certified Health IT Product List, which contains information about the functionality of each EHR product, to profile the EHR functionality of each hospital.29 We used crosswalks published by CMS to combine the various versions of Certified Health IT Product List criteria into a unified set, then calculated the mean percentage of criteria met by each EHR vendor per hospital.30
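As a hypothetical sketch of this profiling step (the column names and values below are invented, not the actual CMS file layout), the mean percentage of certification criteria met per vendor per hospital is a grouped mean over product-level records:

```python
import pandas as pd

# Invented example rows: each row is one certified EHR product used by a
# hospital, with the share of unified certification criteria it meets.
products = pd.DataFrame({
    "ccn":              ["010001", "010001", "010005"],   # CMS certification number
    "vendor":           ["VendorA", "VendorA", "VendorB"],
    "pct_criteria_met": [80.0, 90.0, 70.0],
})

# Mean percentage of criteria met by each vendor's products at each hospital.
coverage = (products
            .groupby(["ccn", "vendor"], as_index=False)["pct_criteria_met"]
            .mean())
```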
We controlled for EHR-related characteristics, including years of MU program participation, number of EHR vendors used, and the mean percentage of EHR product certification criteria met by each EHR vendor used by each hospital. We also adjusted for hospital characteristics, including ownership, location, and hospital identifiers from Hospital Compare data22; number of beds, inpatient revenue, payor mix, and teaching status from CMS cost reports31; case-mix index from CMS32; Magnet status33; and the urban-rural scale for US counties from the National Center for Health Statistics.34
Data for each hospital were linked using the CMS certification number. Because the Magnet program data set did not include CMS certification numbers, we manually matched each Magnet recipient to its CMS certification number.
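A minimal sketch of this record linkage, assuming hypothetical column names (the actual public use files use different layouts), is an inner join on the CMS certification number:

```python
import pandas as pd

# Invented hospital-level tables keyed on the CMS certification number (CCN).
mu = pd.DataFrame({"ccn": ["010001", "010005", "010012"],
                   "med_rec_pct": [62.0, 88.5, 74.0]})
hvbp = pd.DataFrame({"ccn": ["010001", "010005"],
                     "mspb_ratio": [0.98, 1.05]})

# Inner join keeps only hospitals present in both programs;
# validate= guards against unexpected duplicate CCNs in either table.
linked = mu.merge(hvbp, on="ccn", how="inner", validate="one_to_one")
```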
Continuous variables (MU measures, HVBP outcomes, total inpatient revenue, Medicare and Medicaid discharge percentages, vendor count, and case-mix index) were examined for outliers greater than 3 SDs from the mean. When an outlier could be corroborated as a data entry error through hospital websites or American Hospital Directory Free Hospital Profiles (limited hospital profiles based on public and proprietary data),35 it was replaced with the value from 2015 data. When an outlier could not be confirmed as an entry error, it was retained. Records with missing independent and control variables were removed from analysis. Records with missing outcome data were removed from the specific model for that outcome, and their baseline characteristics were compared with those of included hospitals using 2-sample t tests. Pairwise correlations less than 0.7 and variance inflation factors less than 10 were considered acceptable for performance measures.
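The 3-SD outlier screen can be sketched as follows, on synthetic values (the study replaced confirmed entry errors with 2015 values rather than dropping them):

```python
import pandas as pd

def flag_outliers(series: pd.Series, n_sd: float = 3.0) -> pd.Series:
    """Boolean mask marking values more than n_sd SDs from the mean."""
    z = (series - series.mean()) / series.std()
    return z.abs() > n_sd

# 20 plausible revenue figures plus one suspect entry (in $ millions);
# with a reasonable sample size, only the extreme value exceeds 3 SDs.
revenue = pd.Series(list(range(40, 60)) + [500])
mask = flag_outliers(revenue)
```

Note that with very small samples a single extreme value inflates the SD enough that no point can exceed 3 SDs, so a screen like this presupposes a reasonably large sample.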
Characteristics of the sample were summarized as means and SDs, medians and interquartile ranges, or frequencies and percentages. The most frequently used EHR vendors were identified by examining how many hospitals used each vendor during 2016.
Quantile regression models were constructed for each outcome using the Statsmodels module, version 0.9.0, in Python, version 3.7 (Python Software Foundation). Quantile regression examines associations between variables used to estimate outcomes and a continuous outcome at different quantiles of the outcome.36,37 At each quantile τ, a model is produced with coefficients for each variable used to estimate outcomes, which allows us to examine different associations between the variables and the outcome at different levels of the outcome. Unit changes for all MU performance measures, percentage of Medicare/Medicaid discharges, and EHR product feature coverage were set at 10%. Unit changes for all outcomes were set at 1%. Interactions between each performance measure and the 4 most commonly used EHR vendors were included, as well as between EHR vendor and number of beds. We examined results at 3 quantiles (0.1, 0.5, and 0.9) for each outcome, selected a priori to represent low, middle, and high outcome performance. We used a Bonferroni correction to account for multiple outcomes by multiplying unadjusted P values by the number of outcomes (14) and reporting the 99.6% CIs. Corrected 2-sided P < .05 was considered significant.
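The modeling approach can be sketched with statsmodels on synthetic data (the variable names and effect size below are illustrative, not the study's actual columns or results):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"mu_measure": rng.uniform(0, 100, n)})
# Synthetic outcome: a small positive association plus noise.
df["satisfaction"] = 70 + 0.05 * df["mu_measure"] + rng.normal(0, 5, n)

# One quantile regression fit per prespecified quantile (low, middle, high).
fits = {tau: smf.quantreg("satisfaction ~ mu_measure", df).fit(q=tau)
        for tau in (0.1, 0.5, 0.9)}

# Bonferroni correction across the study's 14 outcomes, capped at 1.
p_adj = {tau: min(fits[tau].pvalues["mu_measure"] * 14, 1.0)
         for tau in fits}
```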
A total of 2362 hospitals were included in the sample. Descriptive statistics are shown in Table 2.38 Three data entry errors were replaced with 2015 values (eTable 4 in the Supplement). We found that of the 165 EHR vendors used, the 4 most frequently used were Epic Systems Corporation (Epic; 585 [24.8%]), Meditech Information Technology, Inc (Meditech; 575 [24.3%]), Cerner Corporation (Cerner; 546 [23.1%]), and McKesson Corporation (McKesson; 283 [12.0%]).
Computerized provider (physicians and nonphysician licensed clinicians) order entry (CPOE) for laboratory orders and CPOE for radiology orders were highly correlated (Pearson correlation, 0.76), so the latter was excluded. Among all the other variables used to estimate outcomes, all pairwise correlations and variance inflation factor values were within acceptable ranges. Only 1365 hospitals (57.8%) submitted data for the electronic prescribing measure, so this measure was omitted. Only 729 hospitals (30.8%) submitted data for SSI after abdominal hysterectomy, and therefore this outcome was omitted. Hospitals with missing health care–associated infection outcome data had MU measures that were significantly different from those of hospitals included in the models (eTable 5 in the Supplement).
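The collinearity screen applied here (pairwise correlation below 0.7, variance inflation factor below 10) can be sketched on invented data containing a deliberately collinear pair:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame({"cpoe_lab": rng.normal(size=300),
                  "med_rec":  rng.normal(size=300)})
# Deliberately collinear with cpoe_lab to trigger the screen.
X["cpoe_rad"] = 0.9 * X["cpoe_lab"] + 0.1 * rng.normal(size=300)

# Pairwise correlation check against the 0.7 threshold.
corr = X.corr().abs()
high_pairs = [(a, b) for a in corr.columns for b in corr.columns
              if a < b and corr.loc[a, b] >= 0.7]

# Variance inflation factors (computed here on the raw design matrix;
# appending an intercept column before computing VIFs is also common).
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns)}
```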
Table 3 contains adjusted regression coefficients for performance measures at the 10th, 50th, and 90th percentiles. eTables 6 and 7 in the Supplement contain complete adjusted results.
Computerized provider order entry for laboratory orders was associated with decreased performance on every patient satisfaction outcome at middle quantiles (Table 3). However, these decreases were not present in the discharge information outcome for hospitals using McKesson (interaction: τ = 0.5; β = 0.47; P = .006) or Meditech (interaction: τ = 0.5; β = 0.46; P = .02). Computerized provider order entry for medication orders was associated with improved communication with physicians (τ = 0.5; β = 0.54; P = .009), care transition (τ = 0.5; β = 1.24; P < .001), discharge information (τ = 0.5; β = 0.41; P = .01), and overall hospital ratings (τ = 0.5; β = 0.97; P = .02) at middle quantiles.
At high quantiles, electronic health information exchange was associated with improved communication with nurses (τ = 0.9; β = 0.23; P = .03) and responsiveness of hospital staff (τ = 0.9; β = 0.56; P < .001), but also with increased rates of central line–associated bloodstream infections (τ = 0.9; β = 5.23; P = .03).
Medication reconciliation was associated with increased communication with nursing at low quantiles (τ = 0.1; β = 0.60; P < .001), increased discharge information at middle quantiles (τ = 0.5; β = 0.28; P = .03), and increased responsiveness of hospital staff at middle (τ = 0.5; β = 0.77; P = .001) and high (τ = 0.9; β = 0.84; P = .001) quantiles. However, the concurrent use of Epic was associated with a reversal of these associations, wherein increased medication reconciliation was associated with decreased communication with nursing at low quantiles (interaction: τ = 0.1; β = −1.19; P = .005) and decreased responsiveness of staff ratings at middle quantiles (interaction: τ = 0.5; β = −1.37; P = .02).
Patients accessing their information online was not significantly associated with any outcome. However, having patients’ health information online, whether accessed or not, was associated with an increase in SSIs after colon surgery at high quantiles (τ = 0.9; β = 12.45; P = .03).
Patient-specific educational resources were associated with increased communication with physicians at high quantiles (τ = 0.9; β = 0.20; P = .02); however, a reverse association was found with concurrent use of Cerner (interaction: τ = 0.9; β = −0.36; P = .02) or McKesson (interaction: τ = 0.9; β = −0.36; P = .02). In addition, patient-specific educational resources were associated with decreased spending at low spending hospitals (τ = 0.1; β = −0.40; P = .008).
This study is the first of which we are aware to assess whether EHR implementation above MU thresholds is associated with HVBP outcomes. Our results suggest that EHR use above minimal MU requirements has small, mixed associations with HVBP engagement, efficiency, and safety outcomes that in some cases depend on the EHR vendor.
Although increased use of CPOE for medications was associated with improved patient satisfaction in some areas, increased CPOE for laboratory tests was associated with lower satisfaction in all areas. This finding suggests that studies of CPOE must look at these distinct order types rather than CPOE as a single entity. Although CPOE for medication and laboratory orders is commonly unified by the EHR, the workflows for each activity diverge almost immediately. Systems factors beyond the CPOE system may contribute to these opposing associations, and more research is therefore necessary to explain these findings.
Our finding of no association between patients accessing their information online and cost savings is consistent with past research.16 Our finding that patients' information being online, whether accessed or not, was associated with an increase in SSIs after colon surgery is most likely the result of an unidentified confounding variable, because no straightforward theory as to why these would be associated appears to exist.
Past research has shown electronic health information exchange to be associated with better patient satisfaction and cost control.39,40 Although we did not find associations with cost savings, we did find positive associations with patient satisfaction, in particular communication with nurses and responsiveness of hospital staff. Communication with nursing is vital at admission and discharge, and increased electronic transmission of health records may facilitate data gathering and nurse-patient communication that occurs during these times, resulting in higher ratings of communication.
Past research41 has examined medication reconciliation and patient satisfaction as independent outcomes in the context of transition of care interventions, but no past research has looked at specific associations between medication reconciliation performed through the EHR and patient satisfaction. We found medication reconciliation was associated with several dimensions of patient satisfaction related to admission and discharge, when medication reconciliation would be performed. However, these associations were scattered among low, medium, and high quantiles. It is unclear why these associations were not more consistent across patient satisfaction dimensions and across quantiles. Moreover, past research42 has found cost savings associated with pharmacist-led interventions involving medication reconciliation, so we were surprised to not find this association at any quantile.
Identifying patient educational information through the EHR was associated with higher ratings of communication with physicians at high quantiles, but only associated with decreased Medicare spending per beneficiary at low-spending hospitals. Physicians with high communication skills may be more adept at using this information through the EHR, so only highly rated communicators might see benefits from using this information. Similarly, cost savings may only be seen with increased use of educational information found through the EHR at low-spending hospitals because less efficient hospitals may not have structures and workflows to use these tools as efficiently. Further research is necessary to explore these results.
Several of our results involved significant interaction terms based on EHR vendor, which either removed or reversed the main effects. This finding suggests that the particular solutions offered may differ by vendor and warrants further study.
Taken together, our results suggest that the MU performance measures used thus far do not straightforwardly estimate HVBP measures of patient satisfaction, efficiency, or safety. Although stage 2 of MU is largely in the past, stage 3 is the present for hospitals, and many of the performance measures for stage 3 are the same as those considered herein.43 Our results suggest that the current criteria may not be focusing on the right metrics to improve patient satisfaction, efficiency, and some measures of safety as measured by HVBP at all hospitals.
Strengths of our study include a large sample size and the use of quantile regression to explore associations at different levels of the outcomes. There are also several limitations. Owing to changes in the measures used, our time frame is limited to 2016 and reflects only part of the HVBP safety domain and none of the clinical care domain. We may not have been able to include some relevant factors in our models. Of note, the MU program only collects data about certified EHR technology, and thus our analysis does not take into account the potential effects of using noncertified or non-EHR systems. Moreover, there are previously described limitations to the validity of HVBP domains as measures of patient satisfaction and cost control.44,45 Our sample was limited to acute care hospitals in the United States because only they were eligible for the HVBP program, excluding many rural and critical access hospitals, which historically have struggled to implement EHR technology.46 Moreover, much of the data analyzed are self-submitted by hospitals, which may be a source of bias and error. In particular, our findings regarding health care–associated infection outcomes may not be generalizable because the MU measures of hospitals included in the models were different from those in the hospitals excluded because they did not submit data, and hospitals may not have submitted data for particular measures owing to low performance.
Although some MU performance measures were significantly associated with patient satisfaction, efficiency, and safety, most associations varied by the level of the outcomes. Moreover, the EHR vendor was an important interacting factor in several of our findings. Insofar as the MU program was founded on the belief that more EHR implementation will lead to better quality and safety, these results call for a critical evaluation of the criteria by which EHR implementation is measured and incentivized, as well as increased attention to understanding how the different features of EHR solutions may lead to differential outcomes.
Accepted for Publication: May 24, 2020.
Published: September 9, 2020. doi:10.1001/jamanetworkopen.2020.12529
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Murphy ZR et al. JAMA Network Open.
Corresponding Author: Michael V. Boland, MD, PhD, Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N Wolfe St, Wilmer 131, Baltimore, MD 21287 (email@example.com).
Author Contributions: Mr Murphy and Dr Boland had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Murphy, Boland.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Murphy, Boland.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Murphy, Wang.
Administrative, technical, or material support: Boland.
Conflict of Interest Disclosures: Dr Wang reported receiving grants from the National Eye Institute (NEI), National Institutes of Health, during the conduct of the study. Dr Boland reported receiving personal fees from Carl Zeiss Meditec, Inc, outside the submitted work. No other disclosures were reported.
Funding/Support: Research at the Wilmer Eye Institute, including biostatistical consultations, was supported by core grant EY001765 from the NEI and Research to Prevent Blindness.
Role of the Funder/Sponsor: The sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.