    Original Investigation
    Health Policy
    September 9, 2020

    Association of Electronic Health Record Use Above Meaningful Use Thresholds With Hospital Quality and Safety Outcomes

    Author Affiliations
    • 1Currently a medical student at Johns Hopkins University School of Medicine, Baltimore, Maryland
    • 2Department of Biostatistics, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland
    • 3Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
    • 4Division of Health Sciences Informatics, Johns Hopkins University School of Medicine, Baltimore, Maryland
    JAMA Netw Open. 2020;3(9):e2012529. doi:10.1001/jamanetworkopen.2020.12529
    Key Points

    Question  Is electronic health record implementation beyond meaningful use thresholds associated with changes in hospital measures of patient satisfaction, spending, and safety?

    Findings  In this cross-sectional analysis of 2362 hospitals using data from 2016, associations between meaningful use performance measures and Hospital Value-Based Purchasing Program measures of patient satisfaction, spending, and safety were evaluated. Mixed associations were found that varied depending on whether the hospital was in the lower, middle, or upper quantiles of the Hospital Value-Based Purchasing Program outcome.

    Meaning  These findings suggest that advanced levels of electronic health record implementation are not consistently associated with patient satisfaction, spending, and safety, and in some cases depend on the outcome quantile.

    Abstract

    Importance  By 2018, Medicare spent more than $30 billion to incentivize the adoption of electronic health records (EHRs), based partially on the belief that EHRs would improve health care quality and safety. In a time when most hospitals are well past minimum meaningful use (MU) requirements, examining whether EHR implementation beyond the minimum threshold is associated with increased quality and safety may guide the future focus of EHR development and incentive structures.

    Objective  To determine whether EHR implementation above MU performance thresholds is associated with changes in hospital patient satisfaction, efficiency, and safety.

    Design, Setting, and Participants  This quantile regression analysis of cross-sectional data used publicly available data sets from 2362 acute care hospitals in the United States participating in both the MU and Hospital Value-Based Purchasing (HVBP) programs from January 1 to December 31, 2016. Data were analyzed from August 1, 2019, to May 22, 2020.

    Exposures  Seven MU program performance measures, including medication and laboratory orders placed through the EHR, online health information availability and access rates, medication reconciliation through the EHR, patient-specific educational resources, and electronic health information exchange.

    Main Outcomes and Measures  The HVBP outcomes included patient satisfaction survey dimensions, Medicare spending per beneficiary, and 5 types of hospital-acquired infections.

    Results  Among the 2362 participating hospitals, mixed associations were found between MU measures and HVBP outcomes, all varying by outcome quantile and in some cases by interaction with EHR vendor. Computerized provider order entry (CPOE) for laboratory orders was associated with decreased ratings of every patient satisfaction outcome at middle quantiles (communication with nurses: β = −0.33 [P = .04]; communication with physicians: β = −0.50 [P < .001]; responsiveness of hospital staff: β = −0.57 [P = .03]; care transition performance: β = −0.66 [P < .001]; communication about medicines: β = −0.52 [P = .002]; cleanliness and quietness: β = −0.58 [P = .007]; discharge information: β = −0.48 [P < .001]; and overall rating: β = −0.95 [P < .001]). However, at middle quantiles, CPOE for medication orders was associated with increased ratings for communication with physicians (τ = 0.5; β = 0.54; P = .009), care transition (τ = 0.5; β = 1.24; P < .001), discharge information (τ = 0.5; β = 0.41; P = .01), and overall hospital ratings (τ = 0.5; β = 0.97; P = .02). At high quantiles, electronic health information exchange was associated with improved ratings of communication with nurses (τ = 0.9; β = 0.23; P = .03). Medication reconciliation had positive associations with increased communication with nursing at low quantiles (τ = 0.1; β = 0.60; P < .001), increased discharge information at middle quantiles (τ = 0.5; β = 0.28; P = .03), and responsiveness of hospital staff at middle (τ = 0.5; β = 0.77; P = .001) and high (τ = 0.9; β = 0.84; P = .001) quantiles. Patients accessing their health information online was not associated with any outcomes. Increased use of patient-specific educational resources identified through the EHR was associated with increased ratings of communication with physicians at high quantiles (τ = 0.9; β = 0.20; P = .02) and with decreased spending at low-spending hospitals (τ = 0.1; β = −0.40; P = .008).

    Conclusions and Relevance  Increasing EHR implementation, as measured by MU criteria, was not straightforwardly associated with increased HVBP measures of patient satisfaction, spending, and safety in this study. These results call for a critical evaluation of the criteria by which EHR implementation is measured and increased attention to how different EHR products may lead to differential outcomes.

    Introduction

    The HITECH (Health Information Technology for Economic and Clinical Health) Act of 2009 was motivated by the belief that electronic health records (EHRs) would improve health care quality and safety.1 The HITECH Act created financial incentives for hospitals to demonstrate “meaningful use” (MU) of EHRs by meeting minimum implementation and performance thresholds across an array of EHR functions.

    With more than $30 billion spent on the MU program (renamed Promoting Interoperability) by 2018,2 many studies have investigated whether the investment’s premise, that EHRs would improve hospital quality and safety, has paid off. This research has largely focused on comparisons between hospitals that attained the MU threshold and those that did not, which has revealed a divide between large, urban, academic hospitals that tended to achieve MU early and small, rural, nonacademic hospitals that lagged behind.3-5 Studies of patient satisfaction and the EHR in the inpatient setting have shown contradictory findings.6-14 Regarding cost control, attaining MU has not been found to affect expenditures per patient or hospital operating margins.15,16 The evidence that EHRs improve safety is stronger,17 but existing studies largely compared hospitals with full EHRs and hospitals with minimal or no EHRs.

    In treating EHR implementation as a dichotomy between MU attained or not, little research has investigated differences between hospitals that just pass the minimum thresholds to meet MU and those that far exceed the minimum thresholds. Existing studies showed hospitals successfully attesting nearer to the minimum thresholds tended to be small, rural, nonacademic hospitals, whereas those at the top of the performance measures tended to be large, urban, academic medical centers.18 Furthermore, among hospitals attesting to MU, the EHR vendor had mixed associations with 6 MU performance measures.19

    The heterogeneity of health systems, EHR products, and other factors contributing to the health care environment means that we must continually consider whether we are incentivizing the proper metrics to fully realize EHRs as a driver of quality and safety. In a time when most hospitals have EHR capabilities above the MU minimum thresholds, examining the association between EHR implementation above MU thresholds and quality and safety outcomes may provide insight into whether these MU metrics are still meeting their intended goal.

    Methods

    This study used publicly available data sets and therefore did not meet the US Department of Health and Human Services criteria for human subjects research and did not require approval by an institutional review board or informed consent. eTable 1 in the Supplement contains links to the data sources used. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

    We created a cross-sectional sample of acute care hospitals that attested to participation in the MU program and also participated in the Hospital Value-Based Purchasing Program (HVBP) from January 1 to December 31, 2016, then constructed quantile regression models to examine associations between MU performance measures and 14 outcomes of the HVBP covering patient satisfaction, spending, and safety domains. Although the data represent 2016, this analysis was performed from August 1, 2019, to May 22, 2020.

    Outcomes: HVBP Quality and Safety Domain Scores

    As measures of hospital patient satisfaction, efficiency, and safety, we used HVBP domain components. The HVBP is a Centers for Medicare & Medicaid Services (CMS) program that awards or penalizes acute care hospitals for safety and quality outcomes using payment adjustments,20 and data are publicly available through the Hospital Compare website.21,22 Hospitals receive domain scores, each composed of 1 or more components, some of which have changed over time. We have included brief descriptions of the components herein, and eTable 2 in the Supplement contains detailed descriptions.

    The engagement domain reflects patient satisfaction and is derived from the Hospital Consumer Assessment of Healthcare Providers and Systems survey, which is sent to a subset of inpatients after hospital discharge to assess dimensions of satisfaction, including ratings of communication with nurses, communication with physicians, responsiveness of hospital staff, care transition, communication about medicines, cleanliness and quietness, discharge information, and the hospital overall. Each dimension is reported as the percentage of respondents selecting the best possible response for the relevant questions, adjusted for patient-level characteristics. Higher scores indicate better satisfaction.

    The efficiency domain consists of a single measure, Medicare spending per beneficiary, which is reported as the ratio of the hospital’s mean price-standardized, risk-adjusted spending per care episode to the national median spending per episode. Lower scores indicate better efficiency.

    The safety domain consists of several measures of in-hospital infections, accidents, and injuries. Among the components, the health care–associated infection measures are amenable to modeling. These reflect risk-adjusted standardized infection ratios for central line–associated bloodstream infections, catheter-associated urinary tract infections, surgical site infections (SSIs) after colon surgery, SSIs after abdominal hysterectomy, methicillin-resistant Staphylococcus aureus bacteremia, and Clostridioides difficile infection. These data are reported as ratios between observed and estimated infection rates. Lower scores indicate better safety.
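
    For concreteness, the two ratio-based outcome types above can be written compactly; the notation is ours, not from the HVBP documentation.

```latex
% Efficiency domain: Medicare spending per beneficiary (lower is better)
\text{MSPB ratio} = \frac{\text{hospital mean price-standardized, risk-adjusted spending per episode}}{\text{national median spending per episode}}

% Safety domain: standardized infection ratio for each HAI measure (lower is better)
\text{SIR} = \frac{\text{observed infections}}{\text{estimated infections}}
```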

    The clinical care domain reflects 30-day all-cause mortality for 3 admission diagnoses. Although this domain is an important measure of hospital quality and safety, data for this domain have not been released for 2016, and thus we were unable to include it. The eMethods, eTable 3, and the eFigure in the Supplement contain a detailed discussion of our choice of outcomes, which was constrained by frequent changes in both MU and outcome measures over time.

    MU Performance Measures

    Each stage of MU has a set of EHR performance measures for which hospitals must submit data. However, these requirements changed over time, resulting in 4 overlapping sets of measures. We used CMS documentation to link identical measures across data sets and determined that 2016 was the best time frame to analyze.23-26 The eMethods in the Supplement includes details. This process resulted in 9 MU measures included as potential factors used to estimate performance (Table 1).

    The MU attestation data are available through CMS public use files.27 The MU program also maintains a public file of the EHR products used by each hospital.28 Because some EHR vendors split software packages into separate products while others offer a unified product, we examined EHR use at the level of EHR vendor. We used this in conjunction with the Certified Health IT Product List, which contains information about the functionality of each EHR product, to profile the EHR functionality of each hospital.29 We used crosswalks published by CMS to combine the various versions of Certified Health IT Product List criteria into a unified set, then calculated the mean percentage of criteria met by each EHR vendor per hospital.30
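
    As a minimal sketch of this profiling step, the following assumes hypothetical long-format extracts of the Certified Health IT Product List and the MU products file; all file and column names are placeholders of ours, not CMS or ONC identifiers.

```python
import pandas as pd

# Hypothetical long-format tables: one row per (product, criterion) pairing,
# and the CMS crosswalk mapping version-specific criteria to unified names.
chpl = pd.read_csv("chpl_criteria_long.csv")         # product_id, vendor, criterion
crosswalk = pd.read_csv("criteria_equivalency.csv")  # criterion, unified_criterion

unified = chpl.merge(crosswalk, on="criterion")
n_unified = crosswalk["unified_criterion"].nunique()

# Percentage of the unified criteria set met by each certified product
pct_met = (unified.groupby(["product_id", "vendor"])["unified_criterion"]
                  .nunique()
                  .div(n_unified)
                  .mul(100)
                  .rename("pct_criteria_met")
                  .reset_index())

# Mean percentage met across the products of each vendor used at each hospital
products_by_hospital = pd.read_csv("mu_products_by_hospital.csv")  # ccn, product_id
coverage = (products_by_hospital.merge(pct_met, on="product_id")
                                .groupby(["ccn", "vendor"])["pct_criteria_met"]
                                .mean()
                                .reset_index())
```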

    Hospital Characteristics

    We controlled for EHR-related characteristics, including years of MU program participation, number of EHR vendors used, and the mean percentage of EHR product certification criteria met by each EHR vendor used by each hospital. We also adjusted for hospital characteristics, including ownership, location, and hospital identifiers from Hospital Compare data22; number of beds, inpatient revenue, payor mix, and teaching status from CMS cost reports31; case-mix index from CMS32; Magnet status33; and the urban-rural scale for US counties from the National Center for Health Statistics.34

    Statistical Analysis

    Data for each hospital were linked using the CMS certification number. Because the Magnet program data set did not include CMS certification numbers, we manually matched each Magnet recipient to its CMS certification number.

    Continuous variables (MU measures, HVBP outcomes, total inpatient revenue, Medicare and Medicaid discharge percentages, vendor count, and case-mix index) were examined for outliers greater than 3 SDs from the mean. In cases where outliers could be corroborated as data entry errors through hospital websites or American Hospital Directory Free Hospital Profiles (limited hospital profiles based on public and proprietary data),35 they were replaced with values from 2015 data. In cases where an outlier could not be confirmed as an entry error, it was retained. Records with missing independent or control variables were removed from the analysis. Records with missing outcome data were removed from the specific model for that outcome, and their baseline characteristics were compared with those of included hospitals using 2-sample t tests. Pairwise correlations less than 0.7 and variance inflation factors less than 10 were considered acceptable for performance measures.
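
    A minimal sketch of this linkage and outlier screen follows, assuming hypothetical file and column names, with ccn standing in for the CMS certification number carried by each data set.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

mu = pd.read_csv("mu_measures_2016.csv")      # MU attestation measures
hvbp = pd.read_csv("hvbp_outcomes_2016.csv")  # HVBP outcome scores
df = mu.merge(hvbp, on="ccn", how="inner")    # link by CMS certification number

continuous_cols = ["inpatient_revenue", "case_mix_index"]  # examples only
for col in continuous_cols:
    mean, sd = df[col].mean(), df[col].std()
    # Flag values more than 3 SDs from the mean for manual corroboration;
    # confirmed entry errors were replaced with 2015 values, others retained.
    df[col + "_outlier"] = (df[col] - mean).abs() > 3 * sd

# Drop records missing independent or control variables
predictors = ["cpoe_lab_pct", "med_recon_pct"]  # illustrative MU measures
df = df.dropna(subset=predictors)

# Variance inflation factors < 10 were considered acceptable
vifs = [variance_inflation_factor(df[predictors].values, i)
        for i in range(len(predictors))]
```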

    Characteristics of the sample were summarized as means and SDs, median and interquartile range, or frequencies and percentages. The most frequently used EHR vendors were identified by examining how many hospitals used each vendor during 2016.
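
    A descriptive table of this kind can be produced with the tableone package cited herein (reference 38); this brief sketch uses hypothetical column names.

```python
import pandas as pd
from tableone import TableOne

df = pd.read_csv("analysis_sample.csv")  # hypothetical linked sample

# Means and SDs, medians and IQRs (nonnormal), and frequencies and percentages
table2 = TableOne(
    df,
    columns=["beds", "inpatient_revenue", "case_mix_index", "ownership"],
    categorical=["ownership"],
    nonnormal=["inpatient_revenue"],  # summarized as median and IQR
)
print(table2.tabulate(tablefmt="github"))

# Most frequently used EHR vendors: count of hospitals per vendor in 2016
vendor_counts = df.groupby("vendor")["ccn"].nunique().sort_values(ascending=False)
print(vendor_counts.head(4))
```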

    Quantile regression models were constructed for each outcome using the Statsmodels module, version 0.9.0, in Python, version 3.7 (Python Software Foundation). Quantile regression examines associations between variables used to estimate outcomes and a continuous outcome at different quantiles of the outcome.36,37 At each quantile τ, a model is produced with coefficients for each variable used to estimate outcomes, which allows us to examine different associations between the variables and the outcome at different levels of the outcome. Unit changes for all MU performance measures, percentage of Medicare/Medicaid discharges, and EHR product feature coverage were set at 10%. Unit changes for all outcomes were set at 1%. Interactions between each performance measure and the 4 most commonly used EHR vendors were included, as well as between EHR vendor and number of beds. We examined results at 3 quantiles (0.1, 0.5, and 0.9) for each outcome, selected a priori to represent low, middle, and high outcome performance. We used a Bonferroni correction to account for multiple outcomes by multiplying unadjusted P values by the number of outcomes (14) and reporting 99.6% CIs. Corrected 2-sided P < .05 was considered significant.
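
    This modeling step can be sketched with the statsmodels QuantReg class cited above; data frame and column names are hypothetical, and interaction terms would enter as additional columns of the design matrix.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

N_OUTCOMES = 14              # number of HVBP outcomes modeled
QUANTILES = (0.1, 0.5, 0.9)  # low, middle, high performance, chosen a priori

def fit_quantile_models(df, outcome, predictors):
    """Fit one quantile regression per tau with Bonferroni-corrected inference."""
    X = sm.add_constant(df[predictors])
    y = df[outcome]
    results = {}
    for tau in QUANTILES:
        res = sm.QuantReg(y, X).fit(q=tau)
        # Bonferroni: multiply unadjusted P values by the number of outcomes
        p_adjusted = np.minimum(res.pvalues * N_OUTCOMES, 1.0)
        # 99.6% CIs correspond to alpha = 0.05 / 14
        ci = res.conf_int(alpha=0.05 / N_OUTCOMES)
        results[tau] = pd.DataFrame({
            "beta": res.params,
            "p_adjusted": p_adjusted,
            "ci_low": ci[0],
            "ci_high": ci[1],
        })
    return results
```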

    Results
    Sample Characteristics

    A total of 2362 hospitals were included in the sample. Descriptive statistics are shown in Table 2.38 Three data entry errors were replaced with 2015 values (eTable 4 in the Supplement). We found that of the 165 EHR vendors used, the 4 most frequently used were Epic Systems Corporation (Epic; 585 [24.8%]), Medical Information Technology, Inc (Meditech; 575 [24.3%]), Cerner Corporation (Cerner; 546 [23.1%]), and McKesson Corporation (McKesson; 283 [12.0%]).

    Quantile Regression

    Computerized provider (physicians and nonphysician licensed clinicians) order entry (CPOE) for laboratory orders and CPOE for radiology orders were highly correlated (Pearson correlation, 0.76), so the latter was excluded. Among all the other variables used to estimate outcomes, all pairwise correlations and variance inflation factor values were within acceptable ranges. Only 1365 hospitals (57.8%) submitted data for the electronic prescribing measure, so this measure was omitted. Only 729 hospitals (30.8%) submitted data for SSI after abdominal hysterectomy, and therefore this outcome was omitted. Hospitals with missing health care–associated infection outcome data had MU measures that were significantly different from those of hospitals included in the models (eTable 5 in the Supplement).

    Table 3 contains adjusted regression coefficients for performance measures at the 10th, 50th, and 90th percentiles. eTables 6 and 7 in the Supplement contain complete adjusted results.

    Computerized provider order entry for laboratory orders was associated with decreased performance on every patient satisfaction outcome at middle quantiles (Table 3). However, these decreases were not present in the discharge information outcome for hospitals using McKesson (interaction: τ = 0.5; β = 0.47; P = .006) or Meditech (interaction: τ = 0.5; β = 0.46; P = .02). CPOE for medication orders was associated with improved communication with physicians (τ = 0.5; β = 0.54; P = .009), care transition (τ = 0.5; β = 1.24; P < .001), discharge information (τ = 0.5; β = 0.41; P = .01), and overall hospital ratings (τ = 0.5; β = 0.97; P = .02) at middle quantiles.

    At high quantiles, electronic health information exchange was associated with improved communication with nurses (τ = 0.9; β = 0.23; P = .03) and responsiveness of hospital staff (τ = 0.9; β = 0.56; P < .001), but also with increased rates of central line–associated bloodstream infections (τ = 0.9; β = 5.23; P = .03).

    Medication reconciliation was associated with increased communication with nursing at low quantiles (τ = 0.1; β = 0.60; P < .001), increased discharge information at middle quantiles (τ = 0.5; β = 0.28; P = .03), and increased responsiveness of hospital staff at middle (τ = 0.5; β = 0.77; P = .001) and high (τ = 0.9; β = 0.84; P = .001) quantiles. However, the concurrent use of Epic was associated with a reverse in these associations wherein increased medication reconciliation was associated with decreased communication with nursing at low quantiles (interaction: τ = 0.1; β = −1.19; P = .005) and decreased responsiveness of staff ratings at middle quantiles (interaction: τ = 0.5; β = −1.37; P = .02).

    Patients accessing their information online was not significantly associated with any outcome. However, having patients’ health information online, whether accessed or not, was associated with an increase in SSIs after colon surgery at high quantiles (τ = 0.9; β = 12.45; P = .03).

    Patient-specific educational resources were associated with increased communication with physicians at high quantiles (τ = 0.9; β = 0.20; P = .02); however, a reverse association was found with concurrent use of Cerner (interaction: τ = 0.9; β = −0.36; P = .02) or McKesson (interaction: τ = 0.9; β = −0.36; P = .02). In addition, patient-specific educational resources were associated with decreased spending at low spending hospitals (τ = 0.1; β = −0.40; P = .008).

    Discussion

    This study is the first of which we are aware to assess whether EHR implementation above MU thresholds is associated with HVBP outcomes. Our results suggest that EHR use above minimal MU requirements has small, mixed associations with HVBP engagement, efficiency, and safety outcomes that in some cases depend on the EHR vendor.

    Although increased use of CPOE for medications was associated with improved patient satisfaction in some areas, increased CPOE for laboratory tests was associated with lower satisfaction in all areas. This finding suggests that studies of CPOE must look at these distinct order types rather than CPOE as a single entity. Although CPOE for medication and laboratory orders is commonly unified by the EHR, the workflows for each activity diverge almost immediately. Systems factors beyond the CPOE system may contribute to these opposing associations, and more research is therefore necessary to explain these findings.

    Our finding of no association between patients accessing their information online and cost savings is consistent with past research.16 Our finding that having patients’ health information online, whether accessed or not, was associated with an increase in SSIs after colon surgery is most likely the result of an unidentified confounding variable, because there is no straightforward theory as to why these would be associated.

    Past research has shown electronic health information exchange to be associated with better patient satisfaction and cost control.39,40 Although we did not find associations with cost savings, we did find positive associations with patient satisfaction, in particular communication with nurses and responsiveness of hospital staff. Communication with nursing is vital at admission and discharge, and increased electronic transmission of health records may facilitate data gathering and nurse-patient communication that occurs during these times, resulting in higher ratings of communication.

    Past research41 has examined medication reconciliation and patient satisfaction as independent outcomes in the context of transition-of-care interventions, but no past research has looked at specific associations between medication reconciliation performed through the EHR and patient satisfaction. We found that medication reconciliation was associated with several dimensions of patient satisfaction related to admission and discharge, when medication reconciliation would be performed. However, these associations were scattered among low, middle, and high quantiles. It is unclear why these associations were not more consistent across patient satisfaction dimensions and across quantiles. Moreover, past research42 has found cost savings associated with pharmacist-led interventions involving medication reconciliation, so we were surprised not to find this association at any quantile.

    Identifying patient educational information through the EHR was associated with higher ratings of communication with physicians at high quantiles, but was associated with decreased Medicare spending per beneficiary only at low-spending hospitals. Physicians with high communication skills may be more adept at using this information through the EHR, so only highly rated communicators might see benefits from using this information. Similarly, cost savings may only be seen with increased use of educational information found through the EHR at low-spending hospitals because less efficient hospitals may not have structures and workflows to use these tools as efficiently. Further research is necessary to explore these results.

    Several of our results are associated with significant interaction terms based on EHR vendor, which either removed or reversed the main effects. This finding suggests that the particular solutions offered may differ by vendor and warrant further study.

    Taken together, our results suggest that the MU performance measures used thus far do not straightforwardly estimate HVBP measures of patient satisfaction, efficiency, or safety. Although stage 2 of MU is largely in the past, hospitals are now attesting to stage 3, and many of the performance measures for stage 3 are the same as those considered herein.43 Our results suggest that the current criteria may not be focusing on the right metrics to improve patient satisfaction, efficiency, and some measures of safety as measured by HVBP at all hospitals.

    Strengths and Limitations

    Strengths of our study include a large sample size and the use of quantile regression to explore associations at different levels of the outcomes. There are also several limitations. Owing to changes in the measures used, our time frame is limited to 2016 and reflects only part of the HVBP safety domain and none of the clinical care domain. We may not have been able to include some relevant factors in our models. Of note, the MU program only collects data about certified EHR technology, and thus our analysis does not take into account the potential effects of using noncertified or non-EHR systems. Moreover, there are previously described limitations to the validity of HVBP domains as measures of patient satisfaction and cost control.44,45 Our sample was limited to acute care hospitals in the United States because only they were eligible for the HVBP program, excluding many rural and critical access hospitals, which historically have struggled to implement EHR technology.46 Moreover, much of the data analyzed are self-submitted by hospitals, which may be a source of bias and error. In particular, our findings regarding health care–associated infection outcomes may not be generalizable because the MU measures of hospitals included in the models were different from those in the hospitals excluded because they did not submit data, and hospitals may not have submitted data for particular measures owing to low performance.

    Conclusions

    Although some MU performance measures were significantly associated with patient satisfaction, efficiency, and safety, most associations varied by the level of the outcomes. Moreover, the EHR vendor was an important interacting factor in several of our findings. Insofar as the MU program was founded on the belief that more EHR implementation will lead to better quality and safety, these results call for a critical evaluation of the criteria by which EHR implementation is measured and incentivized, as well as increased attention to understanding how the different features of EHR solutions may lead to differential outcomes.

    Article Information

    Accepted for Publication: May 24, 2020.

    Published: September 9, 2020. doi:10.1001/jamanetworkopen.2020.12529

    Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Murphy ZR et al. JAMA Network Open.

    Corresponding Author: Michael V. Boland, MD, PhD, Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N Wolfe St, Wilmer 131, Baltimore, MD 21287 (boland@jhu.edu).

    Author Contributions: Mr Murphy and Dr Boland had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

    Concept and design: Murphy, Boland.

    Acquisition, analysis, or interpretation of data: All authors.

    Drafting of the manuscript: Murphy, Boland.

    Critical revision of the manuscript for important intellectual content: All authors.

    Statistical analysis: Murphy, Wang.

    Administrative, technical, or material support: Boland.

    Supervision: Boland.

    Conflict of Interest Disclosures: Dr Wang reported receiving grants from the National Eye Institute (NEI), National Institutes of Health, during the conduct of the study. Dr Boland reported receiving personal fees from Carl Zeiss Meditec, Inc, outside the submitted work. No other disclosures were reported.

    Funding/Support: Research at the Wilmer Eye Institute, including biostatistical consultations, was supported by core grant EY001765 from the NEI and Research to Prevent Blindness.

    Role of the Funder/Sponsor: The sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

    References
    1.
    Blumenthal D. Launching HITECH. N Engl J Med. 2010;362(5):382-385. doi:10.1056/NEJMp0912825
    2.
    Centers for Medicare & Medicaid Services (CMS). Promoting interoperability programs: data and program reports. Published May 14, 2019. Accessed September 14, 2019. https://www.cms.gov/regulations-and-guidance/legislation/ehrincentiveprograms/dataandreports.html
    3.
    Adler-Milstein J, Holmgren AJ, Kralovec P, Worzala C, Searcy T, Patel V. Electronic health record adoption in US hospitals: the emergence of a digital “advanced use” divide. J Am Med Inform Assoc. 2017;24(6):1142-1148. doi:10.1093/jamia/ocx080
    4.
    Sandefer RH, Marc DT, Kleeberg P. Meaningful use attestations among US hospitals: the growing rural-urban divide. Perspect Health Inf Manag. 2015;12(Spring):1f.
    5.
    Kruse CS, DeShazo J, Kim F, Fulton L. Factors associated with adoption of health information technology: a conceptual model based on a systematic review. JMIR Med Inform. 2014;2(1):e9. doi:10.2196/medinform.3106
    6.
    Hessels A, Flynn L, Cimiotti JP, Bakken S, Gershon R. Impact of health information technology on the quality of patient care. Online J Nurs Inform. 2015;19:19.
    7.
    Mitchell JP. Electronic healthcare’s relationship with patient satisfaction and communication. J Healthc Qual. 2016;38(5):296-303. doi:10.1097/01.JHQ.0000462678.02018.92
    8.
    Restuccia JD, Cohen AB, Horwitt JN, Shwartz M. Hospital implementation of health information technology and quality of care: are they related? BMC Med Inform Decis Mak. 2012;12:109. doi:10.1186/1472-6947-12-109
    9.
    Manta CJ, Caplan R, Goldsack J, Smith S, Robinson E. The impact of health information technologies on patient satisfaction. Am J Accountable Care. 2016;4(4):9-15.
    10.
    Jarvis B, Johnson T, Butler P, et al. Assessing the impact of electronic health records as an enabler of hospital quality and patient satisfaction. Acad Med. 2013;88(10):1471-1477. doi:10.1097/ACM.0b013e3182a36cab
    11.
    Migdal CW, Namavar AA, Mosley VN, Afsar-manesh N. Impact of electronic health records on the patient experience in a hospital setting. J Hosp Med. 2014;9(10):627-633. doi:10.1002/jhm.2240
    12.
    Irani JS, Middleton JL, Marfatia R, Omana ET, D’Amico F. The use of electronic health records in the exam room and patient satisfaction: a systematic review. J Am Board Fam Med. 2009;22(5):553-562. doi:10.3122/jabfm.2009.05.080259
    13.
    Street RL Jr, Liu L, Farber NJ, et al. Keystrokes, mouse clicks, and gazing at the computer: how physician interaction with the EHR affects patient participation. J Gen Intern Med. 2018;33(4):423-428. doi:10.1007/s11606-017-4228-2
    14.
    Street RL Jr, Liu L, Farber NJ, et al. Provider interaction with the electronic health record: the effects on patient-centered communication in medical encounters. Patient Educ Couns. 2014;96(3):315-319. doi:10.1016/j.pec.2014.05.004
    15.
    Adler-Milstein J, Everson J, Lee S-YD. EHR adoption and hospital performance: time-related effects. Health Serv Res. 2015;50(6):1751-1771. doi:10.1111/1475-6773.12406
    16.
    Collum TH, Menachemi N, Sen B. Does electronic health record use improve hospital financial performance? evidence from panel data. Health Care Manage Rev. 2016;41(3):267-274. doi:10.1097/HMR.0000000000000068
    17.
    Yanamadala S, Morrison D, Curtin C, McDonald K, Hernandez-Boussard T. Electronic health records and quality of care: an observational study modeling impact on mortality, readmissions, and complications. Medicine (Baltimore). 2016;95(19):e3332. doi:10.1097/MD.0000000000003332
    18.
    Vincent AG, Kirk A, Augustine C. EHR incentive programs: 2011 meaningful use census. Centers for Medicare & Medicaid Services. Published November 2012. Accessed September 15, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/WhitePaper2011_MeaningfulUse_CensusRemediated.pdf
    19.
    Holmgren AJ, Adler-Milstein J, McCullough J. Are all certified EHRs created equal? assessing the relationship between EHR vendor and hospital meaningful use performance. J Am Med Inform Assoc. 2018;25(6):654-660. doi:10.1093/jamia/ocx135
    20.
    Medicare Learning Network. Hospital value-based purchasing. Published September 2017. Accessed September 1, 2019. https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNProducts/downloads/Hospital_VBPurchasing_Fact_Sheet_ICN907664.pdf
    21.
    Centers for Medicare & Medicaid Services (CMS). Medicare Hospital Compare overview. Accessed September 14, 2019. https://www.medicare.gov/hospitalcompare/About/What-Is-HOS.html
    22.
    Centers for Medicare & Medicaid Services (CMS). Hospital Compare datasets. Data.Medicare.Gov. Published July 25, 2018. Accessed September 15, 2019. https://data.medicare.gov/data/hospital-compare
    23.
    Centers for Medicare & Medicaid Services (CMS). Eligible hospital and CAH meaningful use table of contents core and menu set objectives, stage 1 (2013 definition). Updated July 2014. Accessed August 1, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/downloads/Hosp_CAH_MU-TOC.pdf
    24.
    Centers for Medicare & Medicaid Services (CMS). Stage 2 eligible hospital and critical access hospital (CAH) meaningful use core and menu objectives table of contents. Published October 2012. Accessed August 1, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/Stage2_MeaningfulUseSpecSheet_TableContents_EligibleHospitals_CAHs.pdf
    25.
    Centers for Medicare & Medicaid Services (CMS). Eligible hospitals modified stage 2 for stage 1 attestation public use file (PUF) data dictionary and codebook. Published April 2016. Accessed August 1, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/EH_PUF_DataDictionaryCodebookStage1_2.zip
    26.
    Centers for Medicare & Medicaid Services (CMS). Eligible hospitals modified stage 2 for stage 2 attestation public use file (PUF) data dictionary and codebook. Published April 2016. Accessed August 1, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/EH_PUF_DataDictionaryCodebookStage1_2.zip
    27.
    Centers for Medicare & Medicaid Services (CMS). Meaningful use data: public use files. Modified February 11, 2020. Accessed August 1, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/PUF.html
    28.
    Office of the National Coordinator for Health Information Technology. EHR products used for meaningful use attestation. Updated January 10, 2019. Accessed August 15, 2019. https://dashboard.healthit.gov/datadashboard/documentation/ehr-products-mu-attestation-data-documentation.php
    29.
    Office of the National Coordinator for Health Information Technology. Certified health IT product list (CHPL): public user guide. Accessed September 14, 2019. https://www.healthit.gov/sites/default/files/policy/chpl_public_user_guide.pdf
    30.
    Office of the National Coordinator for Health Information Technology. 2011 and 2014 certification criteria equivalency table. Published March 20, 2012. Accessed September 20, 2019. https://www.healthit.gov/sites/default/files/equivtable021913_0.pdf
    31.
    Centers for Medicare & Medicaid Services (CMS). Cost reports by fiscal year. Modified April 16, 2020. Accessed August 7, 2018. https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/Cost-Reports/Cost-Reports-by-Fiscal-Year.html
    32.
    Centers for Medicare & Medicaid Services (CMS). Case mix index. Published August 14, 2017. Accessed May 13, 2020. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Acute-Inpatient-Files-for-Download-Items/CMS022630
    33.
    American Nurses Credentialing Center. Find a Magnet facility. Accessed September 15, 2019. https://www.nursingworld.org/organizational-programs/magnet/find-a-magnet-facility/
    34.
    National Center for Health Statistics. NCHS urban-rural classification scheme for counties. Reviewed June 1, 2017. Accessed September 15, 2019. https://www.cdc.gov/nchs/data_access/urban_rural.htm
    35.
    American Hospital Directory. Advanced search. Accessed August 14, 2019. https://www.ahd.com/search.php
    36.
    Davino C, Furno M, Vistocco D. Quantile Regression: Theory and Applications. Vol I. John Wiley & Sons; 2013.
    37.
    Davino C, Furno M, Vistocco D. Quantile Regression: Theory and Applications. Vol II. John Wiley & Sons; 2014.
    38.
    Pollard TJ, Johnson AEW, Raffa JD, Mark RG. tableone: an open source Python package for producing summary statistics for research papers. JAMIA Open. 2018;1(1):26-31. doi:10.1093/jamiaopen/ooy012
    39.
    Rahurkar S, Vest JR, Menachemi N. Despite the spread of health information exchange, there is little evidence of its impact on cost, use, and quality of care. Health Aff (Millwood). 2015;34(3):477-483. doi:10.1377/hlthaff.2014.0729
    40.
    Vest JR, Miller TR. The association between health information exchange and measures of patient satisfaction. Appl Clin Inform. 2011;2(4):447-459. doi:10.4338/ACI-2011-06-RA-0040
    41.
    Brantley AF, Rossi DM, Barnes-Warren S, Francisco JC, Schatten I, Dave V. Bridging gaps in care: implementation of a pharmacist-led transitions-of-care program. Am J Health Syst Pharm. 2018;75(5, suppl 1):S1-S5. doi:10.2146/ajhp160652
    42.
    Patel E, Pevnick JM, Kennelty KA. Pharmacists and medication reconciliation: a review of recent literature. Integr Pharm Res Pract. 2019;8:39-45. doi:10.2147/IPRP.S169727
    43.
    Centers for Medicare & Medicaid Services (CMS). Medicare promoting interoperability program eligible hospitals, critical access hospitals, and dual-eligible hospitals objectives and measures for 2019. Accessed May 24, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/TableofContents_EH_Medicare_2019.pdf
    44.
    Westbrook KW, Babakus E, Grant CC. Measuring patient-perceived hospital service quality: validity and managerial usefulness of HCAHPS scales. Health Mark Q. 2014;31(2):97-114. doi:10.1080/07359683.2014.907114
    45.
    Lamboy Ruiz MA, No WG, Watanabe OV. Discrepancies in hospital financial information: comparison of financial data in state data repositories and the healthcare cost reporting information system. J Inf Syst. 2019;33(3):19-44. doi:10.2308/isys-52149
    46.
    Kim J, Ohsfeldt RL, Gamm LD, Radcliff TA, Jiang L. Hospital characteristics are associated with readiness to attain stage 2 meaningful use of electronic health records. J Rural Health. 2017;33(3):275-283. doi:10.1111/jrh.12193