Electronic health record (EHR) usability, which is the extent to which EHRs support clinicians in achieving their goals in a satisfying, effective, and efficient manner, is a point of frustration for clinicians and can have patient safety consequences.1,2 However, specific usability issues and EHR clinical processes that contribute to possible patient harm across different health care facilities have not been identified.3 We analyzed reports of possible patient harm that explicitly mentioned a major EHR vendor or product.4
This study was approved by the MedStar Health Institutional Review Board with an informed consent waiver. Patient safety reports, which are free-text descriptions of safety events, were analyzed from 2013 through 2016. Reports were retrieved from the Pennsylvania Patient Safety Authority database, which collects reports from 571 health care facilities in Pennsylvania, and from a large multihospital academic health care system in the mid-Atlantic, outside of Pennsylvania. Reports were voluntarily entered by health care staff, mostly nurses, and included several sentences describing the safety event, contributing factors, and a categorization of the effect on the patient. This categorization indicates whether the event reached the patient (meaning additional health care services were required), whether there was harm at the time of reporting, or the potential for harm to the patient. The harm categories were (1) reached the patient and potentially required monitoring to preclude harm, (2) potentially caused temporary harm, (3) potentially caused permanent harm, and (4) could have necessitated intervention to sustain life or could have resulted in death.
Reports were included for analysis if 1 of the top 5 EHRs (vendors or products; based on the number of health care organizations attesting to meeting meaningful use requirements with that EHR) was mentioned and if the report was categorized as reaching the patient with possible harm.4 Two usability experts reviewed reports to determine whether each report contained explicit language associating the possible harm with an EHR usability issue. Usability-related reports were further categorized into 1 of 7 usability topics to describe the primary usability challenge, based on a synthesis of previous taxonomies, and into 1 of 4 EHR clinical processes, based on existing categories of EHR interactions (Table 1). A subset of the data (15%) was dually coded. Inter-rater reliability κ scores were 0.90 (usability as a contributing factor), 0.83 (usability category), and 0.87 (EHR clinical process).
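The κ statistics reported for the dual coding are presumably Cohen's kappa, which corrects raw agreement between two coders for agreement expected by chance. A minimal sketch of the computation, using hypothetical coder labels (not the study's data), might look like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected if the raters labeled independently.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: did usability contribute to the event? (Y/N)
coder_1 = ["Y", "Y", "N", "Y", "N", "N", "Y", "Y", "N", "Y"]
coder_2 = ["Y", "Y", "N", "Y", "N", "Y", "Y", "Y", "N", "Y"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.78
```

In practice a library routine (e.g., scikit-learn's `cohen_kappa_score`) would typically be used; the hand-rolled version above only illustrates the definition.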
Of 1.735 million reported safety events, 1956 (0.11%) explicitly mentioned an EHR vendor or product and were reported as possible patient harm, and 557 (0.03%) had language explicitly suggesting that EHR usability contributed to possible patient harm. The harm levels of the 557 reports were: reached the patient and potentially required monitoring to preclude harm (84%, n = 468), potentially caused temporary harm (14%, n = 80), potentially caused permanent harm (1%, n = 7), and could have necessitated intervention to sustain life or could have resulted in death (<1%, n = 2).
Of the 7 usability categories, challenges were data entry (27%, n = 152), alerting (22%, n = 122), interoperability (18%, n = 102), visual display (9%, n = 52), availability of information (9%, n = 50), system automation and defaults (8%, n = 43), and workflow support (7%, n = 36). Of the 4 EHR clinical processes, usability challenges occurred during order placement (38%, n = 213), medication administration (37%, n = 207), review of results (16%, n = 87), and documentation (9%, n = 50) (Table 2).
EHR usability may have been a contributing factor to some possible patient harm events. Only a small percentage of potential harm events were associated with EHR usability, but the analysis was conservative because safety reports only capture a small fraction of the actual number of safety incidents, and only reports with explicit mentions of the top 5 vendors or products were included.5
Patient safety reports contain limited information, making it difficult to identify causal factors, and may be subject to reporter bias, inaccuracies, and a tendency to attribute blame for an event to the EHR. Additional research is needed to determine causal relationships between EHR usability and patient harm and the frequency of occurrence. Although federal policies promote EHR usability and safety, additional research may support policy refinement.6
Accepted for Publication: January 26, 2018.
Corresponding Author: Raj Ratwani, PhD, National Center for Human Factors in Healthcare, MedStar Health, 3007 Tilden St NW, Ste 7L, Washington, DC 20008 (firstname.lastname@example.org).
Author Contributions: Dr Ratwani had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Howe, Adams, Ratwani.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: All authors.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Howe, Adams, Ratwani.
Obtained funding: Ratwani.
Administrative, technical, or material support: All authors.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
Funding/Support: This project was funded by grant R01 HS023701-02 from the Agency for Healthcare Research and Quality (AHRQ) of the US Department of Health and Human Services (Dr Ratwani).
Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: The opinions expressed in this document are those of the authors and do not reflect the official position of AHRQ or the US Department of Health and Human Services.
V. An appraisal of published usability evaluations of electronic health records via systematic review. J Am Med Inform Assoc. 2016;24(1):218-226.
LL. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21(10):541-548.
RJ. Electronic health record vendor adherence to usability certification requirements and testing standards. JAMA. 2015;314(10):1070-1071.