Figure. Vendor-reported electronic health record (EHR) SUS scores for 2014 and 2015 certified products are compared with average benchmark (dotted line) and above-average benchmark (solid line) SUS scores.
Gomes KM, Ratwani RM. Evaluating Improvements and Shortcomings in Clinician Satisfaction With Electronic Health Record Usability. JAMA Netw Open. 2019;2(12):e1916651. doi:10.1001/jamanetworkopen.2019.16651
With the widespread adoption of electronic health records (EHRs), there is increased focus on addressing the challenges of EHR usability, ie, the extent to which the technology enables users to achieve their goals effectively, efficiently, and satisfactorily.1 Poor usability is associated with clinician job dissatisfaction and burnout and could have patient safety consequences.2-4
The US Department of Health and Human Services Office of the National Coordinator for Health Information Technology established safety-enhanced design certification requirements for EHRs to promote usability. These requirements stipulate that vendors must conduct and report results of formal usability testing, including measuring satisfaction with the EHR system.5 Results are publicly available. While some vendors use a 5-point ease-of-use rating scale, most vendors use the system usability scale (SUS), a validated posttest questionnaire that measures user satisfaction with product usability.6 The questionnaire provides a score (range, 0-100) based on a participant’s rating of 10 statements regarding a product’s usability.6 Higher scores indicate greater satisfaction with usability.6 Based on an analysis of more than 200 studies of various products in various industries, an SUS score of 68 is considered the average benchmark, and an SUS score of 80 is considered the above-average benchmark.6 Recognizing the importance of satisfaction with EHR usability to clinician burnout and patient safety, we compared reported 2015 SUS scores for EHR products with 2014 SUS scores and with these benchmarks to evaluate whether satisfaction is improving.2-4
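The SUS arithmetic behind the 0-100 score is standard: each of the 10 items is rated 1 to 5; odd-numbered (positively worded) items contribute the rating minus 1, even-numbered (negatively worded) items contribute 5 minus the rating, and the sum is multiplied by 2.5. A minimal sketch in Python (the function name is ours, for illustration only):

```python
def sus_score(ratings):
    """Compute a System Usability Scale score (0-100) from ten
    Likert ratings (1-5), using Brooke's standard scoring:
    odd items contribute (rating - 1), even items (5 - rating)."""
    if len(ratings) != 10 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("SUS requires ten ratings in the range 1-5")
    odd_items = sum(r - 1 for r in ratings[0::2])   # items 1, 3, 5, 7, 9
    even_items = sum(5 - r for r in ratings[1::2])  # items 2, 4, 6, 8, 10
    return (odd_items + even_items) * 2.5

# A uniformly neutral respondent (all 3s) scores exactly 50.
print(sus_score([3] * 10))  # 50.0
```

Because the 2.5 multiplier rescales a 0-40 raw sum to 0-100, individual SUS scores always fall on multiples of 2.5; the decimals seen in reported vendor scores arise from averaging across test participants.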
Under the Common Rule, institutional review board approval was not required for this study because it used publicly available data sets that contain no protected human participant information. This report followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
We identified the 70 EHR vendors with the most attestations to meaningful use from health care facilities between July 1, 2016, and April 30, 2018. For inclusion in the analysis, a vendor must have had an EHR product with computerized provider order entry functionality that was certified according to the safety-enhanced design criterion and had reported SUS scores under both the 2014 and 2015 certification requirements. For each vendor, we retrieved the usability report for the most recent version of the product meeting the 2014 certification requirements (ie, before January 14, 2016, when the 2015 certification requirements became effective) and the usability report for the most recent version of the product meeting the 2015 certification requirements, and we analyzed the SUS scores. A paired t test, with a 2-tailed P < .05 indicating statistical significance, was used to determine differences in SUS scores between 2014 and 2015, with means and standard deviations reported. All statistical analyses were performed with SPSS statistical software version 25 (IBM Corp).
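The paired t test described above compares each vendor's 2014 and 2015 scores as matched observations. Although the published analysis used SPSS, the statistic can be sketched with only the Python standard library (function name and data are illustrative):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic and degrees of freedom for matched samples,
    e.g., each vendor's 2014 vs 2015 SUS scores. t is the mean of the
    per-pair differences divided by its standard error."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))  # stdev uses n-1 denominator
    return t, n - 1

# Illustrative (not the study's) scores for three vendors:
t, df = paired_t([70, 72, 74], [75, 71, 80])
```

A two-tailed P value would then be obtained from the t distribution with n − 1 degrees of freedom (eg, via `scipy.stats.ttest_rel`, which computes t and P in one call).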
A total of 27 vendors met the inclusion criteria. Mean (SD) SUS scores for 2014 and 2015 products did not differ significantly (73.2 [16.6] vs 75.0 [14.2]; t(26) = 0.674; P = .51). Compared with benchmarks, 9 of the 2014 products (33%) were below the average benchmark SUS score of 68, 18 (67%) were at or above average, and 11 (41%) met or exceeded the above-average benchmark score of 80 (Figure). For 2015 products, 7 (26%) were below the average benchmark, 20 (74%) were at or above average, and 12 (44%) met or exceeded the above-average benchmark. Between 2014 and 2015, SUS scores decreased for 12 products (44%), increased for 13 (48%), and were unchanged for 2 (7%).
There was no statistically significant improvement in EHR SUS scores between products certified according to the 2014 and 2015 standards. One-third of 2014 products and one-quarter of 2015 products fell below the average benchmark SUS score. Despite the implications of EHR dissatisfaction for clinician burnout and patient safety, SUS scores decreased for 44% of vendors from 2014 to 2015.2-4
This study has limitations. Vendor-reported SUS scores may not reflect satisfaction with implemented EHRs, and only a subset of vendors was analyzed because of differences in methods for measuring satisfaction.
Based on vendor-reported SUS scores, clinician satisfaction with EHR usability is not improving for many widely used products. An increased focus on clinician end users during product design and development as well as optimized certification requirements are needed to improve usability.
Accepted for Publication: October 11, 2019.
Published: December 13, 2019. doi:10.1001/jamanetworkopen.2019.16651
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Gomes KM et al. JAMA Network Open.
Corresponding Author: Raj M. Ratwani, PhD, MedStar Health National Center for Human Factors in Healthcare, MedStar Health, 3007 Tilden St, Ste 7M, Washington, DC 20008 (email@example.com).
Author Contributions: Ms Gomes had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Both authors.
Acquisition, analysis, or interpretation of data: Both authors.
Drafting of the manuscript: Both authors.
Critical revision of the manuscript for important intellectual content: Gomes.
Statistical analysis: Gomes.
Obtained funding: Ratwani.
Administrative, technical, or material support: Ratwani.
Conflict of Interest Disclosures: Dr Ratwani reported receiving grants from the Agency for Healthcare Research and Quality outside the submitted work. No other disclosures were reported.
Funding/Support: This work was supported by grant number R01 HS025136 from the Agency for Healthcare Research and Quality to Dr Ratwani.
Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.