Key Points
Question
What is the association between electronic health record use and physician fatigue and efficiency?
Findings
In this cross-sectional study of 25 physicians completing 4 simulated cases of intensive care unit patients in the electronic health record, all physicians experienced fatigue at least once and 80% experienced fatigue within the first 22 minutes of electronic health record use, which was associated with less efficient electronic health record use (more time, more clicks, and more screens) on the subsequent patient case.
Meaning
Physicians experience electronic health record–related fatigue in short periods of continuous electronic health record use, which may be associated with inefficient and suboptimal electronic health record use.
Importance
The use of electronic health records (EHRs) is directly associated with physician burnout. An underlying factor associated with burnout may be EHR-related fatigue owing to insufficient user-centered interface design and suboptimal usability.
Objective
To examine the association between EHR use and fatigue, as measured by pupillometry, and efficiency, as measured by mouse clicks, time, and number of EHR screens, among intensive care unit (ICU) physicians completing a simulation activity in a prominent EHR.
Design, Setting, and Participants
A cross-sectional, simulation-based EHR usability assessment of a leading EHR system was conducted from March 20 to April 5, 2018, among 25 ICU physicians and physician trainees at a southeastern US academic medical center. Participants completed 4 simulation patient cases in the EHR that involved information retrieval and task execution while wearing eye-tracking glasses. Fatigue was quantified through continuous eye pupil data; EHR efficiency was characterized through task completion time, mouse clicks, and EHR screen visits. Data were analyzed from June 1, 2018, to August 31, 2019.
Main Outcomes and Measures
Primary outcomes were physician fatigue, measured by pupillometry (with lower scores indicating greater fatigue), and EHR efficiency, measured by task completion times, number of mouse clicks, and number of screens visited during EHR simulation.
Results
The 25 ICU physicians (13 women; mean [SD] age, 33.2 [6.1] years) who completed a simulation exercise involving 4 patient cases (mean [SD] completion time, 34:43 [11:41] minutes) recorded a total of 14 hours and 27 minutes of EHR activity. All physician participants experienced physiological fatigue at least once during the exercise, and 20 of 25 participants (80%) experienced physiological fatigue within the first 22 minutes of EHR use. Physicians who experienced EHR-related fatigue in 1 patient case were less efficient in the subsequent patient case, as demonstrated by longer task completion times (r = −0.521; P = .007), higher numbers of mouse clicks (r = −0.562; P = .003), and more EHR screen visits (r = −0.486; P = .01).
Conclusions and Relevance
This study reports high rates of fatigue among ICU physicians during short periods of EHR simulation, which were negatively associated with EHR efficiency and included a carryover association across patient cases. More research is needed to investigate the underlying causes of EHR-associated fatigue, to support user-centered EHR design, and to inform safe EHR use policies and guidelines.
Use of electronic health records (EHRs) is directly associated with physician burnout.1,2 Many physicians have voiced dissatisfaction with the click-heavy, data-busy interfaces of existing EHRs.1,3 Other factors associated with EHR frustration include scrolling through pages of notes and navigating through multiscreen workflows in the search for information.4 Excess EHR screen time leads to emotional distress in physicians and limits face-to-face contact with patients, resulting in higher rates of medical errors.5,6 Thus, physicians commonly describe the EHR as “inefficient,”7 “time-consuming,”8 and “exhausting.”9
Patient safety and quality of care depend on EHR usability.6,10 This fact is especially true in intensive care units (ICUs), where critically ill patients generate, on average, more than 1200 individual data points each day,11 and it has been estimated that ICU clinicians monitor about 187 alerts per patient per day,12 mostly through the EHR. Poor EHR design exacerbates this burden, potentially affecting decision-making and causing delays in care,6 medical errors,6,13 and unanticipated patient safety events, especially in high-risk environments.14-16 Given the challenges of today’s EHR interfaces, much work remains to achieve truly user-centered EHR systems with better designs that improve efficiency (ie, mouse clicks and time), streamline decision-making processes, and support patient safety.17,18 Whereas traditional EHR usability testing often focuses on intrinsic, vendor-specific aspects of the system (such as screen layouts and workflows), it is important to recognize EHR efficiency as extrinsic and dynamic: as much a function of the user as of the system itself.
Eye tracking, the study of movements of the eyes, and pupillometry, the measurement of pupil dilation, have been applied in many nonclinical domains. Eye-tracking research, which typically analyzes fixation duration, gaze points, and fixation counts,19 has been used to investigate users’ engagement with advanced interfaces and website design, as well as visual attention in video games.20-22 In biomedicine, eye-tracking techniques have mostly been used to understand factors associated with interpretation of radiology studies, identification of medication allergies, reading progress notes in the EHR, and physician attention during cardiopulmonary bypass.23-26
Pupillometry, however, remains underused in medical research despite its promising capabilities. The degree of pupillary constriction during a task is a validated biomarker for fatigue and alertness.27,28 Research has consistently shown that during conditions of fatigue, baseline pupil diameters are smaller than normal.29-33 Reduction in pupil size by 1 mm has been associated with signs of tiredness.29 Change in pupil diameters is typically small, ranging between 0.87 and 1.79 mm from normal pupil size.29 In 1 study, significant correlations were found between individual differences in pupil size and mental workload for patients with anxiety, suggesting an association between these 2 indicators.34 Despite the potential of these technologies, eye tracking and pupillometry have yet to be used to understand EHR-related fatigue and its association with the user experience for clinicians.
The purpose of this study was to examine the association between EHR use and fatigue, as measured by pupillometry, and efficiency, as measured by completion time, mouse clicks, and number of EHR screens, among ICU physicians completing a simulation activity in a prominent EHR.
Methods
We conducted a cross-sectional, simulation-based EHR usability assessment of a leading EHR system (Epic; Epic Systems) among ICU physicians and physician trainees at a southeastern US academic medical center, after approval from the University of North Carolina at Chapel Hill Institutional Review Board. Details of our study methods have been reported previously.35 Testing took place from March 20 to April 5, 2018. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.36 Participants provided written consent.
Study Setting and Participants
The study was conducted at a southeastern US tertiary academic medical center with a 30-bed medical ICU. We recruited participants through departmental emails and flyers. The eligibility criteria were (1) being a medical ICU physician (ie, faculty or trainee), (2) having previous experience using Epic in critical care settings, and (3) not wearing prescription glasses at the time of the study, to avoid interference with the eye-tracking glasses.
We recruited 25 medical ICU physicians for this study. Our sample exceeded conventional usability study standards, which recommend 5 to 15 participants to reveal 85% to 97% of usability issues.37,38 All testing took place in an onsite biobehavioral laboratory designed for simulation-based studies, equipped with a computer workstation with access to the institutional EHR training environment (Epic Playground), away from the live clinical environment. The computer screen was the standard screen clinicians use in their practice setting, with appropriate ergonomic placement, ambient lighting, and seating. Participants were recruited for a 1-hour individual session. Before each session, the principal investigator (S.K.) explained the study protocol to participants, assuring them that our study aim was to assess EHR efficiency rather than their clinical knowledge.
We asked participants to wear eye-tracking glasses (Tobii Pro Glasses 2; Tobii AB; eFigure 1 in the Supplement), which are lightweight and do not impair vision. After each participant sat at the workstation, the glasses were calibrated to establish that participant’s baseline pupil size. Each participant then logged into the EHR training environment and completed, in sequence, the same 4 ICU patient cases, which were developed by a domain expert (T.B.) and a physician trainee (C.C.), as published previously.35 Participants were asked to review a patient case (eTable 1 in the Supplement) and notify the research assistant when they completed their review. At that point, the research assistant asked the participant a series of interactive questions that involved verbal responses as well as completing EHR-based tasks. There were 21 total questions and tasks across the 4 patient cases (eTable 1 in the Supplement). Pupil diameter was recorded continuously during the entire study, and all participants used the same eye-tracking glasses. After participants completed the 4 cases, they removed the eye-tracking glasses, indicating the end of the study. Each participant received a $100 gift card on completion.
Main Outcomes and Measures
Primary outcomes were physician fatigue, measured by pupillometry (with lower scores indicating greater fatigue), and EHR efficiency, measured by completion time, number of mouse clicks, and number of screens visited during EHR simulation.
Quantification of Fatigue
Fatigue was measured on a scale from −1 to 1, as advised by an eye-tracking specialist, with scores below baseline indicating signs of fatigue and negative scores (between 0 and −1) indicating actual physiological fatigue. Simulation sessions occurred across a mix of conditions (morning and afternoon), with some participants undergoing testing on a day off or nonclinical day and others coming directly from a clinical shift in the medical ICU. To account for individual differences in baseline pupil size, we calculated a baseline for each participant, defined as the participant’s mean pupil size during the first 5 seconds of calibration. We then determined acute changes in pupil size during the simulation exercise by subtracting each participant’s baseline pupil size from his or her pupil size for each question or case. For each participant, we analyzed these changes in pupil size to generate fatigue scores for the EHR simulation exercise by question and by case.
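The published equations are not reproduced in this excerpt, but the described procedure (a per-participant baseline from the first 5 seconds of calibration, then baseline-relative pupil changes mapped to a −1 to 1 score) can be sketched in Python. The function names, the 50-Hz sampling rate, and the specific normalization (mean change divided by baseline, clipped to [−1, 1]) are illustrative assumptions, not the authors' exact formulas.

```python
from statistics import mean

def baseline_pupil(calibration_samples, hz=50, seconds=5):
    """Mean pupil diameter (mm) over the first `seconds` of calibration.
    A 50-Hz sampling rate is assumed here for illustration."""
    n = hz * seconds
    return mean(calibration_samples[:n])

def fatigue_score(segment_samples, baseline):
    """Illustrative score in [-1, 1]: mean pupil change relative to baseline.
    Negative values (pupil constricted below baseline) indicate actual
    physiological fatigue, per the scale described in the text. This
    normalization is an assumption; the exact published equations differ."""
    delta = mean(segment_samples) - baseline
    return max(-1.0, min(1.0, delta / baseline))

# Toy example: a pupil constricted ~10% below a 4.0-mm baseline
calib = [4.0] * 300            # calibration samples
case = [3.6] * 100             # samples during one simulated case
base = baseline_pupil(calib)
print(fatigue_score(case, base))   # negative value => physiological fatigue
```

A score is computed per question and per case, so one participant yields a time series of scores across the 4 simulated cases.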
Quantification of EHR Efficiency
We measured EHR efficiency by using standard usability software that ran in the background during the simulation exercises (TURF; University of Texas Health Science Center). This software includes a toolkit to capture task completion time, number of mouse clicks, and number of visited EHR screens for each case.
Statistical Analysis
Data were analyzed from June 1, 2018, to August 31, 2019. We calculated summary and descriptive statistics for the primary outcome measures of fatigue and EHR efficiency, including subgroup analyses by sex and clinical role. To explore the association between fatigue and efficiency, we calculated Pearson correlation coefficients between fatigue scores and the EHR efficiency measures (time, mouse clicks, and number of EHR screens visited). All analyses were performed in SPSS, version 22.0 (SPSS Inc). All P values were from 2-sided tests, and results were deemed statistically significant at P < .05.
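The Pearson correlation underlying the carryover analysis can be sketched in plain Python. The study used SPSS, not this code, and the paired values below are hypothetical; they serve only to show the shape of the computation and the direction of the reported association (more negative fatigue scores on one case paired with more mouse clicks on the next yield a negative r).

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired data: fatigue score on case 3 (lower = more fatigued)
# vs mouse clicks on case 4, one pair per participant.
fatigue_case3 = [-0.6, -0.3, -0.1, 0.2, 0.5]
clicks_case4 = [120, 100, 95, 80, 70]
print(pearson_r(fatigue_case3, clicks_case4))  # negative r, as in the Results
```

In the study, each correlation pairs one case's fatigue scores with the subsequent case's efficiency measure across the 25 participants.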
Results
We recorded a total of 14 hours and 27 minutes of EHR activity across 25 ICU physicians (13 women; mean [SD] age, 33.2 [6.1] years) who completed a simulation exercise involving 4 patient cases (mean [SD] completion time, 34:43 [11:41] minutes) (Table). There was an uneven distribution by clinical role, with more resident physicians (n = 11) and fellows (n = 9) than attending physicians (n = 5). Mean (SD) age tended to mirror clinical role, with residents being the youngest group (29.0 [1.4] years; fellows, 32.7 [0.5] years; and attending physicians, 44.0 [6.5] years). An inverse trend was noted between clinical role and the mean (SD) self-reported time spent per week using the EHR, with residents spending the most time (41.2 [13.5] hours) and attending physicians spending the least (8.3 [7.2] hours). The mean self-reported years’ experience with Epic was similar across all 3 clinical roles.
All participants experienced actual physiological fatigue at least once throughout the EHR simulation exercise, as evidenced by a negative fatigue score. Total fatigue scores for participants ranged from −0.804 to 0.801 (eTable 2 in the Supplement).
Fatigue scores varied by case and by question or task. Figure 1 shows the distribution of physicians experiencing fatigue at the question level, ranging from 4 of 25 (16%) for relatively simple tasks involving basic information retrieval (“What was the patient’s last outpatient weight prior to this ICU admission?”) to 15 of 25 (60%) for tasks involving clinical ambiguity (“Reconcile a possibly spurious lab value”). Fifteen participants (60%) experienced fatigue by the end of reviewing case 3.
Cumulative Fatigue Over Time
Figure 2 shows the cumulative percentage of participants who experienced actual physiological fatigue at least once during the study, with each participant counted from the first instance of fatigue. A total of 9 of 25 participants (36%) experienced fatigue within the first minute of the study; 16 of 25 (64%) experienced fatigue at least once within the first 20 minutes; and 20 of 25 (80%) experienced fatigue within the first 22 minutes of EHR use. In a sensitivity analysis, we instead counted each participant from the second instance of fatigue; findings remained robust, as 19 of 25 participants (76%) experienced a second instance of fatigue within 1 minute of the first (Figure 2).
Figure 3 shows the distribution of physician fatigue scores at the case level, stratified by sex and clinical role. Across all participants, mean fatigue scores remained similar from 1 case to the next and tightly clustered around 0; however, we did see some variation. Overall fatigue scores were negative for cases 2 and 3. Although there were differences in mean scores across subgroups, these differences were not statistically significant (Figure 3).
Participants completed the study in a mean (SD) of 34:43 (11:41) minutes, using a mean (SD) of 304 (79) mouse clicks and visiting a mean (SD) of 85 (19) EHR screens (Table). Female physicians were faster than male physicians (mean [SD], 31:37 [8:22] vs 38:04 [13:40] minutes) but required more mouse clicks (mean [SD], 355 [101] vs 301 [66]). Fellows were faster (mean [SD], 28:51 [5:52] vs 36:54 [14:43] minutes) and used fewer mouse clicks (mean [SD], 312.7 [88] vs 411.6 [90]) compared with residents. Attending physicians visited the fewest EHR screens compared with fellows and residents (mean [SD], 73 [8] vs 81 [16] vs 94 [21]). None of the observed sex- or role-based differences in EHR efficiency reached statistical significance. One participant spent noticeably more time than the mean on the simulation task (approximately 73 minutes compared with a mean of approximately 34 minutes). Sensitivity analyses conducted with the omission of this participant led to no significant differences in study findings.
The Carryover Association of EHR-Related Fatigue With Physician Efficiency
Physicians’ EHR efficiency was negatively associated with having experienced EHR-related fatigue. We observed a pattern in physicians’ EHR use after experiencing fatigue in 1 case such that the subsequent case required more time, mouse clicks, and EHR screen visits to complete, irrespective of the nature or order of the case. These results suggest a carryover association: when participants experienced greater fatigue during 1 patient case (as evidenced by more negative fatigue scores), they were less efficient using the EHR during the subsequent patient case. Figure 4A and B provide scatterplots mapping these associations.
Significant negative correlations were found between fatigue scores for case 2 and the number of mouse clicks in case 3 (r = −0.481; P = .01); between fatigue scores for case 3 and the number of mouse clicks in case 4 (r = −0.562; P = .003); between fatigue scores for case 3 and the time to complete case 4 (r = −0.521; P = .007); and between fatigue scores for case 3 and the number of EHR screens visited in case 4 (r = −0.486; P = .01). The association between fatigue scores for case 1 and the number of EHR screens visited in case 2 was not significant (r = −0.381; P = .06).
Our sensitivity analysis of the carryover showed similar patterns. After removing outliers, we observed the same negative correlations between fatigue scores and efficiency measures in the subsequent cases (Figure 4; eFigure 2 and eTable 3 in the Supplement).
Discussion
To our knowledge, this cross-sectional, simulation-based EHR usability study is the first to use pupillometry to assess the association of EHR activity with fatigue and efficiency among ICU physicians. We report that 20 of 25 physician participants (80%) experienced physiological fatigue at least once within the first 22 minutes of EHR use, as measured by pupillometry. Experiencing EHR-related fatigue was negatively associated with EHR efficiency as measured by time, mouse clicks, and screen visits.
We observed a carryover association: when participants experienced greater fatigue during 1 patient case, they were less efficient using the EHR during the subsequent patient case. The inverse association between fatigue scores and multiple domains of EHR efficiency had a temporal component spanning patient cases. This finding was most consistent for mouse clicks: across multiple sets of consecutive cases, lower fatigue scores on 1 case (indicating greater physiological fatigue) were associated with more mouse clicks on the subsequent case. To a lesser degree, we also observed an association between greater physiological fatigue during 1 case and needing more time and more screen visits in the subsequent case, although this pattern was limited to just 1 set of consecutive patient cases. These findings are hypothesis-generating, especially from the standpoint of the patient: if clinicians experience EHR-induced fatigue during the care of 1 patient, it may be associated with the care of the next patient in ways that are worthy of further investigation.
Compared with a typical day in an ICU, the simulation understated the clinical demands placed on a physician. First-year trainees routinely review 5 or more patients, while upper-level residents, fellows, and attending physicians routinely review 12 or more. Even small differences in EHR efficiency measures during a single patient case, such as 10 to 20 mouse clicks or 30 to 60 seconds, could be clinically significant to a busy physician when scaled to a typical workload of 12 or more patients. Thus, the preliminary findings of this study may become increasingly pronounced as the number of patients reviewed in the EHR rises.
Previous Research Findings
Prior studies using pupillometry in EHR simulation have examined physician workload (pupil dilation) among emergency department and hospitalist physicians as well as physician workload (blink rates) among primary care physicians managing follow-up test results in the outpatient setting.39-42 Our study adds value by using pupillometry to characterize physician fatigue among intensivists managing critically ill patients, a particularly high-stakes setting. We also add nuance by extending our analysis to examine physician fatigue and EHR efficiency over time and across multiple cases, which mirrors the reality of clinical workflows in most inpatient settings.27,43 The finding that physiological fatigue appears to occur in short periods of EHR-related work among physicians is itself an important advancement, given that fatigue is one of the leading human factors associated with errors and accidents in the workplace44,45 and that it can co-occur with burnout.46
Strengths and Limitations
This study has several strengths, including the use of high-fidelity patient cases and clinically relevant interactive tasks, the inclusion of physicians from different levels of training and clinical experience, the use of a leading EHR system, and a relatively large sample size (n = 25) that exceeds the typical threshold for usability studies. Furthermore, our approach to identifying and quantifying fatigue was conservative because we used relative pupil size changes against a measured baseline rather than instantaneous (absolute) changes, so our findings may understate the actual physiological burden of EHR-related fatigue.
There are limitations in the study methods, procedures, and analysis that could potentially lead to misinterpretation of findings. First, as this was a single-site study, we cannot exclude the possibility of selection bias, although we aimed to achieve a balance of sex representation and clinical roles. Second, case order was not randomized between participants in the simulation task, so it is possible that the observed fatigue was associated with case order. We also did not control for case-level features, such as clinical acuity or number of tasks, that might have explained the differences in time, number of EHR screens, and mouse clicks. However, because the natural clinical environment always involves variation in case complexity and task requirements from one patient to the next, we chose to mimic real-world clinical workflows. Third, because all participants used the same eye-tracking glasses, there is the possibility of nondifferential measurement bias in the pupillometry data, which would bias findings in a conservative direction. Fourth, we did not collect subjective measures of fatigue from participants, as doing so for each case and question would have interrupted the flow of the study. Thus, we are unable to analyze the moment-to-moment association between objective fatigue, which we report, and subjective fatigue, which may be more clinically relevant. Fifth, in one case, the built-in battery of the eye-tracking glasses died, which required an interruption of the activity.
These findings open the door to many potential research questions and opportunities for future work. Although we observed fatigue among participants using the EHR, it is unknown whether this fatigue was simply owing to the challenging nature of reviewing cases of critically ill patients or whether certain aspects of EHR design, such as screen layouts or workflows, played a role. Future research is needed to better understand the complex association between EHR-related fatigue and care outcomes. Additional work should randomize case order and should evaluate both perceived satisfaction and physiological fatigue levels, because our preliminary findings suggest that perceived and actual responses to EHR use may diverge. Furthermore, testing should be expanded to include clinical practitioners from other EHR-intensive roles, such as nursing, respiratory therapy, and social work. Finally, additional work is needed to better understand the association of user-centered design with EHR performance, satisfaction, usability, and patient outcomes.
Conclusions
We observed high rates of fatigue among ICU physicians during short periods of EHR simulation, which were negatively associated with EHR efficiency and included a carryover association across patient cases. More research is needed to investigate the underlying causes of EHR-associated fatigue, to support user-centered EHR design, and to inform safe EHR use policies and guidelines.
Accepted for Publication: March 11, 2020.
Published: June 9, 2020. doi:10.1001/jamanetworkopen.2020.7385
Correction: This article was corrected on June 24, 2020, to fix errors in demographic data in the Results section of the abstract and text and in Table 1.
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Khairat S et al. JAMA Network Open.
Corresponding Author: Saif Khairat, PhD, MPH, Carolina Health Informatics Program, University of North Carolina at Chapel Hill, 438 Carrington Hall, Chapel Hill, NC 27514 (saif@unc.edu).
Author Contributions: Drs Khairat and Coleman had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Khairat, Coleman, Bice, Carson.
Acquisition, analysis, or interpretation of data: Khairat, Coleman, Ottmar, Jayachander, Carson.
Drafting of the manuscript: Khairat, Coleman, Ottmar, Jayachander.
Critical revision of the manuscript for important intellectual content: Khairat, Coleman, Bice, Carson.
Statistical analysis: Khairat, Coleman, Ottmar.
Administrative, technical, or material support: Ottmar.
Supervision: Khairat, Bice, Carson.
Conflict of Interest Disclosures: Dr Carson reported receiving grants from Biomarck Pharmaceuticals outside the submitted work. No other disclosures were reported.
Funding/Support: This work was supported by grant 1T15LM012500-01 from the National Library of Medicine, which supports Dr Coleman in postdoctoral informatics training.
Role of the Funder/Sponsor: The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: We acknowledge Donald Spencer, CMIO, and the Epic team at University of North Carolina Health for building the Epic cases and for providing personnel support and dedicated server space to run our study. We also acknowledge the Biobehavioral Lab at the School of Nursing and CHAI Core at the University of North Carolina at Chapel Hill for providing the research facility and technical support, as well as research assistants Thomas Newlin, Victoria Rand, and Lauren Zalla for their assistance with data collection and analysis, and Katherine Martin, eye-tracking specialist. None of these individuals were compensated.
References
8. Tutty MA, Carlasare LE, Lloyd S, Sinsky CA. The complex case of EHRs: examining the factors impacting the EHR user experience. J Am Med Inform Assoc. 2019;26(7):673-677. Published correction appears in J Am Med Inform Assoc. 2019;26(11):1424. doi:10.1093/jamia/ocz021
9. Adler-Milstein J, Zhao W, Willard-Grace R, Knox M, Grumbach K. Electronic health records and burnout: time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. J Am Med Inform Assoc. 2020;27(4):531-538. doi:10.1093/jamia/ocz220
11. Morris A. Computer applications. In: Hall JB, Schmidt GA, Wood LDH, eds. Principles of Critical Care. McGraw Hill Inc, Health Professions Division, PreTest Series; 1992:500-514.
12. Drew BJ, Harris P, Zègre-Hemsey JK, et al. Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS One. 2014;9(10):e110274. doi:10.1371/journal.pone.0110274
13. Faiola A, Srinivas P, Duke J. Supporting clinical cognition: a human-centered approach to a novel ICU information visualization dashboard. AMIA Annu Symp Proc. 2015;2015:560-569.
16. Khairat S, Whitt S, Craven CK, Pak Y, Shyu CR, Gong Y. Investigating the impact of intensive care unit interruptions on patient safety events and electronic health records use: an observational study. J Patient Saf. 2019. doi:10.1097/PTS.0000000000000603
17. Committee on Patient Safety and Health Information Technology; Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. National Academies Press; 2011.
18. Khairat S, Coleman C, Ottmar P, Bice T, Koppel R, Carson SS. Physicians’ gender and their use of electronic health records: findings from a mixed-methods usability study. J Am Med Inform Assoc. 2019;26(12):1505-1514. doi:10.1093/jamia/ocz126
21. Ehmke C, Wilson S. Identifying web usability problems from eye-tracking data. In: Proceedings of the 21st British HCI Group Annual Conference on People and Computers: HCI…But Not as We Know It—Volume 1. British Computer Society; 2007:119-128.
25. Eghdam A, Forsman J, Falkenhav M, Lind M, Koch S. Combining usability testing with eye-tracking technology: evaluation of a visualization support for antibiotic use in intensive care. Stud Health Technol Inform. 2011;169:945-949. doi:10.3233/978-1-60750-806-9-945
26. Merkle F, Kurtovic D, Starck C, Pawelke C, Gierig S, Falk V. Evaluation of attention, perception, and stress levels of clinical cardiovascular perfusionists during cardiac operations: a pilot study. Perfusion. 2019;34(7):544-551. doi:10.1177/0267659119828563
28. de Rodez Benavent SA, Nygaard GO, Harbo HF, et al. Fatigue and cognition: pupillary responses to problem-solving in early multiple sclerosis patients. Brain Behav. 2017;7(7):e00717. doi:10.1002/brb3.717
31. Unsworth N, Robison MK, Miller AL. Individual differences in baseline oculometrics: examining variation in baseline pupil diameter, spontaneous eye blink rate, and fixation stability. Cogn Affect Behav Neurosci. 2019;19(4):1074-1093. doi:10.3758/s13415-019-00709-z
36. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495-1499. doi:10.1016/j.ijsu.2014.07.013
37. Nielsen J, Landauer TK. A mathematical model of the finding of usability problems. In: Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems. ACM; 1993:206-213.
38. US Department of Health and Human Services, Food and Drug Administration. Applying Human Factors and Usability Engineering to Medical Devices. Center for Devices and Radiological Health; 2016.
40. Jayachander D, Coleman C, Rand V, Newlin T, Khairat S. Novel eye-tracking methods to evaluate the usability of electronic health records. Stud Health Technol Inform. 2019;262:244-247.
41. Khairat S, Jayachander D, Coleman C, Newlin T, Rand V. Understanding the impact of clinical training on EHR use optimization. Stud Health Technol Inform. 2019;262:240-243.
42. Mazur LM, Mosaly PR, Moore C, et al. Toward a better understanding of task demands, workload, and performance during physician-computer interactions. J Am Med Inform Assoc. 2016;23(6):1113-1120. doi:10.1093/jamia/ocw016
45. McCormick F, Kadzielski J, Landrigan CP, Evans B, Herndon JH, Rubash HE. Surgeon fatigue: a prospective analysis of the incidence, risk, and intervals of predicted fatigue-related impairment in residents. Arch Surg. 2012;147(5):430-435. doi:10.1001/archsurg.2012.84