Figure 1.  Acuity Prototype

The acuity prototype presented an overview of all 5 hypothetical patients on the left, displaying activity levels as a vertical bar, with higher bars indicating more activity and darker colors representing abnormal activity. Selecting a patient opened their specific clinical details, which included medications, laboratory results, vital sign measurements, and communication events, organized by hour. Trends or changes in laboratory results or medication doses were identified by placing a directional arrow adjacent to the specific item.

Figure 2.  Problem Prototype

The problem prototype organized data by clinical problems. Each problem was represented by a horizontal bar, with its length determined by its relative priority. Tapping a problem displayed the relevant details, including pertinent medications, laboratory tests, and vital signs. Arrows placed next to results identified how details changed over time. Results, such as vital signs or communication events, that did not fit into a hypothetical patient’s clinical problems were placed in an “other” category.

Figure 3.  Change Prototype

Hypothetical patients, represented as circles, were plotted on a grid, with the horizontal axis representing the amount of clinical change (both positive and negative) and the vertical axis representing the degree of abnormal activity. The size of each hypothetical patient’s circle was directly proportional to their activity level during the past 24 hours. Selecting an individual patient displayed patient-specific information, including details on medications, laboratory test results, vital sign measurements, and the frequency of clinical notes.

Figure 4.  Interpretation of Change Visualization

The prototype highlighting clinical change supported fast pattern recognition and decision-making by organizing data into 4 quadrants. Patients in the upper-right quadrant would be considered high priority because they were clinically deteriorating, while those in the lower-right quadrant would be considered low priority because they were clinically improving. Patients in the upper-left quadrant had abnormal activity but little substantial change, and those in the lower-left quadrant had normal activity and little change.

Table.  Participant Characteristics
    Original Investigation
    Health Informatics
    January 15, 2020

    Association of Health Record Visualizations With Physicians’ Cognitive Load When Prioritizing Hospitalized Patients

    Author Affiliations
    • 1Division of Nephrology, Seattle Children’s Hospital, Seattle, Washington
    • 2Department of Pediatrics, University of Washington School of Medicine, Seattle
    • 3Information School, University of Washington, Seattle
    JAMA Netw Open. 2020;3(1):e1919301. doi:10.1001/jamanetworkopen.2019.19301
    Key Points

    Question  Can information visualization tools within electronic health records reduce the cognitive workload for physicians when identifying which patients have the highest-priority care needs?

    Findings  In this cross-sectional study of 29 physicians, information visualization tools that identified and highlighted clinically meaningful patterns were associated with a significantly lower cognitive workload compared with tools that required physicians to spend more time searching for similar information.

    Meaning  Electronic health records that use well-designed information visualization tools have the potential to reduce cognitive workload among physicians.

    Abstract

    Importance  Current electronic health records (EHRs) contribute to increased physician cognitive workload when completing clinical tasks.

    Objective  To assess the association of different design features of an EHR-based information visualization tool with the cognitive load of physicians during the clinical prioritization process.

    Design, Setting, and Participants  This cross-sectional study included a convenience sample of 29 attending physicians at Seattle Children’s Hospital, a large tertiary academic pediatric hospital. Data collection took place from August 2017 through October 2017, and analysis occurred from August to October 2018.

    Exposure  Physician participants used 3 prototypes with novel visualizations of simulated EHR data that highlighted 1 of 3 key patient characteristics, as follows: (1) acuity, (2) clinical problem list, and (3) clinical change.

    Main Outcomes and Measures  Cognitive workload was measured using the NASA Task Load Index (TLX) scale (range, 1-100, with lower scores indicating lower cognitive workload). Cognitive workload was assessed for the 2 following clinical prioritization tasks: (1) finding information for a specific patient and (2) comparing results among patients for each prototype. Participants ranked 5 hypothetical patients from having the highest to the lowest priority in each design.

    Results  A total of 29 physician participants (15 [52%] men; 14 [48%] women; mean [range] age, 43 [35-58] years; mean [range] time in practice, 11 [3-30] years) completed the study. For task 1, the prototype highlighting clinical change was associated with lower median (interquartile range) NASA TLX scores compared with the prototype highlighting acuity (30.3 [15.2-41.6] vs 48.5 [18.7-59.3]; P = .02). For task 2, the prototype highlighting clinical change was associated with lower median (interquartile range) NASA TLX scores compared with the prototype highlighting the clinical problem list (29.1 [16.3-50.8] vs 43.5 [26.6-55.9]; P = .02). The prototype highlighting clinical change had the lowest TLX score in 17 of 29 rankings (59%) for task 1 (χ²₄ = 24.4; P < .001) and 18 of 29 rankings (62%) for task 2 (χ²₄ = 17.2; P = .002).

    Conclusions and Relevance  In this study, well-designed EHR-based information visualizations that highlighted and featured clinically meaningful information patterns significantly reduced physician cognitive workload when prioritizing patient needs.

    Introduction

    Hospitalized patients generate hundreds to thousands of new pieces of clinical data each day, requiring physicians to review, process, prioritize, and ultimately take action on tens of thousands of different data points when managing multiple patients.1 Identifying which patients to prioritize requires physicians to search and filter this large volume of information to find the clinically meaningful and important details and distinguish between patients with high-priority and low-priority care needs. Without the proper support, this overwhelming task leads to information overload and increases physician cognitive workload, ie, the effort required to identify, use, and maintain data in working memory.2 Despite their initial promise, electronic health records (EHRs) have failed to help physicians reduce their cognitive workload. Owing to a lack of intelligent or effective EHR information visualization support tools, physicians receive little support for recognizing clinically relevant patterns and recreating the patient’s story.3-6 These deficiencies cause physicians to spend a significant amount of time searching for and synthesizing results across multiple EHR sections, thus increasing their cognitive workload as they interpret results and make clinical decisions.7,8

    Systems that aggregate and visualize data to highlight clinically meaningful patterns are needed to reduce the cognitive workload associated with EHR use. In this study, we identified EHR visualization strategies associated with decreases in the cognitive workload experienced by physicians when identifying which patients to see first (ie, the clinical prioritization process), ensuring that the most urgent needs are addressed first. In previous work,9 we identified the key data elements used during the clinical prioritization process, with physicians synthesizing details of their patients’ acuity, clinical problem list, and, most importantly, changes in clinical status. In this study, we assessed the cognitive workload associated with the use of 3 novel visualizations of EHR data designed to support physicians during the clinical prioritization process by exploring the association of different data organization and visualization strategies with cognitive workload. We hypothesized that well-designed visualization tools would be associated with reductions in cognitive workload and could support physicians during the clinical prioritization process.

    Methods

    This cross-sectional study was approved by the Seattle Children’s Hospital institutional review board and was conducted at Seattle Children’s Hospital, a tertiary care pediatric hospital in Seattle, Washington. Participants were attending-level physicians at Seattle Children’s Hospital and were invited to participate via a direct email request from 1 of us (A.H.P.). Potential participants were purposefully selected by 1 of us (A.H.P.) to maximize diversity based on medical specialty, sex, age, and clinical experience. Written informed consent was obtained from all participants. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies.10

    Visualization Development

    Before recruitment, the research team developed 3 high-fidelity prototypes that used novel visualizations of simulated EHR data. These prototypes were designed to support a key step of the clinical prioritization process, ie, helping clinicians answer the question, “Which patient should I see next?” Each prototype used a different visualization strategy to present clinical data to participants. The final designs were informed by 2 previous studies, as follows: focus groups with practicing clinicians that identified information needs during the clinical prioritization process9 and a human-centered participatory design session during which clinicians had an opportunity to design a clinical prioritization support tool (unpublished data).11 Each prototype highlighted and organized data around 1 of 3 key patient characteristics (ie, acuity, clinical problem list, or clinical change) identified in our previous work.9 Clinical data included details and results (ie, laboratory tests, medications, vital sign measurements, and clinical notes) on 5 fictional hospitalized patients during a 24-hour period. To accurately assess the cognitive workload associated with the use of each prototype, the data in each prototype had to be functionally equivalent yet clinically different. Therefore, each prototype contained the same amount of clinical information, including the number of laboratory test results, medications, vital sign measurements, and clinical notes. In addition, each prototype had the same number of abnormal results as well as scheduled and as-needed medications. However, the specific details (eg, specific laboratory test results or medications) changed in each prototype, so our participants would not remember details between prototypes. Thus, any differences associated with the complexity of using a particular prototype were associated with its specific design features and not with similarities or differences in the clinical content. To accomplish this task, we developed and used a 5-step process to generate synthetic patient data explicitly for the purpose of evaluating novel health information technology, which has been previously reported.12
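
    To make the functional-equivalence constraint concrete, the following toy sketch illustrates datasets that share the same structure (counts of results and abnormal values) while differing in their specifics. It is not the authors’ published 5-step generator (described in reference 12); the laboratory names and counts are invented for illustration.

```python
# Toy sketch of "functionally equivalent but clinically different" data:
# every dataset has the same number of results and abnormal values, but
# the specific tests differ, so participants cannot carry details between
# prototypes. Lab names and counts are hypothetical.
import random

LAB_POOL = ["sodium", "potassium", "creatinine", "hemoglobin", "glucose",
            "wbc", "platelets", "lactate", "albumin", "calcium"]

def make_dataset(seed, n_labs=6, n_abnormal=2):
    """Same structure every time (counts); different specifics (tests)."""
    rng = random.Random(seed)
    labs = rng.sample(LAB_POOL, n_labs)
    abnormal = set(rng.sample(labs, n_abnormal))
    return [{"test": lab, "abnormal": lab in abnormal} for lab in labs]

for prototype in ("acuity", "problem", "change"):
    dataset = make_dataset(seed=prototype)
    # Equivalent volume and abnormality counts across all three prototypes.
    assert len(dataset) == 6
    assert sum(result["abnormal"] for result in dataset) == 2
```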

    Each prototype first provided users with a summary of the clinical status of the 5 hypothetical patients via a dashboard-style overview to support quick comparisons among patients. Selecting an individual patient opened a more detailed view that provided insight into the summary view visualization. The fully interactive prototypes consisted of multiple pages of clinical information linked through design-specific features unique to each prototype.

    Prototype 1: Acuity View

    Acuity refers to the severity of a patient’s illness. Patients with higher acuity levels tend to be more ill and require more interventions and closer observation. Other systems assume that acuity corresponds to the number of interventions. For example, nursing assignments in the hospital are determined based on patient acuity, and the patient-to-nurse ratio decreases among patients with higher acuity because nursing workload increases with increasing care needs. The Therapeutic Intervention Scoring System was developed in the 1970s to predict nursing workload as well as to measure a patient’s severity of illness.13 Therefore, in the first prototype, we explored the association of acuity with patient activity by presenting details on the amount of clinical activity that had taken place for a hypothetical patient during a 24-hour period (Figure 1). Activity represented the number of medications administered, laboratory tests completed, vital signs measured, and/or communication events that occurred for the hypothetical patient during each hour of the day. In this sense, activity can serve as a proxy for acuity because patients with higher acuity tend to have more activity (eg, more laboratory tests and vital sign measurements). Using a familiar timeline approach, the acuity prototype organized data (including specific details on medications, laboratory tests, vital sign measurements, and communication events) by time and clinical category, providing a sense of the frequency and volume of events that took place during the 24-hour period for each hypothetical patient. The acuity prototype required 61 pages to present data for all 5 patients.
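
    As a rough illustration of this activity-as-acuity proxy, the sketch below tallies events per hour from a hypothetical event log and prints a text version of the vertical bars in Figure 1; the event tuples and categories are assumptions, not the study’s actual data model.

```python
# Sketch of the hourly activity tally behind the acuity view: more events
# in an hour means a taller bar; abnormal events would be drawn darker.
# The event log format here is a hypothetical stand-in.
from collections import Counter

# Each event: (hour_of_day, category, is_abnormal)
events = [
    (8, "lab", True), (8, "medication", False), (8, "vital_sign", False),
    (9, "lab", True), (9, "communication", False),
    (14, "medication", False), (14, "vital_sign", True),
]

totals = Counter(hour for hour, _, _ in events)
abnormals = Counter(hour for hour, _, abnormal in events if abnormal)

for hour in sorted(totals):
    bar = "#" * totals[hour]  # bar height tracks overall activity
    print(f"{hour:02d}:00 {bar} ({abnormals[hour]} abnormal)")
```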

    Prototype 2: Problem View

    The second prototype organized data into their respective clinical problems to provide a problem-focused approach to data organization and presentation (Figure 2). Each problem was represented by a horizontal bar, with longer bars suggesting greater clinical significance or priority. The length of the bar was determined by a novel algorithm we developed to assign a priority score to each problem. The algorithm took into consideration the activity level (including the number of laboratory test results, medications, vital sign data, and clinical notes), the degree of abnormality for a specific result, and how much a specific result changed compared with a previous value, accounting for the directionality of change (ie, better or worse). For example, a clinical problem associated with a greater number of medications and worsening laboratory test results would generate a higher priority score, which we used to determine the length of the bar on the display. The actual numeric scores were not displayed in the prototype. Aligning all of a hypothetical patient’s clinical problems on a single horizontal line provided a sense of the patient’s overall clinical priority. The problem prototype required 20 pages to present the clinical data, making it the most compact of the 3 designs.
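
    The published description does not include the algorithm’s actual formula or weights, but a minimal sketch in the same spirit might combine the three ingredients as follows; all weights, field names, and the example values are hypothetical assumptions.

```python
# A sketch of a problem-priority score in the spirit described above;
# the weights and field names are hypothetical, not the authors' actual
# algorithm.
from dataclasses import dataclass

@dataclass
class Result:
    abnormality: float  # 0 = normal, 1 = maximally abnormal
    delta: float        # signed change vs the previous value; + = worsening

def problem_priority(results, n_medications,
                     w_activity=1.0, w_abnormal=2.0, w_change=1.5):
    """Combine activity, abnormality, and directional change into one score."""
    activity = len(results) + n_medications          # overall activity level
    abnormal = sum(r.abnormality for r in results)   # degree of abnormality
    change = sum(r.delta for r in results)           # worsening raises the score
    return w_activity * activity + w_abnormal * abnormal + w_change * change

# A problem with many medications and worsening results scores higher,
# which would translate into a longer bar in the display.
sepsis = [Result(abnormality=0.8, delta=0.4), Result(abnormality=0.6, delta=0.2)]
print(problem_priority(sepsis, n_medications=5))
```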

    Prototype 3: Change View

    The change prototype highlighted change, abnormal activity (ie, abnormal test results or vital sign measurements, needed medication events, and risk documentation), and overall activity within a single visualization by plotting each patient on a grid (Figure 3). Patients were represented as a circle, the diameter of which was associated with the amount of patient activity (ie, larger circles indicated more activity). The location of the patient’s circle on the horizontal axis represented their degree of change, while their location on the vertical axis captured the percentage of abnormal activity during a 24-hour period. The location of patients on the grid provided a sense of their overall clinical picture. The change prototype required 38 pages to present all clinical data.
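
    A minimal matplotlib sketch of this grid appears below; the patient values, axis scaling, and quadrant boundaries are invented for illustration and are not taken from the prototype.

```python
# Sketch of the change-view grid: x = amount of clinical change,
# y = proportion of abnormal activity, circle area tracks total activity
# over 24 hours. All values are hypothetical.
import matplotlib.pyplot as plt

# (label, change, abnormal_fraction, activity_count)
patients = [
    ("A", 0.8, 0.8, 120),  # much change, mostly abnormal: deteriorating
    ("B", 0.7, 0.2, 90),   # much change, mostly normal: improving
    ("C", 0.1, 0.7, 60),   # abnormal but little change
    ("D", 0.1, 0.1, 30),   # normal and stable
]

fig, ax = plt.subplots()
for label, change, abnormal, activity in patients:
    ax.scatter(change, abnormal, s=activity * 10, alpha=0.5)
    ax.annotate(label, (change, abnormal))
ax.axvline(0.5, linestyle="--")  # illustrative quadrant boundaries
ax.axhline(0.5, linestyle="--")
ax.set_xlabel("Amount of clinical change (24 h)")
ax.set_ylabel("Proportion of abnormal activity (24 h)")
plt.show()
```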

    Procedures

    After obtaining consent, participants were shown a single prototype at a time, provided a brief introduction to its specific features and visualization strategies, and then asked to rank the 5 hypothetical patients in the order that they would prioritize seeing them based on each prototype’s overview (ie, initial ranking). Participants then interacted with the prototype to view each hypothetical patient’s medical record to form a more complete clinical picture for each patient. After this thorough review, participants ranked the patients again (ie, final ranking). Participants then repeated these tasks for the remaining prototypes. We compared the initial and final rankings of individual participants to evaluate the ability of each prototype’s overview visualization to accurately convey patient priority. We presented the prototypes to participants in varying order to minimize presentation bias and recorded the interaction sessions using a screen capture tool.

    Outcomes

    To assess our primary outcome, we measured cognitive workload directly via the NASA Task Load Index (TLX) scale, a standardized scale that measures the subjective workload experienced by an individual when completing a specific task across the following 6 factors: mental demand, physical demand, temporal demand, performance, effort, and frustration.14 The NASA TLX scale has been used to assess the task-based cognitive workload experienced by health care professionals in a variety of settings, ranging from surgery15 to nursing16,17 to health care technology.7,8,18,19 The Agency for Healthcare Research and Quality20 recommends its use as a tool to assess health information technology workflows because it provides a simple method to measure a user’s cognitive workload across a variety of domains. While the NASA TLX instrument has some limitations, it has been shown to be preferred over other cognitive assessment tools in health care applications,21 with high levels of reliability,22 validity,23 and sensitivity.24 We used the NASA TLX to evaluate the workload associated with completing the 2 following key prioritization tasks: (1) finding information for an individual patient and (2) comparing information among patients. The NASA TLX scale generates a score from 1 to 100, with lower scores indicating reduced cognitive workload. Participants completed the NASA TLX assessment after completing each prioritization task with each prototype, resulting in 6 total scores per participant.
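
    For readers unfamiliar with the instrument, the weighted (“full”) NASA TLX score is computed from the 6 subscale ratings and 15 pairwise-comparison weights roughly as sketched below; the example ratings and weight tallies are invented, and the paper does not state whether the weighted or unweighted variant was used.

```python
# Standard weighted NASA TLX: each subscale is rated 0-100, and each
# subscale's weight is the number of times it was chosen across the 15
# pairwise comparisons, so the weights sum to 15.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort",
             "frustration"]

def nasa_tlx(ratings, weights):
    """ratings: subscale -> 0-100; weights: subscale -> 0-5, summing to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Hypothetical ratings for one participant completing one task.
ratings = {"mental": 60, "physical": 10, "temporal": 40,
           "performance": 30, "effort": 55, "frustration": 25}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(round(nasa_tlx(ratings, weights), 1))  # lower = less cognitive workload
```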

    After using each prototype, participants provided feedback on the usability of the prototypes to complete additional tasks performed during the clinical prioritization process via a 5-point Likert scale (ie, hard, somewhat hard, neutral, somewhat easy, easy). We asked participants to rank the prototypes in order of their personal preference at the end of the study.

    Statistical Analysis

    We reported NASA TLX scores for each prototype and task as medians with interquartile ranges (IQRs) because of their nonnormal distribution. We performed linear mixed-effects regression modeling to compare the NASA TLX scores for each prototype and task. To account for participant variation in the actual NASA TLX scores, we ranked each participant’s scores for each prototype from lowest (first) to highest (third) and compared the number of rankings (ie, the number of first, second, and third ranks) for each prototype and task using χ² analysis. We also examined the use patterns associated with each prototype by comparing the mean number of pages viewed by each participant via 1-way analysis of variance. Participant preference was also compared using χ² tests. We considered P ≤ .05 statistically significant, and all tests were 1-tailed. All analyses were performed in R version 3.6.1 (R Project for Statistical Computing).
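
    The analysis was performed in R; the sketch below mirrors the two main steps in Python (statsmodels and scipy) on simulated scores, purely to illustrate the model structure rather than to reproduce the study’s results. The column names and simulated data are assumptions.

```python
# Python analogue of the analysis: a linear mixed-effects model with
# prototype as a fixed effect and participant as a random intercept,
# plus a chi-square test on how often each prototype ranked lowest.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chisquare

rng = np.random.default_rng(0)
# One simulated TLX score per participant x prototype for a single task.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(29), 3),
    "prototype": np.tile(["acuity", "problem", "change"], 29),
    "tlx": rng.uniform(10, 70, 29 * 3),
})

model = smf.mixedlm("tlx ~ prototype", df, groups=df["participant"]).fit()
print(model.summary())

# Which prototype received each participant's lowest (best) score?
lowest = df.loc[df.groupby("participant")["tlx"].idxmin(), "prototype"]
counts = lowest.value_counts().reindex(
    ["acuity", "problem", "change"], fill_value=0)
print(chisquare(counts))  # null: each prototype equally likely to rank first
```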

    Results

    We sent invitations to 49 physicians at Seattle Children’s Hospital; 32 (65%) indicated interest in participating, and 29 (59%) completed the study. Overall, 14 participants (48%) identified as women and 15 (52%) as men. The mean (range) age of our participants was 43 (35-58) years. Participants had been practicing in their respective fields for a mean (range) of 11 (3-30) years. These physicians represented a broad diversity of practices, including general and subspecialty pediatricians as well as pediatric surgeons (Table).

    Assessing Cognitive Workload

    For task 1, the change prototype was associated with lower median (IQR) NASA TLX scores (ie, less cognitive burden) compared with the acuity prototype (30.3 [15.2-41.6] vs 48.5 [18.7-59.3]; P = .02) but not compared with the problem prototype (30.3 [15.2-41.6] vs 36.6 [18.7-51.8]; P = .13) (eFigure 1 in the Supplement). For task 2, the change prototype was associated with lower median (IQR) NASA TLX scores compared with the problem prototype (29.1 [16.3-50.8] vs 43.5 [26.6-55.9]; P = .02) but not compared with the acuity prototype (29.1 [16.3-50.8] vs 39.9 [25.7-61.3]; P = .07) (eFigure 1 in the Supplement). In addition, the change prototype had the lowest NASA TLX scores for individual participants in 17 of 29 (59%) rankings for task 1 (χ²₄ = 24.4; P < .001) and 18 of 29 (62%) rankings for task 2 (χ²₄ = 17.2; P = .002).

    Secondary Outcomes

    The acuity prototype had the highest agreement in patient priority ranking between the initial and final rankings (18 of 29 [62%]), followed by the change (12 of 29 [41%]) and problem (9 of 29 [31%]) prototypes, but the comparison was not statistically significant (χ²₂ = 5.9; P = .054). Usability data showed that participants generally found the change prototype easier to use (eFigure 2 in the Supplement). Although the acuity prototype had the most pages and the highest mean (SD) page views per participant (68.4 [44.2] page views) compared with the problem (56.8 [27.5] page views) and change (52.0 [29.1] page views) prototypes, this difference was not significant (P = .18). Significantly more participants selected the change prototype (22 of 29 [76%]) as their first choice compared with the acuity (5 of 29 [17%]) and problem (2 of 29 [7%]) prototypes (χ²₄ = 36.21; P < .001).

    Discussion

    Our results demonstrated that differences in the visualization of EHR data were associated with changes in the cognitive workload of clinical decision-making when identifying patients with high-priority care needs. The change prototype was associated with the lowest cognitive workload when completing the prioritization tasks compared with the acuity and clinical problems prototypes. Despite the similar amount of clinical information, the organization and presentation of the clinical content differed substantially among the prototypes, leading to cognitive workload differences. The lower cognitive workload associated with the use of the change design was most likely associated with the visualizations that highlighted clinically meaningful patterns. In comparison, the overview visualizations of the other prototypes did not provide enough detail or context, meaning that participants had to search for additional information to form a clinical impression, which increased their cognitive workload.

    Traditionally, EHRs display a tremendous amount of data spread across multiple locations in the clinical record5,25 and therefore require physicians to search, identify, and remember important data elements used during the care process. This time-consuming and error-prone workflow continues until they have found enough details to form an impression of the patient’s status. The change prototype was associated with changes in this search-and-discovery process by highlighting and organizing key prioritization details, ie, abnormal activity and change, into a concise visualization that supported fast interpretation and projection about the clinical status of a patient or group of patients (Figure 4). For example, patients who were clinically deteriorating were clustered on the upper-right side of the grid (ie, more change and more abnormal results), while those who had clinical improvements would tend to cluster in the lower right (ie, more change and fewer abnormal results). Therefore, once oriented to the design, physicians could quickly identify patients who were likely to be the most concerning by looking in the upper-right portion of the display, while patients in the lower-right portion would be less concerning.
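
    That quadrant reading could be restated as a tiny classifier, as sketched below; the thresholds and labels are illustrative assumptions, not values from the prototype, which deliberately made no recommendations of its own.

```python
# Sketch of the Figure 4 quadrant logic. Thresholds are hypothetical and
# would need clinical calibration.
def quadrant_priority(change: float, abnormal: float) -> str:
    """Map a grid position (change magnitude, abnormal fraction) to a label."""
    high_change = change > 0.5
    high_abnormal = abnormal > 0.5
    if high_change and high_abnormal:
        return "high priority: deteriorating"        # upper right
    if high_change and not high_abnormal:
        return "lower priority: improving"           # lower right
    if not high_change and high_abnormal:
        return "watch: abnormal but little change"   # upper left
    return "low priority: normal and stable"         # lower left

print(quadrant_priority(change=0.8, abnormal=0.8))
```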

    This organization supports preattentive processing, ie, subconsciously recognizing visual patterns, by separating items based on location, resulting in rapid pattern recognition and data interpretation. Grouping the data used during the clinical prioritization process simplified the interpretation of clinical data because the EHR performed some of the processing that has traditionally been done by physicians, resulting in reduced cognitive workload. Importantly, the visualization did not make specific recommendations but instead provided enough details and context to allow physicians to quickly recognize and process meaningful patterns without having to search the medical record and assemble the clinical picture from memory. That 76% of our participants preferred the change prototype suggests that physicians are open to the EHR processing, organizing, and visualizing details that highlight clinically meaningful patterns. In addition, despite introducing a new and unfamiliar method of visualizing clinical data, the change design still had the lowest NASA TLX scores and highest usability scores of all 3 designs. This implies that designers of novel systems should not avoid new kinds of visualizations if they successfully communicate clinically meaningful patterns and enhance physician decision-making.

    Our work complements previous work that demonstrated that providing highly relevant details was associated with reductions in a physician’s cognitive workload when using EHRs.7,26 However, EHR systems have traditionally organized data into source-oriented (ie, where the data come from, such as the laboratory or clinical notes) or time-oriented (ie, data are primarily organized chronologically) views, requiring physicians to search for and find the fraction of clinical details required to provide care. Concept-oriented views, in which data are organized and presented around clinical concepts, have the potential to greatly reduce the cognitive workload associated with EHR use.27,28 Each of our 3 prototypes was a different concept-oriented view based on the 3 patient characteristics used during the prioritization process, ie, acuity, clinical problems, and change. Given that physicians typically search for information to help them accomplish a specific clinical task (eg, prescribing medications), well-designed concept-oriented views could provide all the required data elements in a single location to support the task’s successful completion. For example, a concept-oriented medication list would provide relevant data elements associated with each medication, such as laboratory test results, physical examination findings, dosage changes, and adherence data, in a single location within the EHR, allowing a physician to quickly understand a patient’s response to therapy without having to search multiple EHR sections. Ahmed et al7 demonstrated that a novel EHR interface that reduces extraneous details and organizes details via concept-oriented views based on grouped lists of text and/or numerical data was associated with a reduction in the cognitive workload of physicians in intensive care units compared with a traditional EHR. Our work extends their findings by demonstrating that well-designed concept-oriented views that leverage information visualization, a method of communicating data visually to support fast, efficient, and accurate interpretation, were also associated with reductions in cognitive workload.

    The ability to reduce the cognitive workload through concept-oriented information visualization depends on achieving the appropriate balance between presenting clinically meaningful patterns as discrete, fully contained visualizations and requiring physicians to search for, identify, and remember the details needed to identify and validate the same patterns. The structure of the acuity and problem prototypes likely led clinicians to search through additional details to gain insight into the overview visualizations, which may have increased the cognitive workload for our participants compared with the change prototype. The need to spend more time searching likely explains why the acuity view had the highest concordance between the first and final ranking tasks of the 3 prototypes. With data spread across the most pages in the acuity prototype, participants likely relied on the overview visualization to form their clinical impressions because they may have found searching for confirmatory details burdensome and overwhelming. This study provides additional evidence that designing concept-oriented visualizations requires a human-centered approach to uncover the content, organization, and visualization methods that best support the information needs of practicing physicians.29,30

    Limitations

    This study has several limitations. First, our work focused on the cognitive workload associated with the use of 3 novel high-fidelity prototypes and did not include a production-based EHR for comparison. Although we cannot directly compare the cognitive workload of these prototypes with that of a functional EHR, our results can inform the development of future work to directly compare concept-oriented information visualization displays with existing EHR information retrieval systems. Second, the clinical data used in our 3 prototypes were created synthetically, specifically for the purposes of this study. Although we used a rigorous and methodical process to create the synthetic patient data used in each prototype,12 unrecognized variations in complexity could exist and be associated with differences in patient data among the prototypes that influenced our results. Third, our study included only 29 physicians from a single tertiary care pediatric hospital. Future studies would benefit from a larger and more diverse group of participants. Fourth, we timed the participants completing each ranking task as an additional outcome, but because of conversations with participants during study procedures, the timings did not reflect the actual time to task completion. Without this objective measure of the time required to complete the ranking tasks, it is difficult to assess how the differences in NASA TLX scores affected our participants’ ability to prioritize their patients when using each prototype. Therefore, future assessments would benefit from including objective assessments that measure a task’s successful or efficient completion.

    Conclusions

    Using well-designed, concept-oriented visualizations to highlight clinically meaningful patterns appropriately shifts the cognitive burden of information seeking from physicians to EHRs. Instead of searching, finding, and remembering the clinical details required to identify the same patterns, physicians could benefit from new EHR-derived visualizations to assist them. Freeing physicians from search-intensive tasks reduces their cognitive workload, allowing them to focus on what matters most—caring for their patients.

    Article Information

    Accepted for Publication: November 20, 2019.

    Published: January 15, 2020. doi:10.1001/jamanetworkopen.2019.19301

    Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Pollack AH et al. JAMA Network Open.

    Corresponding Author: Ari H. Pollack, MD, Division of Nephrology, Seattle Children’s Hospital, Mail Stop OC.9.820, 4800 Sand Point Way NE, Seattle, WA 98105 (ari.pollack@seattlechildrens.org).

    Author Contributions: Dr Pollack had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

    Concept and design: Both authors.

    Acquisition, analysis, or interpretation of data: Pollack.

    Drafting of the manuscript: Both authors.

    Critical revision of the manuscript for important intellectual content: Both authors.

    Statistical analysis: Pollack.

    Obtained funding: Pollack.

    Administrative, technical, or material support: Pollack.

    Supervision: Pratt.

    Conflict of Interest Disclosures: Dr Pollack reported having obtained a provisional patent on the design of the prototype highlighting clinical change. Dr Pratt reported having a patent related to providing feedback to clinicians based on their communication with patients; it does not relate to the work presented in this article.

    Funding/Support: This study was funded by the Seattle Children’s Research Institute, Center for Clinical and Translational Research through the Research Scholars Program.

    Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

    Meeting Presentations: This article was previously presented in abstract form at the American Medical Informatics Association Annual Symposium; November 6, 2018; San Francisco, California, and at the Visual Analytics in Health Care Workshop; October 20, 2019; Vancouver, British Columbia, Canada.

    References
    1.
    Manor-Shulman O, Beyene J, Frndova H, Parshuram CS. Quantifying the volume of documented clinical information in critical illness. J Crit Care. 2008;23(2):245-250. doi:10.1016/j.jcrc.2007.06.003
    2.
    Sweller J. Cognitive load during problem solving: effects on learning. Cognitive Sci. 1988;12(2):257-285. doi:10.1207/s15516709cog1202_4
    3.
    Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc. 2000;7(6):569-585. doi:10.1136/jamia.2000.0070569
    4.
    Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Sittig DF. Information overload and missed test results in electronic health record-based settings. JAMA Intern Med. 2013;173(8):702-704. doi:10.1001/2013.jamainternmed.61
    5.
    Pickering BW, Gajic O, Ahmed A, Herasevich V, Keegan MT. Data utilization for medical decision making at the time of patient admission to ICU. Crit Care Med. 2013;41(6):1502-1510. doi:10.1097/ccm.0b013e318287f0c0
    6.
    Varpio L, Rashotte J, Day K, King J, Kuziemsky C, Parush A. The EHR and building the patient’s story: a qualitative investigation of how EHR use obstructs a vital clinical activity. Int J Med Inform. 2015;84(12):1019-1028. doi:10.1016/j.ijmedinf.2015.09.004
    7.
    Ahmed A, Chandra S, Herasevich V, Gajic O, Pickering BW. The effect of two different electronic health record user interfaces on intensive care provider task load, errors of cognition, and performance. Crit Care Med. 2011;39(7):1626-1634. doi:10.1097/ccm.0b013e31821858a0
    8.
    Mazur LM, Mosaly PR, Moore C, Marks L. Association of the usability of electronic health records with cognitive workload and performance levels among physicians. JAMA Netw Open. 2019;2(4):e191709. doi:10.1001/jamanetworkopen.2019.1709
    9.
    Pollack AH, Tweedy CG, Blondon K, Pratt W. Knowledge crystallization and clinical priorities: evaluating how physicians collect and synthesize patient-related data. AMIA Annu Symp Proc. 2014;2014:1874-1883.
    10.
    von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. J Clin Epidemiol. 2008;61(4):344-349. doi:10.1016/j.jclinepi.2007.11.008
    11.
    Pollack AH, Miller A, Mishra SR, Pratt W. PD-atricians: leveraging physicians and participatory design to develop novel clinical information tools. AMIA Annu Symp Proc. 2017;2016:1030-1039.
    12.
    Pollack AH, Simon TD, Snyder J, Pratt W. Creating synthetic patient data to support the design and evaluation of novel health information technology. J Biomed Inform. 2019;95:103201. doi:10.1016/j.jbi.2019.103201
    13.
    Cullen DJ, Civetta JM, Briggs BA, Ferrara LC. Therapeutic intervention scoring system: a method for quantitative comparison of patient care. Crit Care Med. 1974;2(2):57-60. doi:10.1097/00003246-197403000-00001
    14.
    Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol. 1988;52:139-183. doi:10.1016/s0166-4115(08)62386-9
    15.
    Yurko YY, Scerbo MW, Prabhu AS, Acker CE, Stefanidis D. Higher mental workload is associated with poorer laparoscopic performance as measured by the NASA-TLX tool. Simul Healthc. 2010;5(5):267-271. doi:10.1097/SIH.0b013e3181e3f329
    16.
    Hoonakker P, Carayon P, Gurses A, et al. Measuring workload of ICU nurses with a questionnaire survey: the NASA Task Load Index (TLX). IIE Trans Healthc Syst Eng. 2011;1(2):131-143. doi:10.1080/19488300.2011.609524
    17.
    Young G, Zavelina L, Hooper V. Assessment of workload using NASA Task Load Index in perianesthesia nursing. J Perianesth Nurs. 2008;23(2):102-110. doi:10.1016/j.jopan.2008.01.008
    18.
    Longo L, Kane B. A novel methodology for evaluating user interfaces in health care. In: 2011 24th International Symposium on Computer-Based Medical Systems (CBMS). Bristol, United Kingdom: IEEE; 2011:1-6. doi:10.1109/cbms.2011.5999024
    19.
    Wachter SB, Johnson K, Albert R, Syroid N, Drews F, Westenskow D. The evaluation of a pulmonary display to detect adverse respiratory events using high resolution human simulator. J Am Med Inform Assoc. 2006;13(6):635-642. doi:10.1197/jamia.m2123
    21.
    Huggins A, Claudio D. A performance comparison between the subjective workload analysis technique and the NASA-TLX in a healthcare setting. IISE Trans Healthc Syst Eng. 2017;8(1):59-71. doi:10.1080/24725579.2017.1418765
    22.
    Battiste V, Bortolussi M. Transport pilot workload: a comparison of two subjective techniques. Proc Hum Factors Ergonomics Soc Annu Meet. 1988;32(2):150-154. doi:10.1177/154193128803200232
    23.
    Rubio S, Díaz E, Martín J, Puente JM. Evaluation of subjective mental workload: a comparison of SWAT, NASA-TLX, and workload profile methods. Appl Psychol. 2004;53(1):61-86. doi:10.1111/j.1464-0597.2004.00161.x
    24.
    Hill SG, Iavecchia HP, Byers JC, Bittner AC, Zaklade AL, Christ RE. Comparison of four subjective workload rating scales. Hum Factors. 1992;34(4):429-439. doi:10.1177/001872089203400405
    25.
    Herasevich V, Ellsworth MA, Hebl JR, Brown MJ, Pickering BW. Information needs for the OR and PACU electronic medical record. Appl Clin Inform. 2014;5(3):630-641. doi:10.4338/aci-2014-02-ra-0015
    26.
    Pickering BW, Dong Y, Ahmed A, et al. The implementation of clinician designed, human-centered electronic medical record viewer in the intensive care unit: a pilot step-wedge cluster randomized trial. Int J Med Inform. 2015;84(5):299-307. doi:10.1016/j.ijmedinf.2015.01.017
    27.
    Doré L, Lavril M, Jean FC, Degoulet P. An object oriented computer-based patient record reference model. Proc Annu Symp Comput Appl Med Care. 1995:377-381.
    28.
    Zeng Q, Cimino JJ. A knowledge-based, concept-oriented view generation system for clinical data. J Biomed Inform. 2001;34(2):112-128. doi:10.1006/jbin.2001.1013
    29.
    Forsman J, Anani N, Eghdam A, Falkenhav M, Koch S. Integrated information visualization to support decision making for use of antibiotics in intensive care: design and usability evaluation. Inform Health Soc Care. 2013;38(4):330-353. doi:10.3109/17538157.2013.812649
    30.
    Lin YL, Guerguerian A-M, Tomasi J, Laussen P, Trbovich P. Usability of data integration and visualization software for multidisciplinary pediatric intensive care: a human factors approach to assessing technology. BMC Med Inform Decis Mak. 2017;17(1):122. doi:10.1186/s12911-017-0520-7