Testing a Novel Deliberate Practice Intervention to Improve Diagnostic Reasoning in Trauma Triage

Key Points

Question  Can deliberate practice (goal-oriented training with a coach who provides immediate, personalized performance feedback) improve diagnostic reasoning in trauma triage?

Findings  In this pilot randomized clinical trial of a novel deliberate practice intervention, 93% of participants received 3 planned coaching sessions, and most participants (93%) described the sessions as entertaining and valuable. During a simulation, the triage decisions of physicians in the intervention group were more likely to adhere to clinical practice guidelines than the triage decisions of physicians in the control group.

Meaning  The deliberate practice intervention was feasible, acceptable, and effective in the laboratory, setting the stage for a future phase 3 clinical trial.


Introduction
Half of all injured patients present initially to a nontrauma center, where a clinician must evaluate and stabilize the patient's injuries and determine whether they warrant transfer to a trauma center.1,2 Diagnostic errors, defined as the failure to establish an accurate and timely explanation of the patient's health problem, are an important cause of undertriage.14,15 However, the use of deliberate practice to improve diagnostic reasoning is uncommon and, to our knowledge, has never been tried in trauma triage.19 The objective of this pilot randomized clinical trial was to test the feasibility (practicability), fidelity (delivery of tasks), acceptability (palatability), adoption (intention to try behaviors), appropriateness (fitting the user's goals and needs), and effect (compliance with clinical guidelines) of a novel deliberate practice intervention in trauma triage.

Study Overview
We conducted a pilot randomized clinical trial of a deliberate practice intervention to improve diagnostic reasoning in trauma triage between January 1 and March 31, 2022, without follow-up. We enrolled and randomized a national respondent-driven sample of physicians to the intervention group or to a passive control group. We structured the process evaluation of the intervention using the Proctor framework of outcomes for implementation research and followed the Consolidated Standards of Reporting Trials Extension (CONSORT Extension) reporting guideline (ie, extension for pilot and feasibility trials) in reporting our results.20,21 We previously published the trial protocol with a priori hypotheses about criteria for defining success.22 The University of Pittsburgh Human Research Protection Office approved the study. Trial participants provided digital written informed consent at the time of enrollment (trial protocol in Supplement 1).

Trial Participants and Coaches
To recruit participants for the study, we contacted physicians who had previously participated in our research and asked them to refer us to 2 colleagues. We sought board-certified emergency physicians who treated adult patients in the emergency department of either a nontrauma center or a Level III or IV trauma center in the US and who therefore would have responsibility for performing trauma triage in their clinical practice. Respondents received a screening questionnaire with details about the trial, a consent form, and items querying their demographic characteristics. Racial and ethnic categories were specified by the study team based on National Institutes of Health criteria.23 Physicians who provided consent were randomized in a 1:1 ratio, stratified by prior participation in our research, using a schema built in Stata, version 16.0 (StataCorp LLC), with block sizes of 4 (Figure 1).
Although we could not blind study personnel and participants, we masked physicians' exposure during analysis.
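The allocation scheme described above (1:1 assignment, stratified by prior participation, permuted blocks of 4) was built in Stata; the following is a minimal Python sketch of the same procedure, with hypothetical participant identifiers, not the trial's actual code:

```python
import random

def permuted_block_assignments(participants, block_size=4, seed=2022):
    """1:1 permuted-block randomization within each stratum.

    `participants` is a list of (id, prior_participation) pairs.
    Each block holds equal numbers of intervention and control slots,
    so group sizes stay balanced within every stratum.
    """
    rng = random.Random(seed)
    strata, assignments = {}, {}
    for pid, prior in participants:
        strata.setdefault(prior, []).append(pid)
    for ids in strata.values():
        schedule = []
        while len(schedule) < len(ids):
            # Each block of 4 contains exactly 2 of each arm, shuffled.
            block = (["intervention"] * (block_size // 2)
                     + ["control"] * (block_size // 2))
            rng.shuffle(block)
            schedule.extend(block)
        for pid, arm in zip(ids, schedule):
            assignments[pid] = arm
    return assignments
```

Blocking guarantees that, within each stratum, the two arms never differ in size by more than half a block at any point during enrollment.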
Three members of the study team with expertise in trauma surgery (D.M. and R.M.F.) and emergency medicine (J.E.) acted as the coaches. We standardized the fidelity of intervention delivery in 3 ways. First, prior to the trial, we conducted three 1-hour training sessions, supervised by experts in deliberate practice (R.M.A., B.F., and D.B.W.). Second, we created a coaching manual as a reference that summarized the learning objectives, core tasks of the coaching sessions, and the pedagogical strategies that coaches should use (a full draft of the coaching manual is in the eAppendix in Supplement 2). Finally, coaches met weekly with the full study team during the trial to debrief and to discuss strategies for managing issues that had arisen. Based on these sessions, we made several modifications to the intervention, including condensing the content to increase the time spent on each decision principle and identifying additional pedagogical strategies that coaches could use to engage participants in the sessions (eg, retrieval practice during sessions 2 and 3).

Interventions

Deliberate Practice
The intervention consisted of 3 weekly, 30-minute, video-conferenced coaching sessions, in which the participant played a trauma triage video game, the coach observed his or her performance, and they discussed best practice decision principles in trauma triage. We describe the conceptual framework of the intervention in Figure 2.

Video Game | We used a single-player, theory-based puzzle video game, previously developed by our group to improve diagnostic reasoning in trauma triage (Shift: The Next Generation).24 To allow its use as a training task, we adapted the user interface and game mechanics in collaboration with Schell Games, creating Shift With Friends. The game included 10 levels, each covering a separate decision principle and involving a 5-step game loop (eFigure in Supplement 2): players triaged 10 injured patients over 90 seconds, compared 2 cases to identify similarities or differences so that they could derive the rule for the level, received standardized feedback on their performance, reviewed the decision principle, and finally received a synthesis of the evidence supporting the decision principle.
Coaching | Both the participant and the coach logged into Zoom, and the participant shared his or her screen so that the coach could observe gameplay. The coach would select the levels covered during the session, personalizing the selection to the needs and skills of the participant. The coach would also encourage the participant to "think aloud" as he or she played, using observations made during the process to provide feedback tailored to improve the participant's diagnostic reasoning.
Each session covered 1 to 3 decision principles and included 6 to 8 tasks (eg, introductions or debriefing).

Passive Control
We did not ask trial participants randomly assigned to the control group to engage in any additional continuing medical education, with the intention of replicating usual care.
Figure 2 pairs each step of the game loop with the coach's role and the corresponding component of deliberate practice:

Triage. The trainee is asked to triage 10 cases over 90 seconds; 5 of the cases conform to 1 decision principle (eg, penetrating injury); each level demonstrates a different decision principle; and the trainee can opt to triage another set of 10 cases after reviewing the case comparison step. The coach selects the levels seen by the trainee, curating the user experience based on their experience and knowledge.

Case comparison. The trainee is asked to compare 2 cases from the set of 10 seen during the triage step and to identify relevant contextual cues by focusing on similarities and differences between the cases. The coach guides the identification of relevant contextual cues by using standardized question prompts and guiding attention to important information.

Feedback. An in-game character provides standardized feedback based on trainee performance during the case comparison step, and the coach provides personalized feedback based on player performance during the triage and case comparison steps. Immediate, high-quality feedback allows the trainee to acquire and refine the skills necessary to improve performance on the training task.

Decision principle review. The player reviews a set of standardized questions to deconstruct the decision principle and then receives a short summary of the evidence. The coach guides the question-and-answer session, reframing the language and emphasizing salient contextual cues, and provides additional clinical data to further reinforce the consequences of decision-making.

Session structure (no in-game equivalent). The coach elicits goals for the training session, prompts retrieval of decision principles at the beginning of the session, encourages discussion of content covered at the end of the session, and provides encouragement.

Collaborative learning environment. Engagement of the trainee in the intervention fosters autonomous motivation (the desire to perform a task because it generates innate satisfaction or aligns with deeply held values). The development of rapport increases the likelihood that the trainee will remain open and participate in the self-reflective process.

JAMA Network Open | Surgery
Deliberate Practice Intervention to Improve Diagnostic Reasoning in Trauma Triage

Trial Protocol
After randomization, participating physicians received written instructions on how to complete the trial tasks. We had the capacity to provide coaching for 30 physicians. We therefore asked those in the intervention group to select 1 of the 2 blocks (January or February) in which we offered coaching and to sign up for three 30-minute sessions within the block. Based on availability, we paired participants with a coach on a first-come, first-served basis. After the sessions, we asked participants to complete a survey, a semistructured debriefing interview, and an online simulation. We asked participants in the passive control group to complete the same simulation within 3 weeks of the start of the trial. The trial tasks took approximately 3 hours for those in the intervention group and 1 hour for those in the control group. Participants received up to 3 personalized reminder emails at weekly intervals until they completed the trial tasks. We offered a financial incentive to increase response rates, setting its size with a wage-based model of reimbursement.25,26 Physicians in the intervention group received an iPad with the game and Zoom app preloaded, which they used for the coaching sessions and which they kept as their honorarium (approximate value, $300). Those in the control group received a $100 gift card after they completed the simulation.

Outcomes
Using the Proctor framework of outcomes for implementation research, we assessed both implementation and service outcomes.20 We defined the implementation outcomes as feasibility, fidelity, acceptability, adoption, and appropriateness. Using the National Institutes of Health stage model of intervention development, which recommends assessment of efficacy in the laboratory before moving to real-world testing,27 we defined the service outcome (efficacy) as compliance with clinical practice guidelines, measured using a simulation.

Data Sources and Management
Screening Questionnaire and Tracking Database
Each respondent described his or her personal characteristics on the screening questionnaire at the time of enrollment. We maintained a database with a list of scheduled coaching sessions, which was updated daily with the status of the sessions.

Coaching Sessions
We recorded all the coaching sessions and automatically uploaded them to a secure server hosted by the University of Pittsburgh. Two members of the study team (K.R. and J.L.B.) developed a codebook to assess the delivery of session tasks, refined it until they achieved acceptable interrater reliability (Cohen κ = 0.84), and independently applied it to the recordings. Coding discrepancies were resolved through consensus (D.M., K.R., and J.L.B.). We used NVivo qualitative analysis software (QSR International) for data management.
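For readers unfamiliar with the statistic, Cohen κ measures agreement between two raters after correcting for the agreement expected by chance. A minimal illustration (not the study team's actual code) is:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters.

    `rater1` and `rater2` are equal-length sequences of codes applied
    to the same items (eg, session-task labels).
    """
    assert len(rater1) == len(rater2) and len(rater1) > 0
    n = len(rater1)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement if each rater coded independently at
    # his or her own marginal rates.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)
```

Identical codings yield κ = 1; values near 0 indicate chance-level agreement. Thresholds around 0.8 (as achieved here) are conventionally treated as strong agreement.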

Postintervention Debriefing Materials
Participants in the intervention group provided structured assessments of the acceptability of the intervention using the User Engagement Scale-Short Form to evaluate the video game (a validated 12-item instrument with a 5-point Likert scale) and the Wisconsin Surgical Coaching Rubric to evaluate the quality of the coaching (a 4-item instrument with a 5-point scale).28,29 They also participated in semistructured debriefing interviews after the final coaching session, during which they discussed their perception of the acceptability, adoption, and appropriateness of the intervention. Two members of the study team (K.R. and J.L.B.) coded the interviews using the same process as for the coaching sessions (Cohen κ = 0.84).

Simulation to Measure Efficacy
We used a validated 2-dimensional simulation to assess compliance with guidelines after exposure to the intervention.30 The simulation required participants to respond to 10 cases over 42 minutes: 4 severely injured patients, 2 minimally injured patients, and 4 critically ill nontrauma patients (ie, distractor patients). New patients arrived at prespecified but unpredictable intervals, so that users managed multiple patients concurrently. Without clinical intervention by the player, severely injured patients and critically ill distractor patients decompensated and died over the course of the simulation. Each case included a 2-dimensional rendering of the patient, a chief symptom, vital signs that updated every 30 seconds, a history, and a written description of the physical examination. Users could request information by selecting from a prespecified list of 250 medications, studies, and procedures. They could place orders and request consultations. Each case ended when either the player made a disposition decision (admit, discharge, or transfer) or the patient died. We asked all trial participants to complete the simulation online; responses were uploaded and stored on a secure server hosted by the University of Pittsburgh.

Statistical Analysis
We summarized physician characteristics using mean (SD) values for continuous variables and counts and percentages for categorical variables. We analyzed implementation outcomes using an intention-to-treat approach but excluded from the efficacy analysis participants who did not use the simulation. We had 2 criteria for the success of the trial: efficacy and feasibility. Our primary hypothesis was that physicians exposed to the intervention would undertriage 25% fewer patients or more on the simulation than physicians in the control group. Our secondary hypothesis was that we could deliver 3 coaching sessions to 90% or more of participants. All P values were from 2-sided tests, and results were deemed statistically significant at P < .05. All analyses were conducted in Stata, version 16.0 (StataCorp LLC).

Implementation Outcomes
We quantified the percentage of coach-participant dyads that completed three 30-minute sessions (to measure feasibility) and summarized the percentage of session tasks delivered to participants (to measure fidelity).We summarized participant responses to the User Engagement Scale-Short Form and to the Wisconsin Surgical Coaching Rubric (to measure acceptability).We also summarized themes that arose during the semistructured interviews (to further assess acceptability and to assess appropriateness and adoption).

Efficacy
We summarized the time spent and the decisions made for each severely injured trauma case (n = 4) on the simulation (eg, diagnostic testing or administration of blood products) using median values and IQRs, and we scored disposition decisions as consistent with the American College of Surgeons guidelines or not. To compare differences between the intervention and control groups, we fit a mixed-effects logistic regression model, clustered at the participant level, with the transfer decision as the dependent variable and physicians' exposure as the primary independent variable. Given the limited statistical power, we did not adjust for any potential confounders (eg, practice environment). In a post hoc sensitivity analysis, we excluded physicians who had previously participated in our research.
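In standard notation (a sketch of the model class described above, not output from the trial code), a mixed-effects logistic regression clustered at the participant level is a random-intercept model:

```latex
% Y_ij = 1 if physician i's disposition decision on case j
%        is consistent with guidelines
\operatorname{logit}\Pr(Y_{ij} = 1)
  = \beta_0 + \beta_1\,\text{Intervention}_i + u_i,
\qquad u_i \sim \mathcal{N}(0, \sigma_u^2)
```

The random intercept \(u_i\) absorbs within-physician correlation across the 4 severely injured cases, so \(\beta_1\) estimates the intervention effect on the odds of a guideline-consistent transfer decision.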

Human Participants and Power Calculation
We designed the experiment to detect a 25% (large effect size) reduction in undertriage between physicians in the intervention and control groups, with an α of .05 and a power of 80%, using the Cohen method of estimating power for behavioral trials.Based on these estimates, and anticipating a 67% retention rate in the control group, we planned to recruit 30 physicians for each group.
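The arithmetic behind such a calculation can be sketched with a standard two-proportion sample-size formula (normal approximation); the undertriage proportions below are hypothetical illustrations, not the values assumed in the trial protocol:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect p1 vs p2 (two-sided test).

    Uses the normal-approximation formula for comparing two
    independent proportions.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)
```

For example, detecting a drop from a hypothetical 70% to 45% undertriage rate (a 25-percentage-point reduction) with α = .05 and 80% power requires 58 physicians per group under this formula, before accounting for attrition; smaller planned samples, as in a pilot, rely on larger assumed effects.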

Participant Characteristics
We randomly assigned 72 physicians to the 2 groups of the trial but limited registration of physicians in the intervention group to 30 because of the availability of the coaches (Figure 1).

Feasibility and Fidelity
We summarize our assessment of the intervention in Table 2.

Acceptability, Appropriateness, and Adoption
In semistructured interviews, most participants (93% [26 of 28]) in the intervention group described the sessions as entertaining, providing a useful refresher of guidelines, distilling clear learning points, and modeling valuable communication scripts for emergency department physicians. Most participants responded that the length and number of sessions were appropriate (80% [16 of 20]) and would recommend the intervention (87% [20 of 23]). Of the 25 physicians who discussed adoption of the principles, 6 (24%) reported having used the material since completing the coaching sessions, while 16 (64%) said they would use the material in the future. Some participants (7 of 28 [25%]) had reservations about the program. For example, 1 participant noted a discordance between the intervention and the realities of clinical practice; another responded that the time commitment was excessive. We provide additional qualitative assessments of the intervention by participants in the Box.
Responses to the surveys also were positive. For example, 96% (23 of 24) agreed or strongly agreed that their experience with the game was worthwhile, and 100% (24 of 24) strongly agreed that the coach provided constructive feedback. We provide complete responses to the surveys in eTable 2 in Supplement 2.

Efficacy
Physicians in the intervention group spent a median of 5.

Discussion
In this pilot randomized clinical trial, we delivered a novel deliberate practice intervention to practicing emergency medicine physicians with high fidelity. Most physicians described the intervention as valuable and the time required as appropriate. They also reported intentions to adopt the lessons learned during the training sessions. The intervention improved physicians' adherence to trauma triage practice guidelines during an online simulation.34-36

Sample quotation:
"Again, I think as I said, the length of the session I think should be a little bit longer maybe. Just when you start to feel comfortable, they're like, 'Okay that's it.' And I know it's hard for any doctors to get together for more than any tiny period of time, but I think the process was fulfilling and might've been even more so with longer sessions."

Subtopic: participants responded that the time commitment was excessive (1 of 20 [5%])

Sample quotation:
"I think so I think it's like you know worth the experience…it can be a little bit more streamlined, and you know hour and a half seems a lot of time for that…"

Figure 2. Conceptual Framework of Intervention

JAMA Network Open. 2023;6(5):e2313569. doi:10.1001/jamanetworkopen.2023.13569



Table 2. Summary of Assessment of Intervention Using the Proctor Framework of Outcomes in Implementation Research

Box. Participant Assessments of the Acceptability, Appropriateness, and Adoption of the Intervention During Semistructured Interviews
"It was a good way of engaging and teaching information because I think I've been on Zoom the past year and a half listening to lectures and no one's listening. So, this was for the you know adult learners who you know need kind of more than just listening to someone talk and look at the same slides you know it's a much better way to learn."

"I know when to transfer and I know when not to but then you know kind of distilling it into what is it exactly about this patient, what are the reasons we make that decision. I guess it's not like I sit and think about it. It was just kind of like that's what we always do or that's what we know what we usually do but this kind of like helped me kind of clarify in my mind what those criteria are. And I thought you know for someone [who] had no experience at a trauma center and they were going to go work somewhere that it'll be a great way for them too especially at the resident level as well, but even anyone that was just changing their practice environment, using that too, it was a good way of engaging and teaching information…"

"You're asking me to transfer a patient out based on let's say their age or injury, and you know a lot of the times the surgeon in house, if I'm working in a small hospital, I only know what they can handle and cannot handle, and it's not totally up to me to transfer someone out. If I call a surgeon and they would say, 'Hey I can take care of that, please admit to my service.' Defying that has some repercussions if you're working in a small hospital, you cannot just say I work independently. Nobody does. Right? You work as part of a hospital or part of a group or team. So, there's lots of gray lines, sometimes somebody says I can take care of, or I cannot take care of, and you based on the rest of the team. Right, this game almost made it seem like it's not a team, you're making the decision and I don't think that's true. So, it made it very black and white and that's not true."

"You know, a number of the concepts the app is designed to teach or reinforce were those that I consider myself reasonably well familiar with. So, from that standpoint, I'm not sure I learned a ton, although I can certainly see its utility for other providers."

"They were great, [coach] was wonderful, the game was fun. They were just the right amount of length, you know what I mean we did like a I think a half an hour 3 times it was like, we got to play like 1 or 2 games each time it wasn't like it got like, it wasn't like too long I suppose. Enough to keep you interested."

Subtopic: participants wanted more time with the coach (3 of 20 [15%])

Sample quotation:
"…it [xx] years I feel like I've refined it quite a lot, but there's always opportunity to get it a little bit better."