Article
Journal Club, Comparative Effectiveness Research
June 2013

Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing: A Multicenter Randomized Trial

Author Affiliations

Author Affiliations: University of Calgary, KidSim-ASPIRE Research Program, Division of Emergency Medicine, Department of Pediatrics, Alberta Children's Hospital, Calgary, Alberta, Canada (Dr Cheng); Departments of Anesthesiology and Critical Care Medicine and Pediatrics, Johns Hopkins University School of Medicine, Baltimore, Maryland (Drs Hunt and Nelson-McMillan); Divisions of Emergency Medicine (Dr Donoghue) and Critical Care Medicine (Drs Donoghue, Nishisaki, and Nadkarni), The Children's Hospital of Philadelphia, University of Pennsylvania School of Medicine, Philadelphia; College of Nursing, The University of Texas at Arlington (Dr LeFlore); Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois (Drs Eppich and Adler); TriHealth Education and Simulation Services, Bethesda North Hospital, Cincinnati, Ohio (Mr Moyer); Children's Hospital of Boston, Harvard Medical School, Boston, Massachusetts (Drs Brett-Fleegler and Kleinman); Division of Neonatology, Doernbecher Children's Hospital, Oregon Health and Science University (Dr Anderson); Division of Critical Care Medicine, Children's Hospital at Dartmouth, Hanover, New Hampshire (Dr Braga); Division of Emergency Medicine, Nemours/Alfred I. 
duPont Hospital for Children, Jefferson Medical College, Wilmington, Delaware (Drs Kost and Stryjewski); Department of Pediatrics, Walter Reed National Military Medical Center, Uniformed Services University of the Health Sciences, Bethesda, Maryland (Drs Min, Podraza, and Lopreiato); Division of Critical Care Medicine, Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania (Dr Hamilton); Division of Emergency Medicine, Seattle Children's Hospital, University of Washington School of Medicine, Seattle (Drs Stone and Reid); Department of Pediatrics, Children's Medical Center Dallas, Dallas, Texas (Mr Hopkins); Division of Emergency Medicine, Cincinnati Children's Medical Center, Cincinnati, Ohio (Ms Manos); Division of Critical Care Medicine, Stollery Children's Hospital, University of Alberta, Edmonton, Alberta, Canada (Dr Duff); and Dementia Guide Inc, Clinical, Halifax, Nova Scotia, Canada (Mr Richard).

JAMA Pediatr. 2013;167(6):528-536. doi:10.1001/jamapediatrics.2013.1389
Abstract

Importance Resuscitation training programs use simulation and debriefing as an educational modality with limited standardization of debriefing format and content. Our study attempted to address this issue by using a debriefing script to standardize debriefings.

Objective To determine whether use of a scripted debriefing by novice instructors and/or simulator physical realism affects knowledge and performance in simulated cardiopulmonary arrests.

Design Prospective, randomized, factorial study design.

Setting The study was conducted from 2008 to 2011 at 14 Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing (EXPRESS) network simulation programs. Interprofessional health care teams participated in 2 simulated cardiopulmonary arrests, before and after debriefing.

Participants We randomized 97 participants (23 teams) to nonscripted low-realism; 93 participants (22 teams) to scripted low-realism; 103 participants (23 teams) to nonscripted high-realism; and 94 participants (22 teams) to scripted high-realism groups.

Intervention Participants were randomized to 1 of 4 arms: permutations of scripted vs nonscripted debriefing and high-realism vs low-realism simulators.

Main Outcomes and Measures Postintervention vs preintervention comparison (PPC) of percentage scores (0%-100%) on a multiple choice question (MCQ) test (individual knowledge), the Behavioral Assessment Tool (BAT) (team leader performance), and the Clinical Performance Tool (CPT) (team performance).

Results There was no significant difference at baseline in nonscripted vs scripted groups for MCQ (P = .87), BAT (P = .99), and CPT (P = .95) scores. Scripted debriefing showed greater improvement in knowledge (mean [95% CI] MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04) and team leader behavioral performance (median [interquartile range (IQR)] BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03). Their improvement in clinical performance during simulated cardiopulmonary arrests was not significantly different (median [IQR] CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%], P = .18). Level of physical realism of the simulator had no independent effect on these outcomes.

Conclusions and Relevance The use of a standardized script by novice instructors to facilitate team debriefings improves acquisition of knowledge and team leader behavioral performance during subsequent simulated cardiopulmonary arrests. Implementation of debriefing scripts in resuscitation courses may help to improve learning outcomes and standardize delivery of debriefing, particularly for novice instructors.

Resuscitation training programs, such as the American Heart Association Pediatric Advanced Life Support (PALS) course, use simulation as an educational modality.1-19 Debriefing following simulated or real resuscitations can improve the process and outcome of resuscitations.20,21 However, the most effective manner in which to train novice instructors to debrief is untested.

Currently, PALS instructors complete a certification course, but the quality and style of instruction remain variable. Few instructors have prior simulation-based education (SBE) or debriefing training.22 Cognitive aids have been used in resuscitation,23 anesthesia,24 and other fields of medicine25,26 to help guide training and care, but to our knowledge, the use of a debriefing cognitive aid for SBE has not been explored. To standardize and improve novice PALS instructor debriefing, we developed a debriefing script as a roadmap for systematic review of trainee performance, focused on existing PALS learning objectives. The script uses language to guide conversation between novice debriefers and trainees and promote reflective learning.

Despite the growing integration of SBE into resuscitation courses, there is little evidence of whether the physical realism of simulators affects learning outcomes.11,27,28 The potential addition of high-realism simulation to all American Heart Association courses would be a substantial financial investment for many training centers. The primary objective of this study was to determine whether use of a debriefing script for novice instructors compared with standard debriefing without a script improves PALS-related educational outcomes. The secondary objective was to determine whether use of a high physical-realism simulator “turned on” compared with the same simulator “turned off” (low physical realism) improves PALS-related educational outcomes. Thus, our objective was to determine whether use of a script designed to facilitate debriefings by novice instructors and/or simulator physical realism affects knowledge and team performance of learners in simulated cardiopulmonary arrests.

METHODS

We conducted a multicenter, prospective, randomized, blinded, factorial-design study to assess PALS-related educational outcomes. Research ethics board approval was obtained at all sites. Informed consent was obtained from all participants. Participants were recruited from 14 pediatric tertiary care centers across North America (eTable 1) and randomized to 1 of 4 study arms: (1) nonscripted debriefing and low physical-realism simulator, (2) scripted debriefing and low physical-realism simulator, (3) nonscripted debriefing and high physical-realism simulator, and (4) scripted debriefing and high physical-realism simulator.

STUDY PARTICIPANTS

Novice instructors were recruited to debrief simulations. Teams had 4 or 5 participants and were interprofessional. Detailed inclusion and exclusion criteria and team composition are described in the eMethods. All participants (instructors and team members) were distinct and were not recruited to participate in the study multiple times.

STUDY SEQUENCE
Randomization

Each team and novice facilitator was randomized into 1 of 4 study arms (eFigure indicates study flow). All participants were given a standardized orientation to the simulator followed by (1) baseline multiple choice question (MCQ) test, (2) first simulation scenario, (3) debriefing (scripted vs nonscripted) by the novice instructor, (4) second simulation scenario, and (5) postdebriefing MCQ test.

Simulation Scenario

A standardized 12-minute simulation scenario was used that depicted a 12-month-old infant in hypotensive shock progressing to ventricular fibrillation. Two different scenario “stems,” each with different histories of presenting illness (A and B), were written for the same scenario such that participants were unaware that the predebriefing and postdebriefing scenarios were identical (eTable 2).

INTERVENTION
Debriefing Script Development

A debriefing script was designed for novice instructors to facilitate a 20-minute debriefing session (eTables 3-5). The script was developed using an iterative process (eMethods) with a multidisciplinary development team that included pediatric emergency and intensive care physicians, an organizational behavior specialist, a medical educator, and human factors engineers. The language used in the script was based on the debriefing theory known as “advocacy-inquiry.”29,30

Scripted vs Nonscripted Debriefing

All novice instructors received the scenario 2 weeks before the study session. Instructors randomized to scripted debriefing were also given the script, with no instruction on how to use it other than a direction to follow it as closely as possible during the debriefing. Instructors randomized to nonscripted debriefing were asked to conduct a debriefing covering the predefined learning objectives, with no specific instruction on style or method. All instructors held a clipboard while observing the simulation session to access the debriefing script and take notes; this allowed blinding of the video reviewers to nonscripted vs scripted status. A research assistant verbally intervened to stop debriefings that reached 20 minutes.

High vs Low Physical-Realism Simulators

A preprogrammed infant simulator (SimBaby; Laerdal Medical) was used for all simulation sessions. To create a high level of physical realism, full simulator functions were activated (turned on), including vital sign monitoring, audio feedback, breath sounds, chest rise, heart sounds, and palpable pulses. Low physical-realism groups had the identical simulator but the compressor was turned off, thus eliminating those physical findings. In addition, the low-realism simulator was connected to a monitor, but it displayed only the cardiac rhythm, and not pulse oximetry, respiratory rate, blood pressure, temperature, and audio feedback, which were present in the high-realism group. All other aspects of the simulated resuscitation environment were standardized (eMethods).

OUTCOME MEASURES

Three outcome measures were used: an MCQ test to assess the medical knowledge of individual participants, the Clinical Performance Tool (CPT)31,32 to assess the clinical management of the team, and the Behavioral Assessment Tool (BAT)33,34 to assess the team leader's behavioral performance. Existing evidence35-41 suggests that measures of knowledge, clinical performance, and behavioral performance may be related to changes in patient care and/or outcomes.

Two 20-question MCQ examinations (scored as 0%-100%) were developed using a set of predetermined learning objectives matched to the study scenario and validated as described in the eMethods. The CPT,31,32 with 21 individual items (maximum, 42 points; scored as 0%-100%) and designed for evaluation of PALS scenarios, was used to assess clinical performance of the team and validated as described in the eMethods. The BAT is composed of 10 crisis resource management behaviors (maximum, 40 points; scored as 0%-100%). Work by LeFlore et al33 and LeFlore and Anderson34 has established the reliability and validity of the tool, with an intraclass correlation coefficient of 0.84 (P < .001) to 0.95 and a Cronbach α of 0.95 to 0.97.
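The three tools are reported on a common 0%-100% scale. A minimal sketch of that conversion and of a postintervention vs preintervention change score follows; the absolute-difference form of the change score is an assumption, since the article specifies only that the comparison is reported in percentages:

```python
def pct(raw_points: float, max_points: float) -> float:
    """Convert a raw tool score (e.g., CPT out of 42, BAT out of 40) to 0%-100%."""
    return 100.0 * raw_points / max_points

def ppc(pre_pct: float, post_pct: float) -> float:
    """Postintervention vs preintervention comparison, in percentage points (assumed form)."""
    return post_pct - pre_pct

# e.g., a team scoring 30/42 on the CPT before and 35/42 after debriefing:
delta = ppc(pct(30, 42), pct(35, 42))  # about an 11.9-point improvement
```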

RATER TRAINING AND VIDEO REVIEW PROCESS

Sixteen video reviewers, comprising pediatric emergency medicine, critical care, and neonatal intensive care physicians and nurse educators, rated the videos. Each rater was randomly assigned 10 to 12 pairs of simulation videos, which were blindly viewed and scored on a password-protected research portal,42 with each pair representing the presimulation and postsimulation video for a particular team (eMethods).

RANDOMIZATION

Randomization occurred at the level of the team, was stratified by study site, and was conducted in blocks of 4 to ensure equal distribution of teams across the study arms. Randomization packages were prepared at a central study site using a web-based random number generator (http://www.random.org). Sequentially numbered recruitment packages at each site contained 4 opaque envelopes (1 envelope for each study arm) with study arm assignments and random unique identifier codes for the individual participants. One envelope was pulled randomly from the recruitment package for each team on the day of the study. Within each envelope, in addition to study arm allocation, specific assignment of the order of MCQ test delivery (A vs B) for pre- and post-MCQ tests and order of scenario stem delivery (A vs B) for presimulation and postsimulation scenarios were carefully delineated to ensure an even order distribution (A-B vs B-A) among all recruited teams.
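The stratified, blocked allocation described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual software; the arm labels and the `make_blocks` helper are hypothetical:

```python
import random

ARMS = ["nonscripted-low", "scripted-low", "nonscripted-high", "scripted-high"]

def make_blocks(n_teams: int, seed=None) -> list:
    """Blocked randomization: within each block of 4, every arm appears exactly
    once, so arms stay balanced at a site as teams accrue."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_teams:
        block = ARMS[:]          # one block = one random permutation of the 4 arms
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_teams]

# Stratification by site: each site gets its own independent sequence.
schedule = {site: make_blocks(8, seed=i)
            for i, site in enumerate(["site01", "site02"])}
```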

STATISTICAL ANALYSIS

All data analysis was performed using statistical software (JMP, version 7.0.1; http://www.jmp.com), with significance designated as P < .05. Pearson χ2 was used to assess whether demographics were evenly distributed across study arms. Postintervention vs preintervention comparison (PPC) scores for MCQ, BAT, and CPT were calculated in percentages. Because each score represents an individual or team compared with itself, this is a form of repeated-measures analysis, with the advantage of limiting intersubject variability. Shapiro-Wilk tests were used to evaluate for normality. The MCQ data were normally distributed, so means with 95% CIs are reported and 1-way analysis of variance was used to test for differences between the 4 study arms. Two-sample independent 1-tailed t tests (performed on individual PPC scores) queried for differences between scripted vs nonscripted MCQ-PPC and high-realism vs low-realism MCQ-PPC. Because BAT and CPT data were not normally distributed, medians with IQRs were reported, and the Kruskal-Wallis 1-way analysis of variance test was used. Mann-Whitney tests queried for differences between scripted vs nonscripted BAT and CPT-PPC scores and high-realism vs low-realism BAT and CPT-PPC.
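The branching logic of this analysis (test for normality, then choose a parametric or nonparametric comparison of PPC scores) can be sketched with SciPy. The values below are simulated for illustration only, and the variable names are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scripted_ppc = rng.normal(5.3, 3.0, 90)      # hypothetical MCQ-PPC values (%)
nonscripted_ppc = rng.normal(3.6, 3.0, 90)

# Shapiro-Wilk test on each group decides between parametric and nonparametric tests.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (scripted_ppc, nonscripted_ppc))

if normal:
    # Normally distributed (as the MCQ data were): 1-tailed 2-sample t test.
    res = stats.ttest_ind(scripted_ppc, nonscripted_ppc, alternative="greater")
else:
    # Skewed (as the BAT and CPT data were): Mann-Whitney test on PPC scores.
    res = stats.mannwhitneyu(scripted_ppc, nonscripted_ppc, alternative="greater")
```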

RESULTS
STUDY POPULATION

A total of 453 participants composing 104 teams were recruited from July 1, 2008, to February 1, 2011. Of these, 443 individuals (97.8%) completed both the pre- and post-MCQ tests and were included in the analysis for that outcome measure. Thirty-seven participants from 8 different teams were randomly selected and removed from our sample to perform further validation of the outcome measurement tools (eMethods). Of the remaining 416 participants (96 teams), 29 participants (6 teams) were dropped from the study because of poor audio or video quality of the recorded simulations (eTable 6). Data from the remaining 387 participants and 90 teams were analyzed for CPT and BAT performance (Figure).

Figure. Study flow: participant recruitment and dropouts. BAT indicates Behavioral Assessment Tool; CPT, Clinical Performance Tool; and MCQ, multiple choice question.


Demographic characteristics of the 443 participants who completed the pre- and post-MCQ tests and 90 novice instructors demonstrated no significant differences between the 4 arms (Table 1 and Table 2). There was no significant difference in scores between examination A and B (69.6% vs 69.0%; P = .80).

Table 1. Comparison of Demographic Characteristics Between the 4 Study Arms for All 443 Participants
Table 2. Comparison of Demographic Composition Between the 4 Study Arms for All 90 Novice Instructors
SCRIPTED VS NONSCRIPTED DEBRIEFING
Knowledge (MCQ Test)

The mean (95% CI) MCQ test scores did not vary for scripted vs nonscripted debriefing at both baseline (69.3% [67.6-71.1] vs 69.1% [67.4-70.8]; P = .87) and postdebriefing (74.6% [73.1-76.3] vs 72.7% [71.1-74.3]; P = .09). Participants receiving scripted debriefing showed greater improvement compared with participants randomized to nonscripted debriefing (MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04).

Team Leader Behavioral Performance (BAT)

Median (interquartile range [IQR]) BAT scores for team leaders did not vary significantly for scripted vs nonscripted debriefing at baseline (52% [38%-71%] vs 54% [40%-67%]; P = .99) and postdebriefing (82% [62.5%-90%] vs 74.6% [54.5%-88%]; P = .24). Team leaders receiving scripted debriefing showed greater improvement in median BAT scores compared with those receiving nonscripted debriefing (BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03).

Team Clinical Performance (CPT)

Median (IQR) CPT scores did not vary significantly for scripted vs nonscripted debriefing at baseline (73% [68.2%-79.3%] vs 74.6% [69.8%-76.6%]; P = .95) and postdebriefing (82.5% [79.3%-87.3%] vs 82.5% [77.7%-85.7%]; P = .38). Teams receiving scripted debriefing had improved CPT scores compared with nonscripted debriefing teams (CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%]), but this difference was not statistically significant (P = .18).

HIGH VS LOW PHYSICAL-REALISM SIMULATOR

There was no significant difference in baseline scores between low-realism and high-realism groups for MCQ (P = .24), BAT (P = .82), and CPT (P = .34). The level of physical realism of the simulator (high realism vs low realism) did not have a statistically significant effect on MCQ-PPC scores (4.9% [3.7%-6.1%] vs 4.0% [2.8%-5.2%]; P = .29), BAT-PPC scores (12.0% [6.4%-32.7%] vs 12.7% [0.4%-26.5%]; P = .28), or CPT-PPC scores (7.9% [4.8%-14.3%] vs 6.4% [3.2%-12.7%]; P = .23).

Tables 3, 4, and 5 provide a summary of results for scripting and realism. eTable 7 summarizes secondary analysis across all 4 study arms. Because realism of the simulation did not have a statistically significant effect on our 3 outcome measures, we did not explore interaction terms (ie, impact of realism + scripted debriefing).

Table 3. Postintervention vs Preintervention Comparison Scores for MCQ, BAT, and CPT
Table 4. Results Before and After Debriefing for MCQ, BAT, and CPT in Scripted vs Nonscripted Debriefing Groups
Table 5. Results Before and After Debriefing for MCQ Test, BAT, and CPT in High- vs Low-Realism Groups
DISCUSSION
SCRIPTED DEBRIEFING

Our results suggest that novice instructors of a standard PALS course benefit from use of a scripted debriefing tool, resulting in improved cognitive and behavioral learning outcomes. Novice instructors, who typically struggle with debriefing aspects of crisis resource management, facilitated discussion of these issues better while using the script, as demonstrated by improved behavioral performance by participant team leaders in simulation following debriefing. Although we did not explore the reason behind this improvement, the positive effect of the script is compelling given the short duration of debriefing and the fact that all instructors were provided learning objectives beforehand.

Our study did not demonstrate a significantly greater improvement in CPT scores with scripted debriefing. This is not unexpected because (1) the CPT is a team-based performance metric, requiring effective interaction of multiple individuals to score positively, and (2) we studied only a single scenario followed by debriefing. The improvement seen with even one scenario and debriefing is nonetheless encouraging, suggesting that repeated scenario practice and debriefing may improve CPT scores more substantially.

Debriefing is a critically important component of SBE,43,44 and many models of debriefing exist.29,30,45-47 Although cognitive aids have been useful in several medical contexts,23-26 we are unaware of any studies assessing the efficacy of a cognitive aid for simulation-based debriefing. Our results support the notion that debriefing is an important element of the simulated learning experience. This work addresses the important issue of instructor competency in standardized resuscitation courses, which rely on large numbers of instructors across many training centers. Recently, the American Heart Association has incorporated a new debriefing tool into the 2011 PALS instructor manuals and courses,48,49 signaling a shift in philosophy for instructor training and standardization.

PHYSICAL REALISM

Several different ways of categorizing simulation fidelity or realism have been described.27,28 For this discussion, we used the categorization of realism into physical, semantical, and phenomenal modes by Dieckmann et al.27 Physical realism consists of physical properties of the simulator and the environment. Semantical realism refers to concepts, their relationships, and how they influence the simulation. Phenomenal realism includes emotions and cognitive states of thought that people experience during simulation. Several groups have described learning benefits of highly realistic simulation for resuscitation training. Wayne et al40,50 demonstrated that inclusion of simulator practice sessions on Advanced Cardiac Life Support algorithms improved cognitive performance in residents as well as adherence to Advanced Cardiac Life Support guidelines in management of actual cardiac arrests. In a randomized trial, Owen et al51 demonstrated that high-fidelity simulation training compared with low-fidelity training improved cognitive and behavioral performance in medical officers. Lee et al10 showed that simulator-trained interns performed better trauma assessments compared with interns trained with moulaged patients. In a recent pediatric study, Donoghue et al52 examined the benefits of physical realism by conducting mock resuscitations for pediatric residents and randomizing them to simulation with physical features activated or inactivated. The residents in the high physical-realism group rated scenarios more highly compared with those in the low physical-realism group, particularly in the pulseless arrest scenarios. Despite growing evidence supporting SBE, to our knowledge no study to date has attempted to describe the specific contribution of physical realism to various types of learning outcomes.

Our study did not demonstrate the same benefits of a high-realism (eg, turned-on) simulator. The effect may have been diminished because other aspects of realism in our study were high. All participants in the study were exposed to a simulated environment with very high physical realism. The preprogramming of scenarios helped to ensure a high degree of semantical realism. In the low-realism groups, phenomenal realism was optimized by using facilitator-guided verbal cues at predefined times. In addition, the scenario we selected for the study involved a pulseless patient, that is, a scenario that did not demand much physical feedback from the mannequin to simulate reality. Thus, our findings related to the lack of effect of simulator physical realism are likely tempered by relatively high degrees of physical, semantical, and phenomenal realism and possibly by selection of a scenario with limited physical findings.

GENERALIZABILITY

Although our study was conducted with a specific multidisciplinary group of learners, we suspect that effective use of scripted debriefing would be generalizable to learner groups of varying composition (eg, PALS courses). Scripted debriefing may have a positive effect on more experienced instructors or instructors teaching content in related areas of resuscitation (eg, Advanced Cardiac Life Support). Finally, the impact of a debriefing script may be enhanced with greater familiarity and use of the script. Further research is required to explore the generalizability of scripted debriefing in these related contexts.

LIMITATIONS

For practical reasons, we limited the study to one type of scenario, and learners were exposed to only one scenario and one debriefing before the assessment. The brief experience provided by one scenario and debriefing may have introduced a timing bias to our study and limited our ability to assess the full benefit of the intervention. In addition, the debriefing script was provided as a cognitive aid without supplemental instruction on how to effectively use the tool. This was done to ensure practical application and widespread implementation of the script across American Heart Association training centers. The mode of questioning used in the script is not as open-ended as a traditional reflective debriefing. The phrases in the script were developed to promote some reflection specific to predefined learning objectives but not necessarily to invoke prolonged reflective discussion. Enhanced instructor training and practice using the script before the sessions would likely have altered our results. Furthermore, the debriefing sessions were limited to 20 minutes; thus, the effect of scripted debriefing on variable length debriefings is unknown. Variable adherence to the debriefing script may have affected the results. In some instances, participants managed the simulation unexpectedly, making some of the phrases in the script inapplicable in certain contexts. Because this was difficult to control, we chose to keep all instructors randomized to the scripted debriefing arms in their preassigned arm of the study (ie, intention-to-treat analysis) regardless of how tightly they adhered to the wording of the script.

CONCLUSIONS

Our study has demonstrated that scripted debriefing for simulation-based pediatric resuscitation education improves educational outcomes (knowledge) and behavioral performance of the team leader. Turning on or off physical realism features of the mannequin does not improve learning outcomes when other aspects of physical, conceptual, and emotional realism are maintained. Further work is needed to identify the impact of scripted debriefing when used by more experienced instructors, for longer debriefing sessions, and in the context of other types of simulated scenarios.

Back to top
Article Information

Correspondence: Adam Cheng, MD, University of Calgary, KidSim-ASPIRE Research Program, Division of Emergency Medicine, Department of Pediatrics, Alberta Children's Hospital, 2888 Shaganappi Trail NW, Calgary, AB T3B 6A8, Canada (adam.cheng@albertahealthservices.ca).

Accepted for Publication: October 26, 2012.

Published Online: April 22, 2013. doi:10.1001/jamapediatrics.2013.1389

Author Contributions: Dr Cheng had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Cheng, Hunt, Donoghue, Nelson-McMillan, Nishisaki, Moyer, Brett-Fleegler, Kost, Stryjewski, Podraza, Hamilton, Hopkins, Duff, and Nadkarni. Acquisition of data: Cheng, Hunt, Donoghue, Nelson-McMillan, Nishisaki, LeFlore, Eppich, Moyer, Brett-Fleegler, Kleinman, Anderson, Adler, Braga, Kost, Stryjewski, Min, Podraza, Lopreiato, Hamilton, Stone, Reid, Hopkins, Manos, and Nadkarni. Analysis and interpretation of data: Cheng, Hunt, Donoghue, Nelson-McMillan, LeFlore, Eppich, Anderson, Stryjewski, Duff, Richard, and Nadkarni. Drafting of the manuscript: Cheng, Hunt, Nelson-McMillan, LeFlore, Anderson, and Nadkarni. Critical revision of the manuscript for important intellectual content: Cheng, Hunt, Donoghue, Nishisaki, LeFlore, Eppich, Moyer, Brett-Fleegler, Kleinman, Adler, Braga, Kost, Stryjewski, Min, Podraza, Lopreiato, Hamilton, Stone, Reid, Hopkins, Manos, Duff, Richard, and Nadkarni. Statistical analysis: Hunt, Nishisaki, LeFlore, Anderson, and Richard. Obtained funding: Cheng, Hunt, Nishisaki, and Nadkarni. Administrative, technical, and material support: Cheng, Donoghue, Nelson-McMillan, Nishisaki, Moyer, Brett-Fleegler, Kleinman, Min, Podraza, Hamilton, Stone, Reid, Hopkins, and Manos. Study supervision: Cheng, Hunt, Donoghue, Nishisaki, Moyer, Stryjewski, Lopreiato, and Nadkarni.

EXPRESS Investigators: Kristine Boyle, MS, Lucile Packard Children's Hospital; John R. Boulet, PhD, Foundation for Advancement of International Medical Education and Research; Laura Corbin, MD, Oregon Health Science University; Marino Festa, MBBS, Children's Hospital at Westmead; John Gosbee, MD, University of Michigan Health Systems; Laura Gosbee, MD, Red Forest Consulting LLC; Louis P. Halamek, MD, Stanford University; Takanari Ikeyama, MD, The Children's Hospital of Philadelphia; Liana Kappus, MEd, Yale New Haven Health; Douglas Leonard, MD, Oregon Health and Science University; Frank Overly, MD, Alpert Medical School of Brown University; Jenny Rudolph, PhD, Harvard Medical School; Stephen Schexnayder, MD, University of Arkansas; Robert Simon, EdD, Harvard Medical School; Stephanie Sudikoff, MD, Yale University School of Medicine; and Kathleen Ventre, MD, University of Colorado.

Conflict of Interest Disclosures: Dr Cheng reports receiving a research grant from the American Heart Association (AHA) for design and conduct of the study and collection and analysis of data, a grant from the Laerdal Foundation for Acute Medicine, and an infrastructure grant for the EXPRESS collaborative to support administrative and technical positions (funds from the Laerdal Foundation for Acute Medicine grant were not used for conducting this study). Dr Hunt reports receiving research grants from the Laerdal Foundation for Acute Medicine and The Hartwell Foundation and money paid for expert testimony by DeBlasio & Donnell LLC. Dr Nelson-McMillan reports receiving a research grant from the AHA to provide PALS books for this study. Dr Nishisaki reports receiving funding from the Laerdal Foundation Center for Excellence and an AHRQ R03 Pilot Grant (not relevant to this study). Dr Eppich reports receiving a research grant from the AHA for this study, travel reimbursement as a board member of the Society for Simulation in Healthcare, a research grant from the Agency for Healthcare Research and Quality for an in situ simulation project, and travel reimbursement and salary support from the Center for Medical Simulation and the Tübingen Center for Patient Safety and Simulation. Dr Brett-Fleegler reports receiving a research grant from the AHA. Dr Kleinman reports receiving travel reimbursement as a volunteer for the AHA, is an employee of the Children's Hospital Boston Anesthesia Foundation, and is a paid speaker for Children's Mercy Medical Center. Dr Anderson reports being a consultant for SimHealth, employment at Oregon Health Sciences University, receiving payment for a lecture from The Doctor's Company, and receiving payment from the American Academy of Pediatrics for development of video cases. Dr Kost reports being an employee of Nemours/Alfred I. duPont Hospital for Children and receiving continuing medical education allocation from employers for travel.
Dr Stone reports receiving a research grant from the AHA for this study and royalties for UpToDate articles written on femur fractures in children. Dr Reid reports receiving a research grant from the AHA for this study. Dr Nadkarni reports receiving support from the Laerdal Foundation for Acute Care Medicine Center of Excellence Grant (no specific funding for this study) and from Laerdal Medical Corporation for unrelated research on quality of cardiopulmonary resuscitation and cardiopulmonary resuscitation training, as well as sponsored travel for a visiting professorship and a presentation at simulation users groups (unrelated to this study).

Funding/Support: This study was funded by an educational research grant from the AHA.

Role of the Sponsors: Funds from this grant were used for the design and conduct of the study, as well as collection, management, analysis, and interpretation of data.

Online-Only Material: Listen to an author interview about this article, and others, at http://bit.ly/MW1WVH. This article is featured in the JAMA Pediatrics Journal Club; teaching PowerPoint slides are available for download.

Additional Contributions: Anne Marie White, RN (BC Children's Hospital), participated in recruitment and data collection; Albert Ho and Ferooz Sekandarpoor (Center for Excellence in Simulation Education and Innovation) conducted video processing; and Mary Patterson, MD (Cincinnati Children's Hospital), participated in study design. None of these individuals received financial compensation for their contributions to the study.

REFERENCES
1. Hunt EA, Fiedor-Hamilton M, Eppich WJ. Resuscitation education: narrowing the gap between evidence-based resuscitation guidelines and performance using best educational practices. Pediatr Clin North Am. 2008;55(4):1025-1050, xii.
2. Eppich WJ, Adler MD, McGaghie WC. Emergency and critical care pediatrics: use of medical simulation for training in acute pediatric emergencies. Curr Opin Pediatr. 2006;18(3):266-271.
3. Adler MD, Vozenilek JA, Trainor JL, et al. Development and evaluation of a simulation-based pediatric emergency medicine curriculum. Acad Med. 2009;84(7):935-941.
4. Cheng A, Duff J, Grant E, Kissoon N, Grant VJ. Simulation in paediatrics: an educational revolution. Paediatr Child Health. 2007;12(6):465-468.
5. Eppich WJ, Brannen M, Hunt EA. Team training: implications for emergency and critical care pediatrics. Curr Opin Pediatr. 2008;20(3):255-260.
6. Issenberg SB, Pringle S, Harden RM, Khogali S, Gordon MS. Adoption and integration of simulation-based learning technologies into the curriculum of a UK undergraduate education programme. Med Educ. 2003;37(suppl 1):42-49.
7. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282(9):861-866.
8. Marshall RL, Smith JS, Gorman PJ, Krummel TM, Haluck RS, Cooney RN. Use of a human patient simulator in the development of resident trauma management skills. J Trauma. 2001;51(1):17-21.
9. Holcomb JB, Dumire RD, Crommett JW, et al. Evaluation of trauma team performance using an advanced human patient simulator for resuscitation training. J Trauma. 2002;52(6):1078-1086.
10. Lee SK, Pardo M, Gaba D, et al. Trauma assessment training with a patient simulator: a prospective, randomized study. J Trauma. 2003;55(4):651-657.
11. Donoghue AJ, Durbin DR, Nadel FM, Stryjewski GR, Kost SI, Nadkarni VM. Effect of high-fidelity simulation on Pediatric Advanced Life Support training in pediatric house staff: a randomized trial. Pediatr Emerg Care. 2009;25(3):139-144.
12. Nishisaki A, Donoghue AJ, Colborn S, et al. Effect of just-in-time simulation training on tracheal intubation procedure safety in the pediatric intensive care unit. Anesthesiology. 2010;113(1):214-223.
13. Hunt EA, Heine M, Hohenhaus SM, Luo X, Frush KS. Simulated pediatric trauma team management: assessment of an educational intervention. Pediatr Emerg Care. 2007;23(11):796-804.
14. Hunt EA, Walker AR, Shaffner DH, Miller MR, Pronovost PJ. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: highlighting the importance of the first 5 minutes. Pediatrics. 2008;121(1):e34-e43.
15. Hunt EA, Hohenhaus SM, Luo X, Frush KS. Simulation of pediatric trauma stabilization in 35 North Carolina emergency departments: identification of targets for performance improvement. Pediatrics. 2006;117(3):641-648.
16. Morey JC, Simon R, Jay GD, et al. Error reduction and performance improvement in the emergency department through formal teamwork training: evaluation results of the MedTeams project. Health Serv Res. 2002;37(6):1553-1581.
17. Shapiro MJ, Morey JC, Small SD, et al. Simulation based teamwork training for emergency department staff: does it improve clinical team performance when added to an existing didactic teamwork curriculum? Qual Saf Health Care. 2004;13(6):417-421.
18. Wallin C-J, Meurling L, Hedman L, Hedegård J, Felländer-Tsai L. Target-focused medical emergency team training using a human patient simulator: effects on behaviour and attitude. Med Educ. 2007;41(2):173-180.
19. Small SD, Wuerz RC, Simon R, Shapiro N, Conn A, Setnik G. Demonstration of high-fidelity simulation team training for emergency medicine. Acad Emerg Med. 1999;6(4):312-323.
20. Savoldelli GL, Naik VN, Park J, Joo HS, Chow R, Hamstra SJ. Value of debriefing during simulated crisis management: oral versus video-assisted oral feedback. Anesthesiology. 2006;105(2):279-285.
21. Edelson DP, Litzinger B, Arora V, et al. Improving in-hospital cardiac arrest process and outcomes with performance debriefing. Arch Intern Med. 2008;168(10):1063-1069.
22. Dieckmann P, Molin Friis S, Lippert A, Østergaard D. The art and science of debriefing in simulation: ideal and practice. Med Teach. 2009;31(7):e287-e294.
23. Nelson KL, Shilkofski NA, Haggerty JA, Saliski M, Hunt EA. The use of cognitive aids during simulated pediatric cardiopulmonary arrests. Simul Healthc. 2008;3(3):138-145.
24. Harrison TK, Manser T, Howard SK, Gaba DM. Use of cognitive aids in a simulated anesthetic crisis. Anesth Analg. 2006;103(3):551-556.
25. Winters BD, Gurses AP, Lehmann H, Sexton JB, Rampersad CJ, Pronovost PJ. Clinical review: checklists—translating evidence into practice. Crit Care. 2009;13(6):210.
26. Hales BM, Pronovost PJ. The checklist—a tool for error management and performance improvement. J Crit Care. 2006;21(3):231-235.
27. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2(3):183-193.
28. Rudolph JW, Simon R, Raemer DB. Which reality matters? Questions on the path to high engagement in healthcare simulation. Simul Healthc. 2007;2(3):161-163.
29. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25(2):361-376.
30. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There's no such thing as "nonjudgmental" debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1(1):49-55.
31. Donoghue A, Ventre K, Boulet J, et al; EXPRESS Pediatric Simulation Research Investigators. Design, implementation, and psychometric analysis of a scoring instrument for simulated pediatric resuscitation: a report from the EXPRESS pediatric investigators. Simul Healthc. 2011;6(2):71-77.
32. Donoghue A, Nishisaki A, Sutton R, Hales R, Boulet J. Reliability and validity of a scoring instrument for clinical performance during Pediatric Advanced Life Support simulation scenarios. Resuscitation. 2010;81(3):331-336.
33. LeFlore JL, Anderson M, Michael JL, Engle WD, Anderson J. Comparison of self-directed learning versus instructor-modeled learning during a simulated clinical experience. Simul Healthc. 2007;2(3):170-177.
34. LeFlore JL, Anderson M. Alternative educational models for interdisciplinary student teams. Simul Healthc. 2009;4(3):135-142.
35. Capella J, Smith S, Philp A, et al. Teamwork training improves the clinical care of trauma patients. J Surg Educ. 2010;67(6):439-443.
36. Finer N, Rich W. Neonatal resuscitation for the preterm infant: evidence versus practice. J Perinatol. 2010;30(suppl):S57-S66.
37. Mazzocco K, Petitti DB, Fong KT, et al. Surgical team behaviors and patient outcomes. Am J Surg. 2009;197(5):678-685.
38. Studnek JR, Fernandez AR, Shimberg B, Garifo M, Correll M. The association between emergency medical services field performance assessed by high-fidelity simulation and the cognitive knowledge of practicing paramedics. Acad Emerg Med. 2011;18(11):1177-1185.
39. Wayne DB, Butter J, Siddall VJ, et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med. 2006;21(3):251-256.
40. Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest. 2008;133(1):56-61.
41. Edelson DP, Abella BS, Kramer-Johansen J, et al. Effects of compression depth and pre-shock pauses predict defibrillation failure during cardiac arrest. Resuscitation. 2006;71(2):137-145.
42. Cheng A, Nadkarni V, Hunt EA, Qayumi K; EXPRESS Investigators. A multifunctional online research portal for facilitation of simulation-based research: a report from the EXPRESS pediatric simulation research collaborative. Simul Healthc. 2011;6(4):239-243.
43. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27(1):10-28.
44. Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. Research regarding debriefing as part of the learning process. Simul Healthc. 2011;6(suppl):S52-S57.
45. Petranek CF. Written debriefing: the next vital step in learning with simulations. Simul Gaming. 2000;31(1):108-118.
46. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2(2):115-125.
47. Petranek C. A maturation in experiential learning: principles of simulation and gaming. Simul Gaming. 1994;25(4):513-523.
48. Pediatric Advanced Life Support Provider Manual. Dallas, TX: American Heart Association; 2011.
49. Conducting the learning scenarios (simulations). In: Pediatric Advanced Life Support Instructor Manual. Dallas, TX: American Heart Association; 2011:60.
50. Wayne DB, Butter J, Siddall VJ, et al. Simulation-based training of internal medicine residents in advanced cardiac life support protocols: a randomized trial. Teach Learn Med. 2005;17(3):210-216.
51. Owen H, Mugford B, Follows V, Plummer JL. Comparison of three simulation-based training methods for management of medical emergencies. Resuscitation. 2006;71(2):204-211.
52. Donoghue AJ, Durbin DR, Nadel FM, Stryjewski GR, Kost SI, Nadkarni VM. Perception of realism during mock resuscitations by pediatric housestaff: the impact of simulated physical features. Simul Healthc. 2010;5(1):16-20.