Each axis of the radar graph represents a separate metric; clockwise from top: teamwork, sepsis adherence, cardiac arrest adherence, and seizure adherence. The darker shade represents the mean score on each metric by general emergency departments and the lighter shade represents the mean score on each metric by pediatric emergency departments.
eTable 1. Adherence guidelines by each case
eTable 2. Component scores by presence of pediatric care coordinator
eFigure 1. Pediatric Patient Volume vs. Composite Quality Score
eFigure 2. Pediatric Readiness Score vs. Composite Quality Score
Auerbach M, Whitfill T, Gawel M, et al. Differences in the Quality of Pediatric Resuscitative Care Across a Spectrum of Emergency Departments. JAMA Pediatr. 2016;170(10):987–994. doi:10.1001/jamapediatrics.2016.1550
The quality of pediatric resuscitative care delivered across the spectrum of emergency departments (EDs) in the United States is poorly described. In a recent study, more than 4000 EDs completed the Pediatric Readiness Survey (PRS); however, the correlation of PRS scores with the quality of simulated or real patient care has not been described.
To measure and compare the quality of resuscitative care delivered to simulated pediatric patients across a spectrum of EDs and to examine the correlation of PRS scores with quality measures.
Design, Setting, and Participants
This prospective multicenter cohort study evaluated 58 interprofessional teams in their native pediatric or general ED resuscitation bays caring for a series of 3 simulated critically ill patients (sepsis, seizure, and cardiac arrest).
Main Outcomes and Measures
A composite quality score (CQS) was measured as the sum of 4 domains: (1) adherence to sepsis guidelines, (2) adherence to cardiac arrest guidelines, (3) performance on seizure resuscitation, and (4) teamwork. Pediatric Readiness Survey scores and health care professional demographics were collected as independent data. Correlations were explored between CQS and individual domain scores with PRS.
Overall, 58 teams from 30 hospitals participated (8 pediatric EDs [PEDs], 22 general EDs [GEDs]). The mean CQS was 71 (95% CI, 68-75); PEDs had a higher mean CQS (82; 95% CI, 79-85) vs GEDs (66; 95% CI, 63-69) and outperformed GEDs in all domains. However, when using generalized estimating equations to estimate CQS controlling for clustering of the data, PED status did not explain a higher CQS (β = 4.28; 95% CI, −4.58 to 13.13) while the log of pediatric patient volume did explain a higher CQS (β = 9.57; 95% CI, 2.64-16.49). The correlation of CQS to PRS was moderate (r = 0.51; P < .001). The correlation was weak for cardiac arrest (ρ = 0.24; P = .07), sepsis (ρ = 0.45; P < .001), and seizure (ρ = 0.43; P = .001), and moderate for teamwork (ρ = 0.71; P < .001).
Conclusions and Relevance
This multicenter study noted significant differences in the quality of simulated pediatric resuscitative care across a spectrum of EDs. The CQS was higher in PEDs compared with GEDs. However, when controlling for pediatric patient volume and other variables in a multivariable model, PED status does not explain a higher CQS while pediatric patient volume does. The correlation of the PRS was moderate for simulation-based measures of quality.
In 2006 the Institute of Medicine described emergency care for children in the United States as “uneven.”1 Three years later key stakeholders formed a national coalition to improve pediatric readiness and published a set of guidelines to address the gaps described by the Institute of Medicine.2-6 In 2013, this group administered the National Pediatric Readiness Project, a web-based survey measuring compliance with these guidelines.7,8 This assessment was completed by 4149 hospitals, representing 24 million of the 25.5 million annual US pediatric emergency department (ED) visits.9,10
There are limited measures describing the quality of pediatric resuscitative care in the ED.11 Quality measures have been published for selected high acuity pediatric conditions.12 The unpredictability and low frequency of pediatric resuscitation in any individual ED, as well as the logistical and ethical challenges of data collection, have limited research on this topic. A simulation-based study noted that the quality of cardiopulmonary resuscitation is poor.13,14 A comprehensive review comparing practice patterns between pediatric EDs (PEDs) and general EDs (GEDs) yielded only 20 publications, and none reported data on resuscitation.15
The recent publication on the Pediatric Readiness Survey (PRS) provided vital information on ED pediatric readiness in the United States.10 However, there are no studies examining the correlation of PRS scores with patient outcomes or quality of care. Examining the correlation of PRS scores with patient outcomes would be ideal. However, owing to the low frequency of resuscitation events in each ED and the paucity of prospective research in this area, we decided to leverage simulation to measure quality. Simulation provides realism and standardization of patients, through preprogramming of vital sign trends over time and physiologic responses to interventions and through scripting of parent actors, allowing diverse research questions to be answered that cannot otherwise be feasibly assessed, particularly in high-stakes, low-frequency events such as pediatric resuscitations.16,17 In situ simulation involves bringing the simulator into the clinical environment to measure the quality of care delivered by intact care teams using real-world equipment.17 The use of video-based data abstraction after simulations allows for robust review and measurement. There is a growing body of evidence supporting the validity of using simulation to measure the quality of care.18-22
Our primary aim was to measure and compare differences in the quality of simulated pediatric resuscitative care provided by interprofessional teams across a spectrum of EDs. A secondary aim was to assess the correlation of quality and PRS scores. We hypothesized that quality scores would be higher in PEDs compared with GEDs and that PRS would correlate with quality.
Question Are there differences in the quality of pediatric resuscitative care across a spectrum of emergency departments (EDs)?
Findings This study evaluated 58 interprofessional teams in their native resuscitation bay caring for a series of 3 simulated critically ill patients (sepsis, seizure, and cardiac arrest). The mean composite quality score was 82% in 8 pediatric EDs compared with 66% in 22 general EDs; when controlling for pediatric volume, this difference lost statistical significance.
Meaning Differences in the quality of pediatric resuscitation measured by simulation exist across a spectrum of EDs.
This prospective, multicenter, in situ, simulation-based cohort study measured the performance of interprofessional teams caring for a series of 3 simulated pediatric patients. Sessions were announced and involved a parent actor presenting with the simulator to the resuscitation bays in 8 PEDs and 22 GEDs. Institutional review board approval was obtained from Yale University and each collaborating site. Participants provided signed consent to be videotaped.
Investigators from 8 academic medical centers within INSPIRE23,24 recruited 2 teams of health care professionals from their institutions’ PED and 2 additional teams from at least 1 GED in their respective geographic region. We purposefully sampled EDs of different sizes, locations, and staffing models. Pediatric EDs were defined as EDs in children’s hospitals, staffed by board-certified pediatric emergency medicine physicians and affiliated with an academic medical center. General EDs were defined as EDs staffed by board-certified emergency medicine physicians (not pediatric emergency medicine) and not located in a children’s hospital. Two interprofessional teams were recruited from each ED. Teams were composed of 1 to 2 physicians (pediatric emergency medicine or emergency medicine board certified), 3 to 5 nurses, and 2 to 3 nursing assistants or emergency medical technicians. The team size varied to mirror the typical team size of each ED. Students and residents were not recruited to avoid confounding by variations in training level. Participants were protected from clinical responsibilities during the simulations. Recruitment was performed by a designated liaison at each site via an email sent to all staff 1 month prior to the simulation and a sign-up document distributed on a weekly basis until the maximum number of participants had volunteered.
Teams were enrolled over a 30-month period (April 18, 2013, through October 13, 2015). Sessions took place in the ED resuscitation room using each department’s actual equipment (eg, infusion pumps), supplies (eg, syringes), resources (eg, cognitive aids), and policies and/or guidelines (eg, sepsis protocol). To avoid contamination of simulated drugs into clinical practice, a standardized drawer was created with labeled blue medications that matched standard concentrations and appearance (PocketNurse).25
Each team participated in a 2.5-hour simulation session that involved 4 scenarios in the following order: (1) infant foreign body, (2) infant sepsis, (3) infant seizure, and (4) child cardiac arrest. The foreign body scenario was a warm-up case to familiarize each team with the simulation environment and the specific functions of the simulator; these data were not included in the analyses. Each session began with a standardized orientation to introduce the research team, describe the format for the day, and communicate the rules and expectations related to their performance. Participants were oriented to the functionality of the simulators (SimBaby, MegaCode Kid [Laerdal]), including demonstrating the mechanisms by which the simulator could be placed on a monitor and how to administer medications and fluids. The team was also introduced to the “parent,” played by a professional actor. The parent-actor was provided a script with statements to make at designated times and standardized responses to questions. Laboratory data were provided on request on preprinted laminated cards, including standard point-of-care testing (eg, venous blood gas, dextrose, electrolytes). The principal investigator provided this scripted introduction and verbally reported scripted prompts during the simulation on request from team members (eg, capillary refill time) and facilitated a scripted debriefing after each case.26 The principal investigator has extensive training and more than 10 years of experience in debriefing.
All simulations were video recorded from 2 standard angles (overhead view of the baby and a panoramic view of the room) with integration of the patient monitor output using the B-line Live Capture Ultraportable System (B-Line Medical). The research team from Yale University (M.A., principal investigator; M.G., nurse-researcher; a research associate; and an actor) traveled to each site, set up equipment in situ (simulators, cameras, technical equipment), conducted the simulations, and collected data. This team was joined at each GED site by the designated collaborating investigator(s) from each respective academic medical center. A single research nurse (M.G.) scored performance on a standardized data collection instrument during the case. Subsequent to the simulation day, video reviews were conducted by the research nurse and principal investigator. During review the team was provided a concurrent stream of the 2 video angles, the vital signs, and the simulator data output. These reviews were used to score teamwork and other variables that could not be collected in real time (eg, compression rate). When discrepancies were noted in the scoring, both reviewers met to concurrently score the video and discuss the scoring until consensus was achieved. The raters were blinded to health care professional factors such as experience but not to PED or GED status of the team.
Health care professional–level data were collected via a survey. At each site, a nurse and/or physician not participating in the simulations completed the PRS via in-person data collection on the same day as the simulations. The PRS was developed for a multiphase quality improvement initiative to ensure that all EDs have the essential guidelines and resources to provide effective emergency care to pediatric patients.8,18,27,28 The research team had permission to use the PRS.29 The 6 domains of the PRS are coordination of care, physician and/or nurse staffing, quality improvement, patient safety, policies and/or procedures, and equipment and/or supplies.10 A subset of questions on the PRS described the presence of a pediatric care coordinator.
The primary outcome was a composite quality score (CQS) calculated as the average of 4 distinct domain scores: (1) adherence to sepsis guidelines, (2) adherence to pediatric advanced life support guidelines, (3) performance on seizure resuscitation, and (4) the mean teamwork score for each team across the 3 cases.
Performance measures were iteratively developed over 6 months. Content validity evidence was provided through adaptation of existing guidelines and a modified Delphi review process involving 8 pediatric emergency medicine physicians, 4 pediatric intensive care physicians, and 1 pediatric emergency nurse via 6 conference calls and 2 in-person meetings. The response process for the assessment instrument was improved through pilot application and iterative changes to the cases and checklists during 20 simulations with teams of health care professionals in training at each site (who were not eligible for the study). The sepsis measures were derived from international guidelines.28 The cardiac arrest measures were derived from the American Heart Association pediatric advanced life support (PALS) guidelines.30 The seizure performance measures were developed based on established best practices related to the management of hypoglycemic seizures. Each case performance score was calculated using equal weighting for all subcomponents and divided by the total number of possible elements to derive a score on a scale of 0 to 100. The total composite quality score was calculated as the average of the 4 domain scores. The component metrics and time-critical performance checklists for each of the cases are listed in eTable 1 in the Supplement.
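The scoring arithmetic described above can be sketched as follows. This is a minimal illustration of the equal-weighting and averaging steps only; the checklist contents and numbers below are hypothetical placeholders, not the study's actual instruments (those appear in eTable 1 in the Supplement).

```python
# Sketch of the composite quality score (CQS) arithmetic described above.
# Checklist items here are hypothetical; see eTable 1 in the Supplement
# for the actual component metrics.

def case_score(checklist):
    """Equally weighted checklist items, scaled to a 0-100 score."""
    return 100 * sum(checklist) / len(checklist)

def composite_quality_score(sepsis, cardiac_arrest, seizure, teamwork_scores):
    # The teamwork domain is the mean teamwork score across the 3 cases.
    teamwork = sum(teamwork_scores) / len(teamwork_scores)
    domains = [case_score(sepsis), case_score(cardiac_arrest),
               case_score(seizure), teamwork]
    # The CQS averages the 4 domain scores on the same 0-100 scale.
    return sum(domains) / len(domains)

# Hypothetical checklist results (1 = element performed, 0 = missed):
cqs = composite_quality_score(
    sepsis=[1, 1, 0, 1],           # 75
    cardiac_arrest=[1, 0, 1, 0],   # 50
    seizure=[1, 1, 1, 0],          # 75
    teamwork_scores=[80, 70, 90],  # mean 80
)
# cqs == 70.0
```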
Teamwork was measured using the Simulation Team Assessment Tool (STAT) teamwork domain for each case and represented as the mean score across all 3 cases. The STAT is a validated pediatric simulation-based assessment tool.31 Both raters completed 4 hours of training with the team that developed STAT prior to using it in this study.
All data were manually entered into Microsoft Excel version 14.0 (Microsoft) and transferred into SPSS version 22.0 (IBM Corp) with which all statistical analyses were performed. We examined differences in survey responses and simulation data by pediatric patient volume using bivariate analyses. Data were examined for normality and homogeneity in each analysis.
All data were examined for missing values; only the teamwork measure had missing data. Eleven of the 58 teams lacked teamwork scores, owing either to lack of consent for videotaping or to technical issues (difficulty hearing the audio feed needed to evaluate communication). Sensitivity analyses comparing imputed scores with deletion of the missing scores showed no difference in the outcome analyses; we therefore treated these data points as missing at random and used imputed scores to replace missing data.
We conducted Pearson χ2 or Fisher exact tests for categorical data as appropriate, independent t tests for normally distributed continuous data, and Mann-Whitney U tests for nonparametric data. Based on our primary hypothesis that PEDs would score a higher CQS, we report unadjusted CQS stratified by PED vs GED status.
We tested the correlation of PRS with teamwork scores using a Pearson correlation coefficient (r) and with scores on each of the cases using Spearman correlation coefficients (ρ). We used the following cut points for correlation: 0.8 or greater for strong, 0.5 to 0.79 for moderate, 0.20 to 0.49 for weak, and 0 to 0.19 for negligible.32 Lastly, we used generalized estimating equations (GEE) with a linear identity link to model CQS as the dependent variable, with a robust variance estimator to account for within-hospital correlation. The GEE model examined which variables explained variability in the CQS. We included the following potential covariates: PED or GED status, pediatric patient volume (log10 transformed for interpretability), PRS, team experience, and the percentages of team members holding MDs, with prior simulation experience, and with PALS training (each as a continuous variable).
Fifty-eight teams from 30 EDs (8 PEDs, 22 GEDs) participated, and ED characteristics are reported in Table 1. Pediatric EDs had higher pediatric patient volumes, total PRS scores, ratio of physicians per team, and percentages of team members that participated in frequent (at least monthly) pediatric simulations. Team experience did not significantly differ between PEDs and GEDs, nor did median percentage of team members with PALS training.
The unadjusted data in Table 2 report the CQS and the 4 domain scores (with the component elements of each) for PEDs and GEDs. The mean (SD) CQS was 71 (11) across all sites. Pediatric EDs had a significantly higher overall CQS (mean, 82) compared with GEDs (mean, 66) (P < .001), as well as higher individual domain scores: sepsis (100 [interquartile range (IQR), 100-100] vs 67 [IQR, 67-83]; P < .001), cardiac arrest (64 [IQR, 57-75] vs 50 [IQR, 36-64]; P = .006), seizure (71 [IQR, 57-71] vs 71 [IQR, 71-93]; P = .04), and teamwork (mean, 87 vs 72; P < .001). We also explored removing teamwork as a component of the CQS; the difference in CQS without teamwork was similar to the reported difference between GEDs and PEDs (mean, 65 vs 82, respectively; P < .001). The Figure shows a spider diagram representing the score for each CQS domain for PEDs and GEDs.
The results of the GEE model presented in Table 3 show that, after adjustment, GED or PED status did not predict CQS (β = 4.28; 95% CI, −4.58 to 13.13), whereas the log of pediatric patient volume significantly explained a higher CQS (β = 9.57; 95% CI, 2.64-16.49), as did PRS (β = 0.14; 95% CI, 0.01-0.27). The percentage of team members with PALS training explained a slightly lower CQS (β = −0.08; 95% CI, −0.15 to −0.02). A moderate correlation was noted between CQS and pediatric patient volume (r = 0.68; P < .001); a graphical representation of this relationship is depicted in eFigure 1 in the Supplement.
A moderate correlation was noted between CQS and PRS (r = 0.51; P < .001); a graphical representation of this relationship is depicted in eFigure 2 in the Supplement. Table 4 reports the correlations of quality domain scores with the PRS: moderate for teamwork (ρ = 0.71; P < .001), weak for sepsis adherence (ρ = 0.45; P < .001) and seizure performance (ρ = 0.43; P = .001), and weak for cardiac arrest adherence (ρ = 0.24; P = .07). The correlation between CQS and PRS was attenuated to weak when adjusting for teamwork scores (r = 0.45; P < .001).
This study revealed higher total CQSs and higher subcomponent scores across all domains in PEDs compared with GEDs. However, when controlling for pediatric volume, PED status did not explain a higher CQS, indicating that pediatric patient volume is more indicative of quality than the GED or PED distinction. The greatest differences in care between GEDs and PEDs were noted for the sepsis and cardiac arrest cases and the teamwork scores. A detailed analysis of performance on the sepsis case has been published by our group.33 In the care of the patient with hypoglycemia who had a seizure, PEDs were more likely to select the appropriate concentration and administer the correct dose of glucose.
There are limited granular data describing the quality of pediatric resuscitative care in real patients, and existing data are retrospective (eg, quality of cardiopulmonary resuscitation, time to fluid resuscitation in septic patients).34,35 Novel methods have been described to better evaluate the quality of resuscitative care including the structured panel process36 and implicit review process.37 Surveys are a feasible method to measure ED pediatric readiness. The PRS was not designed to measure the quality of care; however, a correlation of the PRS with the quality of resuscitative care could obviate the need for additional measurements to evaluate this construct. Unfortunately, our results demonstrated only weak to moderate correlations between the PRS score and quality of care measured by simulation. The performance of each of the participating EDs in these simulations could be used to guide local improvement interventions. Future work should be conducted to describe the correlation between these simulations and patient or population-level outcomes.
Current guidelines advise hospitals to appoint nurse and/or physician pediatric emergency care coordinators (PECCs) to provide pediatric leadership.1 The recent study by Gausche-Hill and colleagues10 described a strong correlation between PRS scores and the presence of PECCs. In this study, we explored the effect of the PECC on simulation-based quality scores (using PRS scores from our study population) and found that the presence of a nurse or physician PECC alone only mildly increased quality and PRS scores. The presence of both a nurse and a physician PECC, however, was associated with much higher quality and PRS scores (eTable 2 in the Supplement); when looking at GEDs alone, this relationship was severely attenuated and the differences between the scores were nonsignificant. This was unexpected and suggests that there is more complexity to the role of the PECC in quality of care.
Our recruitment methods likely led to selection bias with individuals who agreed to participate being more or less skilled than other staff; however, this bias would be present in all EDs. Pediatric EDs had more experience with pediatric simulation. This may have resulted in improved performance on a simulation-based assessment and biased our results. However, this was not significantly associated with CQS in a multivariable GEE model. The checklists we used have limited validity evidence in the domains of internal structure, relation to other variables, and consequences. Lastly, reviewers in our study were not blinded to PED or GED status, and this may have affected their ratings. The initial study protocol planned to use blinded reviewers; however, after conducting the first series of simulations, we recognized that collecting the quantitative data for cases required both in-person and video-based data collection. To ensure consistency, 2 investigators were present for all simulations and scored all cases independently using in-person and video-based review. We noted that true blinding was unachievable owing to the presence of hospital names on signage and participants’ clothing.
This multicenter study noted differences in the quality of simulated pediatric resuscitative care across a spectrum of EDs in the United States. The overall quality of care was higher in PEDs compared with GEDs. However, when controlling for pediatric patient volume, PED distinction did not significantly explain higher CQS. The PRS score did not correlate well with simulation-based measures of quality. Additional work is needed to explore whether differences in quality are associated with variability in patient outcomes.
Corresponding Author: Marc Auerbach, MD, MSci, Division of Pediatric Emergency Medicine, Department of Pediatrics, Yale University School of Medicine, 100 York St, Ste 1F, New Haven, CT 06511 (firstname.lastname@example.org).
Accepted for Publication: May 5, 2016.
Published Online: August 29, 2016. doi:10.1001/jamapediatrics.2016.1550
Author Contributions: Dr Auerbach had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Auerbach, Gawel, Kessler, Walsh, Gangadharan, Hamilton, Schultz, Nishisaki, Tay, Lavoie, Katznelson, Nadkarni, Brown.
Acquisition, analysis, or interpretation of data: Auerbach, Whitfill, Gawel, Kessler, Walsh, Gangadharan, Hamilton, Nishisaki, Lavoie, Katznelson, Dudas, Baird, Nadkarni, Brown.
Drafting of the manuscript: Auerbach, Whitfill, Gawel, Walsh, Gangadharan, Schultz, Lavoie, Nadkarni, Brown.
Critical revision of the manuscript for important intellectual content: Auerbach, Whitfill, Gawel, Kessler, Walsh, Gangadharan, Hamilton, Nishisaki, Tay, Lavoie, Katznelson, Dudas, Baird, Nadkarni, Brown.
Statistical analysis: Auerbach, Whitfill, Walsh, Gangadharan, Nishisaki, Lavoie, Baird.
Obtained funding: Auerbach, Gangadharan, Nadkarni.
Administrative, technical, or material support: Auerbach, Whitfill, Gawel, Walsh, Gangadharan, Hamilton, Schultz, Nishisaki, Tay, Lavoie, Brown.
Study supervision: Auerbach, Kessler, Nishisaki, Dudas, Nadkarni, Brown.
Conflict of Interest Disclosures: None reported.
Funding/Support: This study was supported by a grant from the RBaby Foundation (Rbabyfoundation.org) to Yale University, with subcontracts to the collaborating academic medical centers. A grant from the Agency for Healthcare Research and Quality (R18 HS20286-03) was used to develop these cases.
Role of the Funder/Sponsor: The funders/sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: We acknowledge the contributions of members of the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE), as well as the Society for Simulation in Healthcare, and the International Pediatric Simulation Society for providing the INSPIRE/ImPACTS investigators with space at their annual meetings. We wish to acknowledge Charmin Gohel, MBBS, for editorial assistance.