Figure. Composite Quality Score of Pediatric Emergency Departments Compared With General Emergency Departments

Each axis of the radar graph represents a separate metric; clockwise from top: teamwork, sepsis adherence, cardiac arrest adherence, and seizure adherence. The darker shade represents the mean score on each metric by general emergency departments and the lighter shade represents the mean score on each metric by pediatric emergency departments.

Table 1. Baseline Variables Across Spectrum of Hospitals and Emergency Department Types
Table 2. Composite Quality Score Domains
Table 3. Estimates From GEE Models of Indicators of CQS
Table 4. Correlation Between Composite Quality Score Domains and Pediatric Readiness Survey Components
Original Investigation
Caring for the Critically Ill Patient
October 2016

Differences in the Quality of Pediatric Resuscitative Care Across a Spectrum of Emergency Departments

Author Affiliations
  • 1Division of Pediatric Emergency Medicine, Department of Pediatrics, Yale University School of Medicine, New Haven, Connecticut
  • 2Department of Pediatrics, Columbia University Medical Center, New York, New York
  • 3Division of Pediatric Emergency Medicine, Department of Pediatrics, University of Massachusetts Medical Center, Worcester
  • 4Division of Critical Care Medicine, Department of Pediatrics, Long Island Jewish Medical Center, New Hyde Park, New York
  • 5Department of Critical Care Medicine and Pediatrics, Children’s Hospital of Pittsburgh, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
  • 6Division of Pediatric Emergency Medicine, Department of Pediatrics, Children’s Hospital of Pittsburgh of UPMC, Pittsburgh, Pennsylvania
  • 7Department of Anesthesiology and Critical Care Medicine, University of Pennsylvania Perelman School of Medicine, The Children’s Hospital of Philadelphia, Philadelphia
  • 8Division of Pediatric Emergency Medicine, Department of Pediatrics, University of Pennsylvania Perelman School of Medicine, The Children’s Hospital of Philadelphia, Philadelphia
  • 9Division of General Pediatrics and Adolescent Medicine, Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, Maryland
  • 10Department of Emergency Medicine, Alpert School of Medicine at Brown University, Providence, Rhode Island
 

Copyright 2016 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.

JAMA Pediatr. 2016;170(10):987-994. doi:10.1001/jamapediatrics.2016.1550
Abstract

Importance  The quality of pediatric resuscitative care delivered across the spectrum of emergency departments (EDs) in the United States is poorly described. In a recent study, more than 4000 EDs completed the Pediatric Readiness Survey (PRS); however, the correlation of PRS scores with the quality of simulated or real patient care has not been described.

Objective  To measure and compare the quality of resuscitative care delivered to simulated pediatric patients across a spectrum of EDs and to examine the correlation of PRS scores with quality measures.

Design, Setting, and Participants  This prospective multicenter cohort study evaluated 58 interprofessional teams in their native pediatric or general ED resuscitation bays caring for a series of 3 simulated critically ill patients (sepsis, seizure, and cardiac arrest).

Main Outcomes and Measures  A composite quality score (CQS) was measured as the sum of 4 domains: (1) adherence to sepsis guidelines, (2) adherence to cardiac arrest guidelines, (3) performance on seizure resuscitation, and (4) teamwork. Pediatric Readiness Survey scores and health care professional demographics were collected as independent data. Correlations of the CQS and the individual domain scores with PRS scores were explored.

Results  Overall, 58 teams from 30 hospitals participated (8 pediatric EDs [PEDs], 22 general EDs [GEDs]). The mean CQS was 71 (95% CI, 68-75); PEDs had a higher mean CQS (82; 95% CI, 79-85) vs GEDs (66; 95% CI, 63-69) and outperformed GEDs in all domains. However, when using generalized estimating equations to estimate CQS controlling for clustering of the data, PED status did not explain a higher CQS (β = 4.28; 95% CI, −4.58 to 13.13) while the log of pediatric patient volume did explain a higher CQS (β = 9.57; 95% CI, 2.64-16.49). The correlation of CQS to PRS was moderate (r = 0.51; P < .001). The correlation was weak for cardiac arrest (r = 0.24; P = .07), weak for sepsis (ρ = 0.45; P < .001) and seizure (ρ = 0.43; P = .001), and strong for teamwork (ρ = 0.71; P < .001).

Conclusions and Relevance  This multicenter study noted significant differences in the quality of simulated pediatric resuscitative care across a spectrum of EDs. The CQS was higher in PEDs compared with GEDs. However, when controlling for pediatric patient volume and other variables in a multivariable model, PED status did not explain a higher CQS while pediatric patient volume did. The PRS correlated only moderately with simulation-based measures of quality.

Introduction

In 2006, the Institute of Medicine described emergency care for children in the United States as “uneven.”1 Three years later, key stakeholders formed a national coalition to improve pediatric readiness and published a set of guidelines to address the gaps described by the Institute of Medicine.2-6 In 2013, this group administered the National Pediatric Readiness Project, a web-based survey measuring compliance with these guidelines.7,8 This assessment was completed by 4149 hospitals, representing 24 million of the 25.5 million annual US pediatric emergency department (ED) visits.9,10

There are limited measures describing the quality of pediatric resuscitative care in the ED.11 Quality measures have been published for selected high-acuity pediatric conditions.12 The unpredictability and low frequency of pediatric resuscitation in any individual ED, as well as the logistical and ethical challenges of data collection, have limited research on this topic. Simulation-based studies have noted that the quality of cardiopulmonary resuscitation is poor.13,14 A comprehensive review comparing practice patterns between pediatric EDs (PEDs) and general EDs (GEDs) yielded only 20 publications, and none reported data on resuscitation.15

The recent publication on the Pediatric Readiness Survey (PRS) provided vital information on ED pediatric readiness in the United States.10 However, no studies have examined the correlation of PRS scores with patient outcomes or quality of care. Examining that correlation directly would be ideal; however, owing to the low frequency of resuscitation events in any single ED and the paucity of prospective research in this area, we leveraged simulation to measure quality. Simulation provides realism and standardization of patients through preprogramming of trends in vital signs over time, physiologic responses to interventions, and scripting of parent actors, allowing diverse research questions to be answered that cannot otherwise be feasibly assessed, particularly in high-stakes, low-frequency events such as pediatric resuscitations.16,17 In situ simulation involves bringing the simulator into the clinical environment to measure the quality of care delivered by intact care teams using real-world equipment.17 The use of video-based data abstraction after simulations allows for robust review and measurement. There is a growing body of evidence supporting the validity of using simulation to measure the quality of care.18-22

Our primary aim was to measure and compare differences in the quality of simulated pediatric resuscitative care provided by interprofessional teams across a spectrum of EDs. A secondary aim was to assess the correlation of quality and PRS scores. We hypothesized that quality scores would be higher in PEDs compared with GEDs and that PRS would correlate with quality.


Key Points

  • Question Are there differences in the quality of pediatric resuscitative care across a spectrum of emergency departments (EDs)?

  • Findings This study evaluated 58 interprofessional teams in their native resuscitation bay caring for a series of 3 simulated critically ill patients (sepsis, seizure, and cardiac arrest). There was a mean composite quality score of 82% in 8 pediatric EDs compared with 66% in 22 general EDs; when controlling for pediatric patient volume, this difference lost statistical significance.

  • Meaning Differences in the quality of pediatric resuscitation measured by simulation exist across a spectrum of EDs.

Methods
Design

This prospective, multicenter, in situ, simulation-based cohort study measured the performance of interprofessional teams caring for a series of 3 simulated pediatric patients. Sessions were announced and involved a parent actor presenting with the simulator to the resuscitation bays in 8 PEDs and 22 GEDs. Institutional review board approval was obtained from Yale University and each collaborating site. Participants provided signed consent to be videotaped.

Study Setting and Population

Investigators from 8 academic medical centers within INSPIRE23,24 recruited 2 teams of health care professionals from their institutions’ PED and 2 additional teams from at least 1 GED in their respective geographic region. We purposefully sampled EDs of different sizes, locations, and staffing models. Pediatric EDs were defined as EDs in children’s hospitals, staffed by board-certified pediatric emergency medicine physicians and affiliated with an academic medical center. General EDs were defined as EDs staffed by board-certified emergency medicine physicians (not pediatric emergency medicine) and not located in a children’s hospital. Two interprofessional teams were recruited from each ED. Teams were composed of 1 to 2 physicians (pediatric emergency medicine or emergency medicine board certified), 3 to 5 nurses, and 2 to 3 nursing assistants or emergency medical technicians. The team size varied to mirror the typical team size of each ED. Students and residents were not recruited to avoid confounding by variations in training level. Participants were protected from clinical responsibilities during the simulations. Recruitment was performed by a designated liaison at each site via an email sent to all staff 1 month prior to the simulation and a sign-up document distributed on a weekly basis until the maximum number of participants had volunteered.

Study Protocol

Teams were enrolled over a 30-month period (April 18, 2013, through October 13, 2015). Sessions took place in the ED resuscitation room using each department’s actual equipment (eg, infusion pumps), supplies (eg, syringes), resources (eg, cognitive aids), and policies and/or guidelines (eg, sepsis protocol). To prevent simulated drugs from entering clinical use, a standardized drawer was created with labeled blue medications that matched standard concentrations and appearance (PocketNurse).25

Each team participated in a 2.5-hour simulation session that involved 4 scenarios in the following order: (1) infant foreign body, (2) infant sepsis, (3) infant seizure, and (4) child cardiac arrest. The foreign body scenario was a warm-up case to familiarize each team with the simulation environment and the specific functions of the simulator; these data were not included in the analyses. Each session began with a standardized orientation to introduce the research team, describe the format for the day, and communicate the rules and expectations related to performance. Participants were oriented to the functionality of the simulators (SimBaby, MegaCode Kid [Laerdal]), including demonstrations of how the simulator could be placed on a monitor and how to administer medications and fluids. The team was also introduced to the “parent,” played by a professional actor. The parent-actor was provided a script with statements to make at designated times and standardized responses to questions. Laboratory data were provided on request on preprinted laminated cards, including standard point-of-care testing (eg, venous blood gas, dextrose, electrolytes). The principal investigator provided this scripted introduction, verbally reported scripted prompts during the simulation on request from team members (eg, capillary refill time), and facilitated a scripted debriefing after each case.26 The principal investigator has extensive training and more than 10 years of experience in debriefing.

All simulations were video recorded from 2 standard angles (overhead view of the baby and a panoramic view of the room) with integration of the patient monitor output using the B-line Live Capture Ultraportable System (B-Line Medical). The research team from Yale University (M.A., principal investigator; M.G., nurse-researcher; a research associate; and an actor) traveled to each site, set up equipment in situ (simulators, cameras, technical equipment), conducted the simulations, and collected data. This team was joined at each GED site by the designated collaborating investigator(s) from each respective academic medical center. A single research nurse (M.G.) scored performance on a standardized data collection instrument during the case. After the simulation day, video reviews were conducted by the research nurse and principal investigator. During review, the raters were provided a concurrent stream of the 2 video angles, the vital signs, and the simulator data output. These reviews were used to score teamwork and other variables that could not be collected in real time (eg, compression rate). When discrepancies were noted in the scoring, both reviewers met to concurrently score the video and discuss the scoring until consensus was achieved. The raters were blinded to health care professional factors such as experience but not to the PED or GED status of the team.

Health care professional–level data were collected via a survey. At each site, a nurse and/or physician not participating in the simulations completed the PRS. This survey was developed for a multiphase quality improvement initiative to ensure that all EDs have the essential guidelines and resources to provide effective emergency care to pediatric patients.8,18,27,28 The research team had permission to use the PRS.29 Each site was resurveyed for this study in person on the same day as the simulations. The 6 domains of the PRS are coordination of care, physician and/or nurse staffing, quality improvement, patient safety, policies and/or procedures, and equipment and/or supplies.10 A subset of questions on the PRS described the presence of a pediatric care coordinator.

Outcome Measures
Composite Quality Score

The primary outcome was a composite quality score (CQS) calculated as the sum of 4 distinct domain scores: (1) adherence to sepsis guidelines, (2) adherence to pediatric advanced life support guidelines, (3) performance on seizure resuscitation, and (4) the mean teamwork score for each team across the 3 cases.

Case Performance

Performance measures were iteratively developed over 6 months. Content validity evidence was provided through adaptation of existing guidelines and a modified Delphi review process involving 8 pediatric emergency medicine physicians, 4 pediatric intensive care physicians, and 1 pediatric emergency nurse via 6 conference calls and 2 in-person meetings. The response process for the assessment instrument was improved through pilot application and iterative changes to the cases and checklists during 20 simulations with teams of health care professionals in training at each site (who were not eligible for the study). The sepsis measures were derived from international guidelines.28 The cardiac arrest measures were derived from the American Heart Association pediatric advanced life support (PALS) guidelines.30 The seizure performance measures were developed based on established best practices related to the management of hypoglycemic seizures. Each case performance score was calculated using equal weighting for all subcomponents and divided by the total number of possible elements to derive a score on a scale of 0 to 100. The total composite quality score was calculated as the average of the 4 domain scores. The component metrics and time-critical performance checklists for each of the cases are listed in eTable 1 in the Supplement.
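
To make the scoring arithmetic concrete, the following is a minimal sketch in Python. The checklist items and domain values are hypothetical stand-ins; the actual component metrics are listed in eTable 1 in the Supplement.

```python
# Illustrative sketch of the scoring arithmetic described above.
# Checklist items and domain values are hypothetical; the real
# elements appear in eTable 1 in the Supplement.

def case_score(checklist: dict) -> float:
    """Equal-weighted checklist score scaled to 0-100."""
    completed = sum(1 for done in checklist.values() if done)
    return 100.0 * completed / len(checklist)

# Hypothetical sepsis checklist for one team.
sepsis_checklist = {
    "iv_access_obtained": True,
    "fluid_bolus_given": True,
    "antibiotics_ordered": False,
    "blood_culture_drawn": True,
}

domain_scores = {
    "sepsis": case_score(sepsis_checklist),  # 75.0
    "cardiac_arrest": 64.0,                  # hypothetical domain scores
    "seizure": 71.0,
    "teamwork": 87.0,                        # mean teamwork score across cases
}

# Composite quality score as the equally weighted average of the 4 domains.
cqs = sum(domain_scores.values()) / len(domain_scores)
print(f"CQS = {cqs:.2f}")  # CQS = 74.25
```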

Teamwork

Teamwork was measured using the Simulation Team Assessment Tool (STAT) teamwork domain for each case and represented as the mean score across all 3 cases. The STAT is a validated pediatric simulation-based assessment tool.31 Both raters completed 4 hours of training with the team that developed STAT prior to using it in this study.

Data Analysis

All data were manually entered into Microsoft Excel version 14.0 (Microsoft) and transferred into SPSS version 22.0 (IBM Corp), with which all statistical analyses were performed. We examined differences in survey responses and simulation data by pediatric patient volume using bivariate analyses. Data were examined for normality and homogeneity in each analysis.

All data were examined for missing values; only the teamwork measure had missing data. On examination, 11 of the 58 teams lacked teamwork scores, owing either to lack of consent for videotaping or to technical issues (audio feeds too poor to evaluate communication). Outcome analyses using imputed scores did not differ from analyses with the missing scores deleted. After this sensitivity analysis, we treated the data points as missing at random and used imputed scores to replace the missing data.
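
As an illustration of this sensitivity check, here is a minimal sketch in Python (pandas) comparing the two missing-data strategies; the data and column names are assumptions for illustration, not the study’s dataset.

```python
# Sketch of the missing-teamwork sensitivity check: run the comparison
# once with mean-imputed scores and once with incomplete teams dropped.
# All data and column names here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "ed_type": ["PED"] * 16 + ["GED"] * 42,   # 58 teams in total
    "teamwork": rng.normal(78, 9, 58).round(1),
})
df.loc[rng.choice(58, size=11, replace=False), "teamwork"] = np.nan

# Strategy A: replace missing teamwork scores with the observed mean.
imputed = df.assign(teamwork=df["teamwork"].fillna(df["teamwork"].mean()))

# Strategy B: listwise deletion of the incomplete teams.
deleted = df.dropna(subset=["teamwork"])

# Similar group contrasts under both strategies support treating the
# missing scores as missing at random.
print(imputed.groupby("ed_type")["teamwork"].mean())
print(deleted.groupby("ed_type")["teamwork"].mean())
```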

We conducted Pearson χ2 or Fisher exact tests for categorical data as appropriate, independent t tests for normally distributed continuous data, and Wilcoxon-Mann-Whitney U tests for nonparametric data. Based on our primary hypothesis that PEDs would score a higher CQS, we report unadjusted CQS stratified by PED vs GED status.
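
For concreteness, here is a minimal sketch of these bivariate tests in Python (SciPy) on hypothetical data; the study itself performed all analyses in SPSS.

```python
# Illustrative bivariate tests; all numbers are hypothetical.
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind, mannwhitneyu

# Categorical: hypothetical 2x2 table, eg, ED type by presence of a protocol.
table = [[6, 2],
         [8, 14]]
chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # used when expected cells are small

# Continuous: hypothetical CQS values by ED type.
ped_cqs = [82, 79, 85, 80, 84, 81, 83, 82]
ged_cqs = [66, 70, 62, 64, 68, 65, 67, 69]
t_stat, p_t = ttest_ind(ped_cqs, ged_cqs)     # for normally distributed data
u_stat, p_u = mannwhitneyu(ped_cqs, ged_cqs)  # nonparametric alternative
print(p_chi2, p_fisher, p_t, p_u)
```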

We tested the correlations of PRS scores with teamwork scores and with scores on each of the cases using the Pearson correlation coefficient (r) and the Spearman correlation coefficient (ρ), respectively. We used the following cut points for correlation: 0.8 or greater, strong; 0.5 to 0.79, moderate; 0.20 to 0.49, weak; and 0 to 0.19, negligible.32 Lastly, we used generalized estimating equations (GEE) with a linear identity link to model CQS as the dependent variable, with a robust variance estimator to account for within-hospital correlation. The GEE model examined which variables explained variability in the CQS. We included the following potential covariates: PED or GED status, pediatric patient volume (log10 transformed for interpretability), PRS score, team experience, percentage of team members holding MDs, percentage of team members with simulation experience, and percentage of team members with PALS training (as a continuous variable).
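
To make the correlation and GEE steps concrete, here is a minimal sketch in Python (SciPy and statsmodels) rather than SPSS; the simulated dataset and variable names are illustrative assumptions.

```python
# Sketch of the correlation and GEE analyses on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n = 58
df = pd.DataFrame({
    "hospital": rng.integers(0, 30, n),       # clustering unit (30 EDs)
    "cqs": rng.normal(71, 11, n),
    "prs": rng.normal(70, 15, n),
    "ped_ed": rng.integers(0, 2, n),          # 1 = PED, 0 = GED
    "log_ped_volume": rng.normal(4, 0.5, n),  # log10 pediatric volume
    "pct_pals": rng.uniform(0, 100, n),       # % of team with PALS training
})

# Pearson (r) and Spearman (rho) correlation coefficients.
r, p_r = pearsonr(df["cqs"], df["prs"])
rho, p_rho = spearmanr(df["cqs"], df["prs"])

def strength(c: float) -> str:
    """Cut points for correlation strength (reference 32)."""
    c = abs(c)
    return ("strong" if c >= 0.8 else
            "moderate" if c >= 0.5 else
            "weak" if c >= 0.2 else "negligible")

print(f"CQS vs PRS: r = {r:.2f} ({strength(r)})")

# GEE with an identity link; the robust (sandwich) variance estimator,
# the default in statsmodels, accounts for within-hospital clustering.
model = smf.gee(
    "cqs ~ ped_ed + log_ped_volume + prs + pct_pals",
    groups="hospital",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```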

Results
Participating Hospital and Team Characteristics

Fifty-eight teams from 30 EDs (8 PEDs, 22 GEDs) participated; ED characteristics are reported in Table 1. Pediatric EDs had higher pediatric patient volumes, total PRS scores, ratios of physicians per team, and percentages of team members who participated in frequent (at least monthly) pediatric simulations. Team experience did not significantly differ between PEDs and GEDs, nor did the median percentage of team members with PALS training.

Outcomes

The unadjusted data in Table 2 report the CQS and the 4 domain scores (with the component elements of each) for PEDs and GEDs. The mean (SD) CQS was 71 (11) across all sites. Pediatric EDs had a significantly higher overall CQS (mean [SD], 82 [7]) compared with GEDs (mean [SD], 66 [9]) (P < .001), as well as higher individual domain scores: sepsis (100 [interquartile range (IQR), 100-100] vs 67 [IQR, 67-83]; P < .001), cardiac arrest (64 [IQR, 57-75] vs 50 [IQR, 36-64]; P = .006), seizure (71 [IQR, 71-93] vs 71 [IQR, 57-71]; P = .04), and teamwork (mean [SD], 87 [7] vs 72 [8]; P < .001). We also explored removing teamwork as a component of the CQS; the difference in CQS without teamwork was similar to the reported difference between GEDs and PEDs (mean [SD], 65 [10] vs 82 [8], respectively; P < .001). The Figure shows a radar graph representing the score for each CQS domain for PEDs and GEDs.

The results of the GEE model presented in Table 3 show that PED status did not predict CQS (β = 4.28; 95% CI, −4.58 to 13.13), whereas the log of pediatric patient volume significantly explained a higher CQS (β = 9.57; 95% CI, 2.64-16.49), as did the PRS (β = 0.14; 95% CI, 0.01-0.27). The percentage of team members with PALS training explained a slightly lower CQS (β = −0.08; 95% CI, −0.15 to −0.02). A moderate correlation was noted between CQS and pediatric patient volume (r = 0.68; P < .001); a graphical representation of this relationship is depicted in eFigure 1 in the Supplement.

Relationships Between Quality Domain Scores and PRS Components

A moderate correlation was noted between CQS and PRS (r = 0.51; P < .001); a graphical representation of this relationship is depicted in eFigure 2 in the Supplement. Table 4 reports the correlations of the quality domain scores with the PRS: strong for teamwork (r = 0.71; P < .001), weak for sepsis adherence (ρ = 0.45; P < .001) and seizure performance (ρ = 0.43; P = .001), and weak for cardiac arrest adherence (ρ = 0.24; P = .07). The correlation between CQS and PRS was attenuated to weak when adjusting for teamwork scores (r = 0.45; P < .001).
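
For readers curious how such an adjustment can be computed, the following is a minimal sketch of a partial correlation via residuals in Python; the data are simulated and the residual-regression method is an assumption, as the article does not state the exact adjustment procedure.

```python
# Partial correlation of x and y adjusting for a covariate, computed by
# correlating the residuals after regressing each on the covariate.
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covar):
    def residuals(v, c):
        # Least-squares fit of v on [1, c]; return v minus fitted values.
        A = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return pearsonr(residuals(x, covar), residuals(y, covar))

# Simulated data in which teamwork drives part of the CQS-PRS association.
rng = np.random.default_rng(1)
teamwork = rng.normal(80.0, 8.0, 58)
cqs = 0.6 * teamwork + rng.normal(0.0, 8.0, 58)
prs = 0.5 * teamwork + rng.normal(0.0, 10.0, 58)

r_partial, p_value = partial_corr(cqs, prs, teamwork)
print(f"partial r = {r_partial:.2f} (P = {p_value:.3f})")
```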

Discussion

This study revealed higher total CQSs and higher subcomponent scores across all domains in PEDs compared with GEDs. However, when controlling for pediatric patient volume, PED status did not explain a higher CQS, indicating that pediatric volume is more indicative of quality than the PED or GED distinction. The greatest differences in care between PEDs and GEDs were noted in the sepsis and cardiac arrest cases and the teamwork scores. A detailed analysis of performance on the sepsis case has been published by our group.33 In the care of the patient with hypoglycemia who had a seizure, PEDs were more likely to select the appropriate concentration and administer the correct dose of glucose.

There are limited granular data describing the quality of pediatric resuscitative care in real patients, and existing data are retrospective (eg, quality of cardiopulmonary resuscitation, time to fluid resuscitation in septic patients).34,35 Novel methods have been described to better evaluate the quality of resuscitative care, including the structured panel process36 and the implicit review process.37 Surveys are a feasible method to measure ED pediatric readiness. The PRS was not designed to measure the quality of care; however, a correlation of the PRS with the quality of resuscitative care could obviate the need for additional measurements to evaluate this construct. Unfortunately, our results demonstrated only weak to moderate correlations between the PRS score and the quality of care measured by simulation. The performance of each of the participating EDs in these simulations could be used to guide local improvement interventions. Future work should be conducted to describe the correlation between these simulations and patient or population-level outcomes.

Current guidelines advise hospitals to appoint a nurse and/or physician pediatric emergency care coordinator (PECC) to provide pediatric leadership.1 The recent study by Gausche-Hill and colleagues10 described a strong correlation between PRS scores and the presence of PECCs. In this study, we explored the effect of the PECC on simulation-based quality scores (using adjusted PRS scores from our study population) and found that the presence of a nurse or physician PECC alone only mildly increased quality or PRS scores. However, the presence of both a nurse and a physician PECC resulted in much higher quality and PRS scores (eTable 2 in the Supplement). When looking at GEDs alone, however, this relationship was severely attenuated and the differences between the scores were nonsignificant. This was unexpected and suggests that there is more complexity to the role of the PECC in quality of care.

Limitations

Our recruitment methods likely led to selection bias; individuals who agreed to participate may have been more or less skilled than other staff, although this bias would be present in all EDs. Pediatric EDs had more experience with pediatric simulation, which may have improved their performance on a simulation-based assessment and biased our results; however, simulation experience was not significantly associated with CQS in the multivariable GEE model. The checklists we used have limited validity evidence in the domains of internal structure, relation to other variables, and consequences. Lastly, reviewers in our study were not blinded to PED or GED status, and this may have affected their ratings. The initial study protocol planned to use blinded reviewers; however, after conducting the first series of simulations, we recognized that collecting the quantitative data for cases required both in-person and video-based data collection. To ensure consistency, 2 investigators were present for all simulations and scored all cases independently using in-person and video-based review. We also noted that true blinding was unachievable owing to the presence of hospital names on signage and participants’ clothing.

Conclusions

This multicenter study noted differences in the quality of simulated pediatric resuscitative care across a spectrum of EDs in the United States. The overall quality of care was higher in PEDs compared with GEDs. However, when controlling for pediatric patient volume, the PED distinction did not significantly explain a higher CQS. The PRS score did not correlate strongly with simulation-based measures of quality. Additional work is needed to explore whether differences in quality are associated with variability in patient outcomes.

Article Information

Corresponding Author: Marc Auerbach, MD, MSci, Division of Pediatric Emergency Medicine, Department of Pediatrics, Yale University School of Medicine, 100 York St, Ste 1F, New Haven, CT 06511 (marc.auerbach@yale.edu).

Accepted for Publication: May 5, 2016.

Published Online: August 29, 2016. doi:10.1001/jamapediatrics.2016.1550

Author Contributions: Dr Auerbach had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Auerbach, Gawel, Kessler, Walsh, Gangadharan, Hamilton, Schultz, Nishisaki, Tay, Lavoie, Katznelson, Nadkarni, Brown.

Acquisition, analysis, or interpretation of data: Auerbach, Whitfill, Gawel, Kessler, Walsh, Gangadharan, Hamilton, Nishisaki, Lavoie, Katznelson, Dudas, Baird, Nadkarni, Brown.

Drafting of the manuscript: Auerbach, Whitfill, Gawel, Walsh, Gangadharan, Schultz, Lavoie, Nadkarni, Brown.

Critical revision of the manuscript for important intellectual content: Auerbach, Whitfill, Gawel, Kessler, Walsh, Gangadharan, Hamilton, Nishisaki, Tay, Lavoie, Katznelson, Dudas, Baird, Nadkarni, Brown.

Statistical analysis: Auerbach, Whitfill, Walsh, Gangadharan, Nishisaki, Lavoie, Baird.

Obtained funding: Auerbach, Gangadharan, Nadkarni.

Administrative, technical, or material support: Auerbach, Whitfill, Gawel, Walsh, Gangadharan, Hamilton, Schultz, Nishisaki, Tay, Lavoie, Brown.

Study supervision: Auerbach, Kessler, Nishisaki, Dudas, Nadkarni, Brown.

Conflict of Interest Disclosures: None reported.

Funding/Support: This study was supported by a grant from the RBaby Foundation (Rbabyfoundation.org) to Yale University, with subcontracts to the collaborating academic medical centers. A grant from the Agency for Healthcare Research and Quality (R18 HS20286-03) was used to develop these cases.

Role of the Funder/Sponsor: The funders/sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: We acknowledge the contributions of members of the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE), as well as the Society for Simulation in Healthcare and the International Pediatric Simulation Society for providing the INSPIRE/ImPACTS investigators with space at their annual meetings. We wish to acknowledge Charmin Gohel, MBBS, for editorial assistance.

References
1. Institute of Medicine, Committee on the Future of Emergency Care in the United States Health System. Emergency Care for Children: Growing Pains. Washington, DC: National Academy Press; 2006.
2. McGirr J, Williams JM, Prescott JE. Physicians in rural West Virginia emergency departments: residency training and board certification status. Acad Emerg Med. 1998;5(4):333-336.
3. Remick K, Snow S, Gausche-Hill M. Emergency department readiness for pediatric illness and injury. Pediatr Emerg Med Pract. 2013;10(12):1-13.
4. Horeczko T, Marcin JP, Kahn JM, Sapien RE; Consortium of Regionalization Efforts in Emergency Medical Services for Children (CORE-EMSC). Urban and rural patterns in emergent pediatric transfer: a call for regionalization. J Rural Health. 2014;30(3):252-258.
5. Bourgeois FT, Shannon MW. Emergency care for children in pediatric and general emergency departments. Pediatr Emerg Care. 2007;23(2):94-102.
6. Schenk E, Edgerton EA. A tale of two populations: addressing pediatric needs in the continuum of emergency care. Ann Emerg Med. 2015;65(6):673-678.
7. Go AS, Mozaffarian D, Roger VL, et al; American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2013 update: a report from the American Heart Association. Circulation. 2013;127(1):e6-e245.
8. American Academy of Pediatrics Committee on Pediatric Emergency Medicine; American College of Emergency Physicians Pediatric Committee; Emergency Nurses Association Pediatric Committee. Joint policy statement—guidelines for care of children in the emergency department. Ann Emerg Med. 2009;54(4):543-552.
9. Centers for Disease Control and Prevention. Visits to physician offices, hospital outpatient departments, and hospital emergency departments, by age, sex, and race: United States, selected years 1995–2011. http://www.cdc.gov/nchs/data/hus/hus14.pdf. Accessed December 4, 2015.
10. Gausche-Hill M, Ely M, Schmuhl P, et al. A national assessment of pediatric readiness of emergency departments. JAMA Pediatr. 2015;169(6):527-534.
11. Alessandrini E, Varadarajan K, Alpern ER, et al; Pediatric Emergency Care Applied Research Network. Emergency department quality: an analysis of existing pediatric measures. Acad Emerg Med. 2011;18(5):519-526.
12. Stang AS, Straus SE, Crotts J, Johnson DW, Guttmann A. Quality indicators for high acuity pediatric conditions. Pediatrics. 2013;132(4):752-762.
13. Cheng A, Hunt EA, Grant D, et al; International Network for Simulation-based Pediatric Innovation, Research, and Education CPR Investigators. Variability in quality of chest compressions provided during simulated cardiac arrest across nine pediatric institutions. Resuscitation. 2015;97:13-19.
14. Cheng A, Brown LL, Duff JP, et al; International Network for Simulation-Based Pediatric Innovation, Research, & Education (INSPIRE) CPR Investigators. Improving cardiopulmonary resuscitation with a CPR feedback device and refresher simulations (CPR CARES Study): a randomized clinical trial. JAMA Pediatr. 2015;169(2):137-144.
15. Chime NO, Katznelson J, Gangadharan S, et al. Comparing practice patterns between pediatric and general emergency medicine physicians: a scoping review. Pediatr Emerg Care. 2015. http://journals.lww.com/pec-online/Fulltext/publishahead/Comparing_Practice_Patterns_Between_Pediatric_and.99103.aspx.
16. Cheng A, Auerbach M, Hunt EA, et al. Designing and conducting simulation-based research. Pediatrics. 2014;133(6):1091-1101.
17. Patterson MD, Geis GL, Falcone RA, LeMaster T, Wears RL. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. BMJ Qual Saf. 2013;22(6):468-477.
18. Brydges R, Hatala R, Zendejas B, Erwin PJ, Cook DA. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med. 2015;90(2):246-256.
19. Nishisaki A, Keren R, Nadkarni V. Does simulation improve patient safety? Self-efficacy, competence, operational performance, and patient safety. Anesthesiol Clin. 2007;25(2):225-236.
20. Gordon JA, Alexander EK, Lockley SW, et al; Harvard Work Hours, Health, and Safety Group. Does simulator-based clinical performance correlate with actual hospital behavior? The effect of extended work hours on patient care provided by medical interns. Acad Med. 2010;85(10):1583-1588.
21. Boulet JR, Murray D, Kras J, Woodhouse J, McAllister J, Ziv A. Reliability and validity of a simulation-based acute care skills assessment for medical students and residents. Anesthesiology. 2003;99(6):1270-1280.
22. Bond WF, Spillane L. The use of simulation for emergency medicine resident assessment. Acad Emerg Med. 2002;9(11):1295-1299.
23. Schwartz A, Young R, Hicks PJ; APPD LEARN. Medical education practice-based research networks: facilitating collaborative research. Med Teach. 2016;38(1):64-74.
24. International Network for Simulation-based Pediatric Innovation, Research, & Education (INSPIRE) Network. http://www.inspiresim.com. Accessed July 26, 2016.
25. US Food and Drug Administration. FDA’s investigation into patients being injected with simulated IV fluids continues. http://www.fda.gov/Drugs/DrugSafety/ucm428431.htm. Accessed December 4, 2015.
26. Cheng A, Hunt EA, Donoghue A, et al; EXPRESS Investigators. Examining pediatric resuscitation education using simulation and scripted debriefing: a multicenter randomized trial. JAMA Pediatr. 2013;167(6):528-536.
27. Brierley J, Carcillo JA, Choong K, et al. Clinical practice parameters for hemodynamic support of pediatric and neonatal septic shock: 2007 update from the American College of Critical Care Medicine. Crit Care Med. 2009;37(2):666-688.
28. Dellinger RP, Levy MM, Rhodes A, et al; Surviving Sepsis Campaign Guidelines Committee including the Pediatric Subgroup. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock, 2012. Intensive Care Med. 2013;39(2):165-228.
29. Emergency Medical Services for Children Innovation and Improvement Center. Pediatric Readiness Assessment and Scoring. http://www.pediatricreadiness.org/files/PDF/Assessment_and_Scoring.pdf. Accessed November 29, 2015.
30. Kleinman ME, Chameides L, Schexnayder SM, et al. Part 14: pediatric advanced life support: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2010;122(18)(suppl 3):S876-S908.
31. Reid J, Stone K, Brown J, et al. The Simulation Team Assessment Tool (STAT): development, reliability and validation. Resuscitation. 2012;83(7):879-886.
32. Zou KH, Tuncali K, Silverman SG. Correlation and simple linear regression. Radiology. 2003;227(3):617-622.
33. Kessler DO, Walsh B, Whitfill T, et al; INSPIRE ImPACTS investigators. Disparities in adherence to pediatric sepsis guidelines across a spectrum of emergency departments: a multicenter, cross-sectional observational in situ simulation study. J Emerg Med. 2016;50(3):403-415.e1-3.
34. Lin YR, Li CJ, Wu TK, et al. Post-resuscitative clinical features in the first hour after achieving sustained ROSC predict the duration of survival in children with non-traumatic out-of-hospital cardiac arrest. Resuscitation. 2010;81(4):410-417.
35. Lin YR, Wu HP, Chen WL, et al. Predictors of survival and neurologic outcomes in children with traumatic out-of-hospital cardiac arrest during the early postresuscitative period. J Trauma Acute Care Surg. 2013;75(3):439-447.
36. Guttmann A, Razzaq A, Lindsay P, Zagorski B, Anderson GM. Development of measures of the quality of emergency department care for children using a structured panel process. Pediatrics. 2006;118(1):114-123.
37. Dharmar M, Marcin JP, Kuppermann N, et al. A new implicit review instrument for measuring quality of care delivered to pediatric patients in the emergency department. BMC Emerg Med. 2007;7:13.