eTable 1. Intervention Rationale and Implementation
eTable 2. Intervention Effectiveness Evaluation
eTable 3. Direct Observation and Database Samples
eTable 4. Summary of Main Measures
eTable 5. Regression Models for Main Outcome Measures
eTable 6. Trauma Medication Travel Pack Guideline
Catchpole K, Ley E, Wiegmann D, Blaha J, Shouhed D, Gangi A, Blocker R, Karl R, Karl C, Taggart B, Starnes B, Gewertz B. A Human Factors Subsystems Approach to Trauma Care. JAMA Surg. 2014;149(9):962-968. doi:10.1001/jamasurg.2014.1208
Copyright 2014 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
Importance
A physician-centered approach to systems design is fundamental to ameliorating the causes of many errors, inefficiencies, and reliability problems.
Objective
To use human factors engineering to redesign the trauma process based on previously identified impediments to care related to coordination problems, communication failures, and equipment issues.
Design, Setting, and Participants
This study used an interrupted time series design to collect historically controlled data via prospective direct observation by trained observers. We studied patients from a level I trauma center from August 1 through October 31, 2011, and August 1 through October 31, 2012.
Interventions
A range of potential solutions based on previous observations, trauma team engagement, and iterative cycles identified the most promising subsystem interventions (headsets, equipment storage, medication packs, whiteboard, prebriefing, and teamwork training). Five of the 6 subsystem interventions were successfully deployed; communication headsets were found to be unsuitable in simulation.
Main Outcomes and Measures
The primary outcome measure was flow disruptions, with treatment time and length of stay as secondary outcome measures.
Results
A total of 86 patients were observed before the intervention and 120 after the intervention. Flow disruptions increased if the patient had undergone computed tomography (CT) (F1,200 = 20.0, P < .001) and had been to the operating room (F1,200 = 63.1, P < .001), with an interaction among the intervention, trauma level, and CT (F1,200 = 6.50, P = .01). For total treatment time, there was an effect of the intervention (F1,200 = 21.7, P < .001), whether the patient had undergone CT (F1,200 = 43.0, P < .001), and whether the patient had been to the operating room (F1,200 = 85.8, P < .001), with an interaction among the intervention, trauma level, and CT (F1,200 = 15.1, P < .001), reflecting a 20- to 30-minute reduction in time in the emergency department. Length of stay was reduced significantly for patients with major mortality risk (P = .01), from a median of 8 to 5 days.
Conclusions and Relevance
Deployment of complex subsystem interventions based on detailed human factors engineering and a systems analysis of the provision of trauma care resulted in reduced flow disruptions, treatment time, and length of stay.
Application of human factors engineering principles in trauma care may reduce flow disruptions (FDs), treatment time, and length of stay.

In health care, unintentional harm is frequent1,2 and often caused by faulty systems that allow errors to perpetuate, permitting injury to occur.3 Rather than focusing solely on who made an error, a systems analysis of how, when, where, and why errors occur provides a window through which it is possible to understand the weaknesses of the modern health care provision process.4 As with many other specialties, trauma care benefits from studies of safety, quality, efficiency, and error.
Human factors engineering is based on the principle that system performance and human well-being can be improved through an integrated approach to individual skills, teamwork, equipment, task, environment, and organizational design.5 Considerable evidence supports improving work systems through interventions such as checklists,6-8 briefings,9,10 standardized care pathways,11 formal protocols,12 team resource management training,13 and technological development14 to improve teamwork, shared knowledge, workflow, and outcomes. The greatest successes are usually achieved by involving physicians, nurses, and other practitioners in the process of developing improvements15,16 and by designing systems around human needs.17 This person-centered approach to systems design is fundamental to ameliorating the causes of many errors, inefficiencies, and reliability problems.18
Previous studies19-26 have sought to use human factors engineering to redesign the trauma process using a multidisciplinary team of experts in process improvement, human factors research, and trauma care. To identify key areas for improvement of our trauma system, Blocker et al19 and Catchpole et al20 previously conducted process mapping, interviews, safety culture questionnaires, and direct observation of FDs and process timings. Flow disruptions are defined as "deviations from the natural progression of an operation thereby potentially compromising the safety of the operation"21(p660) and have been empirically linked with surgical errors. The study of FDs has helped identify systems problems in a variety of other long-term care clinical settings.22-25 Within our trauma system, the most common FDs were coordination problems, communication failures, and equipment issues, with significantly higher numbers and rates of disruption in the computed tomography (CT) imaging room and the operating room (OR). These FDs were observed most frequently in more seriously injured patients.26
This total systems analysis, combined with theoretical and practical expertise, generated a range of task-, team-, environment-, and equipment-related solutions. We hypothesized that each could be individually effective and together would reduce FDs, reduce treatment time within the first hour of patient care, and reduce length of stay.
All data collection studies were individually reviewed and approved by the institutional review board of the Cedars-Sinai Medical Center and the US Army Medical Research and Materiel Command Human Research Protection Office. Given that it was not possible for patients to consent to be part of the study, this requirement was waived by the institutional review board with the stipulation that no patient details were to be recorded. This prevented the direct comparison of observed treatment and outcomes but still allowed the use of deidentified hospital-collected outcome data.
The study was conducted at a 968-bed, nonprofit, academic, tertiary care medical center with more than 1000 annual trauma activations within the level I trauma center. The emergency department (ED) was staffed and managed by ED physicians, nurses, and technicians. The trauma service consisted of 6 attending physicians, 2 trauma and critical care fellows, and 4 surgical residents. The trauma team was activated by the mobile intensive care nurse after a radio call was received from emergency medical services that provided the details of an inbound patient. The mobile intensive care nurse activated a pager system, which displayed the trauma level (high or low acuity), after which the trauma team assembled in the ED. The hospital already had a dedicated performance improvement and organizational change team, but they were not directly involved in this study.
This study had an interrupted time series design that used historically controlled data collected via prospective observation of FDs. The primary outcome measure was FDs, with treatment time and length of stay as secondary outcome measures. Seven human factors researchers and medical students with training in human factors collected prospective observational data from trauma patients during the 10-week preintervention and postintervention phases from August 1 through October 31, 2011, and August 1 through October 31, 2012, respectively. They observed the trauma care process from the time the patient arrived in the ED until the patient was admitted to the main hospital, admitted to the intensive care unit, held in the ED for further consultation, or discharged home.20 Observations were conducted in multiple trauma bays, CT imaging rooms, and ORs within the hospital while following the patient. In general use for trauma patients were 4 bays in the ED, 2 CT imaging rooms, and 2 ORs, although others were available, and no record was taken of which were used. Events that disrupted the flow of the trauma care process were collected using a tablet personal computer data collection tool,27 which provided the total number of FDs. Entry and exit times for the ED, CT imaging room, and OR were also recorded, which allowed treatment times and FD rates to be calculated. Multiple trauma teams were observed throughout the observation period. Team members came from a pool of 6 trauma surgeons and 8 or more residents. Because teams were assembled ad hoc and were rarely the same, at least 12 different core trauma teams were observed, with many more permutations when the supporting ED and technician members are taken into consideration.
To assess interrater reliability, 11 trauma patients had 2 observers, whose responses were compared using a Cronbach α test. Data were obtained from the University Healthcare Consortium database for all trauma patients during the preintervention and postintervention phases.
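The interrater check described above can be sketched in code. This is an illustrative computation of Cronbach α for paired observers only, not the authors' analysis code, and the FD counts below are hypothetical values invented for the example.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach alpha for an (n_subjects, k_raters) array of scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the per-subject totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical FD counts from 2 observers on 11 dual-observed patients
obs = np.array([[12, 11], [8, 9], [15, 14], [6, 7], [20, 18],
                [10, 10], [5, 6], [13, 12], [9, 8], [17, 16], [7, 7]])
alpha = cronbach_alpha(obs)
```

With closely agreeing raters, as in the study's reported α of 0.846, the statistic approaches 1; values above roughly 0.8 are conventionally read as good consistency.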
Analysis of FDs suggested that communication and coordination, leadership and teamwork, patient factors, and equipment issues could benefit from targeted interventions. Data gathered from interviews found that coordination and protocol deviations were common causes of frustration; interviewees expressed some confusion over leadership. The FDs, particularly in the form of superfluous noise, reduced the amount of information transferred among team members. Role confusion was reported, especially with task sharing and leadership between the ED and trauma staff.
Having collected and analyzed the FD data, a multidisciplinary team consisting of 8 physicians (including E.L., B.G., A.G., and D.S.), 6 human factors scientists (including K.C., D.W., and R.B.), 3 nurses, and 2 health care improvement experts (including J.B.) was brought together for one and a half days to define problems and identify solutions. The main problem areas were identified, and a range of potential solutions to each was generated. A short list was then drawn up based on practical considerations and the projected time needed for implementation. This short list was framed within the components of the Systems Engineering Initiative for Patient Safety (SEIPS) human factors model,5 which also assisted in down-selection, to ensure coverage of task, team, environment, and technology. After the meeting, members of the ED and trauma teams were invited to discuss the short list and become involved in the studies. As implementation moved forward, we used small, iterative plan-do-study-act cycles to develop each intervention to a level where it was practical and deliverable. We then developed effectiveness and uptake measures, followed by full deployment of the interventions from May 1 through September 30, 2012. Figure 1 illustrates the general process by which the interventions were developed, with the rationale and implementation strategy in eTable 1 in the Supplement.
Observational measures of the uptake of each subintervention were used to gauge the effectiveness of the whiteboard, prebriefing, and teamwork behaviors (eTable 1 and eTable 2 in the Supplement).28-33 Sixty-nine patients in the postintervention phase were studied with an additional observer who used an observation template that collected a range of measures to ascertain the use of these interventions. For the other interventions, appropriate evaluation methods were chosen to demonstrate effectiveness or uptake. All intervention evaluation metrics are summarized in eTable 2 in the Supplement.
All data were positively skewed, so means (SDs) and medians (ranges) were calculated. For statistical analysis, a log transformation was applied, which generated a distribution more appropriate for parametric analysis. The main observational data (treatment times and FDs) were studied in stepwise multivariable linear regression models that took into account the intervention period, trauma level (high or low), whether the patient had been to the CT imaging room and the OR, and the interaction among the intervention, trauma level, and CT imaging room. For patient outcome data (length of stay and intensive care unit stay), separate before-and-after comparisons were made with Kruskal-Wallis tests for each risk-of-mortality estimate (minor, moderate, major, or extreme, as defined in the University Healthcare Consortium database).
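The modeling strategy described above can be outlined in code. This is an illustrative sketch using simulated data, not the authors' analysis: the binary predictors, the log10 transform, the 3-way intervention x trauma level x CT interaction term, and the stratified Kruskal-Wallis comparison mirror the description in the text, but every value and effect size below is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 206  # 86 preintervention + 120 postintervention patients

# Hypothetical binary predictors mirroring the model terms in the text
intervention = rng.integers(0, 2, n)   # 0 = pre, 1 = post
trauma_level = rng.integers(0, 2, n)   # 0 = low, 1 = high acuity
went_ct = rng.integers(0, 2, n)        # visited the CT imaging room
went_or = rng.integers(0, 2, n)        # visited the OR

# Simulated positively skewed FD counts, log10-transformed before modeling
fd = rng.lognormal(mean=2.0, sigma=0.5, size=n) + 5 * went_ct + 8 * went_or
y = np.log10(fd)

# Design matrix: intercept, main effects, and the 3-way interaction
X = np.column_stack([
    np.ones(n), intervention, trauma_level, went_ct, went_or,
    intervention * trauma_level * went_ct,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least-squares fit

# Separate before/after length-of-stay comparison within one risk stratum
los_pre = rng.exponential(8.0, 74)   # hypothetical major-risk patients, pre
los_post = rng.exponential(5.0, 69)  # hypothetical major-risk patients, post
h_stat, p_value = stats.kruskal(los_pre, los_post)
```

A full replication would add the stepwise term selection and F tests reported in the paper; the sketch only shows how the transformed outcome, the interaction term, and the nonparametric stratified comparison fit together.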
Five of the 6 subsystem interventions were deployed. All deployed interventions were measured as being effectively used to some degree, although reliability differed; all intervention evaluation metrics are summarized in eTable 2 in the Supplement. The equipment storage standardization provided time and movement benefits. Although the transport medication pack was rarely used, the accompanying guidance was regarded as an additional benefit. The whiteboard was used and completed in a timely manner in 70% of cases and usually had all key information documented. There was not always sufficient time to conduct a pretrauma briefing, and the variable amount of time available to the team was a clear limitation, but anecdotal subjective views were positive. The teamwork training was well received and resulted in significant improvements in observed teamwork and explicit teamwork behaviors.
In the preintervention phase, 86 patients were observed; in the postintervention phase, 120 patients were observed (including the 69 patients used for the effectiveness evaluation). The samples, results, and statistical modeling are provided in eTables 3, 4, and 5 in the Supplement. The 11 dual-observed patients had a Cronbach α of 0.846, indicating good internal consistency.
For FDs in high-acuity patients, there was a reduction in mean, median, and range; FDs in lower-acuity patients showed a reduction in range. Taking the log10 of the FD total to address skewness, FDs (r2 = 0.31) increased if the patient had been to the CT imaging room (F1,200 = 20.0, P < .001) and the OR (F1,200 = 63.1, P < .001), with an interaction among the intervention, trauma level, and CT (F1,200 = 6.50, P = .01). This finding reflects the particular benefit for high-acuity patients undergoing CT (Figure 2).
For total treatment time, there was a significant effect of the intervention (F1,200 = 21.7, P < .001) and of whether the patient went to the CT imaging room (F1,200 = 43.0, P < .001) or the OR (F1,200 = 85.8, P < .001), with a significant interaction among the trauma level, CT, and the intervention (F1,200 = 15.1, P < .001). Figure 3 shows a mean 20- to 30-minute reduction in time spent in the ED.
Length of stay and intensive care unit stay data were collected for 510 surgical and trauma patients before the intervention and 508 after the intervention. Mean length of stay by mortality risk is shown in Figure 4. The Kruskal-Wallis tests revealed a significant difference in length of stay for patients at major mortality risk (74 before and 69 after, z = −2.49, P = .01) but no other effects. Median length of stay decreased from 8 days in the preintervention phase to 5 days in the postintervention phase.
Interventions based on detailed human factors engineering and systems analysis of trauma care provision led to measured benefits, including reduced FDs, treatment time, and length of stay. We built on previous single-intervention studies6-14 by implementing a combination of interventions to address several problems from multiple perspectives. Improvements were designed to address the broad range of barriers to effective care identified by our prior systems analysis. Physician-centered considerations and iterative approaches were applied to all the interventions developed, so that our solutions engaged staff and were adapted to local requirements and system complexity. Although many studies6-11 describe complex system safety interventions, far fewer15 have attempted change along multiple dimensions, and fewer still16 have evaluated both the individual improvements and the system as a whole. For example, in 2 particularly well-regarded studies,7,34 the multiple intervention dimensions used only emerged subsequent to the main findings,35 which has resulted in replication problems.36,37 Although teamwork-level human factors studies in trauma care abound, to our knowledge a comprehensive systems-level human factors analysis and intervention of this kind has not previously been attempted in trauma care.
Given that the reductions in FDs did not suggest a strong effect size, the reduction in ED time and the length-of-stay effects are perhaps more surprising. Length of stay was not compared with a nontreatment control group, and we were unable to track individual patients through their entire care, so we could not directly relate outcomes to our study population. However, there are several reasons to consider this outcome important. First, we were careful to select similar periods and similar patient populations. Second, although the interventions were not used all the time, they were likely deployed in similar proportions for the trauma patients we did not directly observe. For example, the teamwork training presumably benefited all trauma patients, not just those we observed, whereas the briefing was well received and the whiteboard was used independently of our studied patients. Third, our process changes were most beneficial for the higher-risk patients, as evidenced in the length-of-stay and FD findings. Finally, the reduction in ED time reflects the well-documented value of early, effective trauma care. We suspect that this time benefit is associated with the small reduction in FDs and with additional interventional benefits that were not directly measured. For example, when teamwork, communication, and coordination are improved, we might expect improved decision making,38 faster response to CT and OR procedures,39 and a better ability to provide effective and timely care rather than simply the avoidance of FDs.
It is worth noting that a proportion of FDs may be advantageous (eg, there may be a coordination advantage in minor pauses or timeouts) and that skilled teams are able to use such pauses to prevent adverse effects on patients.40 This context also provides further support for the strength of the time and length-of-stay effects over the reduction in FDs themselves. Thus, although a multisite controlled study would be needed to confirm these results, we believe the present results are sufficient to indicate the value of such an endeavor.
We also measured the effectiveness and uptake of all our subsystem interventions, affording extra detail regarding the mechanism and replicability of the overall effects. Although not all subsystem interventions were used all the time, most were used effectively most of the time. This variation in use is a realistic part of system function and suggests that the present study is a conservative estimate of the benefits of this type of whole-systems approach if greater reliability can be achieved. In this analysis, we did not test the contribution of the interventions individually, instead examining the overall effect of the combination, because the evidence for each intervention alone was already sufficient, whereas, to our knowledge, no studies have examined multidimensional human factors interventions. The subsystems approach is an example of how improvements that address multiple dimensions of team (training), task (briefing), equipment (medical transport packs), and environment (equipment storage standardization) may be more effective than focusing on one dimension.5,41 In particular, the typical focus in health care improvement on training and other methods of direct behavioral change is limited but frequently deployed, even though other system considerations might also offer performance benefits.42 As a final observation, although our approach required a complex systems analysis and an expert team, the underlying themes of the interventions that emerged were simplification, teamwork, and the integration and management of information.
This study was not masked because the interventions were overt. Observers were not masked to the interventions but had not been extensively involved in their development and implementation, and in some cases may have been unaware of them (only R.B. was part of the project team). The observer who recorded the effects of the interventions was more involved and thus aware of the changes that had been made but was not directly involved in their generation or implementation and was provided with a strictly structured set of items to score. We also cannot ignore the Hawthorne effect.43 However, the presence of observers was largely unobtrusive and identical in the preintervention and postintervention phases. In addition, previous experience suggests that their influence, at least in the OR, does not preclude the observation of system problems.24,27 In terms of sampling, there was a difference in the overall sample sizes between the preintervention and postintervention phases, although in the high-acuity patients, in whom the most benefits were observed, there was near parity (74 and 69 patients in the preintervention and postintervention phases, respectively).
This is the first study, to our knowledge, to objectively examine the effects of multiple subsystem interventions in trauma care. The detailed study of the trauma system and the prospective collection of data were central in guiding us toward the largest improvement opportunities. By reviewing hospital policy documentation, we mapped the process and conducted interviews and focus groups with a broad range of physicians to discover their impressions of the problems. Using human factors and systems performance improvement methods, we collected data on the entire trauma process. Through a combination of statistical analysis and multidisciplinary consensus, we identified key aspects of process, workplace modification, teamwork, technology, and information management that would benefit from reengineering. By piecing together all the collected data elements, we were able to target our interventions to have the greatest positive effect on the process. These interventions were developed, integrated, and evaluated for their relevance and effectiveness. This early study in a complex field suggests benefits in reduced FDs, reduced treatment time, and the potential for reduced length of stay.
Accepted for Publication: April 25, 2014.
Corresponding Author: Ken Catchpole, PhD, Department of Surgery, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Los Angeles, CA 90048 (email@example.com).
Published Online: August 6, 2014. doi:10.1001/jamasurg.2014.1208.
Author Contributions: Dr Catchpole had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Ley, Wiegmann, Blaha, Blocker, R. Karl, C. Karl, Taggart, Starnes, Gewertz.
Acquisition, analysis, or interpretation of data: Catchpole, Ley, Wiegmann, Blaha, Shouhed, Gangi, Blocker, Taggart, Starnes, Gewertz.
Drafting of the manuscript: Catchpole, Ley, C. Karl, Taggart, Gewertz.
Critical revision of the manuscript for important intellectual content: Catchpole, Ley, Wiegmann, Blaha, Shouhed, Gangi, Blocker, R. Karl, Starnes, Gewertz.
Statistical analysis: Catchpole, Blocker.
Obtained funding: Catchpole, Wiegmann, Gewertz.
Administrative, technical, or material support: Ley, Blaha, Shouhed, Gangi, Taggart.
Study supervision: Catchpole, Ley, Blaha, Shouhed, Gewertz.
Conflict of Interest Disclosures: None reported.
Funding/Support: This study was supported by grant W81XWH-10-1-1039 from the Telemedicine and Advanced Technology Research Center of the US Department of Defense, which seeks to reengineer teamwork and technology for 21st-century trauma care (Dr Gewertz).
Role of the Sponsor: The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Previous Presentation: This study was presented at the Pacific Coast Surgical Association Annual Meeting; February 16, 2014; Dana Point, California.
Additional Contributions: We thank our other project contributors: Ray Chu, MD, Heidi Hotz, RN, Steven Rudd, MD, Shannon Webert, RN, Brittany Dixon, BS, Elena Fomchenko, BS, Jean-Phillip Okhovat, BS, Mark Paulsen, BS, Tracy Reese, BS, and Cynthia Huang, BS, Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, California; Robert M. Rush, MD, Madigan Army Medical Center, Tacoma, Washington; Eduardo Salas, PhD, Institute for Simulation and Training, University of Central Florida, Orlando; and Sacha Duff, MS, Department of Industrial and Systems Engineering, University of Wisconsin, Madison. Drs Chu, Rudd, and Salas and Ms Hotz were coinvestigators on the grant and were compensated for their work. Dr Rush was a grant coinvestigator but was not compensated (as a federal employee). Ms Webert was employed to assist in data collection. Mr Duff was a PhD student funded through the project. Mss Dixon, Fomchenko, Reese, and Huang and Messrs Okhovat and Paulsen were also employed to assist in data collection. We also thank all the emergency department and trauma staff who allowed us to observe them at work.