Figure 1. Study bay 40-in liquid crystal display (LCD) screen displaying demographics, action prompts, vital signs, diagnoses, and interventions.
Figure 2. Trauma Reception and Resuscitation (TRR) System video audit tool.
Fitzgerald M, Cameron P, Mackenzie C, et al. Trauma Resuscitation Errors and Computer-Assisted Decision Support. Arch Surg. 2011;146(2):218–225. doi:10.1001/archsurg.2010.333
Hypothesis
This project tested the hypothesis that computer-aided decision support during the first 30 minutes of trauma resuscitation reduces management errors.
Design
A prospective, open, randomized, controlled interventional study that evaluated the effect of real-time, computer-prompted, evidence-based decision and action algorithms on error occurrence during initial resuscitation between January 24, 2006, and February 25, 2008.
Setting
A level I adult trauma center.
Patients
Severely injured adults.
Main Outcome Measures
The primary outcome variable was the error rate per patient treated as demonstrated by deviation from trauma care algorithms. Computer-assisted video audit was used to assess adherence to the algorithms.
Results
A total of 1171 patients were recruited into 3 groups: 300 into a baseline control group, 436 into a concurrent control group, and 435 into the study group. The error rate per patient was lower in the study group than in the baseline control group (2.13 vs 2.53, P = .004) and than in the concurrent control group (2.13 vs 2.30, P = .04). The difference in error rate per patient between the baseline control group and the concurrent control group was not statistically significant (2.53 vs 2.30, P = .21). A critical decision was required every 72 seconds, and the proportion of error-free resuscitations during the first 30 minutes of resuscitation increased from 16.0% to 21.8% (P = .049). Morbidity from shock management (P = .03), blood product use (P < .001), and aspiration pneumonia (P = .046) was decreased.
Conclusions
Computer-aided, real-time decision support resulted in improved protocol compliance and reduced errors and morbidity.
Trial Registration
clinicaltrials.gov Identifier: NCT00164034
Multidisciplinary trauma teams coordinate the reception and resuscitation of the seriously injured at the time when patients are most at risk of preventable morbidity and mortality by simultaneously determining therapeutic and diagnostic pathways.1,2 Errors occur because of time pressure, inexperience, reliance on memory, multitasking, and failures in trauma team coordination, especially during the initial minutes of patient reception and resuscitation.3-6
Human variables that confound a standardized environment and lead to avoidable errors have been addressed by the airline and other industries. For example, computerized prompts are built into flight-control systems, providing immediate feedback and error avoidance.7-9 However, medical practice lags in the implementation of such support.
Despite guidelines, protocols, and continuous performance improvement, basic management errors occur even in established trauma centers with experienced trauma care professionals.10 Protocol compliance was only 53% when measured prospectively using video audit.11 This lack of compliance with protocols has led experienced trauma care professionals to conclude that “Standardized protocols for resuscitation . . . widely implemented as interactive computerized applications among trauma centers . . . will probably provide the next generation of improvements in shock resuscitation.”12(p1988) Linking computer-generated prompts through visual and auditory displays within the resuscitation bay may enhance trauma care professionals' interaction and reduce errors of omission and miscommunication. Compliance can be documented via video audit.13,14 The primary aim of this study was to test the hypothesis that real-time, computer-aided decision and action support would reduce the incidence of management errors for severely injured patients in the first 30 minutes after admission to an adult trauma center.
The study was undertaken at the Alfred Trauma Center, a level I adult trauma center in Melbourne, Victoria, Australia.15 There were 4 trauma resuscitation bays within the trauma center. The layout, equipment, and trauma teams were identical in each bay. Rostered trauma teams attended each patient admitted and consisted of 6 medical and nursing personnel, with an emergency physician as team leader. A senior trauma nurse acted as a scribe to document the resuscitation.
This interventional study had 2 arms. First, an open randomized controlled trial using video audit was performed to compare trauma patient resuscitations supported by real-time, computer-prompted algorithms (study group) with resuscitations without real-time, computer-prompted algorithms (control group). Second, a baseline control group was recorded on video immediately before the trial to assess any Hawthorne effect.16
The population studied was composed entirely of injured patients aged 15 years or older who met the trauma callout criteria17 and were transported to the Alfred Trauma Center during the study period. Patients transferred from other hospitals after initial resuscitative efforts were excluded.
Five major subcategories of resuscitation were prospectively defined and addressed with interrelated algorithms. These were airway management, ventilation and chest decompression, shock management, generic, and specialty. The latter included algorithms related to neurotrauma, spinal cord injury, orthopedics, and burns. Thirty-three experienced emergency, anesthesiology, surgical, and critical care medical and nursing staff members spent 9 months analyzing current practice and the medical literature relating to trauma reception and resuscitation. Key decision points in initial trauma resuscitation that required a response from the trauma team were identified through consensus of evidence or best practice.18 Technology advisers designed algorithm formats. The interrelated algorithms developed were limited to the first 30 minutes of trauma resuscitation and were formatted as branch tree logic. The triggers for decision points were clinical findings, diagnoses, physiologic variables, and treatments or interventions. The final algorithms were considered to duplicate the decision processes practiced in the baseline control group, which represented the standard of care.
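The branch-tree logic described above can be sketched as a small decision structure. The node names, clinical triggers, thresholds, and prompt wording below are hypothetical illustrations of the decision-point format, not the actual TRR algorithms.

```python
# Illustrative sketch of a branch-tree resuscitation prompt, NOT the actual
# TRR algorithms. Triggers, thresholds, and prompt wording are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DecisionNode:
    """One decision point: a clinical trigger and the prompts it branches to."""
    name: str
    trigger: Callable[[dict], bool]           # evaluates findings and vitals
    prompt_if_true: str                       # action prompt shown to the team
    next_if_true: Optional["DecisionNode"] = None
    next_if_false: Optional["DecisionNode"] = None

def walk(node, patient, prompts=None):
    """Walk the branch tree, collecting the action prompts that fire."""
    if prompts is None:
        prompts = []
    if node is None:
        return prompts
    if node.trigger(patient):
        prompts.append(node.prompt_if_true)
        return walk(node.next_if_true, patient, prompts)
    return walk(node.next_if_false, patient, prompts)

# Hypothetical fragment of a shock-management branch.
transfuse = DecisionNode(
    name="refractory hypotension",
    trigger=lambda p: p["sbp"] < 90 and p["fluids_ml"] >= 1000,
    prompt_if_true="Activate massive transfusion protocol",
)
hypotension = DecisionNode(
    name="hypotension",
    trigger=lambda p: p["sbp"] < 90,
    prompt_if_true="Hypotension: apply pressure dressing, give IV fluids",
    next_if_true=transfuse,
)

print(walk(hypotension, {"sbp": 78, "fluids_ml": 1500}))
```

Formatting each decision point this way makes the triggers (clinical findings, physiologic variables, treatments) explicit and machine-checkable, which is what later allows automated comparison of team actions against the algorithm.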
Software was developed using the algorithms as the logic for management prompts. This was achieved through requirement analysis, design specification, software scripting, unit testing, system testing, and 4-stage acceptance testing (usability, simulation, clinical trial, and randomized clinical trial).19,20 The end product was referred to as the Trauma Reception and Resuscitation System.21 The software version installed was not altered during the study period.
Approval for the study was obtained from the institutional ethics committee. Patient consent for video data collection before video image capture was not possible to obtain from trauma patients or their relatives. Waived consent was obtained under the Australian National Health and Medical Research Council's guidelines.22 One record was removed from the study at the request of relatives.
The computerized Trauma Reception and Resuscitation System decision aid was installed into 2 of the 4 trauma resuscitation bays. These 2 study bays had a 40-in liquid crystal display (LCD) screen showing prehospital data and cumulative physiologic, diagnostic, and treatment data (Figure 1). Input occurred directly from the patient's physiologic monitor and from a touch screen operated by the scribe nurse. Computer-generated intervention prompts requiring action from the trauma team were displayed on the LCD and scribe screens. The decision and action intervention prompts were a built-in feature of the resuscitation algorithms against which trauma team performance was measured. The system was activated on or before patient arrival. The study intervention period commenced once the patient had been transferred onto the trauma bay stretcher and ceased 30 minutes later. The remaining 2 trauma resuscitation bays were the control bays, with identical audiovisual recording systems but no LCD screens or computer prompting.
Previous reports that reviewed trauma mortality at the study site identified an error rate of 3.7 per patient during the reception and resuscitation phase of care.23 These data had been gathered retrospectively and related to mortality only. On the basis of this reported error rate, it was postulated that a 10.0% reduction in errors would be a clinically significant change. Setting a power of 90.0% and an α value of .04, it was estimated that 822 patients would be required (ie, 411 in each group). A difference of 1 SD in mortality between the groups at a power of 90.0% and an α of .01 would require a total of 50 patients. Therefore, the total patient sample size of the controlled trial was set at 872.
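The sample-size arithmetic follows the standard two-sample normal-approximation formula. The SD of the per-patient error count is not reported in the text, so the value used below is an assumed placeholder chosen only to illustrate the shape of the calculation.

```python
# Standard two-sample sample-size calculation (normal approximation).
# The per-patient error-count SD is NOT reported in the paper; sd=1.6
# below is an assumed value used purely to illustrate the arithmetic.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.04, power=0.90):
    """Patients per group to detect a mean difference `delta`
    with two-sided significance `alpha` at the given power."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = z.inv_cdf(power)           # critical value for the chosen power
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# A 10.0% reduction from the historical 3.7 errors per patient:
delta = 0.10 * 3.7                   # = 0.37 errors per patient
print(n_per_group(delta, sd=1.6))
```

With an assumed SD of about 1.6 errors per patient, this yields a group size on the order of the ~411 per group reported; the published figure would depend on the actual variance observed in the baseline data.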
Randomization was performed on the basis of sequential patient attendance to a trauma resuscitation bay by the senior nurse in charge of patient admission. True random allocation of trauma resuscitation bays was limited by service demands in a busy trauma center, with some bays occupied by patients waiting for further imaging or an intensive care unit bed. Sequential allocation removed the potential for clinical staff to bias allocation of trauma patients into study bays or control bays. Bay allocation was monitored by the project team to ensure there was uniform compliance.
Each trauma bay had an identical, automated, audiovisual recording system that was activated before or on arrival of the trauma patient; the recording length was limited to 60 minutes. A custom-built video audit tool (VAT) was developed to allow correlation of video data with data outputs from the Trauma Reception and Resuscitation System (eg, diagnoses and treatments) (Figure 2). Compliance templates (Figure 2; step 1) defined errors and exceptions to ensure concordance across audits. The use of standard algorithm and reference data (Figure 2; steps 2 and 3) permitted conformity across templates. The VAT displayed video and LCD screen capture data simultaneously (Figure 2; steps 4 and 5). This enabled the auditors to compare patient treatment (on the video) with the information displayed to the trauma care professionals. Patient data (Figure 2; step 6) (including diagnoses, treatments, and vital signs) and events were linked to audit templates by the video time code and stored as the audit file (Figure 2; step 7).
Each audited event received 1 of 3 classifications. Compliant indicated that the event adhered to the algorithm. Exception indicated that the event was not compliant with the algorithm but met the definition criteria for an exception for that flag. Error indicated that the event was not compliant with the algorithm, met the definition criteria for error, and did not meet any of the criteria for exceptions.
For example, it was expected that patients experiencing major trauma would be log-rolled during the first 30 minutes of care for posterior and spinal assessment. If this was observed on video audit, the flag would be classified as compliant. Alternatively, the patient may not have been rolled because of an unstable pelvic fracture and associated hypovolemic shock; this would be classified as an exception. An error would be recorded if the patient was not log-rolled and did not meet the predetermined criteria for an exception.
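The 3-way classification for the log-roll flag above can be expressed as a short rule. The field names here are hypothetical stand-ins for the audit template's exception criteria.

```python
# Three-way audit classification for the log-roll flag described in the text.
# Field names ("log_rolled", "unstable_pelvis", "hypovolemic_shock") are
# hypothetical stand-ins for the audit template's exception criteria.

def classify_log_roll(event):
    """Return 'compliant', 'exception', or 'error' for one audited flag."""
    if event["log_rolled"]:
        return "compliant"
    # Not rolled: check the predefined exception criteria for this flag.
    if event["unstable_pelvis"] and event["hypovolemic_shock"]:
        return "exception"
    return "error"
```

Because the exception criteria are fixed in the compliance template before auditing, two auditors applying the same rule to the same video should reach the same classification, which is what the interrater-reliability check later measures.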
The VAT was linked to existing patient databases (Figure 2; step 8) and automatically extracted relevant data for the auditors, including prehospital treatments, incident details, and follow-up data, such as hospital length of stay and outcome. On completion of an audit, a summary (Figure 2; steps 9 and 10) was viewed by the auditor to check that all audit measures had been assessed. All audits were checked by a second auditor. The audit data were then stored in an external database for independent statistical analysis.
The researchers involved in the algorithm and software development were not involved in the audit process. All researchers and auditors were masked to the results of the analyses, which were undertaken by independent statisticians.
The primary outcome variable measured was the rate of noncompliance with agreed algorithms per patient (error rate). A secondary goal was to demonstrate that improved algorithm compliance altered morbidity and mortality. Data regarding the incidence of aspiration pneumonia, sepsis, and adult respiratory distress syndrome, functional independence measure score, intensive care unit and hospital length of stay, and death were prospectively collected. Patient follow-up was limited to the hospital admission after initial trauma presentation.
Continuous measures are presented as mean (SD) or median (interquartile range), as appropriate, and compared using the t test or Wilcoxon rank sum test, as appropriate. Categorical measures are presented as frequencies and compared using the Pearson χ2 test or Fisher exact test, as appropriate.
The error count per patient was modeled using a negative binomial distribution, with the number of compliance events per patient considered to be the exposure. Error counts per patient were compared using incidence rate ratios with 95% confidence intervals. All statistical analyses were performed using Stata statistical software, version 10.1 (StataCorp LP, College Station, Texas), and P < .05 was considered statistically significant.
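The incidence-rate-ratio comparison can be sketched directly from the group totals. Note the hedge: the paper fit a negative binomial model with compliance events per patient as the exposure; the Poisson-style approximation below, using patient counts as the exposure, is only a rough illustration of how a rate ratio and Wald confidence interval are formed.

```python
# Simplified incidence-rate-ratio calculation with a Wald 95% CI.
# The paper used a negative binomial model with compliance events as the
# exposure; this Poisson-style approximation with patient counts as the
# exposure is a rough sketch only, not the published analysis.
from math import exp, log, sqrt

def irr_ci(errors_a, n_a, errors_b, n_b, z=1.96):
    """Rate ratio of group A vs group B with an approximate 95% CI."""
    irr = (errors_a / n_a) / (errors_b / n_b)
    se = sqrt(1 / errors_a + 1 / errors_b)   # SE of the log rate ratio
    lo, hi = exp(log(irr) - z * se), exp(log(irr) + z * se)
    return irr, lo, hi

# Study group (927 errors / 435 patients) vs baseline (760 / 300):
print(irr_ci(927, 435, 760, 300))
```

Even under this simplified approximation, the interval for the study-vs-baseline comparison sits below 1, consistent with the direction of the published result.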
A total of 2425 videos of trauma patient resuscitation were recorded in the trauma resuscitation bays between January 24, 2006, and February 25, 2008. Of these, 71 were secondary transfers from another hospital and therefore did not meet the inclusion criteria, 485 did not meet the trauma callout criteria, and 445 had inadequate audiovisual quality or data. Nine other videos were excluded because the auditors were unable to agree on their usability. The remaining 1415 videos were classified as usable. There was no retrospective inclusion of videos, and the auditors were masked to data analysis.
The sample size was 1171 patient resuscitations, including 300 baseline control patients (January 24 to October 22, 2006) and 871 control and study group patients (November 20, 2006, to February 25, 2008). Audit lagged behind video acquisition. Therefore, once the audited sample size had been reached, the remaining 244 resuscitations were not analyzed (Figure 3). Analysis of these 244 videos was beyond the project's resources and human research ethics committee approval.
Patients were distributed evenly between trauma resuscitation bays 1 and 2 (control group; 213 and 223 patients, respectively) and bays 3 and 4 (study group; 226 and 210 patients, respectively). The baseline control group patients (n=300) were resuscitated in bays 1 (n=55 patients), 2 (n=66 patients), 3 (n=91 patients), and 4 (n=88 patients).
The mean (SD) age of patients was 37.0 (16.6) years, 75.3% were male, and 88.7% had sustained blunt trauma. The median injury severity score was 13 (interquartile range, 5-24), the same as that of all patients historically mandating trauma callout when checked against the Alfred Hospital Trauma Registry. The overall mortality rate was 5.2% (Table 1).
Ten medical and nursing professionals (8 critical-care nurses and 2 emergency physicians) were employed and trained to perform the video auditing. The audit team took approximately 4000 hours to complete 1171 video audits in 6 months. All audit results were checked by a second auditor; when auditors disagreed, a consensus was reached. The interrater reliability of the VAT was evaluated using error concordance. A sample of 50 audits with 5226 flag decisions was assessed independently by 2 auditors to confirm error detection concordance. Using the VAT, they achieved a κ concordance of 0.964 (95% confidence interval, 0.957-0.971) against an expected agreement of 0.514.
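Cohen's κ, as reported above (0.964 against an expected chance agreement of 0.514), corrects the auditors' raw agreement for the agreement expected by chance. A minimal implementation, run on toy labels rather than the study's audit data:

```python
# Cohen's kappa for two auditors' flag decisions: observed agreement
# corrected for chance agreement. The labels below are toy examples,
# not the study's audit data.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of each label's marginal frequencies.
    expected = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["compliant", "compliant", "error", "exception", "compliant", "error"]
b = ["compliant", "compliant", "error", "exception", "compliant", "exception"]
print(round(cohens_kappa(a, b), 3))
```

A κ near 1 (as in the study) indicates that the two auditors' classifications agreed far beyond what the label frequencies alone would produce.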
There were 29 389 compliance events recorded with the VAT, an average of 25.1 events per patient. This indicated that in the first 30 minutes of trauma reception and resuscitation, a critical decision was required, on average, every 72 seconds.
Trauma teams were compliant with 22 488 algorithm-generated events (76.5%). There were 4212 algorithm exceptions (14.3%) and 2689 deviations from algorithms (ie, errors) (9.2%). Overall, the incidence of algorithm deviation was 2689 errors per 1171 patients, or 2.3 per patient, in the first 30 minutes of trauma resuscitation.
The study group had the fewest errors per patient (927 errors for 435 patients) compared with the control (1002 for 436 patients) and baseline control (760 for 300 patients) groups. There were fewer errors in the study group than in the control (P = .04) and baseline control groups (P = .004). There was no difference in error rate between the baseline control and control groups (P = .21) (Table 2). Without algorithmic prompts, 16.0% of baseline control group patients had an error-free resuscitation. This proportion increased to 21.8% in the study group (P = .049).
The predicted mortality rate was 11.0%. The study mortality rate was 5.2%, which meant that the study was insufficiently powered to demonstrate a mortality difference. This lower-than-expected mortality occurred partly because the study population did not include interhospital transfers (a group with a higher mortality).
The incidence of sepsis and adult respiratory distress syndrome, the functional independence measure score, and the hospital length of stay did not differ among the groups. The intensive care unit length of stay of 112 hours in baseline control group patients was not significantly different from the 70 hours in the study group (P = .07). Aspiration pneumonia was reduced from 5.3% in the control group to 2.5% in the study group (P = .046).
There was a 26.11% reduction in the shock management error rate per patient in the study group (0.55 error per patient in the study group, 0.58 in the control group, and 0.75 in the baseline control group). In particular, computer prompting increased early hemorrhage control using pressure dressings (P = .03) (Table 3). This variable was associated with a reduction in the amount of blood products transfused (P < .001) in the study group patients.
Ours is the first randomized controlled trial, to our knowledge, to demonstrate that computer-aided decision support for experienced trauma teams results in improved protocol compliance and reduced errors. Errors in trauma resuscitation have persisted during the past 20 years despite major improvements in training, facilities, guidelines, and systems of care.3-6,10,11 It has been expected that technological developments, along with computer-aided decision support, will improve the performance of trauma care professionals in trauma resuscitation and other clinical areas.12,24,25
It was unclear whether decision support would alter trauma care professionals' behavior in this high-volume facility. It seems unlikely that the reason for poor protocol compliance was lack of awareness of protocols. A more likely explanation is the high speed of the decision making and the complex prioritization required in trauma resuscitation. Anecdotally, the clearly displayed computer prompts supported shared awareness among team members,26 facilitated information exchange, highlighted abnormal physiologic variables, and clarified diagnostic and therapeutic decision making. Although the study design anticipated that behavioral change in the study bays would alter performance in adjacent, nonprompted control bays, the error rate reduction between the study group and the control group remained significant (Table 2).
This real-time system is an advance beyond traditional clinical decision support, which tends to target individuals in more static patient care scenarios.27 The effect may be greater for resuscitation teams that are less experienced or work in lower-volume centers. It is also likely that outcomes from this study are applicable to other clinical areas with high-volume, high-intensity decision-making processes, such as the management of sepsis or cardiogenic shock.
The implications for improving trauma care are great. The US Senate Finance Committee has identified the great potential for improving health outcomes, including identifying strategies and best practices to improve patient safety and reduce medical errors.28 However, the resuscitation period is a notoriously difficult one in which to make assessments of clinical performance because of the dual need for emergency treatment and documentation. Further multicenter studies are required to support and promote the development and use of this type of automated electronic health record collection system for trauma patient resuscitation. Standardization of the resuscitation environment through the use of computerized decision support may also help determine the therapeutic value of single interventions whose effect is currently difficult to determine in such complex environments—a key requirement for comparative-effectiveness research in trauma resuscitation.29
This study has some limitations. It is exceptionally difficult to randomize trauma resuscitation bays in a busy trauma center by random number allocation. However, the evenly matched characteristics of patients in each bay indicate unbiased allocation. In subsequent multicenter studies, designed with larger populations to demonstrate mortality differences, improved randomization techniques should be developed for use.
The exclusion of videos recorded after randomization was also a potential limitation. Inadequate audiovisual quality resulted from technical difficulties arising from filming in a working (noisy) trauma center. Individual microphones were considered impractical, with a potential to compromise standard infection control precautions. Therefore, 1 unidirectional and 1 omnidirectional microphone were used and the sounds mixed. It initially took considerable time to fine-tune the audio recording and mixing to ensure voices of the 6 or more trauma team members could be adequately heard for auditing purposes.
If the senior nurse was distracted by the demands of trauma resuscitation bay preparation or by the arrival of the trauma patient, system activation may have been delayed. This meant that a substantial number of videos started after trauma resuscitation had commenced. Therefore, if resuscitative efforts were missed on the recording, these videos had to be excluded to ensure homogeneity of the data. However, videos of complete trauma resuscitations with adequate audiovisual quality were sequentially collected until the numbers required for the study were reached. Subsequent analyses demonstrated no significant difference in injury severity between the groups.
The system was designed to be transportable and miniaturized for in-field use. The use of voice recognition software rather than a nurse scribe may improve the system interface. Other useful future developments may include the addition of outcome predictions based on prehospital vital signs’ waveforms and data.30,31
Medicine has lagged behind aviation and other industries in standardization and error avoidance. The introduction of computer-assisted decision support will reduce error morbidity and improve patient safety and outcome in trauma resuscitation—even with experienced trauma teams. Further large, multicenter trials will determine whether mortality can also be reduced. Such trials will have implications for all clinical areas involving rapid, complex, and critical decision making.
Correspondence: Mark Fitzgerald, MB BS, Trauma Service, Alfred Hospital, 55 Commercial Rd, Melbourne, Victoria 3004, Australia (email@example.com).
Accepted for Publication: February 1, 2010.
Author Contributions: Drs Fitzgerald, Cameron, Bystrzycki, Andrianopoulos, and O’Reilly and Mr Farrow had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Fitzgerald, Cameron, Mackenzie, Farrow, Scicluna, Gocentas, Bystrzycki, Dziukas, Cooper, Silvers, Mori, Xiao, Stub, McDermott, and Rosenfeld. Acquisition of data: Fitzgerald, Cameron, Farrow, Bystrzycki, and Murray. Analysis and interpretation of data: Fitzgerald, Cameron, Mackenzie, Farrow, Gocentas, Bystrzycki, Lee, O’Reilly, Andrianopoulos, Murray, Smith, and Rosenfeld. Drafting of the manuscript: Fitzgerald, Cameron, Mackenzie, Farrow, Gocentas, Lee, and Andrianopoulos. Critical revision of the manuscript for important intellectual content: Fitzgerald, Cameron, Mackenzie, Scicluna, Gocentas, Bystrzycki, Lee, O’Reilly, Andrianopoulos, Dziukas, Cooper, Silvers, Mori, Murray, Smith, Xiao, Stub, McDermott, and Rosenfeld. Statistical analysis: Fitzgerald, O’Reilly, and Andrianopoulos. Obtained funding: Fitzgerald and Cameron. Administrative, technical, and material support: Fitzgerald, Cameron, Mackenzie, Farrow, Scicluna, Bystrzycki, Lee, Silvers, Murray, Smith, Xiao, Stub, and Rosenfeld. Study supervision: Fitzgerald, Cameron, Cooper, Mori, and McDermott.
Financial Disclosure: None reported.
Funding/Support: This project was funded by the Victorian Transport Accident Commission.