Antonacci AC, Lam S, Lavarias V, Homel P, Eavey RD. Benchmarking Surgical Incident Reports Using a Database and a Triage System to Reduce Adverse Outcomes. Arch Surg. 2008;143(12):1192–1197. doi:10.1001/archsurg.143.12.1192
Copyright 2008 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
Objective

To study the profile of incidents affecting quality outcomes after surgery by developing a usable operating room and perioperative clinical incident report database and a functional electronic classification, triage, and reporting system. Previously, incident reports after surgery were handled on an individual, episodic basis, which limited the ability to perceive actuarial patterns and meaningfully improve outcomes.
Design, Setting, and Participants
Clinical incident reports were experientially generated in the second largest health care system in New York City. Data were entered into a functional classification system organized into 16 categories, and weekly triage meetings were held to electronically review and report summaries on 40 to 60 incident reports per week. During system development and deployment, 1041 reports generated after 19 693 operative procedures were reviewed. During the next 4 years, 3819 additional reports were generated from 83 988 operative procedures and were reported electronically to the appropriate departments.
Main Outcome Measures
Number of incident reports generated annually.
Results

A significant decrease in volume-adjusted clinical incident reports occurred (from 53 to 39 reports per 1000 procedures) from 2001 to 2005 (P < .001). Reductions in incident reports were observed for ambulatory conversions (74% reduction), wasted implants (65%), skin breakdown (64%), complications in the operating room (42%), laparoscopic conversions (32%), and cancellations (23%) as a result of data-focused process and clinical interventions. Six of 16 categories of incident reports accounted for more than 88% of all incident reports.
Conclusions

These data suggest that effective review, communication, and summary feedback of clinical incident reports can produce a statistically significant decrease in adverse outcomes.
Incident reports have been used for many years to document clinically important adverse events (AEs) that occur in the perioperative and hospital environment. However, information from these reports typically is not comprehensively gathered, memorialized in a codified format for comprehensive review, or universally reported in a systematic, useful manner. Furthermore, serial annual reports are not usually generated, precluding identification of either recurrent themes of system error or recurrent clinical issues with individual services or surgeons. With this current episodic system of reporting, much clinical relevance is lost for effective quality improvement, and a valuable data source for the evaluation of clinical and system performance is unavailable to clinical and administrative leadership.
Having an effective coding scheme allows for improved summary and report generation and facilitates comparisons across institutions. A standardized taxonomy for this type of reporting is still developing, with efforts by The Joint Commission (http://www.jcaho.org) and the National Quality Forum (http://www.qualityforum.org) leading the way in the patient safety taxonomy project. Although standardized nomenclature is an important early step, the true measure of a system's effectiveness is its effect on patient safety. A model for such systems is the Intensive Care Unit Safety Reporting System.1 To document the effectiveness of changes to a system, one must collect baseline data, implement system modifications, and determine the significant effects of the intervention. Implementation of a standardized and effective process integral to the workflow is frequently the missing link. Three of the major obstacles to redesigning these systems are tolerance of stylistic practices, information nonavailability, and a fear of punishment that inhibits reporting.2
As described by Russell et al,3 key features recommended to improve the quality and usefulness of surgical monitoring systems are immediate data capture, matched large-scale collation of procedure-specific trends, and feedback to staff. The present system attempts to perform all 3 tasks. We developed and implemented a perioperative clinical incident report relational database and matured a classification system and standardized review process to study the profile of clinical incidents occurring in the operating room (OR) and perioperative setting at a major academic and community-based medical center. These data were analyzed for patterns of incidents, organized into clinically relevant categories and subcategories, and served as the basis for an attempt to benchmark performance.
Methods

Development began on a comprehensive OR and perioperative clinical incident report database in September 2000 as part of an effort by the Beth Israel Medical Center Operating Room Quality Committee to integrate the recent expansion of the hospital network to include 3 hospital sites (1 academic, 1 community, and 1 specialty) and 1 ambulatory surgery facility managing approximately 30 000 operative procedures per year. The hospital system's intranet infrastructure was used to access a relational database designed to accept data from multiple points in the network. The Operating Room Incident Reporting System database was designed on a Microsoft Access/SQL platform (Microsoft Corp, Redmond, Washington) by Outcome Management Systems (Greenwich, Connecticut).
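The underlying Access/SQL schema is not published. As a rough illustration of the kind of relational design such a system implies (all table and column names here are hypothetical), a minimal sketch using Python's built-in sqlite3 in place of the Access/SQL platform:

```python
import sqlite3

# Hypothetical minimal schema: one row per incident report, keyed to a
# category (one of the 16 general categories) and tagged with site and
# department so weekly and annual roll-ups can be generated by query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE          -- e.g., 'Unplanned return to OR'
);
CREATE TABLE incident (
    id          INTEGER PRIMARY KEY,
    occurred_on TEXT NOT NULL,         -- ISO date of the event
    site        TEXT NOT NULL,         -- hospital or ambulatory facility
    department  TEXT NOT NULL,
    category_id INTEGER NOT NULL REFERENCES category(id),
    summary     TEXT                   -- brief synopsis for triage review
);
""")
conn.execute("INSERT INTO category(name) VALUES ('Unplanned return to OR')")
conn.execute(
    "INSERT INTO incident(occurred_on, site, department, category_id, summary) "
    "VALUES ('2003-04-12', 'Academic', 'General Surgery', 1, 'Hemorrhage')"
)

# A weekly triage-style summary: incident counts per category.
rows = conn.execute(
    "SELECT c.name, COUNT(*) FROM incident i "
    "JOIN category c ON c.id = i.category_id GROUP BY c.name"
).fetchall()
print(rows)  # → [('Unplanned return to OR', 1)]
```

Keying each report to a category table rather than free text is what makes the departmental, site, and annual summaries described below simple GROUP BY queries rather than manual collation.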
The first phase of data collection and entry obtained clinical incident reports generated by nurses in the preoperative holding areas (n = 4), OR suites (n = 4), ORs (n = 32), and recovery rooms (n = 4). Data from the collection in 2001 were used to refine a standardized reporting nomenclature. Incident reports were gathered daily and were entered into the system by designated unit personnel within 1 week of the occurrence.
The Incident Report Triage Committee reviewed all incident reports in their electronic format from multiple sites in the network and via conference call at its weekly meeting. The committee, composed of a quality nurse (V.L.), a risk manager, the administrative chief resident for quality, and the nurse managers of the contributing units, used a consensus approach to evaluate each incident. The committee would validate the accuracy of the information, determine the clinical categories under which they would be best classified, identify the departments to be notified, and request follow-up as appropriate. Once these data were confirmed, an electronic notification and summary of the incident was immediately generated and sent to the appropriate department chairperson, the designated departmental quality improvement representative, and any other relevant responsible individuals. Summary reports were generated for the department chairpersons and the chief medical officer annually, and the data were expressed by incident report category, facility, and department/division.
Omnibus comparison of incident rates across 4 years was performed using χ2 analysis. Specific comparisons between years were performed using logistic regression. Year was coded as an indicator contrast comparing the odds of an incident in each year from 2003 to 2005 with the baseline incident rate in 2002 (α = .05; 95% confidence limits; SPSS 13.0).
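For a single year-vs-baseline indicator, the exponentiated logistic-regression coefficient reduces to the familiar cross-product odds ratio with a Wald confidence interval, which can be sketched in a few lines. Only the 2001 figures (1041 reports after 19 693 procedures) come from the article; the later-year counts below are hypothetical, chosen merely to approximate the reported 39-per-1000 rate:

```python
import math

def odds_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Odds ratio of group A vs baseline B with a Wald 95% CI.

    With a single binary indicator, this equals the exponentiated
    logistic-regression coefficient for that indicator.
    """
    a, b = events_a, n_a - events_a   # events / non-events, group A
    c, d = events_b, n_b - events_b   # events / non-events, baseline
    or_ = (a * d) / (b * c)           # cross-product odds ratio
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical later year at ~39 reports per 1000 (820 of 21 000)
# vs the 2001 baseline at ~53 per 1000 (1041 of 19 693).
or_, lo, hi = odds_ratio_ci(820, 21000, 1041, 19693)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these assumed counts the interval lies entirely below 1, the pattern one would expect alongside the reported significant decline.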
Results

Data distillation from 1041 incident reports collected during 2001 was used to construct a standardized nomenclature. This nomenclature was refined during the subsequent 4 years with minor variation to comprise 16 general categories and 164 specific incident types (Table 1). Analysis of the data was restricted to 4343 reports filed between January 1, 2001, and December 31, 2005. These data were studied for the pattern and distribution of incidents (Table 2). The 6 most common incident categories composed 88.19% of all incidents reported. Each category, except for planned returns to the OR, was analyzed for component thematic elements and was used as the organizational basis for comparison of departmental differences and general benchmarking criteria (Table 3).
Combined planned (26.6%) and unplanned (24.3%) returns to the OR composed more than 50% of all clinical incidents (50.9%; 2212 of 4343) (Table 2). Planned returns to the OR composed 26.6% of all reported incidents (1156 of 4343) and represented indicated operative interventions for conditions newly developed during hospital admission not directly related to the original admitting diagnosis or operative procedure. The findings may serve as a benchmark for accurate reporting of unplanned returns to the OR.
Clinical incidents associated with unplanned returns to the OR (24.3%; 1056 of 4343) were all recognized in the postoperative period and could be directly related to the original operative procedure (Table 3). The most common issues were hemorrhage (290 events [27.8%]), wound and infectious complications requiring reexploration (254 [24.3%]), complications directly attributable to technical difficulties or errors associated with the original procedure (202 [19.3%]), and surgical interventions required because of a device-related failure or mishap (139 [13.3%]). Exploration for retained foreign bodies composed 10% of this category, or 1.3% of all unplanned returns.
Complications recognized during the operative procedure composed 14.8% of reported incidents (n = 643), including (1) unanticipated organ injuries (44.5%), (2) acute medical problems in the OR (22.7%), and (3) technical problems with the surgical procedure resulting in an alteration in the expected surgical outcome (9.2%) (Table 3).
Ambulatory conversions to admission represented cases originally scheduled for discharge after a planned ambulatory procedure that ultimately required admission. These cases composed 11.7% (n = 510) of all 4343 reported incidents and were related to (1) a modification in the planned surgical procedure or therapy (eg, antibiotics and drains) that the surgeon deemed a justification for admission (33.7%), (2) an acute medical problem recognized during the recovery period judged to be significant for admission (32.4%), and (3) a pain management issue believed to require admission (30.8%) (Table 3).
Laparoscopic procedures converted to open procedures were monitored as to the reason for conversion. These procedures occurred predominantly in general surgery, obstetrics/gynecology, urology, and thoracic surgery. Conversions composed 5.89% (n = 256) of all 4343 reported incidents and were related to (1) adhesions (41.7%); (2) anatomy (21.7%); (3) other technical problems, such as failure to maintain pneumoperitoneum or technical difficulty (24.61%); (4) bleeding requiring exploration to safely control (10.2%); and (5) unanticipated and recognized organ injury (1.6%) (Table 3).
Scheduled operative procedures cancelled the day of the planned surgery made up 4.84% (n = 210) of 4343 reported incidents. Acute medical problems (47.1%) and inadequate medical preparation for surgery (21.0%), that is, lack of required medical information, such as laboratory data or complete history and physical examination, composed 68.1% of all cancellations. The remaining incidents in this category were related to a change in the surgical indication or finding requiring reevaluation (8.1%), an anesthesia-related issue or cancellation (5.7%), patient refusal or anxiety (4.3%), or equipment- or supply-related cancellation (3.8%) (Table 4).
The remaining categories composed 11.8% of all incidents (n = 513) reported during the 4-year period and could not be subdivided into codified subcategories of significant statistical power.
Seven clinical departments and 6 divisions of general surgery were monitored during the 4-year study: general surgery (breast, cardiac, general, plastics, thoracic, and vascular), neurosurgery, obstetrics/gynecology, ophthalmology, orthopedics/spine, otolaryngology, and urology. Data summaries from incident reports were e-mailed weekly to department chairpersons, the designated quality improvement representative for that department, or both. Annual reports were generated for each department and the chief medical officer that listed in tabular form the number of incidents occurring in each category, presented annually for 4 years. A second department report, similar to the first, was generated by specific surgeon, followed by a brief synopsis of each incident. A third department report collated the incidents by hospital site, clinical category, and number of incidents. These reports served as robust management tools for department chairpersons to assess the prevalence, frequency, and severity of clinical incidents occurring in their departments and allowed for reasonably easy and focused reviews of relevant clinical topics, services, and surgeons.
A new report style, represented in Table 4, is constructed as a dashboard comparing the mean percentage of clinical incidents across 4 years for the entire institution vs individual departments. These data are useful in identifying trends. A statistical power has not been established in this example, but analysis of the reports by surgical specialty suggests that differences exist that may be important to quality initiatives.
A total of 4860 clinical incident reports were generated across 5 years from 103 681 operative procedures (Table 5). A significant decrease occurred in volume-adjusted clinical incident reports (from 53 to 39 reports per 1000 procedures) (P < .001). Analysis of the reports by surgical specialty demonstrates differences potentially important to quality initiatives (P < .005). Across 3 years of use after the baseline year (2002), reductions in incident reports were observed in ambulatory conversions (74% reduction), wasted implants (65%), skin breakdown (64%), complications in the OR (42%), laparoscopic conversions (32%), and cancellations (23%).
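The volume-adjusted comparison can be illustrated with a Pearson χ2 test on a 2 × 2 table of procedures with vs without a report. Only the 1041-of-19 693 baseline figures appear in the article; the comparison-year counts below are hypothetical, chosen to approximate 39 reports per 1000:

```python
import math

def chi2_2x2(events1, n1, events2, n2):
    """Pearson chi-square statistic (1 df) for a 2x2 table of
    events vs non-events in two groups."""
    a, b = events1, n1 - events1   # group 1: events / non-events
    c, d = events2, n2 - events2   # group 2: events / non-events
    n = n1 + n2
    return n * (a * d - b * c) ** 2 / ((a + c) * (b + d) * n1 * n2)

# 2001 baseline (from the article): 1041 reports / 19 693 procedures.
# Hypothetical comparison year at ~39 per 1000: 820 reports / 21 000.
stat = chi2_2x2(1041, 19693, 820, 21000)
# With 1 df, the critical value at P = .001 is 10.83; the statistic here
# far exceeds it, consistent with the reported P < .001.
print(round(stat, 1))
```

The test treats each procedure as one "opportunity," which is the sense in which the rates are volume adjusted.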
Comment

Physicians take pride in making clinical decisions for an individual patient based, whenever available, on data gleaned from expert opinion, randomized clinical trials, and evidence-based studies. Ironically, the same physicians, when confronted with an AE for an individual patient, use episodic judgment rather than systematic data analysis because crucial, actuarialized data are lacking. First, this study demonstrates the utility4 of a database to reveal patterns of problems that can subsequently be targeted for systemwide improvement before similar future events even occur. With systematic attention, fewer reports were generated, which not only aids patient quality but also diminishes the future time required to manage AEs. Second, the concept of continuous quality improvement is now embedded in the culture of manufacturing with, for example, Six Sigma quality initiatives.5 At the start of this study, the baseline number of reportable events was 53 per 1000 “opportunities,” more than 10 000 times that allowable for the production of household appliances, a glaring contrast that underscores the critical need to radically reduce variance toward perfect inpatient care quality. Significant improvement was achieved in select categories (eg, ambulatory conversions, 74%; wasted implants, 65%; skin breakdown, 64%; and complications in the OR, 42%), yet substantial improvement still will be required, and the system described allows for continuous improvement. Third, memorializing data highlights another management tool, the Pareto principle, which suggests that most problems stem from a minority of sources rather than an equal distribution of causes.6 In this study, approximately 88% of incident reports stemmed from only 6 incident categories. Therefore, efforts to improve future patient care quality and to prevent incidents can be focused in a logical, more efficient manner compared with episodic attention to individual AEs.
Fourth, the Hawthorne effect reveals that productivity can be improved merely through the act of observing individuals.7 Possibly, part of the reason for the increasing improvement in this study was the personal reporting system in addition to the administrative improvements for each problem category.
Another reason to develop a database is that, without data collection and analysis, the responsible evaluator's grasp of episodic information remains at the lowest level of the Bloom taxonomy.8 Bloom and fellow educators decades ago devised a useful framework for escalating educational abstraction for teaching students and for monitoring levels of understanding. The ascending levels are knowledge, comprehension, application, analysis, synthesis, and evaluation. Without actuarial data, the evaluator will be unable to elevate cognitively to synthesis and evaluation of trends for recommendations about system improvement. Therefore, a functioning database and feedback communication method must be in place to achieve quality improvement objectives.
The method in this study differs from classic incident reporting in that a weekly validation review was required to ensure accuracy and completeness of data and to identify those system elements responsible for or capable of affecting a meaningful intervention. The premise supposed that accurate information delivered to proper authority in an open reporting environment would result in improvement in outcomes. Resolution of each incident ended with immediate electronic notification of the appropriate clinical, administrative, and quality representative. This system functioned well in a large health care matrix organization over several sites, was developed from actual experience, and was practical to implement. We speculate that such a system should be extendable to other health care facilities.
Other efforts to evaluate error have focused on the development of specific taxonomies designed for different clinical settings and using varying methods. These include the Applied Strategies for Improving Patient Safety,9 the Australian Incident Monitoring Study,10 the Medical Error Reporting System–Transfusion Medicine,11,12 the Joint Commission on Accreditation of Healthcare Organizations Patient Safety Event Taxonomy (JCAHO-PSET),13 the Linnaeus Primary Care Collaborative,14 and the cognitive taxonomy.15 Each system classifies and collects events for inpatient or outpatient care settings with varying degrees of validation and using idiosyncratic terminology or complex classification frameworks.16 Data sources vary from retrospective malpractice closed claims10,17 to specific error types or sentinel events (retained foreign bodies, wrong-sided surgery,18 unplanned returns,19 anesthetic events,20 and respiratory events21) to types and locations of clinical care (ambulatory care,14 recovery room,22 or catheterization laboratory).23 The JCAHO-PSET was developed through a literature review of potentially relevant existing taxonomies and was tested by application to hospital-based sentinel events submitted to the JCAHO.13 Each type of evaluation effort is perforce limited to the scope of interest rather than being designed to capture the range of relevant incidents that a patient may sustain during the course of clinical care. This approach, however, also limits by design the communication of error to all relevant parties. This system differs in that we attempt to benchmark a broad range of clinical incidents based on data collected after more than 80 000 operative procedures and to organize them into relevant categories of related elements applicable to the dynamic range of a patient's OR experience. In addition, contemporaneous evaluation of incidents is performed and immediate notification of all involved parties is achieved. 
Thus, common sequential errors have been identified for the preoperative, operative, and postoperative phases of surgical care, and thematic elements of common error types have been identified. Once issues are validated, involved organizational elements are notified and the diverse process interventions required to sustain a meaningful improvement are simultaneously initiated. This system allows the effective real-time identification of adverse outcomes, evaluates management, improves learning, and establishes lines of accountability.24
This study has limitations. First, the reporting nursing staff might have been subject to an unknown bias to underreport events over time, which could have produced results that simulated success from the new system. However, certain incidents seem to be immune to bias. For example, a return to the OR for hemorrhage carries an obvious reporting imperative. Another concrete example, the surgical cancellation rate, is immediately definable, is beyond the control of a single individual, and could improve only if a greater proportion of scheduled preoperative patients actually received surgery. Second, the system does not close the loop on clinical incidents related to the practice of surgery but defers to the clinical leadership to follow up and intervene in specific clinical quality issues. However, because data are reviewed by the chief medical officer and the quality improvement hierarchy, a consistently high hemorrhage rate or incidental organ injury rate would not go unnoticed, driving the quality intervention at the department level. Nonetheless, process issues related to cancellations or ambulatory conversions, once identified, must be dealt with thematically by means of protocolization or guideline implementation at the organization level. Third, a cost-savings analysis was not performed. Initiation of such a plan is costly in fixed expenses and opportunity costs, yet enhanced patient quality yield is essential, and the cost savings almost certainly vastly outweigh the costs of the program. Finally, although the data gathered in this study were validated by the Incident Report Triage Committee, organized peer review and contemporaneous sharing of any identified process, physician, or system error was left to department leadership and was only indirectly observed by the annual reduction in incidents. The implications of this designed “failure to close the loop” on the organization's ability to maximally improve quality may be significant.
This article describes the development and deployment of a practical and effective electronic classification and triage system for OR and perioperative clinical incident reports. Implementation of a systematic method for collection, triage, distribution, and analysis resulted in a statistically significant decrease in the number of incident reports generated. These data suggest that actuarialized and effective review, communication, and summary feedback of clinical incident reports can produce a decrease in surgical AEs.
Correspondence: Anthony C. Antonacci, MD, SM, Department of Medical Affairs and Quality Improvement, Christ Hospital, 176 Palisade Ave, Jersey City, NJ 07024 (email@example.com).
Accepted for Publication: September 18, 2007.
Author Contributions: Study concept and design: Antonacci and Lam. Acquisition of data: Antonacci, Lam, and Lavarias. Analysis and interpretation of data: Antonacci, Lavarias, Homel, and Eavey. Drafting of the manuscript: Antonacci and Eavey. Critical revision of the manuscript for important intellectual content: Antonacci, Lam, Lavarias, Homel, and Eavey. Statistical analysis: Homel. Administrative, technical, and material support: Antonacci, Lam, Lavarias, and Eavey. Study supervision: Antonacci.
Financial Disclosure: Dr Antonacci and Mr Lam report holding equity ownership in Outcome Management Systems.
Additional Contributions: Joan Thorsen, Marco Garcia, BA, April Thorne, and Meryl Gold, MBA, MPH, provided tremendous efforts and dedication to improving the quality of care and safety of these patients. Harold Laufman, MD, and Eric Schneider, MD, MSc, provided constructive comments and critique of this manuscript.