Figure 1. The structured Human Reliability Analysis (HRA) feedback form used in the face and content validity survey.

Figure 2. The Human Reliability Analysis of Cataract Surgery tool. AC indicates anterior chamber; EEM, external error modes.

Figure 3. The total number of errors performed per task for each surgeon group: group 1 performed fewer than 50 operations; group 2, between 50 and 250 operations; and group 3, more than 250 operations.

Figure 4. The effect of surgical experience on the total number of errors observed in the 3 groups.

Table 1. The 10 External Error Modes Used to Describe a Technical Error Observed During the Assessment of Each Recorded Case

Table 2. Face and Content Validity Survey Results

Table 3. Data for Executional and Procedural Errors Observed for Each Generic Task Performed by Each Surgeon Group
Clinical Sciences
February 1, 2008

Human Reliability Analysis of Cataract Surgery

Author Affiliations: Departments of Ophthalmology, St James' University Hospital, West Yorkshire (Dr Gauba), and Royal Surrey County Hospital, Guildford (Drs Tsangaris, Tossounis, Mitra, McLean, and Saleh), and Moorfields Eye Hospital, London (Dr Saleh), England.

Arch Ophthalmol. 2008;126(2):173-177. doi:10.1001/archophthalmol.2007.47
Abstract

Objective  To evaluate the use of the Human Reliability Analysis of Cataract Surgery tool to identify the frequency and pattern of technical errors observed during phacoemulsification cataract extraction by surgeons with varying levels of experience.

Design  Observational cohort study. Thirty-three consecutive phacoemulsification cataract operations were performed by 33 different ophthalmic surgeons with varying levels of operative experience: group 1, fewer than 50 procedures; group 2, between 50 and 250 procedures; and group 3, more than 250 procedures. Face and content validity were surveyed by a panel of senior cataract surgeons. The tool was applied to the 33 randomized and anonymous videos by 2 independent assessors trained in error identification and correct tool use. Task analysis using 10 well-defined end points and error identification using 10 external error modes were performed for each case. The main outcome measures were number of errors performed per task, nature of performed errors (executional or procedural), and surgical experience of operating surgeon.

Results  Analysis of 330 constituent steps of 33 operations identified 228 errors, of which 151 (66.2%) were executional and 77 (33.8%) were procedural. The overall highest error probability was associated with sculpting, followed by fragmentation of the nucleus; this was most evident in group 1. Surgeons in group 3 proportionally performed more errors during removal of soft lens matter than those in group 1 or 2. Surgical experience had a significant effect on the number of errors, with a statistically significant difference among the 3 groups (P < .001).

Conclusions  The Human Reliability Analysis of Cataract Surgery tool is useful for identifying where technical errors occur during phacoemulsification cataract surgery. The study findings, including the high executional error rate, could be used to enhance and structure resident surgical training and future assessment tools. Face, content, and construct validity of the tool were demonstrated.

The format of surgical skill assessment has undergone changes in recent years, with a shifting trend from the apprenticeship model to standardized objective methods of assessment. Training schemes in which a senior supervising surgeon assesses the technical proficiency and progress of trainees have been shown to be subjective, with significant interobserver variability even among experts in their field.1-3 Objective methods of surgical performance evaluation and monitoring have been deemed essential to consistently identify and correct technical errors.1,4 Increasing pressure by peers, the media, and the public on surgical performance and outcomes has further highlighted the need for the development of more objective methods of assessment.5,6

Human Reliability Analysis (HRA) is a tool that was originally developed to improve human performance and safety in high-risk industry,7,8 where it has been applied with considerable success for many years. The underlying principle is error analysis. This entails a qualitative and quantitative study of the mechanisms leading to technical error and human performance shaping factors.7 A modified version of the HRA tool, the Observational Clinical Human Reliability Assessment, has been applied to assess technical skill in laparoscopic cholecystectomy and other general surgical procedures; HRA was subsequently established as a validated system of objective surgical skill assessment.9-12

The aim of this study was to evaluate the HRA process in identifying the frequency and pattern of technical errors observed during phacoemulsification cataract extraction performed by surgeons with varying levels of operative experience.

Methods

Video recordings were taken of consecutive phacoemulsification cataract operations performed by different ophthalmic surgeons from 3 UK National Health Service cataract surgery centers. Ethics committee approval was sought before video recording, and individual patient consent was obtained for all recorded videos. Complete cataract surgical operations were recorded through the operating microscope. Patients with ophthalmic comorbidity, poor pupil dilation, mature cataract, previous trauma, axial length outside the range of 22 to 26 mm, or other high surgical risk factors identified at preoperative assessment were excluded. Therefore, all included cases were deemed by us to have been suitable for the most junior surgeon in the cohort to undertake, thereby promoting comparative consistency. The videotapes were sent to an independent technician who digitized them using video-editing software (Adobe Premiere Pro 1.5; FFCSoftware Co Ltd, Kingston, New York). The technician also removed any logos or other characteristics that could identify the surgeon or the training unit. The digitized videotapes were coded, anonymized, and randomized.

The recorded cases were divided according to the cumulative phacoemulsification cataract operative experience of the surgeon. Group 1 had performed fewer than 50 procedures; group 2, between 50 and 250 procedures; and group 3, more than 250 procedures.

A panel of 16 senior phacoemulsification cataract surgeons was assembled to establish the face and content validity of the tool. The panel also set out to identify thresholds of error identification for this procedure in view of differing surgical techniques. To achieve the latter objective, the panel identified a “gold standard” cataract case, suitable for the most junior cohort of surgeons and free from concurrent pathological features, which was used to set error thresholds. The face and content validity survey was performed using a structured feedback form (Figure 1). There was no preference as to the technique of phacoemulsification performed because the generic end points of the Human Reliability Analysis of Cataract Surgery (HRACS) tool were deemed applicable to most phacoemulsification techniques. The HRACS tool was modified and refined based on the structured feedback received from the expert panel; the final version used in the study is shown in Figure 2. Two independent senior ophthalmic surgical trainers then applied the HRACS error analysis tool to the videos. Both assessors were surgical trainers, and each had more than 10 years of cataract surgical experience. They were trained in error identification, including viewing of the gold standard cataract case, and in the correct use of the HRACS tool.

Error identification

The definition of human error used in the present study is “something which has been done which was: (i) not intended by the actor, (ii) not desired by a set of rules or an external observer, or (iii) that led the task or system outside acceptable limits.”11(p1216) This definition was agreed upon at the Bellagio Conference on Human Error and has been previously used in earlier Observational Clinical Human Reliability Assessment surgical studies.9-11 Similar principles of error identification were applied to cataract surgery in this study, in which an error was defined as an action that was clearly not intended by the surgeon; an action that was not performed to a predetermined standard, as set by our panel of expert cataract surgeons; or an action that increased the likelihood of a negative consequence or was not within acceptable safety limits of the surgical procedure.

A manual procedural task may be erroneously performed in a number of ways, referred to as external error modes in the Systematic Human Error Reduction and Prediction Approach, the original project on human error in industry.7 The modified HRA method of error assessment is an observational technique that involves breaking down any manual procedure sequentially into distinct tasks. Ten manual external error modes are used to define and count errors enacted in each task.

Using this as a platform, a categorization of human error was developed by Joice et al10 at the Surgical Skills Unit, University of Dundee, Dundee, Scotland (Table 1).

This comprehensive set of external error modes includes the types of observable errors committed when performing a manual procedure. The 10 external error modes can be divided into 2 groups: 1 to 6, which are procedural error modes, disrupting the correct sequence of procedural steps; and 7 to 10, which are executional error modes, implying incorrect manipulation of instruments and tissues.9-11,13 Grouping errors in this manner determines the nature of corrective measures: procedural error modes may be reduced by initiating a standardized operative task sequence, while executional error modes may be minimized by practicing surgical skills.13
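Purely as an illustration, the grouping described above could be expressed in a short script. The mode numbering (1-6 procedural, 7-10 executional) follows Table 1; the function name, variable names, and example data are hypothetical.

```python
# Hypothetical sketch: tally procedural vs executional errors from a list of
# observed external error mode numbers, following the grouping in the text
# (modes 1-6 procedural, modes 7-10 executional).

PROCEDURAL_MODES = set(range(1, 7))    # modes 1-6: disrupt the correct task sequence
EXECUTIONAL_MODES = set(range(7, 11))  # modes 7-10: incorrect manipulation of instruments/tissues

def classify_errors(observed_modes):
    """Return counts of procedural and executional errors for one case."""
    procedural = sum(1 for m in observed_modes if m in PROCEDURAL_MODES)
    executional = sum(1 for m in observed_modes if m in EXECUTIONAL_MODES)
    return {"procedural": procedural, "executional": executional}

# Example: error modes recorded for one hypothetical case
print(classify_errors([2, 7, 7, 9, 4, 10]))  # {'procedural': 2, 'executional': 4}
```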

Task analysis

The procedure of cataract extraction was divided into 10 generic tasks similar to the Observational Clinical Human Reliability Assessment tool. These tasks were selected as end points because they are well-defined, observable, and found in all techniques of phacoemulsification surgery; their sequential completion is necessary for the operation to proceed.

Data analysis

For every recorded procedure, each of the 10 components forming the operative tasks was observed to record errors performed and their external modes. Errors were analyzed for each component task and for surgical experience. The error probability for each task was calculated using the following formula: % Error Probability of Task = 100×(No. of Times the Task Was Erroneously Performed/Total No. of Times the Task Was Performed).
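A minimal sketch of this calculation is shown below; the per-task counts are invented placeholders for demonstration only and do not correspond to the study data.

```python
# Minimal sketch of the task error probability calculation described above.
# Counts shown here are illustrative, not the study's actual data.

def error_probability(times_erroneous, times_performed):
    """% Error Probability of Task = 100 * (erroneous performances / total performances)."""
    return 100.0 * times_erroneous / times_performed

# Hypothetical per-task counts: task name -> (times erroneously performed, total times performed)
task_counts = {
    "capsulorrhexis": (5, 33),
    "sculpting of nucleus": (12, 33),
}
for task, (erroneous, total) in task_counts.items():
    print(f"{task}: {error_probability(erroneous, total):.1f}%")
```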

Quantitative error data were expressed as mean (SD). The Kruskal-Wallis test was used to evaluate statistical significance, set at P < .05.
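As a hedged example of how such a comparison might be run, the sketch below uses scipy.stats.kruskal, one standard implementation of the Kruskal-Wallis test. The per-surgeon error counts are placeholders; only the group sizes (9, 7, and 17 cases) mirror the study.

```python
# Sketch of a Kruskal-Wallis comparison of total error counts across the three
# experience groups, using SciPy. The counts below are placeholders only.
from scipy.stats import kruskal

group1_errors = [12, 9, 11, 10, 8, 13, 9, 10, 12]                    # <50 operations
group2_errors = [7, 6, 8, 5, 7, 6, 9]                                 # 50-250 operations
group3_errors = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 5, 2, 3]   # >250 operations

statistic, p_value = kruskal(group1_errors, group2_errors, group3_errors)
print(f"H = {statistic:.2f}, P = {p_value:.4f}")  # significance threshold set at P < .05
```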

Results

The results of the face and content validity survey are shown in Table 2. All feedback received was used to improve the tool. For instance, one panel member suggested adding wound enlargement (if appropriate) to the intraocular lens insertion step; the tool was amended accordingly. Other feedback included ensuring that the tool is always provided with the external error modes visible on the same page as the assessment components; this suggestion was put into practice during the study. The final version of the tool was applied to 50 recorded cataract videos; 33 were included in the study after application of the exclusion criteria previously mentioned (9 in group 1, 7 in group 2, and 17 in group 3). Each recorded case was completed in its entirety by a different operating surgeon. None of the included procedures was abandoned or required a more senior surgeon to intervene. All cases used a divide-and-conquer, stop-and-chop, or pure chop technique of phacoemulsification.

Analysis of 330 constituent steps of the 33 operations identified 228 errors. Of these, 151 (66.2%) were executional and 77 (33.8%) were procedural. Figure 3 graphically summarizes the number of errors performed per task by each of the 3 groups. Table 3 shows the mean (SD) of executional and procedural errors performed within each group, along with error probabilities for each task. The overall analysis suggests that the highest proportion of errors occurs during the engagement, sculpting, and rotation or manipulation of the nucleus followed by fragmentation of the nucleus.

Statistical analysis of the impact of surgical experience on the number of errors shows a statistically significant difference among the 3 groups (P < .001). This trend is illustrated by the box plots shown in Figure 4.
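A minimal plotting sketch in the style of Figure 4, assuming per-surgeon error totals have already been tallied into three lists, is shown below; matplotlib is used for illustration and the data are invented.

```python
# Illustrative box plot of total errors per surgeon, by experience group,
# in the style of Figure 4. Data here are placeholders, not the study data.
import matplotlib.pyplot as plt

groups = ["Group 1 (<50)", "Group 2 (50-250)", "Group 3 (>250)"]
errors_by_group = [
    [12, 9, 11, 10, 8, 13, 9, 10, 12],                    # group 1: <50 operations
    [7, 6, 8, 5, 7, 6, 9],                                 # group 2: 50-250 operations
    [3, 4, 2, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 5, 2, 3],   # group 3: >250 operations
]

plt.boxplot(errors_by_group)
plt.xticks([1, 2, 3], groups)
plt.xlabel("Surgeon experience group")
plt.ylabel("Total number of errors per case")
plt.title("Errors by experience group (illustrative data)")
plt.show()
```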

Comment

The study of human error in surgery seems to provide an attractive quality assurance strategy that may be of interest to surgical trainees and supervising surgeons. A better understanding of such errors could have a positive effect on learning, especially on the acquisition of technical skills.9 This would enable us to refine existing and future training and assessment systems by identifying underlying surgical performance error mechanisms among surgeons with varying levels of experience. Using the HRACS tool in this study, most errors identified were of the executional type (66.2%), suggesting that the surgeons largely followed the correct sequence of steps within the procedure but failed to execute some of the component tasks adequately. The study found a statistically significant difference in the number of errors committed among the 3 groups of ophthalmologists studied, which reflects their operative experience. This demonstrates construct validity of the HRACS tool when applied to phacoemulsification surgery.

Setting the acceptable standards for the procedure was a challenging and laborious task because it occasionally involved identifying arbitrary thresholds of error for some tasks based on the cumulative experience and consensus of the expert surgical panel. This was easier to do for potential procedural errors but more complex for executional errors. For instance, an incorrect sequence of tasks can easily be agreed on as erroneous; however, the threshold speed at which an action is classified as an “error” is not as easy to define. In this study, a gold standard cataract case was agreed on by the senior panel as the baseline on which error identification thresholds were set. This process may need to be emulated within each training program or research environment in which the tool is intended to be used so that appropriate standards can be set locally.

In this study, the assessors were both experienced surgeons and surgical trainers with considerable exposure to different phacoemulsification techniques. Despite this, they were trained in the process of error identification to help reduce the element of subjectivity and increase interassessor consistency. A limitation of HRACS as an assessment tool is that implementing the system adequately can be labor intensive and requires training in error identification. Further research on the interrater variability and reliability of the tool is under way and will help indicate the impact of this training on the eventual scores obtained.

Observational Clinical Human Reliability Assessment has been shown to enable objective tracking of errors in surgical performance for operations requiring a high level of motor control and human-machine interaction, such as minimal access surgery.11 Cataract extraction by phacoemulsification has a similar technical demand profile to this group of operations: the view available to the surgeon during the procedure is magnified and includes the instruments being used, but not the hands. Another advantage of this method is that it identifies hazard zones within an operation where surgical errors occur frequently and have an effect on clinical outcome. For phacoemulsification, the HRACS results show that the overall highest error probability was associated with engagement, sculpting, and rotation or manipulation of the nucleus. The error probability for this task was consistently high across the 3 experience levels, suggesting that it may be an important hazard zone of phacoemulsification cataract surgery overall. This suggests that HRACS may be used as a research tool to help identify hazard zones within the procedure, many of which seem to change with experience. The findings suggest that, apart from nucleus sculpting or manipulation, junior surgeons tend to have more difficulty with nucleus fragmentation, capsulorrhexis, and intraocular lens insertion than do senior surgeons. Senior surgeons performed proportionally more errors during removal of soft lens matter than did junior surgeons, an association that may not otherwise have been apparent. Having established such associations, this information can be used to guide assessment tool design, as was done when constructing the Objective Structured Assessment of Cataract Surgical Skill tool.14 The HRA process has also aided in enhancing surgical tool design in other specialties.12 A further application of this information would be in surgical skills training, in which methods to lower the executional error rate could target steps with higher error probability, of which nucleus engagement, sculpting, and manipulation seem to be paramount among junior surgeons. Such skills could be addressed more closely in wet laboratory settings among more junior surgeons to help reduce persistence of these errors as they progress through their surgical careers.

It has not been possible to correlate error rates with overall morbidity because this was beyond the scope of this study. However, the safety record of high-risk industry with operational policies firmly based on industrial HRA suggests that there is an inverse relationship between the two.9

Although we can never eliminate surgical error completely, we can attempt to reduce it. The first stage of this process should be identification of where and which errors actually take place during the surgical procedure. The HRACS tool used in this study provides valuable information to this effect.

Correspondence: Vinod Gauba, FRCOphth, MSc, PAMedEd, 33 Alder Hill Ave, Leeds, West Yorkshire LS6 4JQ, England (vgauba@aol.com).

Submitted for Publication: April 1, 2007; final revision received July 10, 2007; accepted July 11, 2007.

Financial Disclosure: None reported.

References
1. Moorthy K, Munz Y, Sarker SK, Darzi A. Objective assessment of technical skills in surgery. BMJ. 2003;327(7422):1032-1037.
2. Cuschieri A, Francis N, Crosby J, Hanna GB. What do master surgeons think of surgical competence and revalidation? Am J Surg. 2001;182(2):110-116.
3. Reznick RK. Teaching and testing technical skills. Am J Surg. 1993;165(3):358-361.
4. Scott DJ, Valentine RJ, Bergen PC, et al. Evaluating surgical competency with the American Board of Surgery In-Training Examination, skill testing, and intraoperative assessment. Surgery. 2000;128(4):613-622.
5. Mills RP, Mannis MJ. Report of the American Board of Ophthalmology task force on the competencies. Ophthalmology. 2004;111(7):1267-1268.
6. Lee AG, Volpe N. The impact of the new competencies on resident education in ophthalmology. Ophthalmology. 2004;111(7):1269-1270.
7. Stanton N, Hedge A, Brookhuis K, Salas E, Hendrick H, eds. Handbook of Human Factors and Ergonomics Methods. Boca Raton, FL: CRC Press; 2005:37-1 to 37-3.
8. Kirwan B. Human reliability assessment. In: Wilson JR, Corlett EN, eds. Evaluation of Human Work: A Practical Methodology. 2nd ed. London, England: Taylor & Francis; 1998:921-968.
9. Tang B, Hanna GB, Bax NM, Cuschieri A. Analysis of technical surgical errors during initial experience of laparoscopic pyloromyotomy by a group of Dutch pediatric surgeons. Surg Endosc. 2004;18(12):1716-1720.
10. Joice P, Hanna GB, Cuschieri A. Errors enacted during endoscopic surgery: a human reliability analysis. Appl Ergon. 1998;29(6):409-414.
11. Tang B, Hanna GB, Joice P, Cuschieri A. Identification and categorization of technical errors by Observational Clinical Human Reliability Assessment (OCHRA) during laparoscopic cholecystectomy. Arch Surg. 2004;139(11):1215-1220.
12. Malik R, White PS, MacEwen CJ. Using human reliability analysis to detect surgical error in endoscopic DCR surgery. Clin Otolaryngol Allied Sci. 2003;28(5):456-460.
13. Tang B, Hanna GB, Cuschieri A. Analysis of errors enacted by surgical trainees during skills training courses. Surgery. 2005;138(1):14-20.
14. Saleh GM, Gauba V, Mitra A, Litwin AS, Chung AKK, Benjamin L. Objective Structured Assessment of Cataract Surgical Skill. Arch Ophthalmol. 2007;125(3):363-366.