Figure 1. The Immersive Mock Operating Theater Environment at University College London Hospital, London, England
Still images from the video feed of the functioning mock operating theater are shown.

Figure 2. Intertool Reliability of Anesthetists’ Non-Technical Skills (ANTS) vs Non-Technical Skills Scale (NOTECHS)
Scores for all surgeons who were assessed with both ANTS and NOTECHS are shown to enable a comparison of scores obtained using these 2 tools.

Figure 3. Intertool Reliability of Anesthetists’ Non-Technical Skills (ANTS) vs Non-Technical Skills for Surgeons (NOTSS)
Scores for all surgeons who were assessed with both ANTS and NOTSS are shown to enable a comparison of scores obtained using these 2 tools.

Figure 4. Intertool Reliability of Anesthetists’ Non-Technical Skills (ANTS) vs Observational Teamwork Assessment for Surgery (OTAS)
Scores for all surgeons who were assessed with both ANTS and OTAS are shown to enable a comparison of scores obtained using these 2 tools.

Table. List of Scenarios Used in the Immersive Simulation Course
Original Investigation
August 2016

Feasibility of Human Factors Immersive Simulation Training in Ophthalmology: The London Pilot

Author Affiliations
  • 1National Institute for Health Research Biomedical Research Centre at University College London Institute of Ophthalmology, Moorfields Eye Hospital, London, England
  • 2Moorfields Eye Hospital, London, England
  • 3Department of Computing, University of Surrey, Surrey, England
  • 4Simulation and Clinical Skills Centre, University College London Hospital, London, England
JAMA Ophthalmol. 2016;134(8):905-911. doi:10.1001/jamaophthalmol.2016.1769
Abstract

Importance  Human factors training can enhance teamworking and reduce error. It is used regularly in certain medical disciplines, but its use has not been established for ophthalmology to our knowledge.

Objective  To explore the feasibility of providing immersive simulation human factors training for ophthalmic surgical teams.

Design, Setting, and Participants  Prospective scenario-based simulation and concept description at University College London Hospital and Moorfields Eye Hospital, London, England, from December 12, 2013, to March 13, 2014. At both sites, fully immersive simulated operating theater environments were used, comprising live interactive communication with patients and theater staff, full anesthetic and operating facilities, replicated patient notes, active vital signs, and the ability to contact surgical or anesthetic teams outside of the theater via telephone. Participants were consultant (attending) and trainee ophthalmic surgeons and anesthetists, operating department assistants and practitioners, and ophthalmic nursing staff.

Main Outcomes and Measures  The following 4 previously validated rating tools for nontechnical skills were applied to a replicated series of scenarios based on actual patient safety incidents at Moorfields Eye Hospital and in the literature: Observational Teamwork Assessment for Surgery (OTAS), Non-Technical Skills Scale (NOTECHS), Anesthetists’ Non-Technical Skills (ANTS), and Non-Technical Skills for Surgeons (NOTSS). The Pearson product moment correlation coefficient was calculated for each pair of scoring tools. Intertool and interassessor reliability was established. Interassessor consistency was compared by calculating a normalized standard deviation of scores for each tool across all assessors.

Results  Twenty simulation scenarios, including wrong intraocular lens implantation, wrong eye operation, wrong drug administration, and wrong patient, were provided. The intertool correlations were 0.732 (95% CI, 0.271-0.919; P = .01) for NOTECHS vs ANTS, 0.922 (95% CI, 0.814-0.968; P < .001) for NOTSS vs ANTS, 0.850 (95% CI, 0.475-0.964; P < .001) for OTAS vs ANTS, 0.812 (95% CI, 0.153-0.971; P = .03) for OTAS vs NOTECHS, 0.716 (95% CI, −0.079 to 0.955; P = .07) for OTAS vs NOTSS, and 0.516 (95% CI, −0.020 to 0.822; P = .06) for NOTECHS vs NOTSS. The normalized standard deviations of scores obtained using each tool across all assessors were 0.024 (95% CI, 0.014-0.091) for NOTSS, 0.060 (95% CI, 0.034-0.225) for OTAS, 0.068 (95% CI, 0.041-0.194) for ANTS, and 0.072 (95% CI, 0.043-0.206) for NOTECHS.

Conclusions and Relevance  This study describes the feasibility of a high-fidelity immersive simulation course specifically for ophthalmic surgical teams. The ANTS and NOTSS had the highest intertool and interrater consistency, respectively. Human factors simulation in ophthalmology offers a new method of teaching team members, with the potential to reduce serious ophthalmic patient safety events. Further work will define its usefulness and practical applications.

Introduction

Over the past 2 decades, there has been increasing recognition that human error is an important challenge to providing safe health care. Methods of improving safety, such as protocols and recording competencies, have had a positive effect on addressing such errors, but their success is limited by human factors.1-4 While our knowledge and recognition of human factors are rapidly evolving, our training systems have not always kept abreast. This deficit is particularly true in surgical specialties, such as ophthalmology, compared with other medical disciplines and high-risk industries, such as aviation.

Three decades ago, root cause analysis by the National Aeronautics and Space Administration (Washington, DC) revealed that 70% of commercial aviation incidents were due to human factors (poor decision making, interpersonal communication lapses, and leadership failures), as well as loss of situational awareness (the ability to identify, process, and comprehend critical information relating to the aircraft and key team members during a developing situation).5,6 Humans can often fail to notice subtle or rare events when multitasking under pressure, instead favoring prior expectation over current information.5,6 This finding spawned the development of the “Swiss cheese” systems-based model of accident causation. The model proposes that there are multiple barriers in place preventing mistakes from occurring due to human error, but each is imperfect and contains holes, rather like slices of Swiss cheese. When the holes happen to line up in all barriers, an accident occurs. Therefore, the frequency of mistakes occurring through human error can be reduced by adding more barriers. Crew resource management training was developed to directly address such human error and involved simulation of real safety failure events.7 Training airline pilots in a safe, controlled, but realistic immersive simulation environment was effective at reducing errors.

These findings are directly relevant to the operating theater, where working under pressure can diminish alertness to impending complications. Evidence suggests that up to 43% of intraoperative surgical errors can be related to communication problems2 (a trend also seen more widely in patient care2,4). A recent systematic review of 138 studies investigating the root cause of surgical never events found human factors to be the most common cause (as opposed to poor medical knowledge, machine failure, or technical factors), with a substantial need for improved communication skills.8 The introduction of the World Health Organization surgical safety checklist has contributed to a reduction in morbidity and mortality,4 but it does not eliminate human error–related adverse events. Mitigation strategies for such errors in health care are already being implemented,3 and evidence of improved team performance after participating in human factors immersive simulation training has been shown for the highest-risk specialties.9,10

Ophthalmology is at a particularly high risk of human error safety events due to high throughputs of interventional procedures (>396 180 cataract operations in 2014-2015 and >220 000 intravitreous injections of ranibizumab in 2011 in England alone),11 with complex data analysis and rapid decision making required.12 Addressing human factors in ophthalmology has the potential to dramatically improve patient safety.10 In this pilot study, we explore the feasibility of providing immersive simulation training for ophthalmic surgical teams. Scenarios were developed based on real-life error events, which were replicated in a realistic (high fidelity) simulated operating theater.

The ultimate aim of introducing human factors simulation training into ophthalmology is to reduce the number of incidents and never events13 in ophthalmic operating theaters. However, we did not set out to measure the effect of the training on the frequency of such events because whole teams across an entire institution or region would most likely need to be trained before a noticeable change occurred, and it would take substantial time to demonstrate a reduction in actual events. Rather, this study aimed to demonstrate the feasibility of the training and the validity of candidates’ performance metrics, with a view to future work to demonstrate an actual reduction in serious safety events in ophthalmology.

Box Section Ref ID

Key Points

  • Question Is it feasible to provide an immersive surgical simulation course focusing on human causes of error in eye surgery?

  • Findings Twenty simulation scenarios, including wrong intraocular lens implantation, wrong eye operation, wrong drug administration, and wrong patient, were provided. The Anesthetists’ Non-Technical Skills (ANTS) and Non-Technical Skills for Surgeons (NOTSS) assessment tools, previously validated for assessment of human factors in anesthetics and general surgery, were found to have high intertool and interassessor consistency, respectively, when applied to the formative assessment of ophthalmic professionals.

  • Meaning This pilot study demonstrates a model that appears feasible for the provision of human factors training in ophthalmic surgery through immersive simulation.

Methods

The study received institutional review board approval from the clinical research governance committee at Moorfields Eye Hospital, London, England. All individuals gave written informed consent before participation. The study dates were December 12, 2013, to March 13, 2014.

Scenarios

A literature review of serious incidents and never events and a root cause analysis review of all such events occurring at Moorfields Eye Hospital between January 1, 2009, and October 31, 2013, were conducted. Scenarios were developed surrounding the causative factors, which were replicated to explore and measure the team function and human behaviors that determined detection of the problem or a recurrence of the error.

Scenarios focused on cataract surgery and intravitreous injections, with each incorporating a real patient safety incident or near miss that had occurred as a result of human error. Surgical teams were not prewarned about the nature of these errors and were asked to continue their normal routines, which, if sufficiently robust, would prevent the error from recurring. Debriefing sessions were used to undertake relevant staff training to enhance live performance in the future, and the errors were captured and evaluated by the assessment tools (see the “Rating Tools” subsection below). The scenarios were also intended to serve as a template so that any errors occurring in the future could be adapted for training using the same principles.

Setting

The settings were the clinical simulation theater at University College London Hospital and the clinical tutorial complex at Moorfields Eye Hospital, both in London, England. The facility at University College London Hospital is an entire replicated operating theater environment. A mannequin serving as the patient interacts with staff, communicating via an integrated speaker system (Figure 1). Pigs’ eyes fixed in the mannequin head allow individuals to perform steps of cataract surgery and intravitreous injections just as they would in a real operating theater.

All standard operating theater equipment was available, including fully equipped ophthalmic instrument trolleys, an anesthetic machine displaying the patient’s simulated physiological parameters, a full set of the patient’s notes, and an operating list on the wall. A telephone was available that could be used to call various people, such as the consultant or attending on call or the cardiac arrest team.

Investigators (G.M.S., C.J., and P.S.) were able to direct the scenarios from a control room adjoining the theater behind a 1-way mirror and were also able to change the physiological parameters of the patient as displayed on the anesthetic machine. Actors simulated the patient’s voice and the voice of any team members who were telephoned.

Scenarios performed within the mock operating theater were filmed through a multicamera capture system with live feeds to a rating room, where participants’ performance was scored by experienced ophthalmologists and nurses (M.H., P.S., and other nonauthors). Playback facilities to another room were used for debriefing with participants immediately after each scenario had finished.

At Moorfields Eye Hospital, a similar setup was used. The virtual operating theater was created using mobile simulation equipment, and an ophthalmic simulator (Eyesi Surgical; VRmagic) was used in lieu of a patient.

Participants

Trainee ophthalmologists, attending (consultant) ophthalmologists, and nurses were scored for their performance on the team. The team also included anesthetists, operating department practitioners, operating department assistants, ophthalmic surgeons, and nurses as actors (G.M.S. and C.J.).

Debriefing

Each scenario was followed by a detailed debriefing with senior attending ophthalmologists (M.H. and P.S.) and video playback to enhance learning. The sessions adhered to the principles of a standard Anesthesia Crisis Resource Management debriefing session.14

Rating Tools

Experienced ophthalmologists and nurses used 4 previously validated rating tools for nontechnical skills that focus on human factors, including decision making, interpersonal communication, situational awareness, task management, and attention to patient safety. For this study, each assessor was asked to complete only 2 assessment tools per candidate per scenario. There were 4 to 5 assessors per individual. Raters had undergone accredited external training in the use of simulation. They were briefed on the use of the assessment tools. No collaboration was allowed between assessors during the scenarios or rating process. The 4 rating tools were Non-Technical Skills for Surgeons (NOTSS), Non-Technical Skills Scale (NOTECHS), Anesthetists’ Non-Technical Skills (ANTS), and Observational Teamwork Assessment for Surgery (OTAS).

The NOTSS15 was developed in 2006 as a checklist of observable behaviors to assess essential nontechnical skills required of surgeons. The NOTSS tool asks assessors to form a global opinion about 13 aspects of the surgeon’s nontechnical performance. It focuses on the surgeon’s awareness of any problems that might be about to arise, how he or she copes under pressure, whether he or she communicates effectively as a team leader, and his or her ability to make sound nonsurgical decisions when there is uncertainty (eg, when it is unclear whether appropriate consent is in place for a given procedure).

The surgical version of NOTECHS originated as an adaptation from an assessment tool of observable behaviors first used in aviation. It was validated in 2008 for assessment of surgical teams.16 The NOTECHS tool asks assessors 22 precise questions about nontechnical tasks that a surgeon is expected to perform, such as whether instructions are clear and polite or whether he or she debriefs with the team appropriately. The broad areas covered by these questions are communication and leadership skills, awareness of any potential problems that may be about to arise, and time and resource management.

The ANTS17 was developed in 2003 for use by anesthetists. Although originally validated for anesthetists, we considered all factors assessed to be also relevant to surgeons. It asks assessors to form a global opinion on 15 aspects of the surgeon’s nontechnical performance. It focuses on leadership, communication, situational awareness, decision making, and planning.

The OTAS18 was designed in 2004 to assess the teamworking skills of different team members simultaneously, in contrast to the other 3 tools that specifically assess just the person performing the procedure. It asks for a global assessment of communication, coordination, cooperation, leadership, and situational awareness for each team member before surgery, during surgery, and after surgery.

Statistical Analysis

Software programs (Python 2.7.0; Python Software Foundation and SciPy 0.16.0; The SciPy Community) were used for statistical analysis. Intertool reliability was evaluated by assessing the correlation between the scores given to the same candidate by each rating tool. The Pearson product moment correlation coefficient was calculated for each pair of scoring tools to provide a measure of how well the different assessment tools agreed with each other.
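
As a minimal sketch of this type of calculation (the data values, variable names, and per-candidate averaging below are illustrative assumptions, not the authors’ code or data), the correlation between 2 tools’ scores for the same candidates can be computed with SciPy’s pearsonr function; a Fisher z transformation is one common way to attach a 95% CI to the coefficient, although the article does not state which CI method was used.

import numpy as np
from scipy import stats

# Hypothetical per-candidate mean scores for 2 tools (normalized 0-1);
# illustrative values only, not the study data.
ants_scores = np.array([0.62, 0.71, 0.55, 0.80, 0.68, 0.74, 0.59, 0.77])
notss_scores = np.array([0.60, 0.74, 0.52, 0.83, 0.66, 0.76, 0.61, 0.79])

# Pearson product moment correlation coefficient and 2-sided P value.
r, p_value = stats.pearsonr(ants_scores, notss_scores)

# One common way to obtain a 95% CI for r is the Fisher z transformation
# (assumption; the article does not state its CI method).
n = len(ants_scores)
z = np.arctanh(r)
se = 1.0 / np.sqrt(n - 3)
ci_low, ci_high = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print("r = %.3f (95%% CI, %.3f-%.3f); P = %.3f" % (r, ci_low, ci_high, p_value))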

Interrater consistency achieved when using each of the 4 rating tools was compared by calculating a normalized standard deviation of scores obtained with each tool across all assessors. This value provided a measure of the average discrepancy between different assessors’ ratings of the same surgeon using the same assessment tool.
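
The article does not give the exact formula for this normalized standard deviation; the sketch below shows one plausible reading (an assumption, not the published method): for each candidate, take the standard deviation of the scores awarded by the different assessors using the same tool, divide by that tool’s score range, and average across candidates.

import numpy as np

def normalized_sd(scores_by_candidate, scale_min, scale_max):
    # scores_by_candidate: one inner list of assessor scores per candidate.
    scale_range = float(scale_max - scale_min)
    per_candidate = [np.std(s, ddof=1) / scale_range
                     for s in scores_by_candidate if len(s) > 1]
    return float(np.mean(per_candidate))

# Hypothetical ratings on a 1-4 scale from 4 assessors for 3 candidates;
# illustrative values only. Smaller output indicates greater consistency.
example_ratings = [[3, 3, 3, 3], [2, 2, 3, 2], [4, 4, 4, 3]]
print("Normalized SD: %.3f" % normalized_sd(example_ratings, 1, 4))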

Results

Twenty simulation scenarios were provided based on a bank of 8 scenarios (Table) over 4 days in the high-fidelity simulated operating theater described above. A total of 20 individuals were recruited, and each performed one scenario.

The development of scenarios followed a plan, do, study, and act cycle. Scenarios were planned and executed on each simulation date. Feedback from both participants and trainers was then studied, and the scenarios were improved. The total numbers of assessment tools completed were 70 ANTS by 5 assessors, 42 NOTECHS by 5 assessors, 59 NOTSS by 4 assessors, and 48 OTAS by 4 assessors. The scenarios that were developed are listed in the Table.

Intertool Reliability

Intertool reliability was assessed for each pair of scoring systems using the Pearson product moment correlation coefficient. The results were 0.732 (95% CI, 0.271-0.919; P = .01) for NOTECHS vs ANTS, 0.922 (95% CI, 0.814-0.968; P < .001) for NOTSS vs ANTS, 0.850 (95% CI, 0.475-0.964; P < .001) for OTAS vs ANTS, 0.812 (95% CI, 0.153-0.971; P = .03) for OTAS vs NOTECHS, 0.716 (95% CI, −0.079 to 0.955; P = .07) for OTAS vs NOTSS, and 0.516 (95% CI, −0.020 to 0.822; P = .06) for NOTECHS vs NOTSS.

Interassessor Consistency

Interassessor consistency was compared between tools by calculating a normalized standard deviation of scores obtained with each tool across all assessors. The smaller the normalized standard deviation, the greater is the interassessor consistency. The results were 0.024 (95% CI, 0.014-0.091) for NOTSS, 0.060 (95% CI, 0.034-0.225) for OTAS, 0.068 (95% CI, 0.041-0.194) for ANTS, and 0.072 (95% CI, 0.043-0.206) for NOTECHS.

Discussion

We present a previously unreported pilot study of human factors simulation in ophthalmology. While the development of wet laboratories, dry laboratories, and technical simulation has provided ophthalmic surgeons with a safe environment in which to practice technical skills,19 nontechnical skills surrounding behaviors, teamwork, and communication are less formally addressed by the current training curriculum, and there is a paucity of published literature on this topic related to ophthalmology. The work presented herein shows the feasibility of this type of training in ophthalmology, with the potential to enhance team performance and patient safety as a result.4,9,10,20

Simulators created for human factors training are designed to be both high fidelity (realistic detail) and immersive (the individual is surrounded by a convincing replica of the same environment he or she might expect in a real-life setting). Over the past 2 decades, medical specialties have developed such training, with anesthetics and acute life support leading the field in demonstrating enhanced team performance after participation.9,10 This training required the creation of realistic mock operating theaters that allow anesthetists to practice technical, teamworking, and communication tasks that may be required in an emergency.

As the population ages and the number of ophthalmic procedures performed increases, the chance of periprocedural errors is also likely to rise. These challenges are set against a backdrop of changes to working patterns within health systems. It has become increasingly common for members of surgical teams to be unfamiliar with each other and for the surgeon not to have been involved at every stage of the patient’s pathway. This situation heightens the need for robust systems to prevent communication omissions and patient safety incidents. Although risk management systems, such as adverse event and near miss reporting, have highlighted the causes of many such events, they do not always deliver feedback in a manner that fosters effective learning. The use of simulation may help with this problem. In addition, human factors and multidisciplinary team training is now mandated by the new national safety standards for invasive procedures in England.21

The scenarios in this study were based on real patient safety incidents, mirroring as closely as possible the situation that led to the events. Simulation-based studies often focus on either technical or human factors errors, but the 2 are not mutually exclusive. The present pilot study chiefly addressed human factors but also incorporated a technical element. Ensuring a fair focus on the technical task increased the authenticity of the scenarios. If the scenarios had been based purely on communication and teamworking skills, then the pressures associated with performing surgery would be removed, defeating the realism of the scenarios. Subjective feedback was submitted by participants, who in general found the sessions useful, relevant to their practice, and realistic.

Four assessment tools (NOTSS, NOTECHS, ANTS, and OTAS) were evaluated for their cross-validity in the context of ophthalmic scenarios. Candidates’ scores within ANTS were found to have the strongest correlation with the other 3 tools (Figure 2, 3, and 4). The CI of the true intertool correlation is narrow for ANTS vs NOTSS, whereas it is wider for ANTS vs OTAS and ANTS vs NOTECHS. It is likely that potential future studies with more data points would be able to produce a narrower CI for the true intertool correlation. However, the positive correlation of ANTS with the other tools was statistically significant in all cases. By contrast, there were discrepancies between the scores achieved in NOTSS vs NOTECHS and OTAS. A likely explanation is the subtly different “soft” skills being scrutinized by NOTSS compared with NOTECHS and OTAS. The NOTSS focuses predominantly on information processing, whereas NOTECHS and OTAS focus on team rapport. The discrepancies between the scores produced using these tools suggest that the different emphasis of each may offer complementary information. The ANTS focuses on both team rapport and information processing, explaining its strong correlation with all 3 other tools. These results cross-validate the use of the 4 rating tools during simulated ophthalmic surgery, which has not previously been tested to our knowledge.

The 4 rating tools were also evaluated for their internal and interassessor consistency by calculating a normalized standard deviation of scores obtained using each system. In general, the tools that showed higher internal consistency were also the ones that were perceived by the assessors to be most relevant to our scenarios. The NOTSS provided by far the highest degree of internal consistency, followed by OTAS and ANTS. However, few OTAS scoring sheets were completed fully because of its emphasis on preoperative and postoperative performance rather than on the intraoperative phase that was the main focus of our scenarios. This deficit was in contrast to ANTS, which was thought to be easy to use and relevant to our scenarios. The NOTECHS scores were the least internally consistent of the 4 rating tools and also had the highest proportion of fields scored as not applicable, implying that it was the least relevant to this task. The NOTECHS was specifically developed for surgeons (rather than anesthetists), but several of its questions were less relevant to ophthalmic surgery than to general surgery. For example, there was strong agreement between assessors that consistent monitoring of a patient’s parameters by the surgeon throughout routine cataract surgery or intravitreous injection is not essential.

Taking the validity, internal consistency, and relevance into account, ANTS and NOTSS were the best-performing tools in this pilot study, and we recommend their use in future simulation sessions. Given the good interassessor reliability established here, only 2 assessors would be required.

Conclusions

This pilot study demonstrates the feasibility of human factors simulation in ophthalmology and provides a template for the continuous development of new scenarios as further patient safety incidents occur. Our pilot study avoids past limitations of simulation training in which either human factors or technical errors are addressed in isolation. A next step in the development of this work will be to determine the optimal technical skills assessment tools, such as the International Council of Ophthalmology’s Ophthalmology Surgical Competency Assessment Rubric22 or the Objective Structured Assessment of Cataract Surgical Skill,23 to be used in addition to a nontechnical NOTSS or ANTS assessment. We believe that learning derived through this model will be of higher educational value and provide a more holistic assessment of surgeons’ and surgical teams’ performance. After further validation, we plan to implement human factors training for entire surgical teams in a large center or region to enable its effect on never event frequency to be assessed. Ultimately, the goal of the simulation program’s development is to become a formative assessment for entire surgical teams, including medical, nursing, and auxiliary staff, with demonstrable improvement in team functioning and a measurable reduction in serious ophthalmic safety events.

Article Information

Submitted for Publication: December 28, 2015; final revision received April 18, 2016; accepted April 25, 2016.

Corresponding Author: George M. Saleh, BSc, MBBS, FRCS, FRCOphth, National Institute for Health Research Biomedical Research Centre at University College London Institute of Ophthalmology, Moorfields Eye Hospital, 162 City Rd, London EC1V 2PD, England (george.saleh@moorfields.nhs.uk).

Published Online: June 16, 2016. doi:10.1001/jamaophthalmol.2016.1769.

Author Contributions: Drs Saleh and Wawrzynski had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Saleh, Wawrzynski, Flanagan, Hingorani, John, Sullivan.

Acquisition, analysis, or interpretation of data: Saleh, Wawrzynski, Saha, Smith, Hingorani, Sullivan.

Drafting of the manuscript: Saleh, Wawrzynski, Hingorani.

Critical revision of the manuscript for important intellectual content: Saleh, Wawrzynski, Saha, Smith, Hingorani, John, Sullivan.

Statistical analysis: Smith.

Obtained funding: Saleh.

Administrative, technical, or material support: Saleh, Wawrzynski, Saha, Flanagan, Hingorani, John, Sullivan.

Study supervision: Saleh, Hingorani, Sullivan.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and none were reported.

Funding/Support: This study was funded by the National Institute for Health Research Biomedical Research Centre at Moorfields Eye Hospital National Health Service Foundation Trust, Simulation and Technology-Enhanced Learning Initiative (STeLI), and University College London Institute of Ophthalmology. We acknowledge an unrestricted grant from the Special Trustees of Moorfields Eye Hospital.

Role of the Funder/Sponsor: The sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

References
1. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ. 2001;322(7285):517-519.
2. Gawande AA, Zinner MJ, Studdert DM, Brennan TA. Analysis of errors reported by surgeons at three teaching hospitals. Surgery. 2003;133(6):614-621.
3. Kohn LT, Corrigan JM, Donaldson MS, eds; Institute of Medicine Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
4. Haynes AB, Weiser TG, Berry WR, et al; Safe Surgery Saves Lives Study Group. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360(5):491-499.
5. Halliday J, Carpenter RH. The effect of cognitive distraction on saccadic latency. Perception. 2010;39(1):41-50.
6. Simons DJ, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception. 1999;28(9):1059-1074.
7. Helmreich RL, Merritt AC, Wilhelm JA. The evolution of Crew Resource Management training in commercial aviation. Int J Aviat Psychol. 1999;9(1):19-32.
8. Hempel S, Maggard-Gibbons M, Nguyen DK, et al. Wrong-site surgery, retained surgical items, and surgical fires: a systematic review of surgical never events. JAMA Surg. 2015;150(8):796-805.
9. Holzman RS, Cooper JB, Gaba DM, Philip JH, Small SD, Feinstein D. Anesthesia Crisis Resource Management: real-life simulation training in operating room crises. J Clin Anesth. 1995;7(8):675-687.
10. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306(9):978-988.
11. Health & Social Care Information Centre. http://www.hscic.gov.uk. Accessed December 16, 2015.
12. National Health Service England. Serious incident framework: supporting learning to prevent recurrence. https://www.england.nhs.uk/wp-content/uploads/2015/04/serious-incidnt-framwrk-upd.pdf. Published March 27, 2015. Accessed December 16, 2015.
13. National Health Service England. Never events list 2015/16. https://www.england.nhs.uk/wp-content/uploads/2015/03/never-evnts-list-15-16.pdf. Published March 27, 2015. Accessed December 16, 2015.
14. Howard SK, Gaba DM, Fish KJ, Yang G, Sarnquist FH. Anesthesia Crisis Resource Management training: teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med. 1992;63(9):763-770.
15. Yule S, Flin R, Paterson-Brown S, Maran N, Rowley D. Development of a rating system for surgeons’ non-technical skills. Med Educ. 2006;40(11):1098-1104.
16. Sevdalis N, Davis R, Koutantji M, Undre S, Darzi A, Vincent CA. Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196(2):184-190.
17. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists’ Non-Technical Skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580-588.
18. Healey AN, Undre S, Vincent CA. Developing observational measures of performance in surgical teams. Qual Saf Health Care. 2004;13(suppl 1):i33-i40.
19. Saleh GM, Lamparter J, Sullivan PM, et al. The International Forum of Ophthalmic Simulation: developing a virtual reality training curriculum for ophthalmology. Br J Ophthalmol. 2013;97(6):789-792.
20. Simon JW, Ngo Y, Khan S, Strogatz D. Surgical confusions in ophthalmology. Arch Ophthalmol. 2007;125(11):1515-1522.
21. National safety standards for invasive procedures. http://www.england.nhs.uk/patientsafety/never-events/natssips/. Accessed December 16, 2015.
22. Golnik C, Beaver H, Gauba V, et al. Development of a new valid, reliable, and internationally applicable assessment tool of residents’ competence in ophthalmic surgery (an American Ophthalmological Society thesis). Trans Am Ophthalmol Soc. 2013;111:24-33.
23. Saleh GM, Gauba V, Mitra A, Litwin AS, Chung AK, Benjamin L. Objective Structured Assessment of Cataract Surgical Skill. Arch Ophthalmol. 2007;125(3):363-366.