Commentary
October 2004

Striving for Imperfection: Facing Up to Human Error in Medicine

Arch Otolaryngol Head Neck Surg. 2004;130(10):1149-1151. doi:10.1001/archotol.130.10.1149

All humans make errors.1 All physicians are human. Therefore, all physicians make errors. This Aristotelian syllogism, while simple to understand, contains a wealth of unexplored territory for the medical profession in the 21st century.

Physicians confront error daily. This is hardly unique: every human working on a task confronts the possibility of imperfect performance. But unlike workers in most professions, physicians live with the awareness that when they err, other humans may suffer. This is a concept we grasp intellectually but, as long as we genuinely care about our patients, will always struggle with emotionally. Every one of us knows we can err and that our errors can cause suffering. But every one of us also knows the natural, human, and incredibly powerful emotional desire to avoid that reality.

Other professions confront high-stakes decisions daily. An error by an air traffic controller or a nuclear power plant engineer may cause immense suffering. But in most of these fields, errors that cause harm are rare. Very few pilots have to remember that they caused a crash. Almost every physician must remember a harm that they have caused (or may have caused) another human—often someone they knew well and had come to care for.

Neither we nor our patients believe that we should be so coldly rational and detached that we find it easy to confront and analyze errors that cause harm. We must care enough for our patients that it will always be emotionally difficult for us to acknowledge this. But we must also, somehow, begin to confront, analyze, study, and learn from our errors if we are to take the next great leap forward in quality of medical care.

The human brain and body commit errors in predictable ways.2 Scientific study of these patterns can lead to improvements in how we organize our workplaces, our medical records, and our care patterns to reduce both errors and the harm that errors cause.3-5

The science of human error, while no longer in its infancy, is not yet fully developed.2,6 Some fields have studied error rigorously for decades: aviation,7 anesthesia,8,9 and chemical manufacturing are examples. Other fields have not yet begun to determine what patterns of errors occur. In otolaryngology, we do not know whether technical errors or cognitive errors are more common, whether errors are more common in the office or the inpatient ward, or whether postcall fatigue or too many shift changes are more likely to cause errors. Nor do we know which types of errors are most likely to lead to injury. If we had these data, we could begin to devise strategies to anticipate, correct, and remediate the errors that occur in our specialty. This approach has been so extraordinarily fruitful in so many other fields that we believe there is an ethical obligation to begin using it throughout medicine.

The term error in medical practice is incredibly emotionally charged and linked tightly to a sense of failure and blame.10 This is detrimental to improvement. We stress that in fields in which the study of errors has been fruitful, there has been a conscious effort to recognize that all humans make errors. The goal is not to punish "bad" physicians. Instead, we need to design systems to prevent the normal errors made by good physicians from causing patient harm.

Medicine and surgery are extraordinarily complex activities, and the study of human error within them is correspondingly complex. To fully understand how errors affect our patients, we will need multifaceted and multidisciplinary approaches. One of those approaches will undoubtedly be a more rigorous study of psychomotor skills and techniques. The study by Montague et al11 in this month's issue of the ARCHIVES is an excellent start to that process.

What is a medical error?

There is no universally accepted definition of a medical error. One approach has been to define errors in terms of bad clinical outcomes. There are good arguments for this type of definition. Since most medical care is delivered by individual physicians without direct oversight, clinical outcome is often the only measurable variable. The other rationale for an outcome-based definition is that the goal of reducing medical errors is to improve patient care, that is, outcome. Certainly, clinical outcome is the variable that matters most to patients. However, restricting the definition of errors to outcome-based measurements limits the ability to uncover latent errors and to learn from "near misses."

Error as a deviation from optimal practice

Another approach, used by Montague et al,11 is to define error as deviation from optimal practice, regardless of whether outcome is affected. Using predetermined definitions for potential errors during ventilation tube placement, they observed an average of 1.6 errors per procedure. One disadvantage of this definition is that, when error is defined as any deviation from optimal practice, the number of errors recorded may seem implausible. Montague et al11 report that only 7% of ventilation tube surgical procedures are error free. Do we interpret this to mean that otolaryngologists are performing bad surgery 93% of the time? Most of us find this intuitively implausible, and it becomes easy to "tune out" such results.

We would argue, with Montague et al,11 that looking at any deviation from optimal care is the more powerful approach. If we are to take a systems approach to reducing errors, we must look at "near misses" and learn from them before major errors occur. If we only look at risk when clinical outcome is affected, by definition we have arrived on the scene too late.

When error has been studied carefully, it has always been found that almost every bad outcome is the result of multiple small errors.2 The errors described by Montague et al11 are individually not typically harmful, but a cascade of these errors can lead to a tube lost in the middle ear space or a clotted tube requiring additional surgery. A similar error chain during a neck dissection could lead to major morbidity or mortality. We believe that the power of this study is not just in the potential for improvement in ventilation tube placement skills, but as a blueprint of a broad methodology for assessing and improving surgical psychomotor performance.

Do faulty individuals cause error?

Until recently, we have assumed that error-free delivery of health care is the natural consequence of nonnegligent care.12 This thinking presupposes that an error is due to a negligent action or inaction: carelessness, forgetfulness, inattention, or another aberrant practice. The involved practitioner is identified and "remediated" by disciplinary action and/or public humiliation. A new regulation may be created. In theory, the error is eradicated and we can go on providing nonnegligent care. This orientation creates a culture of fear and perpetuates a cycle of "name, blame, and shame."13 It prevents us from looking systematically at errors and understanding how they occur. Surgeons are particularly vulnerable to this faulty thinking. We have been taught that we are "captain of the ship" and that we are completely responsible for anything that occurs to our patients. This egocentric view oversimplifies modern medicine, in which many practitioners with different training and responsibilities function as a team.10 Worst of all, this attitude creates a conflict for us in admitting errors. Since in many cases only a single health care provider is aware of an error, such an error is likely to go undisclosed and therefore unexamined.

There is certainly a small number of physicians who cannot perform adequately or who do not discharge their responsibilities conscientiously. These rare individuals need to be remediated or terminated (something we have not done well). But most errors are made by the great majority of competent, conscientious physicians. We cannot learn from these errors until we can talk openly about them.

A systems approach to error prevention

A more useful view of errors considers health care as a system. The assumption is that a well-designed system will catch and prevent errors and reduce the harm from those that occur. An error is viewed as a system failure that requires a system adjustment.

Health care is highly complex and changeable, with variable risk and several parties involved in the end product. The concept of "Normal Accident Theory" focuses on the complexity of a system and the tight coupling of interactions between components of the system.6 This describes a modern hospital, where nurse work shifts turn over every 8 hours and residents hastily sign their patients out at the end of the day to a covering resident. These systems are rife with latent errors. Latent errors can be of 2 types. The first is a situation that predisposes to error, such as fatigue or time pressure. The second is a failure in defenses, such as faulty alarms or inadequate sign-outs.5

The human and mechanical components of a complex system have an irreducible minimum error rate. Most errors are trivial and do not cause harm. One of the authors was recently hospitalized; their central catheter was often flushed incorrectly. No harm resulted or was likely to result. When a serious error is made, it is usually "caught" by the system's defenses, such as alarms, monitors, protocols, and other health care professionals. The so-called "Swiss cheese" model considers these defenses as having "holes" that can never be completely eliminated.5 Some holes are constant, and others open and close. If an error unfortunately occurs at a moment when several "holes" are aligned, it may slip past several defenses and cause harm. Most of us have seen that significant errors almost always involve more than a single failure.
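
To make the model concrete, here is a minimal numerical sketch in Python. The number of layers and the per-layer failure probabilities below are assumptions chosen for illustration, not figures from the patient safety literature: if the defenses fail independently, the chance that an error slips past all of them is the product of the per-layer failure probabilities.

```python
# Illustrative sketch of the "Swiss cheese" model (hypothetical numbers).
# Each defense layer has some probability of failing to catch an error
# (a "hole"); harm occurs only if the error slips through every layer.

defense_failure_probs = [0.10, 0.20, 0.05, 0.15]  # assumed per-layer values

p_harm = 1.0
for p in defense_failure_probs:
    p_harm *= p  # the error advances only when this layer fails

print(f"P(error slips past all defenses) = {p_harm:.5f}")  # 0.00015
```

Even with individually modest layers, the product is tiny, which is why serious harm almost always requires several simultaneous failures; the caveat, as the model itself emphasizes, is that real "holes" are neither fixed nor truly independent.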

Prevention of error

Decades of study of so-called high-reliability organizations have identified a set of qualities that characterize organizations with low failure rates.14 (For anyone trying to create or be part of a high-reliability system, we recommend "Managing the Unexpected" by Weick and Sutcliffe.14) Such organizations expect errors and create systems flexible enough to recognize them and adapt to changes that will prevent them. Reliability becomes a "dynamic nonevent." The process of devising error reduction strategies is beyond the scope of this article, but we emphasize here that it requires both data and expert input. We must have data about errors in our specialty, and then we otolaryngologists must design prevention and amelioration strategies.

To gather data on errors, we need a series of reporting systems. One model is found in the aviation industry. About 30 000 reports are processed annually at a cost of $75 per report. The active reporting of near misses and errors exposes latent errors and helps develop system improvements without the overwhelming cost of an airplane crash. This system works because pilots and other airline personnel have come to trust it and to recognize that sharing these events improves safety for everyone.15 Current health care has a critical deficit in its reporting systems.16 In an ideal world, confidential and voluntary reporting systems would be readily accessible to every health care provider. Error and near-miss reporting would be encouraged and praised, not punished. Timely investigation of major events would occur, and systems changes would be made to reduce the likelihood of similar future errors.17 In this ideal world, there would also be legal protection for peer review that would allow much wider dissemination of error data than is currently the case.16
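
For scale, the arithmetic on those figures is straightforward: 30 000 reports at $75 each comes to roughly $2.25 million per year, a modest investment compared with the human and financial cost of a single crash.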

While voluntary anonymous reporting is an essential component of data collection, certain types of errors will be undersampled by this methodology. There is also a need for other innovative techniques, such as "trigger tools," which allow rapid and cost-effective review of a large number of medical charts.18 Another important methodology will be to examine one type of interaction or task in detail with rigorous external analysis. Montague et al11 have demonstrated how this can be done. By rigorously analyzing 1 task, they have provided information about the most common errors. These data allow focused improvement efforts. The surgeon who eliminates only the 2 most common errors will halve his or her error probability.
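
As a sketch of how such task-level frequency data translate into improvement targets, consider the following Python fragment. The error categories and counts are invented for illustration and are not the Montague et al data; the point is only the ranking logic.

```python
# Hypothetical error-frequency data for a surgical task (invented for
# illustration; not the Montague et al dataset).
error_counts = {
    "error type A": 28,
    "error type B": 22,
    "error type C": 20,
    "error type D": 18,
    "error type E": 12,
}

total = sum(error_counts.values())
# Rank error types from most to least frequent (a simple Pareto view).
ranked = sorted(error_counts.items(), key=lambda kv: kv[1], reverse=True)

top_two = sum(count for _, count in ranked[:2])
print(f"Top 2 error types: {top_two / total:.0%} of all observed errors")
# With these assumed counts the two most common types account for 50% of
# errors, so eliminating them would halve the expected errors per case.
```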

Conclusions

Human, mechanical, and electronic error is inevitable and need not be feared. Through standardized education, certification, and professionalism, otolaryngology has developed an outstanding workforce with a very low human error rate. To reduce the rate further, we will have to look beyond improving individual performance and instead develop systems improvements. The first step is to recognize the tremendous opportunity we have. The next is to gather the data that will allow us to understand the problems and design improvements. Montague et al11 should be commended for helping to start this process in otolaryngology.

Since the Institute of Medicine report To Err Is Human: Building a Safer Health System was published in 2000 (Kohn LT, Corrigan JM, Donaldson MS, eds. Washington, DC: National Academy Press; 2000), there has been increased attention paid to error reduction and system improvement. The article by Montague et al, published in this issue of the ARCHIVES, prompted the editor to solicit this commentary from Zirkle and Roberson. Reducing variability and error is a goal to be embraced by all health professionals.

Michael E. Johns, MD

The authors have no relevant financial interest in this article.

Correspondence: Dr Roberson, Department of Otolaryngology and Communication Disorders, Children's Hospital Boston, Fegan 9, Boston, MA 02111 (david.roberson@childrens.harvard.edu).

References
1. Romans 3:23.
2. Reason J. Human Error. Cambridge, England: Cambridge University Press; 1990.
3. Leape LL, Lawthers AG, Brennan TA, Johnson WG. Preventing medical injury. QRB Qual Rev Bull. 1993;19:144-149.
4. Cooper JB. Towards patient safety in anaesthesia. Ann Acad Med Singapore. 1994;23:552-557.
5. Reason J. Managing the Risks of Organizational Accidents. Burlington, Vt: Ashgate Publishing Company; 1997.
6. Perrow C. Normal Accidents. New York, NY: Basic Books; 1984.
7. Helmreich RL. Managing human error in aviation. Sci Am. 1997;276:62-67.
8. Newbower RS, Cooper JB, Long CD. Learning from anesthesia mishaps: analysis of critical incidents in anesthesia helps reduce patient risk. QRB Qual Rev Bull. 1981;7:10-16.
9. Gaba DM. Human error in anesthetic mishaps. Int Anesthesiol Clin. 1989;27:137-147.
10. Dickey J, Damiano RJ Jr, Ungerleider R. Our surgical culture of blame: a time for change. J Thorac Cardiovasc Surg. 2003;126:1259-1260.
11. Montague M-L, Lee MSW, Hussain SSM. Human error identification—an analysis of myringotomy and tube insertion. Arch Otolaryngol Head Neck Surg. 2004;130:1153-1157.
12. Gaba D. Structural and organizational issues in patient safety: a comparison of health care to other high hazard industries. Calif Manage Rev. 2000;43:83-101.
13. Reason J. Human error: models and management. BMJ. 2000;320:768-770.
14. Weick K, Sutcliffe K. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco, Calif: Jossey-Bass; 2001.
15. Spencer FC. Human error in hospitals and industrial accidents: current concepts. J Am Coll Surg. 2000;191:410-418.
16. Kohn L, Corrigan J, Donaldson M, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
17. Leape LL. Why should we report adverse incidents? J Eval Clin Pract. 1999;5:1-4.
18. Rozich JD, Haraden CR, Resar RK. Adverse drug event trigger tool: a practical methodology for measuring medication related harm. Qual Saf Health Care. 2003;12:194-200.