Jagsi R, Kitch BT, Weinstein DF, Campbell EG, Hutter M, Weissman JS. Residents Report on Adverse Events and Their Causes. Arch Intern Med. 2005;165(22):2607–2613. doi:10.1001/archinte.165.22.2607
Resident physicians are frontline providers with a unique vantage point from which to comment on patient safety–related events.
We surveyed trainees at 2 teaching hospitals about experiences with adverse events (AEs), mistakes, and near misses, as well as the potential causes.
Responses were obtained from 821 (57%) of 1440 eligible trainees. Analysis was restricted to 689 clinical trainees. More than half (55%) reported ever caring for a patient who had an AE. The most common types of AEs were procedural and medication related. More than two thirds of AEs were considered significant. Of the most recent AEs, 24% were attributed to mistakes. The most common reasons for mistakes, as perceived by residents, were excessive work hours (19%), inadequate supervision (20%), and problems with handoffs (15%). In the last week, 114 respondents (18%) reported having a patient with an AE; of these, 42 (37%) reported AEs involving a mistake for which they considered themselves responsible. In addition, 141 (23%) reported near-miss incidents in the last week for which they considered themselves responsible. In multivariate analyses, significant predictors of AEs in the last week were inpatient rotation, duty hours in the last week, and procedural specialty. Predictors of near-miss errors in the last week were inpatient rotation, days of fatigue in the last month, and postgraduate year 1 status.
These findings support the perception that AEs are commonly encountered by physicians and often associated with errors. Causes of errors in teaching hospitals appear to be multifactorial, and a variety of measures are necessary to improve safety. Eliciting residents’ perspectives is important because residents may perceive events, actions, and causal relationships that medical record reviewers or observers cannot.
More than a decade ago, the Harvard Medical Practice Study1,2 reported that complications of medical care, or adverse events (AEs), were responsible for considerable morbidity, mortality, and expense. More recently, the Institute of Medicine3,4 issued highly publicized reports emphasizing the importance of systems design in reducing error.
Teaching hospitals differ from other health care institutions at the systems level, with potential implications for patient safety. On the one hand, the academic environment may have positive effects by allowing access to medical advances, and the involvement of multiple caregivers in these settings may provide a protective level of redundancy. On the other hand, teaching hospitals require inexperienced providers to work long shifts caring for large numbers of patients with complex illnesses. The multiple caregivers in these settings are sometimes suboptimally coordinated or lack adequate supervision. This creates potential challenges for patient safety.5
Physicians in training are frontline providers with intense exposure to patient care and frequent interactions with other caregivers. Therefore, they have a unique vantage point from which to comment on the nature and causes of AEs in teaching hospitals. We conducted a survey study to explore trainees’ experiences and perceptions further.
In June 2003, we administered a questionnaire to all 1440 residents and fellows (hereafter referred to as “residents”) in 76 accredited programs at the Massachusetts General Hospital and Brigham and Women’s Hospital, Boston. Respondents primarily engaged in research or otherwise uninvolved in clinical activities during the last year were excluded from analysis.
The content of the questionnaire was based on literature review and consultation with experts in patient safety, work hours, and fatigue. The instrument was pretested using 2 focus groups attended by residents from different specialties and was revised based on the findings.6
During pretesting, residents had trouble defining different classes of safety-related events. Therefore, we included definitions in the survey, based on definitions in the literature. An AE was defined as the following:
a complication, injury, or harm to a patient resulting from medical management (not from the patient’s underlying condition or disease). An AE may or may not be preventable (ie, due to a mistake). Some examples include pneumothorax, retained objects, and adverse drug reactions, as well as hospital acquired infections, decubitus ulcers, perioperative MIs [myocardial infarctions], line infections, and falls.3
A mistake was defined as “an act or omission by any caregiver which would have been judged wrong by knowledgeable peers at the time it occurred.”7 A near miss or close call was defined as “a mistake that does not reach the patient or if it reaches the patient does not result in injury or harm.”8
We asked residents to report on AEs, mistakes, and near misses.9,10 To minimize recall bias, we first asked about the most recent AE, its severity, whether it prolonged the patient’s hospital stay, whether it was due to a mistake, and, if so, whether the residents considered themselves at least partly responsible and what factors they believed contributed to the mistake.
We also asked residents to report on the number of incidents during the last week of clinical practice, corresponding to the same period of “exposure” as questions about work hours. Although residents were asked to report on incidents involving patients under their care, this information was not intended to provide a rate of events, because double-counting would result from reports by residents cross-covering others’ patients. Rather, it was designed to provide a measure of experience with events per resident.
Data were also gathered regarding variables that might affect the likelihood of a resident reporting safety-related events, including postgraduate year, specialty, setting of rotation, patient load, and work hours. Questions on work hours were modeled on previous instruments and asked about hours worked during the last week to minimize recall error.11,12 In addition, as a measure of chronic rather than acute fatigue, residents were asked how many days during the last month they had experienced significant fatigue.
A voluntary confidential survey was distributed during June 2003 at scheduled gatherings of residents, such as teaching conferences, in each department. Respondents were given ample protected time to complete the questionnaire, and discussion was prohibited during survey administration. Potential participants not present when surveys were distributed were subsequently contacted via e-mail. Several incentives were offered, including drawings for cash and prizes.13,14 These methods were approved by the Partners HealthCare System institutional review board.
Descriptions of AEs were coded independently by 2 of us (R.J. and B.T.K.), with discrepancies resolved by consensus. Statistical analysis was performed using SAS software version 8.0 for Windows (SAS Institute Inc, Cary, NC). χ2 Analysis was performed to test for differences in the likelihood of reporting an event between groups, based on respondent characteristics. Logistic regression was used to analyze the factors associated with the likelihood of reporting at least 1 AE or near-miss event in the last week. Factors achieving P<.10 on univariate analysis were included in multivariate models.
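The analysis itself was performed in SAS, but the univariate screening step described above can be illustrated with a short sketch. This is our illustration, not the study’s code: for a binary factor and a binary outcome, a Pearson χ2 statistic on the 2×2 table is compared against the critical value for P = .10, and factors that pass are carried into the multivariate model. Function names, factor labels, and example counts are hypothetical.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]:
    rows = exposed/unexposed, columns = event/no event."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Critical value of chi-square with 1 df at P = .10;
# statistics above it correspond to P < .10.
CRIT_P10 = 2.706

def screen(factors):
    """Return the factor names that pass univariate screening.
    factors: dict mapping name -> (a, b, c, d) counts as above."""
    return [name for name, tbl in factors.items()
            if chi2_2x2(*tbl) > CRIT_P10]

# Hypothetical counts, for illustration only:
selected = screen({
    "fatigue": (30, 70, 10, 90),       # strongly associated
    "patient_load": (50, 50, 50, 50),  # no association
})
```

Factors surviving this screen would then be entered together into a logistic regression model, as in the analysis reported here.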
We obtained 821 responses (response rate, 57%). Analysis was restricted to 689 respondents in a primarily clinical year of training. Table 1 gives the demographic characteristics of the respondents.
More than half (56%) had taken overnight on-call duty in the hospital during the last month. Among these, the median number of hours of sleep on the last on-call night was 3. The median number of patients covered on the last on-call shift was 10; patients cross-covered, 15; admissions, 3; and procedures or operations, 1.
Respondents reported significant fatigue on a median of 4 days while on duty during the last month. The median total duty hours in the last week was 65 hours.
Overall, 381 respondents (55%) reported caring for a patient with an AE some time during their training and provided details on the most recent event. The physician coders invalidated 2 responses as providing descriptions inconsistent with the AE definition provided. Table 2 gives the type of events, causes, and severity reported for the 379 most recent AEs. Procedural complications were the most common event reported (31%), followed by adverse drug events (21%) and infections (11%). The median time since the most recent AE was 21 days. Among the major categories of AEs, infections were least likely considered to be due to mistakes (5%), whereas medication events were considered to be due to mistakes in 36% of cases.
Event severity was reported as fatal by 30 respondents (8% of AEs), life threatening by 55 (15%), significant but not life threatening by 178 (47%), and insignificant by 109 (29%). Surgical respondents described AEs as significant or worse 83% of the time, compared with 66% for medical respondents and 67% for hospital-based respondents (P = .002). Overall, 132 respondents reported that the AE had prolonged a patient’s hospital stay. Surgeons were more likely to report that the event had prolonged the patient’s stay (58%), compared with medical trainees (36%) or hospital-based trainees (25%) (P<.001).
Of the respondents reporting AEs, 90 (24%) considered that the event had been caused by a mistake. This figure was similar across specialties. Of these, 69 (77%) considered that they were at least partially responsible for the mistake. Those who reported that the most recent AE was caused by a mistake were then asked to check all that applied on a list of potential causes. Of the 81 who did so, 15 (19%) thought that they were working too many hours, 16 (20%) thought that there was inadequate resident supervision, 12 (15%) thought that there were problems with handoffs between themselves and other providers, 4 (5%) thought that they were cross-covering too many patients, 10 (12%) thought that they were carrying too many patients, and 46 (57%) listed other causes.
Overall, 114 respondents (18%) reported at least 1 AE in a patient under their care in the last week. Of these, 42 (37%) attributed at least 1 AE to a mistake for which they considered themselves partially responsible. In addition, 141 respondents (23%) reported a near miss during the last week for which they considered themselves partially responsible.
Table 3 gives the number of residents reporting AEs in the last week, compared by specialty type, postgraduate year, type of rotation, fatigue, work hours, and patient load. Respondents in surgical specialties were more likely to report AEs in the last week (32% of surgical, 14% of medical, and 8% of hospital-based residents) (P<.001). A higher proportion of surgeons also reported AEs due to mistakes (11% of surgical, 5% of medical, and 3% of hospital-based residents) (P = .008). However, similar proportions of those reporting AEs attributed the events to their own mistakes across the specialties (34% of surgical, 34% of medical, and 44% of hospital-based residents) (P = .76).
Although similar proportions of postgraduate year 1 and more advanced residents reported at least 1 AE in the last week (14% and 16%, respectively) (Table 3), a higher proportion of postgraduate year 1 residents reporting AEs attributed at least 1 of the AEs to their own mistake (57% vs 33%, P = .08). A higher proportion of postgraduate year 1 residents reported at least 1 near-miss error in the last week (29% vs 19%, P = .03).
Respondents who experienced more than 4 days of fatigue in the last month were more likely to report AEs (19% vs 13%, P = .04) and near misses (26% vs 16%, P = .002) (Table 3). Respondents on inpatient rotations were more likely to report AEs (23% vs 6%, P<.001), AEs due to mistakes (8% vs 3%, P = .01), and near misses (26% vs 14%, P<.001).
Among respondents on inpatient rotations, patient load was not significantly correlated with the likelihood of reporting events. Those working longer hours, however, were more likely to report at least 1 AE in the last week (18% of those working ≤80 hours vs 30% of those working >80 hours, P = .007) (Table 3). The median hours worked in the last week were 70 and 90 hours for the groups working shorter and longer hours, respectively. Based on observational opportunity, one might expect an increase of 29% (20/70) in the proportion of residents working longer vs shorter hours who reported AEs. The proportion of residents working longer hours reporting AEs, however, was 65% higher, exceeding that which would be predicted based on increased observational opportunity alone.
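The observational-opportunity calculation above can be reproduced from the rounded figures reported here. This is a sketch using the published medians and percentages; the authors’ unrounded data would give slightly different decimals (hence the 65% figure in the text versus roughly 67% from rounded inputs).

```python
# Rounded figures from the text; exact survey counts would
# shift the last digits slightly.
median_hours_short, median_hours_long = 70, 90
rate_short = 0.18  # proportion reporting an AE, <=80 h/wk group
rate_long = 0.30   # proportion reporting an AE, >80 h/wk group

# Extra observational opportunity for the longer-hours group:
# 20 additional hours on a base of 70, about 29%.
extra_exposure = (median_hours_long - median_hours_short) / median_hours_short

# Rate expected if reporting scaled only with time observed (~23%).
expected_rate = rate_short * (1 + extra_exposure)

# Observed relative increase, which exceeds the expected ~29%.
observed_increase = (rate_long - rate_short) / rate_short
```

The observed rate of 30% exceeds the roughly 23% expected from extra observational opportunity alone, which is the basis for the inference that longer hours contribute beyond mere exposure.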
For the 2 end points with adequate numbers of events to allow further analysis (AEs and near misses), logistic regression models were constructed. Independent variables included type of rotation, days of fatigue, duty hours, specialty type, and postgraduate year. Only 1 work hours variable and 1 specialty variable were included in the model because the different measures of these 2 factors were highly collinear.
Significant predictors of reporting a patient with an AE in the last week were inpatient rotation, duty hours in the last week, and procedural specialty (Table 4). Significant predictors of reporting a near-miss error were inpatient rotation, days of fatigue in the last month, and postgraduate year 1 status, with procedural specialty also trending toward significance.
Table 5 gives responses to questions about the extent to which several potential factors contribute to mistakes in patient care. When asked how often fatigue had a negative impact on the safety of their patients, 26% of the respondents replied “never,” 47% “rarely,” 21% “sometimes,” 4% “frequently,” and 1% “always.”
We believe that these findings are important in several regards. First, this information from frontline trainees adds to existing evidence that AEs are commonly encountered and are often associated with errors. Procedural and medication-related events were the most common AEs reported by residents; the same categories comprise most AEs described in medical record review studies.2,15-17
Second, most AEs reported herein were considered serious. These findings contrast with those of the Harvard Medical Practice Study,1,2 in which a minority of events resulted in serious disability. The difference may be attributable in part to diminished recall of less serious events or, conversely, to reluctance to document more severe AEs in medical records because of liability concerns. Nevertheless, the findings emphasize trainees’ familiarity with serious AEs, and this wealth of exposure supports the inclusion of trainees in quality improvement initiatives.
Third, surgical trainees were more likely to report AEs than medical or hospital-based trainees. Several explanations are possible. Adverse events may be more common in surgical specialties, surgical complications may be more noticeable, or surgeons may be better versed in recognizing AEs because of a culture of reporting such events.18 In any case, the proportion of AEs attributed to mistakes did not vary significantly by specialty. Approximately one quarter of the AEs reported in this study were judged to have been due to mistakes, similar to the proportion of events attributed to negligence in the Harvard Medical Practice Study.1,2
Fourth, residents on duty more than 80 hours per week were significantly more likely to report having cared for a patient with an AE, in a manner that exceeded the predicted contribution of increased observational opportunity. Near-miss errors were more common when residents reported more days of fatigue during the last month. Therefore, acute and chronic fatigue may be relevant targets for quality improvement. By showing a correlation between self-assessed fatigue and errors in a broad spectrum of specialties and settings outside the intensive care unit, these findings add to the evidence from recent studies19,20 suggesting that long hours of traditional rotations lead to increased errors compared with schedules limiting work hours.
Fifth, several factors other than resident hours were also perceived to contribute to errors in teaching hospitals. When residents were asked about the causes of their most recent error, inadequate supervision and problems with handoffs were as frequently cited as resident work hours. Furthermore, few respondents believed that any single factor—fatigue, supervision, handoffs, or patient load—contributed to a great extent to mistakes in patient care. Therefore, policies focusing exclusively on resident work hours may fail to produce substantial improvements in patient safety. Rather, varied measures, targeting the multiple areas of potential systems-level failures identified by frontline providers, must be undertaken to improve safety. Moreover, resident work hour limits21 should be carefully implemented to avoid adversely affecting work intensity or continuity.22,23
These results raise concerns about residents’ ability to recognize patient safety–related incidents. Almost half denied ever caring for a patient who had experienced any AE, a surprising finding given the frequency with which AEs seem to occur.1 We suspect that residents in this study failed to recognize certain AEs in their patients. Other authors have speculated regarding physicians’ probable insensitivity to AEs,24 although the spread of evidence-based medicine may increase providers’ ability to recognize AEs in the future.
Previous studies have used various methods to assess for AEs,25,26 including medical record review,1,27,28 observation,19,29 computerized screening,30 and interviews31 and surveys of caregivers24,32 and patients.33,34 Each of these has advantages and disadvantages. Even medical record review, often considered the gold standard, has significant limitations. Medical record review not only is resource intensive but also relies on adequate screening criteria and documentation. Given the incentives of the legal system, it is possible that caregivers may fail to record accurately, particularly when AEs result from negligence. Moreover, different reviewers often draw different conclusions based on the same data.35,36 Methodological triangulation from various data sources is likely to provide a more comprehensive understanding. Indeed, techniques relying on provider identification of events appear to provide information complementary to that of medical record review.37
In a study comparing resident reports with medical record review, researchers found that, although both methods identified similar numbers of medical record–verified events, the degree of overlap was relatively low.38 This suggests that eliciting residents’ perspectives may identify events that other methods may miss. In that study, however, resident report was used only to identify potential events. Further characterization of AEs was left to the medical record review, and reviewers were unable to identify AEs in 35 of 124 cases identified by the residents. Although one interpretation is that some residents failed to understand what constitutes an AE, it is also possible that inadequate documentation contributed to the failure of the medical record reviewers to perceive events in many of the identified cases. Therefore, the value of requiring medical record review to validate physician report is debatable, as each technique illuminates different aspects of a complex phenomenon.
Few other safety investigations thus far have focused on eliciting residents’ perspectives, and most have been smaller studies7,39,40 focusing on single specialties. Our study expands on this work by asking a larger population of residents in different specialties to report on a broad range of events.9,10 Near misses, long recognized by aviation safety experts as critical, may be particularly hard to perceive on medical record review but memorable for the providers involved.
This study has certain limitations. First, the study was conducted in only 2 institutions, both large tertiary care hospitals. Therefore, some findings may not be generalizable to settings such as smaller hospitals with fewer programs. Second, there may have been inaccuracies in the self-reported data obtained from the questionnaires. Although it is reassuring that the coders who reviewed the descriptions of AEs agreed with the characterization of virtually all events, they were ultimately dependent on the accuracy of the descriptions provided. Furthermore, although resident self-report of work hours has been validated,41 other items, particularly the question assessing chronic fatigue, may have been vulnerable to biased reporting. Third, although the response rate was reasonable, there was a large number of nonresponders, whose experiences may have differed systematically from those responding. Fourth, the study design may have failed to capture the full spectrum of errors. By focusing on experiences with AEs, we may have induced respondents to report errors of commission over errors of omission, which may not result in easily identifiable AEs.42 In addition, minor events or events not typically treated by physicians (such as decubitus ulcers) may have been systematically underreported. Fifth, our methods did not allow us to provide absolute counts of events. Instead, we focused our efforts on understanding the perception of AEs and errors per resident.
Nevertheless, this study provides important information regarding residents’ perceptions of AEs in teaching hospitals. Understanding the nature and causes of medical complications, as well as the appropriate response when such an event has occurred, is critical in professional education. Teaching hospitals should solicit resident involvement in quality improvement initiatives, and researchers should initiate prospective studies involving residents. Resident involvement not only offers a complementary source of data to those obtained by other methods but also serves as an intervention at an opportune time in physicians’ professional development.
Correspondence: Reshma Jagsi, MD, DPhil, c/o Office for Graduate Medical Education, Partners HealthCare System, Bulfinch 2, 55 Fruit St, Boston, MA 02114 (firstname.lastname@example.org).
Accepted for Publication: August 3, 2005.
Author Contributions: Drs Jagsi and Weissman had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Financial Disclosure: None.
Funding/Support: This study was supported by an anonymous donor to the Massachusetts General Hospital and by the Leape Foundation, Boston.
Acknowledgment: We acknowledge the assistance of Jo Shapiro, MD, Laura Schroeder, BA, and Georgi Bland, BA, in data collection, and of Sage, Inc, in data entry and statistical programming.