Table 1. Patient Characteristics
Table 2. Incidence of Medical Errors and Preventable Adverse Events
Table 3. Patient Outcomes
Table 4. Educational Outcomes (Time Motion Results and Orders)
Table 5. Resident, Intern, and Attending Physician Survey Responses
References
1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10)(suppl):S106-S111.
2. Nasca TJ, Day SH, Amis ES Jr; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
3. Kennedy TJ. Towards a tighter link between supervision and trainee ability. Med Educ. 2009;43(12):1126-1128.
4. Cottrell D, Kilminster S, Jolly B, Grant J. What is effective supervision and how does it happen? a critical incident study. Med Educ. 2002;36(11):1042-1049.
5. Bell BM. Supervision, not regulation of hours, is the key to improving the quality of patient care. JAMA. 1993;269(3):403-404.
6. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: Institute of Medicine; 2008.
7. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22(8):1080-1085.
8. Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: a literature review. Med Educ. 2000;34(10):827-840.
9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428-442.
10. Kerlin MP, Small DS, Cooney E, et al. A randomized trial of nighttime physician staffing in an intensive care unit. N Engl J Med. 2013;368(23):2201-2209.
11. Reriani M, Biehl M, Sloan JA, Malinchoc M, Gajic O. Effect of 24-hour mandatory vs on-demand critical care specialist presence on long-term survival and quality of life of critically ill patients in the intensive care unit of a teaching hospital. J Crit Care. 2012;27(4):421.e1-421.e7.
12. Halpern SD, Detsky AS. Graded autonomy in medical education—managing things that go bump in the night. N Engl J Med. 2014;370(12):1086-1089.
13. Hinchey KT, Rothberg MB. Can residents learn to be good doctors without harming patients? J Gen Intern Med. 2010;25(8):760-761.
14. Landrigan CP, Muret-Wagstaff S, Chiang VW, Nigrin DJ, Goldmann DA, Finkelstein JA. Effect of a pediatric hospitalist system on housestaff education and experience. Arch Pediatr Adolesc Med. 2002;156(9):877-883.
15. Saint S, Fowler KE, Krein SL, et al. An academic hospitalist model to improve healthcare worker communication and learner education: results from a quasi-experimental study at a Veterans Affairs medical center. J Hosp Med. 2013;8(12):702-710.
16. Hauer KE, Irby DM. Effective clinical teaching in the inpatient setting. In: Wachter R, Goldman L, Hollander H, eds. Hospital Medicine. 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2005:71-78.
17. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309-315.
18. Bates DW, Cullen DJ, Laird N, et al; ADE Prevention Study Group. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA. 1995;274(1):29-34.
19. Kaushal R. Using chart review to screen for medication errors and adverse drug events. Am J Health Syst Pharm. 2002;59(23):2323-2325.
20. Starmer AJ, Spector ND, Srivastava R, et al; I-PASS Study Group. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.
21. Starmer AJ, Sectish TC, Simon DW, et al. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle. JAMA. 2013;310(21):2262-2270.
22. National Coordinating Council for Medication Error Reporting and Prevention website. http://www.nccmerp.org/types-medication-errors. Accessed June 9, 2017.
23. Huang KT, Minahan J, Brita-Rossi P, et al. All together now: impact of a regionalization and bedside rounding initiative on the efficiency and inclusiveness of clinical rounds. J Hosp Med. 2017;12(3):150-156.
24. Biondi EA, Varade WS, Garfunkel LC, et al. Discordance between resident and faculty perceptions of resident autonomy: can self-determination theory help interpret differences and guide strategies for bridging the divide? Acad Med. 2015;90(4):462-471.
25. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(suppl 2):ii39-ii45.
26. Baldwin DC Jr, Daugherty SR, Ryan PM. How residents view their clinical supervision: a reanalysis of classic national survey data. J Grad Med Educ. 2010;2(1):37-45.
27. Defilippis AP, Tellez I, Winawer N, Di Francesco L, Manning KD, Kripalani S. On-site night float by attending physicians: a model to improve resident education and patient care. J Grad Med Educ. 2010;2(1):57-61.
28. Farnan JM, Burger A, Boonyasai RT, et al; SGIM Housestaff Oversight Subcommittee. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521-523.
29. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7(8):606-610.
30. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332-336.
31. Itani KM, DePalma RG, Schifftner T, et al. Surgical resident supervision in the operating room and outcomes of care in Veterans Affairs hospitals. Am J Surg. 2005;190(5):725-731.
32. Fallon WF Jr, Wears RL, Tepas JJ III. Resident supervision in the operating room: does this impact on outcome? J Trauma. 1993;35(4):556-560.
33. Gennis VM, Gennis MA. Supervision in the outpatient clinic: effects on teaching and patient care. J Gen Intern Med. 1993;8(7):378-380.
34. Burnham EL, Moss M, Geraci MW. The case for 24/7 in-house intensivist coverage. Am J Respir Crit Care Med. 2010;181(11):1159-1160.
35. Wallace DJ, Angus DC, Barnato AE, Kramer AA, Kahn JM. Nighttime intensivist staffing and mortality among critically ill patients. N Engl J Med. 2012;366(22):2093-2101.
36. Kerlin MP, Halpern SD. Twenty-four-hour intensivist staffing in teaching hospitals: tensions between safety today and safety tomorrow. Chest. 2012;141(5):1315-1320.
37. Busari JO, Weggelaar NM, Knottnerus AC, Greidanus PM, Scherpbier AJ. How medical residents perceive the quality of supervision provided by attending doctors in the clinical setting. Med Educ. 2005;39(7):696-703.
38. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On-call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784-788.
39. Stevermer JJ, Stiffman MN. The effect of the teaching physician rule on residency education. Fam Med. 2001;33(2):104-110.
40. Farnan JM, Humphrey HJ, Arora V. Supervision: a 2-way street. Arch Intern Med. 2008;168(10):1117.
41. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. Resident uncertainty in clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122-126.
42. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees’ requests for clinical support. BMJ. 2009;338:b128.
43. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124-2134.
2 Comments for this article
Broadening Our Perspective on the Study of Attending Physician Supervisory Practices
Mark Goldszmidt, MD, PhD, FRCPC | Schulich School of Medicine & Dentistry, University of Western Ontario
To begin with, I would like to congratulate the authors on the completion of a complex and challenging study. The scope of the task taken on with this study was truly remarkable. While, as the associated editorial commented, it is possible that their findings may have been different in a different setting and with different measures of error, and that perhaps a different design would have yielded more definitive results, I would suggest that there is another perspective we should consider. Moreover, I would suggest that this perspective represents one of the challenges of studying social phenomena with randomized trial designs.

In particular, I am interested in practice variability, in the things that were perhaps not measured or observed, and in their implications for this type of research. While all of the participating attendings may have been considered good clinical teachers by the residents they supervise, this does not mean that they supervise in the same way; I would go so far as to suggest that it is highly unlikely that they do. In a recent study that I was involved in, building on Kennedy et al.'s 2007 work on frontstage and backstage supervision, we identified four different approaches attendings took to clinical supervision (Goldszmidt et al. 2015).
While I can imagine all four approaches being used by attendings in the study, I am quite certain that even when carrying out the same tasks (e.g., rounding with the team) they would have enacted them very differently. For example, as Kennedy has also pointed out, not being physically present does not mean that the attending is not actively supervising. Most attendings have backstage practices (things the residents do not see them doing) that help them to ensure patient safety. They do not, however, all share the same practices: some have very elaborate practices and others are more easygoing. While I could elaborate further, the point I am making is that it is very difficult to be sure that the phenomenon under study (rounding with the team or not) can be studied with a randomized trial design.
I would also suggest that other important characteristics could have been observed or measured that would have greatly enhanced the study findings. One example would have been outliers: were there attendings, regardless of which style of rounding they used, who stood out in relation to their colleagues? If so, what did they do differently? While I was impressed with the time-motion observations, they left me wondering whether the observers could have explored other important differences reflective of attending style. These could have included descriptions of what the discussions focused on, or the degree to which attendings differed in their ability to engage the team, think reflexively, role model patient-centeredness, and so on.
In a world increasingly asking for meaningful, measurable, patient care-relevant outcomes of education, I would suggest that this study provides, once again, a great demonstration of why, even when designed well, this may be an insurmountable task. At a minimum, I would suggest that it argues for the need to educate key stakeholders about the importance of asking different questions and exploring rich description over statistically significant differences.
CONFLICT OF INTEREST: None Reported
The Value Of Studies
Domenic Esposito, M.D. | Università Politecnica Delle Marche
Findings were interesting but not unexpected. Simply participating in a study results in better patient care (in this case, fewer errors). It’s one of the real values of an academic setting.
CONFLICT OF INTEREST: None Reported
Original Investigation
July 2018

Effect of Increased Inpatient Attending Physician Supervision on Medical Errors, Patient Safety, and Resident Education: A Randomized Clinical Trial

Author Affiliations
  • 1Division of General Internal Medicine, Department of Medicine, Massachusetts General Hospital, Boston
  • 2Division of General Pediatrics, Department of Medicine, Boston Children’s Hospital, Boston, Massachusetts
  • 3Division of Sleep and Circadian Disorders, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
JAMA Intern Med. 2018;178(7):952-959. doi:10.1001/jamainternmed.2018.1244
Key Points

Question  What is the effect of increased attending physician supervision on a resident inpatient team for both patient safety and educational outcomes?

Findings  In this randomized clinical trial of 22 attending physicians each providing 2 different levels of supervision, increased supervision did not significantly reduce the rate of medical errors but did result in interns speaking less and residents reporting a decreased level of autonomy.

Meaning  Residency training programs should have more flexibility in balancing patient safety, resident autonomy, and learner needs.

Abstract

Importance  While the relationship between resident work hours and patient safety has been extensively studied, little research has evaluated the role of attending physician supervision on patient safety.

Objective  To determine the effect of increased attending physician supervision on an inpatient resident general medical service on patient safety and educational outcomes.

Design, Setting, and Participants  This 9-month randomized clinical trial performed on an inpatient general medical service of a large academic medical center used a crossover design. Participants were clinical teaching attending physicians and residents in an internal medicine residency program.

Interventions  Twenty-two faculty provided either (1) increased direct supervision in which attending physicians joined work rounds on previously admitted patients or (2) standard supervision in which attending physicians were available but did not join work rounds. Each faculty member participated in both arms in random order.

Main Outcomes and Measures  The primary safety outcome was rate of medical errors. Resident education was evaluated via a time-motion study to assess resident participation on rounds and via surveys to measure resident and attending physician educational ratings.

Results  Of the 22 attending physicians, 8 (36%) were women, with 15 (68%) having more than 5 years of experience. A total of 1259 patients (5772 patient-days) were included in the analysis. The medical error rate was not significantly different between standard vs increased supervision (107.6; 95% CI, 85.8-133.7 vs 91.1; 95% CI, 76.9-104.0 per 1000 patient-days; P = .21). Time-motion analysis of 161 work rounds found no difference in mean length of time spent discussing established patients in the 2 models (202; 95% CI, 192-212 vs 202; 95% CI, 189-215 minutes; P = .99). Interns spoke less when an attending physician joined rounds (64; 95% CI, 60-68 vs 55; 95% CI, 49-60 minutes; P = .008). In surveys, interns reported feeling less efficient (41 [55%] vs 68 [73%]; P = .02) and less autonomous (53 [72%] vs 86 [91%]; P = .001) with an attending physician present and residents felt less autonomous (11 [58%] vs 30 [97%]; P < .001). Conversely, attending physicians rated the quality of care higher when they participated on work rounds (20 [100%] vs 16 [80%]; P = .04).

Conclusions and Relevance  Increased direct attending physician supervision did not significantly reduce the medical error rate. In designing morning work rounds, residency programs should reconsider their balance of patient safety, learning needs, and resident autonomy.

Trial Registration  ClinicalTrials.gov Identifier: NCT03318198

Introduction

Graduate physician training is based on the concept of progressive independence. As trainees gain experience, they are provided with decreasing levels of clinical supervision; the goal is resident competence to practice independently.1,2 During training, supervision is critical in ensuring patient safety, yet adult learning theory highlights that learning occurs when trainees are challenged to work beyond their comfort level and there is appropriate space between the teacher and trainee.3 Supervision is therefore a complex activity requiring clinical educators to continuously balance their degree of involvement.4 Beginning with the Bell Commission and the Institute of Medicine’s 2008 report on resident duty hours, there have been increased calls for enhancing supervision because of patient safety concerns.5,6(pp125-158) The evidence for increased supervision, however, is not robust.4,7,8 Two meta-analyses on supervision found studies limited by lack of objective measures and nonrandomized designs.8,9 Two intensive care unit (ICU) studies on 24 hours per day, 7 days per week supervision did not demonstrate patient safety benefits.10,11 Editorials have raised concerns about oversupervision and its effect on hindering trainee competence and longer-term patient safety.3,12,13

The Accreditation Council for Graduate Medical Education defines direct supervision as the presence of the supervising physician with the resident and patient. Indirect supervision occurs when the supervising physician is immediately available but not physically present.2 The growth of the hospitalist movement has increased faculty presence on the inpatient wards and, in turn, has increased direct supervision on patient rounds.14 On some services, attending physicians commonly join both new patient rounds and work rounds on previously admitted patients, which used to be the domain of residents alone.15,16 However, it is unclear what effect this increased direct clinical supervision on work rounds has on patient safety and to what extent it affects progressive trainee independence.7,12

In response to these concerns, we conducted a randomized, crossover clinical trial of 2 levels of supervision on an inpatient general medical teaching service to evaluate patient safety and educational outcomes. We hypothesized that increased direct supervision of resident work rounds would improve patient safety and education.

Methods
Study Design

This study was conducted on the general medical teaching service at Massachusetts General Hospital (MGH), an 1100-bed academic medical center in Boston, Massachusetts, with 188 internal medicine residents. The study was completed over 9 months, from September 30, 2015, to June 22, 2016, avoiding the summer months when new residents begin their residency.17 This study was approved by the MGH Institutional Review Board. The protocol is available in Supplement 1.

Participants

We selected and consented participants from a pool of attending physicians who regularly supervise residents on our medical inpatient service. To ensure consistency and expertise, eligible faculty participants were chosen based on superior ratings by residents for outstanding teaching skills and a career focus in resident education. As such, they were not intended to broadly represent all attending physicians, but rather those well versed in providing supervision.

Intervention

The intervention and control periods were 2 weeks, designed to coincide with the length of an attending physician inpatient rotation. Residents could be on service 2 or 4 weeks and might straddle intervention and control periods, but this was rare. The control arm was standard supervision at MGH, consisting of bedside presentations of newly admitted patients to the attending physician from 8:00 am to 10:00 am. Attending physicians do not join resident work rounds on established patients; instead, they “card flip” and discuss treatment plans for established patients with the supervising resident in the early afternoon. In the intervention arm, attending physicians joined both new patient presentations and resident work rounds 7 days a week, providing direct supervision during work rounds. Attending physician availability and responsibility during the afternoons and evenings was the same in both arms.

Each attending physician participated in one 2-week block with standard supervision (control) and one 2-week block with increased direct supervision on work rounds (intervention), with the order of blocks randomly assigned per attending physician at the start of the study. The participating attending physicians received a 1-hour training session on increased direct supervision, including a discussion of expectations for joining work rounds.

Patients

Patients were assigned to teams by the admitting office based on bed availability. Only patients with study faculty listed as the attending physician of record were included in the study analysis, and all study faculty’s patients were cared for by resident teams. If a patient was admitted before the attending physician started the rotation or was discharged after the attending physician finished, only the days the patient was listed under the study attending physician were evaluated for medical errors.

Outcomes
Medical Errors

The primary patient safety outcome was medical errors, defined as preventable failures in the process of care, consisting of preventable adverse events and near misses. An adverse event was defined as medical care that led to patient harm; harm was broadly defined as any measured physiologic disturbance due to medical care. A near miss was a failure in a process of care that did not result in patient harm. Using a previously validated approach to collecting and assessing these outcomes, 5 research nurses, blinded to study arm, reviewed the medical records of all study patients, formal incident reports from the hospital incident-reporting system, daily pharmacy reports, and pharmacy paging logs and solicited reports from nurses working on the study units.18-21 Four physician investigators, blinded to study arm, classified each incident as an adverse event, near miss, or exclusion. Physician reviewers further classified all adverse events as preventable or nonpreventable. In a sample of 40 events reviewed by all 4 physicians, the κ statistic was 0.79 (82.5% agreement) for event classification and 0.47 for preventability of adverse events, comparable to other studies.20 Discordant classifications were reconciled by discussion among the 4 reviewers. Examples of medical errors are provided in eTables 1 and 2 in Supplement 2. Severity was rated using the modified National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index.22 Two blinded physician reviewers assessed severity with 100% agreement. For secondary outcomes, hospital administrative data were used to collect information on mortality, ICU transfers, and length of stay.
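To illustrate the agreement statistic reported above (κ = 0.79 for event classification), a minimal Cohen’s kappa computation might look like the sketch below. The reviewer labels are hypothetical, not study data; the trial’s own analyses were performed in SAS.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical event classifications by two physician reviewers
a = ["adverse", "adverse", "near miss", "exclusion", "adverse", "near miss"]
b = ["adverse", "near miss", "near miss", "exclusion", "adverse", "near miss"]
print(round(cohens_kappa(a, b), 3))  # prints 0.739
```

A κ near 0.8, as reported for event classification, indicates substantial agreement beyond chance; the lower κ of 0.47 for preventability reflects the greater subjectivity of that judgment.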

Educational Outcomes

For our primary education outcome, we conducted time-motion observations on both the control and increased-supervision teams to measure the length of rounds and the total speaking time of faculty, residents, and interns. An independent observer recorded duration of rounds and speaking time using an iPad running Microsoft Access Timing Program and database following a previously described protocol.23

For secondary educational outcomes, research nurses collected daily the number of radiology studies obtained, consultations called, and the number of written orders from 7:00 am to 12:00 pm and 12:01 pm to 5:00 pm on every patient. These times were chosen to reflect a period during rounds and postrounds assuming residents might change their orders in the afternoon following discussion with the attending physician, especially on the control teams. Residents, interns, and attending physicians were given an online survey at the end of each 2-week rotation to assess perceptions of education and teaching, length of rounds, patient care, decision making, autonomy, and satisfaction. Surveys were designed based on established literature in the field.24

Data Analysis

For each patient-day, patients’ group assignment was determined by the status of the responsible attending physician for that day. Therefore, it was possible for patient-days within the same hospitalization to span across both standard-supervision and increased-supervision periods. We compared the patient characteristics between all patients with an attending physician of record during the standard- and increased-supervision periods using 2-sample t tests for continuous variables and χ2 tests for categorical variables. Incidence rate was calculated as number of events (overall errors, preventable adverse events, and near misses) per 1000 patient-days. We used the generalized equations estimation approach to account for clustering of patients within each corresponding attending physician. We used Poisson regression models to compare medical error rates between the 2 groups. As a sensitivity analysis, we compared the medical error rates restricted to patients admitted and discharged while under the care of the same attending physician. For secondary patient outcomes, we used a Poisson regression model to compare hospital length of stay, logistic regression models for ICU transfer, and discharge disposition. We used 2-sample t tests to compare duration of rounds and time spent in each type of activity (attending physician, resident, intern, or patient speaking). We used linear regression models to compare number of radiology studies, consultations, and orders written. All analyses were conducted using SAS, version 9.4 (SAS Institute). Assuming an intraclass correlation coefficient of 0.07 for the patients clustered within the corresponding attending physician, the study was originally designed to detect a difference of 110 errors vs 66 errors per 1000 patient-days between the 2 groups with 80% power and a .05 2-sided significance level based on prior published research.21
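The rate comparison described above can be sketched as follows. This is a simplified illustration with hypothetical event counts chosen only to mirror the order of magnitude of the reported rates; unlike the trial’s SAS GEE Poisson models, it ignores clustering of patients within attending physician, which would widen the confidence intervals.

```python
import math

def rate_per_1000(events, patient_days):
    """Incidence rate expressed per 1000 patient-days."""
    return 1000 * events / patient_days

def poisson_rate_ratio(e1, pd1, e2, pd2):
    """Wald test on the log rate ratio of two Poisson rates.
    Simplification: no adjustment for clustering within attending."""
    rr = (e1 / pd1) / (e2 / pd2)
    se = math.sqrt(1 / e1 + 1 / e2)  # SE of log(rate ratio)
    z = math.log(rr) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rr, p

# Hypothetical counts (not study data)
std_events, std_days = 310, 2880  # ~107.6 per 1000 patient-days
inc_events, inc_days = 263, 2892  # ~90.9 per 1000 patient-days
print(round(rate_per_1000(std_events, std_days), 1))  # prints 107.6
rr, p = poisson_rate_ratio(std_events, std_days, inc_events, inc_days)
```

Note that a naive unclustered comparison like this can suggest significance where the clustered GEE analysis does not, which is exactly why the trial accounted for within-attending correlation.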

Results
Study Patients and Attending Physicians

Twenty-two of 24 eligible invited attending physicians participated in the study over a total of 44 2-week teaching blocks. Of the 22 faculty in the study, 8 (36%) were women. Faculty had a wide range of experience, with 7 (32%) having less than 5 years and 7 (32%) having more than 15 years of experience. Seventy-seven percent of attending physicians’ clinical work was solely inpatient care.

During the study period, attending physicians were assigned a total of 1259 patient hospitalizations (5772 patient-days), 666 standard-supervision and 637 increased-supervision hospitalizations, with 44 patients who spanned across periods. Patient-days attributed to each attending physician ranged from 75 to 232 with similar distribution between the 2 groups, and the median difference of patient-days attributed to the same attending physician was 18 days (interquartile range [IQR], 8-43 days). The distribution of age, sex, race, insurance, and medical complexity did not differ significantly between the 2 groups (Table 1).

Medical Errors and Adverse Events

The overall medical error rate was 107.6 per 1000 patient-days in the standard-supervision group vs 91.1 per 1000 patient-days in the increased-supervision group (15% relative reduction; 95% CI, −36% to 9%; P = .21) (Table 2). There was no statistically significant difference in preventable adverse events (80.0 vs 70.9 events per 1000 patient-days; P = .36) or rate of near misses (27.6 vs 20.2 per 1000 patient-days; P = .21). In a subgroup analysis restricted to patients admitted and discharged under the care of the same attending physician, results were similar. In a subgroup analysis comparing attending physicians by years of experience, there was no difference. In categorizing severity of harm of adverse events using NCC MERP severity categories E through H, 216 (88.5%) standard-supervision events and 171 (88.6%) increased-supervision events involved only minor harm (P = .50).

Secondary Patient Outcomes

There was no significant difference in standard vs increased supervision in length of stay (median, 6.0 [IQR, 4.0-11.0] vs 6.0 [IQR, 3.0-11.0] days; P = .93), transfers to the ICU (88 [13.2%] vs 101 [15.9%]; P = .22), deaths (17 [2.6%] vs 17 [2.7%]; P = .84), or discharge disposition (281 [42.2%] vs 270 [42.4%] discharged home; P = .85) (Table 3).

Educational Outcomes

Mean total duration of work rounds did not change between standard vs increased supervision (202 [95% CI, 192-212] vs 202 [95% CI, 189-215] minutes; P = .99). New-admission bedside presentations were also the same mean duration (105 [95% CI, 94-116] vs 106 [95% CI, 94-119] minutes; P = .87). On work rounds, junior residents spoke similar mean lengths of time during standard vs increased supervision (58 [95% CI, 54-63] vs 57 [95% CI, 52-61] minutes; P = .58). However, the mean amount of time interns spoke was longer in the standard arm (64 [95% CI, 60-68] vs 55 [95% CI, 49-60] minutes; P = .008). Patients and families spoke the same mean amount of time in work rounds regardless of level of supervision (13 [95% CI, 12-14] vs 13 [95% CI, 11-14] minutes; P = .80) (Table 4).

Residents and interns ordered the same daily mean number of radiology studies (0.39 [95% CI, 0.36-0.43] vs 0.41 [95% CI, 0.38-0.44] studies per patient-day; P = .75) and similar number of consultations (0.78 [95% CI, 0.75-0.82] vs 0.87 [95% CI, 0.83-0.91] per patient-day; P = .28) on their patients regardless of study arm. Trainees placed slightly more orders on the intervention teams than on the control teams both between the hours of 7:00 am and 12:00 pm (4.41 [95% CI, 4.19-4.63] vs 5.35 [95% CI, 5.08-5.61] orders per patient-day; P = .10) and between 12:01 pm and 5:00 pm (3.98 [95% CI, 3.77-4.18] vs 5.13 [95% CI, 4.83-5.44] orders per patient-day; P = .09), but the differences were not significant (Table 4).

In surveys, residents and interns reported that when an attending physician joined work rounds they were less efficient, felt less autonomous, and had less ability to make independent decisions. Without the attending physician on work rounds, residents believed that they were the team’s leader and their comfort in making independent patient care decisions improved. Similarly, in the control arm interns believed that they received more feedback on their decision making and supervision was “just right.” Residents in both the control and intervention arms believed that they provided the same quality of care and rated the learning environments similarly. Conversely, attending physicians believed that they knew the team’s plan of care better, rated the quality of care higher, and felt more satisfied with the care provided when they participated on work rounds. Attending physicians believed that the educational experience was the same in both arms (Table 5).

Discussion

We found that increasing the level of supervision during resident work rounds did not produce a statistically significant reduction in medical errors. Although the intervention arm had 15.3% fewer errors than the control arm, this reduction is far smaller than the 23% to 46% reductions seen in the intervention arms of other safety studies using similar methodology.20,21,25 Furthermore, analysis of the types of errors detected in our study suggests that this reduction is not clinically meaningful, because 88.5% of the detected errors were level E (minor) and corollary patient safety measures, including length of stay, ICU transfers, and mortality, were similar. Our power calculation targeted a 40% reduction in errors, a relatively large effect size, chosen to ensure detection of a clinically significant reduction because we anticipated that most detected errors would be minor.
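The gap between the targeted 40% reduction and the observed 15.3% reduction can be illustrated with a standard normal-approximation power calculation for comparing two Poisson error rates. The sketch below is illustrative only: the baseline error rate (0.30 errors per patient-day) and exposure (200 patient-days per arm) are hypothetical numbers, not values from this study.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def poisson_rate_power(base_rate: float, reduction: float,
                       exposure: float, z_alpha: float = 1.96) -> float:
    """Approximate power of a two-sided test of the difference between
    two Poisson rates, using the normal approximation to the rate
    difference with `exposure` observation units per arm."""
    r1 = base_rate
    r2 = base_rate * (1.0 - reduction)
    se = sqrt((r1 + r2) / exposure)  # SE of the observed rate difference
    return norm_cdf((r1 - r2) / se - z_alpha)

# Hypothetical: 0.30 errors per patient-day, 200 patient-days per arm.
power_40 = poisson_rate_power(0.30, 0.40, 200)   # target effect size
power_15 = poisson_rate_power(0.30, 0.153, 200)  # observed effect size
print(round(power_40, 2), round(power_15, 2))
```

Under these assumed inputs, a study with reasonable power for a 40% reduction is badly underpowered for a 15% reduction, which is consistent with the interpretation that a much larger trial would be needed to confirm an effect of the observed size.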

The current literature on supervision and patient safety consists of studies with variable outcomes.9,14,24,26-30 A retrospective cohort study of nearly 40 000 surgical cases, comparing direct attending supervision in the operating room with the attending surgeon simply being available, found no difference in mortality.31 Yet a smaller retrospective medical record review of 4417 cases found fewer complications and lower mortality rates when attending physicians were present or scrubbed in the operating room.32

It is not clear whether cognitive decision making requires the same level of supervision as procedural skills. In the outpatient setting, Gennis and Gennis33 found that preceptors who saw patients directly judged them to be more severely ill than residents did and made major diagnostic changes in 5.5% of patients, but no patient safety outcomes were studied. Such outcomes were evaluated in the ICU after calls for 24-hour, 7-day intensivist coverage, prompted by higher death rates on weekends and at night.34 Kerlin et al10 randomized nighttime ICU staffing to in-house call (direct supervision) vs availability by telephone and found no difference in length of stay or ICU mortality. A retrospective cohort study of 65 000 ICU patients also found no difference in mortality with the addition of overnight intensivists.35 Our study provides further evidence that increased supervision may not increase patient safety.

Published editorials have questioned whether increased supervision is educationally beneficial or instead reduces autonomy, leading to less competent residents.1,12,36 Some survey studies of residents suggest that increased supervision may improve education,1,14,26,29,30,37,38 although others do not.14,38,39 Our study indicates that increased supervision may have negative consequences for resident education and autonomy: interns spoke less, and residents reported less comfort making independent decisions, with an attending physician on work rounds. Studies of learners note that they worry about exposing gaps in their knowledge in front of attending physicians, especially because the same attending physicians often also evaluate them.29,40-42 Interns may feel more comfortable asking questions of a peer than of an attending physician.1,41 Educational theory supports peer collaborative learning, in which trainees learn from each other rather than from teachers.3 There are certainly educational reasons for attending physicians to join work rounds, including observation and feedback at the bedside; however, our work suggests that multiple factors need to be weighed in deciding when an attending physician should be present on work rounds, including patient safety, peer-to-peer education, and resident autonomy.

Limitations

This was a single-center study at a large academic residency program with a culture that emphasizes resident autonomy and as such may have limited generalizability. However, all residency programs struggle with the balance between resident autonomy and supervision. It is possible that faculty not skilled in creating a collaborative teaching environment could limit intern speaking time, a phenomenon that additional faculty training might mitigate. The inability to mask the level of supervision could have affected residents’ behavior and caused them to increase their vigilance; however, given the duration of the study and the intensity of the workload, we believe it would have been difficult for participants to change their behavior in a sustained enough manner to bias the results. We defined harm broadly, as any physiologic change, which may have contributed to the difference in our proportion of preventable adverse events and near misses compared with other studies. However, we would not expect this to affect differences in error rates between the intervention and control arms. Last, while we did observe a slightly reduced rate of errors in the increased-supervision arm, this was not statistically significant. The methodology used to detect errors does not have perfect sensitivity and could have missed relevant medical errors, reducing our overall power to detect a statistically significant reduction. Arguing against this, our study detected nearly twice as many errors as a key study that measured harm using similar methodology.43 We cannot rule out the possibility that a much larger study would demonstrate a difference in the rate of medical errors, but at most, any difference would be modest.

Conclusions

Attending physician participation on work rounds was not associated with an improvement in the rate of medical errors, adding to the body of literature suggesting that increased attending supervision does not necessarily improve patient safety. Conversely, our data suggest that a larger attending physician presence may have negative consequences for resident education, because interns spoke less and residents felt less empowered to make independent medical decisions. In contrast, attending physicians rated the quality of care higher when they participated on work rounds, which may be why more attending physicians are joining resident work rounds. Given the importance of graduated autonomy to adult learning and the value of peer learning, decisions about the level of supervision should consider the need for distance between teacher and student for learning to occur. The results of this study suggest that residency training programs reconsider the appropriate level of attending physician supervision in designing their morning rounds, balancing patient safety, excellent care, learner needs, and resident autonomy.

Article Information

Accepted for Publication: February 27, 2018.

Corresponding Author: Kathleen M. Finn, MD, Massachusetts General Hospital, Department of Medicine, Core Educator Service, 50 Staniford St, Ste 503B, Boston, MA 02114 (kfinn@partners.org).

Published Online: June 4, 2018. doi:10.1001/jamainternmed.2018.1244

Author Contributions: Dr Finn had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs Finn and Iyasere served as co–first authors, each with equal contribution to the manuscript.

Study concept and design: Finn, Iyasere, Chang, Landrigan, Metlay.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Finn, Iyasere, Chang, Yang.

Critical revision of the manuscript for important intellectual content: Finn, Iyasere, Nagarur, Landrigan, Metlay.

Statistical analysis: Iyasere, Chang.

Obtained funding: Finn, Iyasere, Metlay.

Administrative, technical, or material support: Iyasere, Nagarur.

Study supervision: Finn, Iyasere, Landrigan, Metlay.

Conflict of Interest Disclosures: Dr Landrigan has been supported in part by the Children’s Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings (PRIS) network. Dr Landrigan has consulted with and holds equity in the I-PASS Institute, which seeks to train institutions in best handoff practices and aid in their implementation. In addition, Dr Landrigan has received monetary awards, honoraria, and travel reimbursement from multiple academic and professional organizations for teaching and consulting on sleep deprivation, physician performance, handoffs, and safety, and has served as an expert witness in cases regarding patient safety and sleep deprivation. No other disclosures are reported.

Funding/Support: The National Board of Medical Examiners Edward J. Stemmler Grant provided funding for this study. Harvard Catalyst Clinical Research Center provided the clinical research nurse support. Harvard Catalyst receives financial contributions from Harvard University and its affiliated academic health care centers.

Role of the Funder/Sponsor: The National Board of Medical Examiners had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of Harvard Catalyst, Harvard University and its affiliated academic health care centers, or the National Institutes of Health.

Additional Contributions: We thank all the MGH residents and clinical teaching faculty who participated in the study; they received coffee gift cards for their participation. We also thank our 2 research assistants, Aria Elahi, BA, and Eric Jacques, BA, both from Northeastern University, for their help with the daily management of the study, and the clinical research nurses who abstracted and scored medical records for errors; all received salary support from the Stemmler grant.

References
1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10)(suppl):S106-S111.
2. Nasca TJ, Day SH, Amis ES Jr; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
3. Kennedy TJ. Towards a tighter link between supervision and trainee ability. Med Educ. 2009;43(12):1126-1128.
4. Cottrell D, Kilminster S, Jolly B, Grant J. What is effective supervision and how does it happen? a critical incident study. Med Educ. 2002;36(11):1042-1049.
5. Bell BM. Supervision, not regulation of hours, is the key to improving the quality of patient care. JAMA. 1993;269(3):403-404.
6. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: Institute of Medicine; 2008.
7. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22(8):1080-1085.
8. Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: a literature review. Med Educ. 2000;34(10):827-840.
9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428-442.
10. Kerlin MP, Small DS, Cooney E, et al. A randomized trial of nighttime physician staffing in an intensive care unit. N Engl J Med. 2013;368(23):2201-2209.
11. Reriani M, Biehl M, Sloan JA, Malinchoc M, Gajic O. Effect of 24-hour mandatory vs on-demand critical care specialist presence on long-term survival and quality of life of critically ill patients in the intensive care unit of a teaching hospital. J Crit Care. 2012;27(4):421.e1-421.e7.
12. Halpern SD, Detsky AS. Graded autonomy in medical education—managing things that go bump in the night. N Engl J Med. 2014;370(12):1086-1089.
13. Hinchey KT, Rothberg MB. Can residents learn to be good doctors without harming patients? J Gen Intern Med. 2010;25(8):760-761.
14. Landrigan CP, Muret-Wagstaff S, Chiang VW, Nigrin DJ, Goldmann DA, Finkelstein JA. Effect of a pediatric hospitalist system on housestaff education and experience. Arch Pediatr Adolesc Med. 2002;156(9):877-883.
15. Saint S, Fowler KE, Krein SL, et al. An academic hospitalist model to improve healthcare worker communication and learner education: results from a quasi-experimental study at a Veterans Affairs medical center. J Hosp Med. 2013;8(12):702-710.
16. Hauer KE, Irby DM. Effective clinical teaching in the inpatient setting. In: Wachter R, Goldman L, Hollander H, eds. Hospital Medicine. 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2005:71-78.
17. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309-315.
18. Bates DW, Cullen DJ, Laird N, et al; ADE Prevention Study Group. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA. 1995;274(1):29-34.
19. Kaushal R. Using chart review to screen for medication errors and adverse drug events. Am J Health Syst Pharm. 2002;59(23):2323-2325.
20. Starmer AJ, Spector ND, Srivastava R, et al; I-PASS Study Group. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.
21. Starmer AJ, Sectish TC, Simon DW, et al. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle. JAMA. 2013;310(21):2262-2270.
22. National Coordinating Council for Medication Error Reporting and Prevention website. http://www.nccmerp.org/types-medication-errors. Accessed June 9, 2017.
23. Huang KT, Minahan J, Brita-Rossi P, et al. All together now: impact of a regionalization and bedside rounding initiative on the efficiency and inclusiveness of clinical rounds. J Hosp Med. 2017;12(3):150-156.
24. Biondi EA, Varade WS, Garfunkel LC, et al. Discordance between resident and faculty perceptions of resident autonomy: can self-determination theory help interpret differences and guide strategies for bridging the divide? Acad Med. 2015;90(4):462-471.
25. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(suppl 2):ii39-ii45.
26. Baldwin DC Jr, Daugherty SR, Ryan PM. How residents view their clinical supervision: a reanalysis of classic national survey data. J Grad Med Educ. 2010;2(1):37-45.
27. Defilippis AP, Tellez I, Winawer N, Di Francesco L, Manning KD, Kripalani S. On-site night float by attending physicians: a model to improve resident education and patient care. J Grad Med Educ. 2010;2(1):57-61.
28. Farnan JM, Burger A, Boonyasai RT, et al; SGIM Housestaff Oversight Subcommittee. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521-523.
29. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7(8):606-610.
30. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332-336.
31. Itani KM, DePalma RG, Schifftner T, et al. Surgical resident supervision in the operating room and outcomes of care in Veterans Affairs hospitals. Am J Surg. 2005;190(5):725-731.
32. Fallon WF Jr, Wears RL, Tepas JJ III. Resident supervision in the operating room: does this impact on outcome? J Trauma. 1993;35(4):556-560.
33. Gennis VM, Gennis MA. Supervision in the outpatient clinic: effects on teaching and patient care. J Gen Intern Med. 1993;8(7):378-380.
34. Burnham EL, Moss M, Geraci MW. The case for 24/7 in-house intensivist coverage. Am J Respir Crit Care Med. 2010;181(11):1159-1160.
35. Wallace DJ, Angus DC, Barnato AE, Kramer AA, Kahn JM. Nighttime intensivist staffing and mortality among critically ill patients. N Engl J Med. 2012;366(22):2093-2101.
36. Kerlin MP, Halpern SD. Twenty-four-hour intensivist staffing in teaching hospitals: tensions between safety today and safety tomorrow. Chest. 2012;141(5):1315-1320.
37. Busari JO, Weggelaar NM, Knottnerus AC, Greidanus PM, Scherpbier AJ. How medical residents perceive the quality of supervision provided by attending doctors in the clinical setting. Med Educ. 2005;39(7):696-703.
38. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On-call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784-788.
39. Stevermer JJ, Stiffman MN. The effect of the teaching physician rule on residency education. Fam Med. 2001;33(2):104-110.
40. Farnan JM, Humphrey HJ, Arora V. Supervision: a 2-way street. Arch Intern Med. 2008;168(10):1117.
41. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. Resident uncertainty in clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122-126.
42. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees’ requests for clinical support. BMJ. 2009;338:b128.
43. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124-2134.