aMost enrolled in American College of Surgeons National Surgical Quality Improvement Program after 2009.
bNo surgery residency programs withdrew during the study period.
cThese patients were included in the sensitivity analyses, which tested alternative definitions of teaching and nonteaching hospitals.
aEach data point represents 6 months of data during the 4-year period. The error bars indicate the 95% confidence intervals. The dotted blue line represents the period in which the 2011 duty hour reform was implemented, beginning July 1, 2011. Models are adjusted for patient demographics, comorbidities, and differences in procedure mix (see eStatistical Methods in the Supplement).
eTable 1. Percentage of General Surgery Cases with a Resident Present during the Operation
eTable 2. Sensitivity Analyses for Death or Serious Morbidity in All General Surgery Patients
eTable 3. Most Frequently Performed Operations in Teaching and Non-Teaching Hospitals
eTable 4. Unadjusted Adverse Outcome Rates in Inpatients
eTable 5. Unadjusted Adverse Outcome Rates in Outpatients
eTable 6. Unadjusted Adverse Outcome Rates in High-Risk Patients
eTable 7. Unadjusted Adverse Outcome Rates in Low-Risk Patients
eTable 8. Inpatient and High-Risk Patient Subgroup Analyses
eStatistical Methods. Statistical Model Details
Rajaram R, Chung JW, Jones AT, Cohen ME, Dahlke AR, Ko CY, Tarpley JL, Lewis FR, Hoyt DB, Bilimoria KY. Association of the 2011 ACGME Resident Duty Hour Reform With General Surgery Patient Outcomes and With Resident Examination Performance. JAMA. 2014;312(22):2374-2384. doi:10.1001/jama.2014.15277
Importance
In 2011, the Accreditation Council for Graduate Medical Education (ACGME) restricted resident duty hour requirements beyond those established in 2003, leading to concerns about the effects on patient care and resident training.
Objective
To determine if the 2011 ACGME duty hour reform was associated with a change in general surgery patient outcomes or in resident examination performance.
Design, Setting, and Participants
Quasi-experimental study of general surgery patient outcomes 2 years before (academic years 2009-2010) and after (academic years 2012-2013) the 2011 duty hour reform. Teaching and nonteaching hospitals were compared using a difference-in-differences approach adjusted for procedural mix, patient comorbidities, and time trends. Teaching hospitals were defined based on the proportion of cases at which residents were present intraoperatively. Patients were those undergoing surgery at hospitals participating in the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). General surgery resident performance on the annual in-training, written board, and oral board examinations was assessed for this same period.
Exposures
National implementation of revised resident duty hour requirements on July 1, 2011, in all ACGME accredited residency programs.
Main Outcomes and Measures
Primary outcome was a composite of death or serious morbidity; secondary outcomes were other postoperative complications and resident examination performance.
Results
In the main analysis, 204 641 patients were identified from 23 teaching (n = 102 525) and 31 nonteaching (n = 102 116) hospitals. The unadjusted rate of death or serious morbidity improved during the study period in both teaching (11.6% [95% CI, 11.3%-12.0%] to 9.4% [95% CI, 9.1%-9.8%], P < .001) and nonteaching hospitals (8.7% [95% CI, 8.3%-9.0%] to 7.1% [95% CI, 6.8%-7.5%], P < .001). In adjusted analyses, the 2011 ACGME duty hour reform was not associated with a significant change in death or serious morbidity in either postreform year 1 (OR, 1.12; 95% CI, 0.98-1.28) or postreform year 2 (OR, 1.00; 95% CI, 0.86-1.17) or when both postreform years were combined (OR, 1.06; 95% CI, 0.93-1.20). There was no association between duty hour reform and any other postoperative adverse outcome. Mean (SD) in-training examination scores did not significantly change from 2010 to 2013 for first-year residents (499.7 [85.2] to 500.5 [84.2], P = .99), for residents from other postgraduate years, or for first-time examinees taking the written or oral board examinations during this period.
Conclusions and Relevance
Implementation of the 2011 ACGME duty hour reform was not associated with a change in general surgery patient outcomes or differences in resident examination performance. The implications of these findings should be considered when evaluating the merit of the 2011 ACGME duty hour reform and revising related policies in the future.
Residency duty hour requirements have been under scrutiny for more than 30 years as concerns about medical errors attributable to exhausted residents persist.1,2 In 2003, the Accreditation Council for Graduate Medical Education (ACGME) enacted the first national duty hour requirements for all accredited residency programs, which required residents to work 80 hours or less per week averaged over 4 weeks; have 1 day free per week averaged over 4 weeks; have 10 hours off between shifts; not work more than 24 hours of continuous duty with 6 hours allowed for transfer of care activities; and take call no more than every third night averaged over 4 weeks.3
However, stakeholders continued to express concern about resident fatigue contributing to patient harm.4 Effective July 1, 2011, the ACGME further restricted resident duty hours to include the following provisions: first-year trainees limited to 16 hours of continuous in-hospital duty; residents must have at least 8 hours free between shifts; and residents in-house for 24 hours may have up to 4 hours for transfer of care activities and must have at least 14 hours off between shifts.3
These reforms have been met with mixed opinions. Proponents suggest they may reduce avoidable medical errors and improve resident well-being,5-7 whereas opponents note that disruptions to continuity of care may worsen outcomes and compromise resident education.8,9 Despite the need for rigorous evaluations of current duty hour policies, most studies to date have been single-institution reports, used administrative data, or focused on the 2003 duty hour reform.
The objectives of this study were to assess whether implementation of the 2011 ACGME duty hour reform was associated with a change in the composite measure of death or serious morbidity in patients undergoing general surgery operations, with other postoperative complications, or with differences in general surgery resident performance on national in-training and board certification examinations.
Methods
This study was deemed exempt by the Northwestern University institutional review board.
This was a retrospective observational study of the prospectively maintained American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database.10,11 In 2013, ACS NSQIP included 452 adult hospitals in the United States, accounting for nearly 10% of all hospitals and 30% of all surgeries performed throughout the country.12 Approximately 76% of hospitals in ACS NSQIP are affiliated with a general surgery residency program. The ACS NSQIP database collects surgical patient information from participating hospitals for the purposes of quality improvement. The details and history of this program have been described.10,11,13 Patient demographics, comorbidities, intraoperative variables, and detailed clinical outcomes are collected by trained on-site surgical clinical reviewers using data definitions that are standardized across participating institutions. Reviewers participate in frequent in-person training, online educational modules, and annual examinations. Hospitals are subject to detailed data audits to try to ensure high reliability between reviewers.14 Clinical outcomes are tracked from the date of surgery until 30 days after surgery. Relevant inpatient and outpatient information is captured based on medical record review and on discussions with physicians, contacting patients, or both.
American College of Surgeons National Surgical Quality Improvement Program data were obtained from 2 years before to 2 years after the 2011 ACGME duty hour reform became effective on July 1, 2011. Patients who underwent any general surgery procedure between July 1, 2009, and June 30, 2013, were identified using the standard ACS NSQIP Current Procedural Terminology (CPT) codes for general surgery, compiled and vetted by surgeons and coding experts.15 We excluded hospitals that did not submit at least 25 general surgery cases per quarter to ensure only continuously enrolled sites were included (Figure 1). Patients from hospitals located in New York state were also excluded given preexisting statewide duty hour regulations.16 We also excluded patients from hospitals that had newly accredited general surgery residency programs or had withdrawn general surgery residency programs during the study period, and patients from international ACS NSQIP sites.17
Teaching status of a hospital was determined based on the proportion of general surgery cases in each hospital in which a resident was present intraoperatively (eTable 1 in the Supplement). Several different cutoffs were examined. To compare clearly contrasting groups for the main analysis, teaching hospitals were defined as those with at least 95% of operations involving a resident. Nonteaching hospitals were defined as having a resident present intraoperatively in 1% or fewer cases (Figure 1). Using these stringent criteria allowed for a comparison between 2 dissimilar groups with respect to surgical resident involvement. Previous studies have similarly compared the extremes of measures reflecting teaching status, such as the resident-to-bed ratio.18,19
For comparison, we also evaluated the following alternative criteria of teaching vs nonteaching hospital status using the percentage of operations with resident involvement: 95% or more vs 5% or less, 90% or more vs 10% or less, 75% or more vs 1% or less, 75% or more vs 10% or less, 50% or more vs 1% or less, 25% or more vs 1% or less, and more than 1% vs 1% or less (eTable 2 in the Supplement).
The primary outcome was a composite measure of death or serious morbidity within 30 days of surgery, a National Quality Forum (NQF)–endorsed metric used by the Centers for Medicare & Medicaid Services (NQF, No. 0697) and the main outcome measure for ACS NSQIP.20,21 Secondary outcomes included death, serious morbidity, any morbidity, failure to rescue, surgical-site infection (SSI), and sepsis or septic shock within 30 days of surgery. The ACS NSQIP has greater accuracy for detecting postoperative complications when compared with administrative data, and interrater reliability over time has improved for these measures.14,22 Serious morbidity was defined using the standard ACS NSQIP definition and included one or more of the following complications: deep or organ-space SSI, wound or fascial dehiscence, pneumonia, unplanned intubation, prolonged need for mechanical ventilation (>48 hours), acute renal failure, urinary tract infection, cardiac arrest, myocardial infarction, sepsis or septic shock, or reoperation.23 The ACS NSQIP definition of any morbidity includes all serious morbidity events, excluding reoperation, in addition to superficial SSI, cerebrovascular accident, and bleeding requiring transfusion (≥5 units in first 72 hours).23 Failure to rescue indicates the proportion of patients who experienced a serious morbidity and subsequently died.24
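The outcome definitions above can be expressed as a small illustrative sketch. This is not ACS NSQIP's actual implementation; the complication flag names below are hypothetical stand-ins for the program's own variables.

```python
# Illustrative sketch of the composite outcome definitions described above.
# Flag names are hypothetical; ACS NSQIP uses its own variable definitions.

SERIOUS_MORBIDITY_FLAGS = {
    "deep_or_organ_space_ssi", "wound_or_fascial_dehiscence", "pneumonia",
    "unplanned_intubation", "ventilator_gt_48h", "acute_renal_failure",
    "urinary_tract_infection", "cardiac_arrest", "myocardial_infarction",
    "sepsis_or_septic_shock", "reoperation",
}

def death_or_serious_morbidity(events):
    """Primary composite: death or any serious morbidity within 30 days.

    `events` is the set of complication flags recorded for one patient.
    """
    return "death" in events or bool(events & SERIOUS_MORBIDITY_FLAGS)

def failure_to_rescue(events):
    """Death in a patient who experienced a serious morbidity event."""
    return "death" in events and bool(events & SERIOUS_MORBIDITY_FLAGS)
```

Note that, per the definitions above, a superficial SSI alone counts toward any morbidity but not toward the serious morbidity composite.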
Risk-adjusted event rates for teaching and nonteaching hospitals were calculated for each outcome using logistic regression models. Models included variables for 21 patient-level demographic factors and comorbidities, with adjustment for surgical case mix using the standard ACS NSQIP methodology (eStatistical Methods in the Supplement).11 This method uses a hierarchical model for each dependent outcome with the CPT code as the random effect. A linear risk score is derived for each procedure and subsequently used to adjust for procedural differences between hospitals.23 Any missing data are standardly imputed by ACS NSQIP for hospital-level modeling using multivariable regression with single imputation.
To determine the association between the 2011 ACGME duty hour reform and patient outcomes, we used a quasi-experimental study design known as a difference-in-differences analysis (eStatistical Methods in the Supplement).25 This statistical approach is often used to assess changes that occur following natural experiments or policy changes. Using a difference-in-differences model, we compared patient outcomes in teaching hospitals before and after the 2011 ACGME duty hour reform to a contemporaneous control group of nonteaching hospitals during the same period. Because changes to duty hour requirements should not affect patient care at nonteaching hospitals, this comparison adjusts for time trends and other unmeasured variables that may affect both hospital groups similarly.
We estimated logistic regression models with robust cluster-corrected standard errors to account for patient clustering within hospitals. The association between duty hour reform and patient outcomes was identified by the difference between teaching and nonteaching hospitals in pre-post time differences. This required dummy variables indicating (1) whether surgery occurred at a teaching or nonteaching hospital and (2) if the surgery was before or after the duty hour reform was implemented. The interaction term of these 2 variables was the difference-in-differences estimator, and its coefficient reflected the magnitude of association between duty hour reform and the dependent outcome of interest. Models adjusted for patient demographics, comorbidities, and procedure type and also included variables indicating the quarter in which the surgery was performed to control for time trends.
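The logic of the difference-in-differences estimator can be illustrated with a minimal sketch using the unadjusted death or serious morbidity rates reported in this study (teaching hospitals, 11.6% to 9.4%; nonteaching hospitals, 8.7% to 7.1%). This crude calculation on the log-odds scale omits the covariate adjustment, time-trend dummies, and cluster-corrected standard errors of the actual models, so it is not expected to reproduce the adjusted odds ratios.

```python
from math import exp, log

def logit(p):
    """Log-odds of a proportion p in (0, 1)."""
    return log(p / (1.0 - p))

# Unadjusted rates reported in the study (prereform year 1 -> postreform year 2).
teaching_pre, teaching_post = 0.116, 0.094
nonteaching_pre, nonteaching_post = 0.087, 0.071

# Difference-in-differences on the log-odds scale: the pre-post change at
# teaching hospitals minus the contemporaneous change at nonteaching
# hospitals, which nets out shared time trends.
did = (logit(teaching_post) - logit(teaching_pre)) - (
    logit(nonteaching_post) - logit(nonteaching_pre))
or_did = exp(did)  # crude DiD odds ratio; close to 1 for these rates
```

Because both hospital groups improved by a similar amount, the crude DiD odds ratio is near 1, consistent in direction with the study's null adjusted findings.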
We additionally investigated whether there may have been differential associations with the policy reform in 2 subgroup analyses. First, we tested whether there was a differential policy association in inpatients, defined as having a length of stay of at least 2 days, compared with all other patients. Second, we tested whether there was a differential policy association among high-risk individuals, defined as individuals in the highest quartile of expected probability of risk for each specific adverse outcome, compared with all other patients.26 The expected risk probability for each individual was estimated from logistic regression models adjusted for patient demographic information, comorbidities, and procedure type. To test for subgroup associations, we ran separate logistic regression models for each subgroup analysis that included an interaction term between the subgroup indicator and the difference-in-differences estimator, all lower-order interaction terms, and additional patient-level covariates. This 3-way interaction term is the difference-in-difference-in-differences estimator and tests whether there was a differential association of duty hour reform across inpatients vs outpatients or high-risk vs low-risk patients (eStatistical Methods in the Supplement).27
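As a schematic illustration (not the authors' actual SAS code), the indicator and interaction columns entering these subgroup models can be laid out as follows; the column names are hypothetical.

```python
# Hypothetical layout of the indicator columns for one patient. The triple
# interaction is the difference-in-difference-in-differences estimator
# described above; variable names are illustrative only.
def design_row(teaching, post, subgroup):
    """Build 1/0 indicator columns for one patient.

    teaching: surgery at a teaching hospital; post: surgery after
    July 1, 2011; subgroup: e.g., inpatient or highest-risk quartile.
    """
    return {
        "teaching": teaching,
        "post": post,
        "subgroup": subgroup,
        # two-way interactions (all lower-order terms are included)
        "teaching_x_post": teaching * post,  # difference-in-differences term
        "teaching_x_subgroup": teaching * subgroup,
        "post_x_subgroup": post * subgroup,
        # three-way interaction: the DDD estimator of differential association
        "teaching_x_post_x_subgroup": teaching * post * subgroup,
    }
```

A significant coefficient on the three-way column would indicate that the reform's association with outcomes differed between the subgroup and its complement.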
For each analysis, we pooled data from both postreform year 1 (July 1, 2011-June 30, 2012) and postreform year 2 (July 1, 2012-June 30, 2013) and compared this with the entire prereform period (July 1, 2009-June 30, 2011). Additionally, each postreform year was individually compared with the entire prereform period to test if outcome differences occurred either initially or over time after reform. We report 2-sided P values with the level of statistical significance set to .05. We did not adjust our level of significance for multiple comparisons because we hypothesized that duty hour reform was not associated with a change in outcomes; therefore, using uncorrected P values presents a more conservative approach. Statistical analyses were performed in SAS version 9.3 (SAS Institute Inc).
Difference-in-differences analyses assume that there is no significant change between groups relative to one another prior to the policy change.28 Similar to previous studies, we tested this assumption by performing post hoc analyses for all outcomes comparing teaching and nonteaching hospitals in prereform year 1 with prereform year 2.18,19 Using the model previously described, we tested the significance of the interaction term during these periods. If a significant difference was detected, we retested our difference-in-differences model using prereform year 2, the year immediately preceding duty hour reform, as the reference group.
Sensitivity analyses were performed on the primary outcome measure of death or serious morbidity to test the robustness of our findings. This included (1) substituting alternative variables to control for time trends; (2) using resident-to-bed ratios derived from the 2010 American Hospital Association’s Annual Survey; and (3) substituting a hierarchical model with the hospital as the random effect (eTable 2 in the Supplement). The resident-to-bed ratio is commonly used in studies evaluating duty hour reform.18,19 However, this measure provides only an overall hospital indicator of resident involvement and is not specific to a particular specialty or patient population but was included for comparison with previous research. Hospitals were divided into 1 of 5 groups based on the resident-to-bed ratio: (1) 0.000 (nonteaching); (2) 0.001-0.049 (very minor teaching); (3) 0.050-0.249 (minor teaching); (4) 0.250-0.599 (major teaching); and (5) 0.600 or higher (very major teaching). Teaching hospitals were compared with nonteaching hospitals using the previously described difference-in-differences model.
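The resident-to-bed ratio grouping described above maps directly onto a small classification function. This is an illustrative Python sketch that assumes ratios are reported to 3 decimal places, as the listed cutoffs imply.

```python
def teaching_category(resident_to_bed_ratio):
    """Map a hospital's resident-to-bed ratio to the 5 groups above.

    Assumes ratios reported to 3 decimal places, matching the cutoffs:
    0.000; 0.001-0.049; 0.050-0.249; 0.250-0.599; >= 0.600.
    """
    if resident_to_bed_ratio >= 0.600:
        return "very major teaching"
    if resident_to_bed_ratio >= 0.250:
        return "major teaching"
    if resident_to_bed_ratio >= 0.050:
        return "minor teaching"
    if resident_to_bed_ratio > 0.000:
        return "very minor teaching"
    return "nonteaching"
```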
Duty hour reform may allow residents increased time to study for examinations that would result in better performance, or conversely duty hour limits may lead to less experienced residents and thus worse examination scores.29,30 To assess if the 2011 ACGME duty hour reform was associated with changes in resident education, we evaluated performance on national in-training, written board, and oral board examinations. The American Board of Surgery (ABS) In-Training Examination (ABSITE) is a multiple-choice test taken annually by general surgery residents.31 Scores are standardized each year for both the junior-level (postgraduate years [PGY] 1 and 2) and senior-level (PGY 3, 4, and 5) examinations to account for differences in test difficulty from year to year. To become board certified in general surgery, the ABS requires that candidates pass 2 examinations.31 The written board examination (Qualifying Examination) is an annual multiple-choice test offered to graduates of accredited residency programs. Successful completion of the written examination is required before individuals may take the oral board examination (Certifying Examination).
Categorical general surgery resident ABSITE scores and performance of first-time US-trained examinees on the written and oral board examinations were obtained from the ABS from 2 years before to 2 years after the 2011 duty hour reform became effective. We tested ABSITE scores for a longitudinal trend for each postgraduate year using a linear contrast test within an analysis of variance to determine if there were differences in examination performance after duty hour reform. The percentages of individuals passing the written and oral board examinations were compared over time using the Cochran-Armitage trend test. Statistical analyses were performed in SPSS version 22 (IBM Corp).
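The Cochran-Armitage trend test used for the pass-rate comparisons has a standard closed form, sketched below for illustration (a textbook formulation, not the study's SPSS implementation; the example data are hypothetical).

```python
from math import sqrt

def cochran_armitage_z(events, totals, scores):
    """Cochran-Armitage test for a linear trend in proportions.

    events[i] of totals[i] examinees passed in the group with ordinal
    score scores[i] (e.g., examination year). Returns the z statistic;
    |z| > 1.96 suggests a linear trend at the .05 level.
    """
    n = sum(totals)
    p_bar = sum(events) / n  # overall pass proportion
    # Score-weighted deviation of observed events from expectation.
    t = (sum(s * r for s, r in zip(scores, events))
         - p_bar * sum(s * m for s, m in zip(scores, totals)))
    # Variance of t under the null hypothesis of no trend.
    var = p_bar * (1 - p_bar) * (
        sum(s * s * m for s, m in zip(scores, totals))
        - sum(s * m for s, m in zip(scores, totals)) ** 2 / n)
    return t / sqrt(var)

# Hypothetical example: pass counts rising across 4 years of 1000
# examinees each yields a clearly positive trend statistic.
z_rising = cochran_armitage_z([100, 120, 140, 160], [1000] * 4, [1, 2, 3, 4])
```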
Results
Of the 1 023 883 patients from 448 ACS NSQIP hospitals, those treated at hospitals that were not continuously enrolled from 2009 to 2013 were excluded (298 hospitals, 405 728 patients) (Figure 1). Next, hospitals were excluded if located in New York state (14 hospitals, 64 208 patients), had a new general surgery residency accreditation (3 hospitals, 12 181 patients), or were not in the United States (2 hospitals, 6267 patients). Thus, 535 499 patients from 131 hospitals met the inclusion criteria. Hospitals were then classified into teaching vs nonteaching hospitals using different definitions (eTable 1 in the Supplement). For the main analyses, teaching hospitals were defined as those with a resident involved in at least 95% of general surgery cases and nonteaching hospitals as those with a resident involved in 1% or fewer cases (54 hospitals, 204 641 patients). Patients were nearly evenly divided between the 23 teaching (n = 102 525) and 31 nonteaching (n = 102 116) hospitals. Patient characteristics are reported in Table 1 and the most frequently performed operations are summarized in eTable 3 in the Supplement.
The unadjusted rate of the death or serious morbidity composite outcome improved from prereform year 1 to postreform year 2 in both teaching (11.6% [95% CI, 11.3%-12.0%] to 9.4% [95% CI, 9.1%-9.8%]; P < .001) and nonteaching hospitals (8.7% [95% CI, 8.3%-9.0%] to 7.1% [95% CI, 6.8%-7.5%]; P < .001; Table 2). Unadjusted rates of any morbidity, failure to rescue, SSI, and sepsis or septic shock remained relatively stable in both hospital groups over time. Adverse event rates adjusted for patient demographics, comorbidities, and procedural case mix for teaching and nonteaching hospitals are shown in Figure 2.
On adjusted analyses comparing outcomes before and after the 2011 ACGME duty hour reform at teaching and nonteaching hospitals, reform was not associated with a change in the likelihood of death or serious morbidity (odds ratio [OR], 1.06; 95% CI, 0.93-1.20; Table 3). Duty hour reform was also not associated with early or delayed differences in death or serious morbidity when evaluating postreform year 1 (OR, 1.12; 95% CI, 0.98-1.28) and postreform year 2 (OR, 1.00; 95% CI, 0.86-1.17) separately. Moreover, duty hour reform was not associated with a significant change in any of the other adverse outcomes examined (Table 3).
As a sensitivity test, we compared less restrictive definitions of teaching and nonteaching hospital status using the percentage of general surgery cases in which a resident was involved. This resulted in more hospitals and patients being included in the models (eTable 2 in the Supplement). Using these alternative definitions of teaching status, there were no significant associations between duty hour reform and death or serious morbidity in postreform year 1 or postreform year 2 or when both postreform years were combined (eTable 2 in the Supplement). In additional analyses, there was no significant association between duty hour reform and death or serious morbidity when comparing inpatients with outpatients for the entire postreform period (coefficient, 0.10; 95% CI, −0.12 to 0.31; P = .38) or when evaluating each postreform year separately (eTables 4, 5, and 8 in the Supplement). When high-risk patients were compared with low-risk patients, there was also no significant differential association of duty hour reform with death or serious morbidity or any other adverse outcome examined for any postreform period (eTables 6, 7, and 8 in the Supplement).
On post hoc analysis, when adverse outcomes at teaching and nonteaching hospitals were compared between prereform years 1 and 2, a significant preexisting difference was found only for any morbidity (P = .02). However, on subsequent analyses using prereform year 2 as the reference group, there was no association between duty hour reform and any morbidity when comparing the entire postreform period, postreform year 1, or postreform year 2.
Additional sensitivity analyses were performed. When controlling for time trends with the academic year rather than the quarter in which surgery was performed, our results were unchanged (eTable 2 in the Supplement). We also evaluated teaching and nonteaching hospitals using the hospital-level resident-to-bed ratio. On adjusted analyses comparing very minor (0.001-0.049), minor (0.050-0.249), and major teaching (0.250-0.599) hospitals with nonteaching (0.000) hospitals, there was no association between duty hour reform and death or serious morbidity. When very major teaching (≥0.600) hospitals were compared with nonteaching hospitals, duty hour reform was associated with a significant increase in death or serious morbidity in postreform year 1 (OR, 1.13; 95% CI, 1.00-1.26), but not in the other postreform periods examined. When a hierarchical model with the hospital as the random effect was used, duty hour reform was again associated with an increased likelihood of death or serious morbidity in the first year after reform (OR, 1.13; 95% CI, 1.02-1.29), but not in postreform year 2 or when pooling both postreform years together.
Mean (SD) ABSITE scores for PGY-1 residents did not significantly change from 2010 to 2013 (499.7 [85.2] to 500.5 [84.2], P = .99) when examining all programs. Nonsignificant changes in ABSITE scores were also found in other PGY classes during this period (Table 4).
The percentage of first-time examinees passing the written board examination significantly increased from 2010 to 2013 (83.1% [95% CI, 80.8%-85.3%] to 88.1% [95% CI, 86.1%-90.0%], P < .001). When only the 2011 to 2013 period was examined, the findings were no longer significant (87.5% [95% CI, 85.5%-89.5%] to 88.1% [95% CI, 86.1%-90.0%], P = .41). There was no change in oral board examination pass rates over time (81.7% [95% CI, 79.2%-84.2%] to 80.9% [95% CI, 78.7%-83.2%], P = .21) (Table 4).
Discussion
In this study, the 2011 ACGME duty hour reform was not associated with a change in general surgery patient outcomes or differences in resident in-training, written board, or oral board examination performance. To our knowledge, this is one of the first national empirical evaluations of the 2011 ACGME duty hour reform. Our results may be considered when examining the merit of this reform and to inform future duty hour policies.
Previous studies evaluating patient outcomes before and after the 2003 ACGME duty hour reform found that reform was not associated with differences in surgical mortality.18,19 Several recent meta-analyses and systematic reviews have also concluded that duty hour reforms were not associated with improvements in surgical care, with some supporting possibly worsened outcomes.2,32,33 Using clinically collected data, we found that there was no association between the 2011 ACGME duty hour reform and outcomes in general surgery patients.
Some have suggested that outcomes for high-risk patients may better reflect resident involvement and be more sensitive to duty hour reforms.26,34 Volpp et al26 examined Medicare and Veterans Affairs patients in the highest quartile of risk and found no consistent association between the 2003 reform and mortality or failure to rescue. In our study, we likewise found no differential association of duty hour reform in high-risk vs low-risk patients. Although continuity of care may be negatively affected by duty hour policies, this may be mitigated by hospitals anticipating the effects of these regulations and responding accordingly (eg, hiring midlevel support staff and formalizing training in patient handoffs).
The few single-institution studies comparing surgical resident in-training examination performance before and after the 2003 ACGME duty hour reform have demonstrated conflicting results.29,30 We found that there were no changes in in-training examination performance for all general surgery programs in the United States during this period. Moreover, first-year trainees who were most directly affected by the 2011 reform did not improve their ABSITE scores, despite presumably more free time to prepare.
Prior evaluations of board certification examinations demonstrated that written examination failure rates were relatively stable over time whereas oral board failure rates had increased.35,36 We found no significant change over time in oral board examination pass rates. Although there was a significant increase in written board examination pass rates over the entire period, this was predominantly due to the anomalous poor performance of 2010 examinees. When comparing written examination performance in the year before reform (2011) with performance after the reform (2012-2013), pass rates appeared to be stable. A significant difference in board examination performance was not expected because these examinees had only trained for 1 or 2 years under the 2011 duty hour reform. Nevertheless, the 2011 duty hour reform was not associated with an improvement in these standard measures of resident education.
There are several possible limitations of our study. First, we were only able to assess the first 2 years following duty hour reform. There may be differences in patient care or resident examination performance that are evident only several years after implementation and adoption of new duty hour requirements. Second, to be able to examine outcomes before and after duty hour reform using clinical data, we were limited to hospitals participating in ACS NSQIP. These hospitals likely have more resources and possibly greater interest in quality improvement than other institutions. As such, more financially strained hospitals may be less able to mitigate the effects of duty hour reform through the use of midlevel practitioners and ancillary staff and thus may be more sensitive to duty hour reforms. Third, although we were able to risk-adjust using detailed clinical data and control for unmeasured changes using a difference-in-differences approach, our study was still observational. Any unobserved confounders that influenced patient outcomes differentially at teaching or nonteaching hospitals may have biased our findings. Fourth, although we found no statistically significant association between the 2011 ACGME duty hour reform and patient outcomes, some confidence intervals are fairly wide and encompass ranges of values that may be clinically important. Nevertheless, our results were unchanged in sensitivity analyses that were based on a larger sample size (131 hospitals and 535 499 patients) than the main analysis.
The study findings could be interpreted in at least 2 ways. First, there is no evidence of worsened patient care or resident education, and given assumed improvements to resident well-being, this could indicate that current policies should continue forward as they are. Conversely, the potential harm from poor continuity of care, increased handoffs, trainees feeling unprepared to practice, and concern regarding residents developing a shift-work mentality engendered by these policies could suggest that the duty hour reform may require significant revision or reconsideration.8,37-39 Although many of these concerns have not been substantiated by consistent evidence, they reflect the intense interest duty hour reform has generated from the clinical and educational community.
The issues surrounding duty hour reform may have implications for patient care and resident training, and the lack of high-level evidence to guide policy decisions in this area needs to be addressed with randomized trials. To that end, a national multicenter cluster randomized trial is being conducted (the Flexibility In Duty Hour Requirements for Surgical Trainees [FIRST] Trial; ClinicalTrials.gov identifier: NCT02050789) comparing current duty hour requirements vs flexible duty hours to assess the effects of this intervention on patient outcomes and resident well-being.40 This trial may further inform the debate of how to optimally structure postgraduate training.
Conclusions
Implementation of the 2011 ACGME duty hour reform was not associated with a change in surgical patient outcomes or resident examination performance. The implications of these findings should be considered when evaluating the merit of the 2011 ACGME duty hour reform and revising related policies in the future.
Corresponding Author: Karl Y. Bilimoria, MD, MS, Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery and Center for Healthcare Studies, Feinberg School of Medicine, Northwestern University, 633 N St Clair St, 20th Floor, Chicago, IL 60611 (email@example.com).
Author Contributions: Dr Rajaram had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Rajaram, Chung, Jones, Bilimoria.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Rajaram, Chung, Jones, Cohen, Bilimoria.
Obtained funding: Bilimoria.
Administrative, technical, or material support: Bilimoria.
Study supervision: Ko, Tarpley, Lewis, Hoyt, Bilimoria.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Bilimoria reported support from the National Institutes of Health, Agency for Healthcare Research and Quality, American Board of Surgery, American College of Surgeons, Accreditation Council for Graduate Medical Education, National Comprehensive Cancer Network, American Cancer Society, Health Care Services Corp, California Health Care Foundation, Northwestern University, the Robert H. Lurie Comprehensive Cancer Center, Northwestern Memorial Foundation, and Northwestern Memorial Hospital. Dr Bilimoria has received honoraria from hospitals, professional societies, and continuing medical education companies for clinical care and quality improvement research presentations.
Funding/Support: Dr Rajaram is supported by grant T32HS000078 from the Agency for Healthcare Research and Quality, the American College of Surgeons Clinical Scholars in Residence Program, and an unrestricted educational grant from Merck.
Role of the Funder/Sponsor: The funding sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Correction: This article was corrected on December 18, 2014, to reorder footnotes in Table 4.