Background
A critical component of pediatric residency training is exposure to diverse and challenging hospitalized patients, yet little is known about the differences in pediatric inpatient educational experiences across residencies.
Objective
To examine variations in inpatient illness severity and diagnostic diversity at the affiliated hospitals of small, medium, and large pediatric residencies.
Design
A retrospective analysis of hospital discharges among children aged 0 to 18 years (excluding newborns) in a sample of pediatric residency programs within the University HealthSystem Consortium.
Main Outcomes of Interest
The study compares the mean and median Diagnosis-Related Group (DRG) weights of hospital discharges (illness severity) as well as the percentage of discharges for the 5 most common diagnoses and the percentage of discharges for asthma (diagnostic diversity).
Results
There was no relationship between mean and median medical DRG weights and residency size (mean DRG weight: small, 0.89; medium, 0.86; and large, 0.85; small vs medium, P = .29; small vs large, P = .23). Larger programs had surgical patients with more severe illness (mean DRG weight: small, 2.11; medium, 2.08; and large, 2.47; small vs medium, P = .85; small vs large, P = .02) but less diagnostic diversity (small, 24.9%; medium, 25.9%; and large, 29.9%; small vs medium, P<.001; small vs large, P = .07). The proportion of medical discharges for asthma increased with residency size (small, 6.5%; medium, 7.4%; and large, 9.3%; small vs medium and large, P<.001).
Conclusion
Large variations in inpatient illness severity and diagnostic diversity were seen across programs, but program size was found to be a poor indicator of inpatient learning opportunities.
High-quality pediatric graduate medical education requires exposure to a diverse and challenging patient population.1,2 Because of the experiential nature of residency education, care of severely ill inpatients is an essential component of learning opportunities. Residents' experiences treating inpatients, however, vary from program to program, depending on the residency curriculum and the patient population in affiliated hospitals. The Residency Review Committee for Pediatrics gives individual programs flexibility, requiring residents to treat an unspecified "wide range of acute and chronic medical conditions" and allowing the duration of inpatient rotations to vary from 5 to 8 months.3
Although residencies can choose the duration of inpatient rotations, they have much less control over the children admitted to their hospitals. Pediatric residency programs vary widely, and much of this variation reflects differences among programs' affiliated hospitals, which range from moderate-sized rural medical centers to very large multi-institutional centers in large cities. Consequently, a program's size and setting are likely to affect the types of patients treated, which could, along with other program characteristics, have implications for residents' learning opportunities.
To date, no study has examined the differences in pediatric inpatient learning opportunities among residency programs. Lacking empirical data, it is commonly assumed that larger residency programs, which are typically found in large cities, have more diagnostic diversity and severity of illness among pediatric inpatients and are able to provide a richer training experience. This study examines the relative differences in inpatient illness severity and diagnostic diversity seen at small, medium, and large pediatric residency programs as a reflection of pediatric residency inpatient learning opportunities.
This study linked pediatric residency programs to their primary hospitals and examined the corresponding inpatient illness severity and diagnostic diversity. We used inpatient data from the University HealthSystem Consortium (UHC) Clinical Data Base. The UHC is a nonprofit organization of 87 academic health centers and 114 hospitals whose members voluntarily share data for clinical improvement and research. Because of anonymity agreements within the UHC, institutions were not analyzed individually but were instead aggregated by residency size. We excluded 59 hospitals that did not have a corresponding pediatric residency listed in the American Medical Association's Fellowship and Residency Electronic Interactive Database (FREIDA).4 Many of these hospitals were affiliated with freestanding children's hospitals that did not contribute data to the UHC database. This left 55 pediatric residency programs. This analysis used data for the period between October 1999 and September 2000.
Classification of residency programs
Using FREIDA, we obtained each residency program's self-reported number of pediatric residents, faculty members, and neonatal and pediatric intensive care unit beds from its major hospital affiliate. We grouped the 55 programs into 3 categories based on the number of residents (small, ≤30 residents; medium, 31-49; and large, ≥50). Some programs had incomplete information on faculty size (n = 12; 20% of programs), the number of intensive care beds (n = 16; 27% of programs), and the number of neonatal beds (n = 14; 23% of programs), and we excluded missing elements from the institutional descriptions (Table 1).
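Expressed as a rule, this grouping is deterministic. A minimal Python sketch, using the thresholds above (the function name is ours, for illustration only, and was not part of the study's software):

```python
def classify_program_size(n_residents: int) -> str:
    """Assign a residency program to a size category by resident count,
    using the thresholds described in the text."""
    if n_residents <= 30:
        return "small"
    if n_residents <= 49:
        return "medium"
    return "large"  # 50 or more residents
```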
Classification of patient discharges
We used diagnostic codes (International Classification of Diseases, Ninth Revision, Clinical Modification) and diagnosis-related group (DRG) specification and weight to provide a standardized means of comparison. By definition, DRGs group primary diagnoses and procedures to determine average resource use for payment purposes.5 The DRG weight is a standardized measure of resource requirements used to calculate payment and is closely correlated with illness severity and the complexity of management.5 Pediatric DRG weights vary from 0.10 for allergic reactions to 17.79 for heart transplantations. In this analysis, we included discharge data for children aged 0 to 18 years but excluded obstetric, newborn, and perinatal hospitalizations. In addition, we excluded the DRG classification used for all nonclassifiable diagnoses (DRG 470), which accounted for approximately 0.1% to 0.2% of all discharges. We stratified DRG weights into medical and surgical categories.
For measures of severity, we calculated median and mean DRG weights for medical and surgical pediatric patients; only institutional means, along with the number of discharges, were available. To measure the diagnostic diversity of inpatients treated by program size, we calculated the percentage of all discharges that were for one of the 5 most common DRGs. We also calculated the percentage of all discharges with asthma as the principal diagnosis using the Agency for Healthcare Research and Quality Clinical Classifications Software asthma definitions.6
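As a minimal sketch of these calculations, assuming a discharge-level table with hypothetical column names (the UHC data themselves are proprietary), the severity and diversity measures could be computed as follows:

```python
import pandas as pd

def severity_and_diversity(discharges: pd.DataFrame) -> pd.DataFrame:
    """Compute per-size-group severity and diversity measures.

    Expects one row per discharge with assumed columns: 'program_size'
    ('small'/'medium'/'large'), 'category' ('medical' or 'surgical'),
    'drg' (DRG code), 'drg_weight', and 'asthma' (True if the principal
    diagnosis is asthma).
    """
    rows = []
    for size, grp in discharges.groupby("program_size"):
        medical = grp.loc[grp["category"] == "medical", "drg_weight"]
        surgical = grp.loc[grp["category"] == "surgical", "drg_weight"]
        # Share of all discharges falling within the 5 most common DRGs.
        top5_share = grp["drg"].value_counts(normalize=True).nlargest(5).sum()
        rows.append({
            "program_size": size,
            "medical_mean_drg": medical.mean(),
            "medical_median_drg": medical.median(),
            "surgical_mean_drg": surgical.mean(),
            "pct_top5_dx": 100 * top5_share,
            "pct_asthma": 100 * grp["asthma"].mean(),
        })
    return pd.DataFrame(rows)
```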
Statistical analysis was performed using Stata software, version 7.0 (Stata Corp, College Station, Tex). For the analysis of patients' illness severity, we performed linear regression of mean DRG weights on residency size. For the analysis of diagnostic diversity, we used logistic regression to compare the percentage of discharges for the top 5 diagnoses and the percentage of discharges for asthma across residency sizes. We used robust variance calculations (cluster option within Stata, version 7.0) to account for within-hospital correlation of observations.7 P<.05 was considered statistically significant.
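For readers without Stata, a hypothetical re-expression of this analysis in Python with statsmodels is sketched below; the variable and column names are assumptions, and cov_type="cluster" plays the role of Stata's cluster option:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_study_models(hospital_means: pd.DataFrame, discharges: pd.DataFrame):
    """Fit the two model families described above (assumed column names).

    hospital_means: one row per hospital, with 'mean_drg_weight' and
    'program_size'. discharges: one row per discharge, with binary
    outcomes 'is_top5_dx' and 'is_asthma', plus 'program_size' and
    'hospital_id'.
    """
    # Linear regression of mean DRG weight on residency size category.
    severity_fit = smf.ols(
        "mean_drg_weight ~ C(program_size)", data=hospital_means).fit()

    # Logistic regressions for diagnostic diversity, with robust
    # hospital-clustered variance to account for within-hospital
    # correlation of observations.
    cluster = {"groups": discharges["hospital_id"]}
    top5_fit = smf.logit(
        "is_top5_dx ~ C(program_size)", data=discharges).fit(
        cov_type="cluster", cov_kwds=cluster, disp=False)
    asthma_fit = smf.logit(
        "is_asthma ~ C(program_size)", data=discharges).fit(
        cov_type="cluster", cov_kwds=cluster, disp=False)
    return severity_fit, top5_fit, asthma_fit
```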
This study was given exempt status by the Institutional Review Board of Dartmouth College's Committee on the Protection of Human Subjects (CPHS No. 16057). The information presented in this publication was based in part on the Clinical Data Base provided by the University HealthSystem Consortium.
The characteristics of residency programs and their affiliated teaching hospitals varied widely (Table 1). The residency programs varied substantially in the number of residents (range, 15-84) and in faculty size (range, 20-512). Larger programs tended to have more pediatric intensive care unit beds, but this trend was not seen with neonatal intensive care unit beds. Only large programs (≥50 residents) had freestanding children's hospitals, defined as meeting the eligibility requirements for Children's Hospital Graduate Medical Education funds.8 Only 1 program was not located in a metropolitan area.
There was no relationship between mean and median medical DRG weights and residency size (Table 2). The mean DRG weight for small programs was 0.89; medium, 0.86; and large, 0.85 (small vs medium, P = .29; small vs large, P = .23). Larger programs had more patients with severe surgical diagnoses. The mean surgical DRG weights differed between small and large programs: the small-program mean was 2.11; medium, 2.08; and large, 2.47 (small vs medium, P = .85; small vs large, P = .02), although this analysis used only institutional means, not the total number of discharges. The increase in illness severity among surgical patients at larger programs is not surprising because highly technical procedures with high DRG weights, such as organ transplantation, tend to occur at larger, regionalized institutions.
The 5 most common diagnoses at small, medium, and large hospitals were similar, although the ranks differed (Table 3). Asthma was the most common diagnosis at both medium and large programs but was only the fifth most common diagnosis at small programs. Maintenance chemotherapy and radiotherapy was the most common diagnosis at small programs and the second most common diagnosis at medium and large programs. Pneumonia was the third most common diagnosis at small and medium programs but was not within the top 5 diagnoses at large programs. Conversely, complications due to a device, implant, or graft was the fifth most common diagnosis at large programs but was not within the top 5 diagnoses at small and medium programs, likely reflecting the surgical environment at larger institutions.
Figure 1 illustrates the variation in diagnostic diversity across programs. In general, diagnostic diversity decreased with increasing residency size. The percentage of total discharges accounted for by the top 5 diagnoses, one measure of diagnostic diversity, increased with program size, indicating a decrease in diversity. In small programs, a mean of 24.9% of total discharges were for the top 5 diagnoses, compared with 25.9% in medium programs and 29.9% in large programs (small vs medium, P<.001; small vs large, P = .07). We repeated the analysis with the 10 and 15 most common diagnoses and still found that discharges were more concentrated in common diagnoses at larger programs, decreasing patient diversity.
Discharges for asthma are another indicator of educational diversity (Figure 1). The percentage of discharges for asthma increased with residency size. Asthma was the diagnosis for a mean of 6.5% of all surgical and medical discharges at small programs, compared with 7.4% at medium and 9.3% at large programs (small vs medium, P<.001; small vs large, P<.001). At some medium and large programs, asthma accounted for more than 1 in 5 medical discharges.
Pediatric inpatient illness severity and diagnostic diversity varied among hospitals with residency programs, yet these differences were poorly related to the size of the residency programs. This study shows that smaller programs, on average, have similar levels of inpatient illness severity for medical discharges compared with larger programs. In contrast, larger programs have a higher proportion of discharges concentrated in their top 5 diagnoses, particularly asthma, thereby decreasing diversity. The 5 most common diagnoses were almost identical at small, medium, and large programs. These findings challenge the assumption that, all else held equal, better pediatric inpatient learning opportunities are offered at larger, urban hospitals.
Surgical patient illness severity, however, was higher at larger residency programs. It should be noted that there are substantial differences across programs in the level of involvement of pediatric residents with surgical patients. Greater surgical case complexity, therefore, does not necessarily lead to greater learning opportunities for pediatric residents.
Previous research has not specifically examined the mix of inpatients treated among pediatric graduate medical programs. Several small studies have demonstrated differences in the case mix9-13 of internal medicine residency programs, medical school clerkships (family practice, pediatrics, and internal medicine),14 and outpatient settings.2,15 None of these studies have examined the relationship between program size and patient characteristics.
Recently, Shipman et al16 reported that residents in small pediatric programs (<30 residents) spent less time in emergency departments and inpatient wards and more time in outpatient clinics. Residents in small programs also failed the pediatric boards more frequently, although the study did not control for differences in the resident characteristics across program size. It is not clear from this study whether smaller programs afford a poorer learning experience or whether they tend to attract less qualified medical students looking for an easier patient environment. Our study suggests that one important aspect of a rigorous residency program, very ill inpatients with diverse diagnoses, is present in small as well as large programs.
The Residency Review Committee provides only general guidelines for pediatric residency programs with respect to patient populations. An appropriate residency educational experience must include patients that are of
sufficient number, age distribution, and variety of complex and diverse pathology to assure the residents of adequate experience with infants, children, and adolescents who have acute and chronic illnesses, as well as with those with life-threatening conditions.3
Pediatric residency programs and the Residency Review Committee have not established a standardized way of measuring this patient mix.17-19 Other specialties, such as general surgery, attempt to guarantee a breadth of educational experiences by requiring residents to fulfill specified surgical categories, although the value of these distribution requirements remains unknown. Pediatric residents are required only to complete a checklist of procedures, such as lumbar punctures, not a specific list of patient diagnoses.
In the absence of an agreed-upon method of comparing inpatients, we used DRG weights as a quantifiable measure of patient illness severity. The DRGs were first established as a means to determine hospital reimbursement for Medicare and Medicaid patients5 and have been faulted for failing to account for the resources required to care for infants and children. This is particularly evident in the crude DRG classification of neonatal patients, a group excluded from our study.20-23 Since their initial development, DRGs have been refined to All Patient DRGs to improve the applicability of DRGs in non-Medicare patient populations.24 Although it would have been preferable to use the All Patient DRG or All Patient Refined DRG measures, these were not available for the data set included in the study. The bias of using DRG weights in this study is minimized by excluding newborns and by using DRG weights to measure relative, not absolute, severity differences across pediatric program size. Further, the study only compared DRG weights from the program's primary hospital affiliation. This restriction leads to higher mean DRG weights in large multi-institutional programs because the discharges at community hospitals, where inpatients are less severely ill, are excluded.
Several other limitations of this study merit discussion. This study examined only a sample of the 205 US pediatric programs,4 excluded military programs, and relied on the hospitals' voluntary contributions of data to the UHC database. Additionally, faculty size, an important component of pediatric education, is self-reported and may vary according to institutional definitions. However, this does not affect our classifications of program size. Finally, we have not explicitly measured excellence in education and patient care, a defining characteristic of pediatric residency programs.
We, as well as others,1,2,9,14,25 assume that diverse patient illnesses and higher severity are part of an enriched inpatient educational experience. However, it should be noted that these measures are only 2 of many aspects of pediatric residency training. Additionally, it is not known whether the mix of patients treated during a residency should mirror the breadth expected to be seen after training has been completed10 or should be as broad as possible to include rare cases.
Aside from patient illness severity and diagnostic diversity, the volume of patients treated by each resident is an additional educational consideration. Many studies have found differences between the workload and volume of patients seen at different programs, and there is heated educational and legal debate over the apparent tension between residents' long work hours, educational experience, and optimal patient care.26 We were not able to assess the relationship between program size and patient volume because the cumulative duration of inpatient rotations, the number of patients seen at affiliated hospitals, and the numbers of additional clinical staff (such as student "subinterns," family practice residents, and physician extenders) were unknown. Our data, however, should caution against using inpatient volume as an indicator of educational opportunity without considering case mix. The high volume of a small number of diagnoses and the predominance of asthma in some larger programs likely represent redundant learning experiences.
This study highlights the need to examine the relative merits of both small and large programs. The reputation of large programs in pediatric residency training is well established, whereas smaller programs have had a greater challenge in defining their position in graduate medical education. Our study suggests that the simplified ranking of programs by size may be unjustified. Smaller programs may offer some advantages over larger programs, including fewer clinical fellows and increased faculty contact. They are also more likely to be a child's first entry into the hospital system, allowing residents to see the initial presentations and diagnosis of illness. Finally, smaller programs are more likely to be in small cities and rural areas and expose residents to a different patient population.
In summary, this study shows that a pediatric residency's size is an insufficient measure of inpatient educational learning opportunities. It also establishes the utility of administrative data sets as a tool for evaluating program inpatient services. Beyond measuring structures and processes, additional work must be undertaken to understand educational outcomes. To improve residents' inpatient education and ultimately improve patient care, residency programs must evaluate the full breadth of clinical learning provided to their residents.
Corresponding author: David C. Goodman, MD, MS, Dartmouth Medical School, 7251 Strasenburgh Hall, Hanover, NH 03755 (e-mail: david.goodman@dartmouth.edu).
Accepted for publication February 26, 2003.
This study was supported in part through a National Research Service Award (T32 HS00070) from the Agency for Healthcare Research and Quality, Rockville, Md (Dr Thompson).
This study was presented in part at the 2003 Annual Meeting of the Pediatric Academic Societies; May 6, 2003; Seattle, Wash.
Pediatric residency programs provide crucial hands-on training for future pediatricians, yet there has been little research comparing the inpatient learning opportunities at different residency programs. When evaluating the relative merits of residency programs, medical students and pediatric faculty advisors often rely on the unverified assumption that larger programs offer a more diverse and challenging inpatient population. Contrary to this belief, we found that, on average, small, medium, and large programs have similar levels of inpatient illness severity. We also found more diagnostic diversity at smaller programs. Further, this study demonstrates that administrative data sets offer a relatively efficient means of comparing inpatient learning opportunities.
References
1. Wilton R, Pennisi A. Insurance coverage and residents' experience in a pediatric teaching clinic. Am J Dis Child. 1993;147:284-289.
2. Osborn LM, Sargent JR, Williams SD. Effects of time-in-clinic, clinic setting, and faculty supervision on the continuity clinic experience. Pediatrics. 1993;91:1089-1093.
5. Culbertson C, Schmidt K, eds. St Anthony's DRG Guidebook. West Valley City, Utah: Ingenix Inc; 1997.
6. Elixhauser A, Andrews RM, Fox S. Clinical Classifications for Health Policy Research: Discharge Statistics by Principal Diagnosis and Procedure: Provider Studies Research Note 17. Rockville, Md: Agency for Health Care Policy and Research; 1993. AHCPR publication 93-0043.
7. StataCorp. Stata Statistical Software: Release 7.0. College Station, Tex: Stata Corp; 2001.
9. O'Brien B, MacDonald J, Holmes B, Kaufman D. Learning opportunities for internal medicine residents: comparison of a tertiary care setting and a regional setting. Acad Med. 1996;71:284-286.
11. Davidson R. Changes in the educational value of inpatients at a major teaching hospital: implications for medical education. Acad Med. 1989;64:259-261.
13. Steiner JF, Feinberg LE, Kramer AM, Byyny RL. Changing patterns of disease on an inpatient medical service: 1961-1962 to 1981-1982. Am J Med. 1987;83:331-335.
14. Rattner SL, Louis DZ, Rabinowitz C, et al. Documenting and comparing medical students' clinical experiences. JAMA. 2001;286:1035-1040.
15. Malloy MH, Speer A. A comparison of performance between third-year students completing a pediatric ambulatory rotation on campus vs in the community. Arch Pediatr Adolesc Med. 1998;152:397-401.
16. Shipman SA, Cull WL, Brotherton SE, Pan RJ. Does residency size matter? The impact of program size and freestanding children's hospital status on pediatric residency training. Presented at: Pediatric Academic Societies Annual Meeting; May 6, 2002; Baltimore, Md.
17. Siegel BS, Greenberg LW. Effective evaluation of residency education: how do we know it when we see it? Pediatrics. 2000;105:964-965.
18. Holmboe ES, Hawkins RE. Methods for evaluating the clinical competence of residents in internal medicine: a review. Ann Intern Med. 1998;129:42-48.
20. Payne SMC, Schwartz RM. An evaluation of Pediatric-Modified Diagnosis-Related Groups. Health Care Financ Rev. 1993;15:51-70.
22. Phelan P. Are casemix developments meeting the needs of paediatrics? Med J Aust. 1994;161(suppl):S26-S29.
23. Muldoon J. Structure and performance of different DRG classification systems for neonatal medicine. Pediatrics. 1999;103:302-318.
24. 3M Health Information Systems. All Patient Diagnosis Related Groups: Definitions Manual, Version 11.0. Wallingford, Conn: 3M Health Information Systems; 1993.