Description of patients hospitalized and cared for by the medical ward teams throughout the study period. HBA indicates hospital-based attending physician; CBA, clinic-based attending physician.
Kearns PJ, Wang CC, Morris WJ, Low DG, Deacon AS, Chan SY, Jensen WA. Hospital Care by Hospital-Based and Clinic-Based Faculty: A Prospective, Controlled Trial. Arch Intern Med. 2001;161(2):235–241. doi:10.1001/archinte.161.2.235
Hospital length of stay decreases, and clinical outcomes are maintained, when teaching hospitals use hospital-based attending physicians rather than traditional attending physicians. The attending physician's time commitment, in hours per day and months per year, required to achieve this result is unknown. This study compared the clinical outcomes and cost of care for patients treated by hospital-based and clinic-based attending physicians who devoted markedly different amounts of time to supervising residents on the medical wards of a suburban county hospital.
Patients were alternately admitted to 2 groups of ward teams. Faculty who attended 10 months of the year supervised one group. The comparison group's attending physicians were on service for 2 months or less and maintained clinic responsibilities while on service. The cost of patient care was compared by means of the length of stay, total hospital costs, and costs for ancillary services. Clinical outcomes were compared by means of hospital mortality and readmission rates.
A total of 4456 patients were hospitalized on the medical wards of a teaching service. No differences were detected in the length of stay (4.37 ± 0.1 days for hospital-based and 4.39 ± 0.1 days for clinic-based attending physicians). Hospital costs were similar (average cost, $5989 and $5977 per patient, respectively). The clinical outcomes were equivalent, with adjusted mortality rates of 3.2% for hospital-based vs 3.9% for clinic-based attending physicians (P = .28).
An increase of faculty time and involvement for supervision of resident-managed hospital care did not improve clinical outcomes or decrease costs during the 1-year study period.
RECENTLY, considerable national attention has focused on physicians who devote a substantial amount of their time to the care of hospitalized patients.1 Given growing pressures to manage costs and maximize efficiency in all health care sectors, increased emphasis on cost of inpatient care is essential.2 There is growing evidence that hospitalists can shorten the length of hospital stay (LOS) and decrease inpatient costs while maintaining the quality of care.3-6 Because the hospitalist model may offer a partial solution to the inflationary rise in hospital costs, it is being routinely considered as a model for teaching hospitals.
Many factors may influence the impact of a hospitalist program. An essential issue in trying to develop the hospitalist's role is defining the optimal amount of inpatient activity. Academic physicians in municipal hospitals attend on a ward service from 1 to 6 months per year. Wachter and Goldman1 suggested that a hospitalist should be defined as a physician who spends at least 25% of his or her professional time in hospital care. Although researchers at the University of California at San Francisco (UCSF) have demonstrated a decrease in costs and LOS with a reorganization of the attending service, there were "no significant differences in cost or LOS based directly on the number of months worked as attending physicians."6 This may have been due to inadequate power. The intervention increased the annual faculty commitment only from 0.9 month for the traditional attending physicians to 1.7 months for the managed care service (MCS). Only 3 (21%) of the 14 MCS internists actually attended for 3 or more months. Seventy-nine percent did not attend for the minimum of 3 months, and 43% attended for only 1 month.1,2 The authors commented on the need to clarify whether "the key factor in improving efficiency is in increased faculty experience (e.g., multiple months of work as an attending physician per year), earlier and more intensive faculty involvement and commitment to inpatient care, greater use of guidelines, or the mandate to improve quality and decrease costs."6
A reorganization of our medical teaching service allowed us to conduct a prospective controlled trial in which one group of internists provided continuous, full-time supervision for half the medical ward teams of a county teaching hospital. The other teams were supervised by internists who maintained an afternoon ambulatory clinic while attending for 1 to 2 months during the study. The study evaluated the effect of extended faculty availability during the day and an increased number of months of attending physician experience on mortality, readmission rates,7 and resource utilization for hospital care.
Santa Clara Valley Medical Center is a 390-bed county hospital in San Jose, Calif, affiliated with Stanford University School of Medicine. The internal medicine residency program was based at Santa Clara Valley Medical Center and consisted of approximately 60 house staff and 70 faculty. Eight ward teams, each composed of an attending physician, a resident, an intern, and, periodically, a medical student, managed medical care. Two ward teams were linked in the call schedule, sharing admissions on every fourth day. In addition, the linked teams admitted up to 4 patients each on the midcycle day. All medical ward admissions were managed by these ward teams. A separate intensive care unit team treated patients who required intensive care. A patient's primary care physician relinquished responsibility for treatment of the patient to the attending physician and residents. On discharge, the patient was sent back to the primary care physician's clinic for ongoing treatment. There were no subspecialty wards. All members of the department traditionally shared attending responsibilities. While attending, the faculty maintained their clinic duties as well as their administrative responsibilities. During the year before the study, 38 faculty members attended for an average of 2.5 months (range, 1-4 months). A hospital reorganization plan was implemented to increase resident supervision and attending physician involvement to optimize care and utilization of hospital resources. The effect of this reorganization was studied prospectively.
We studied clinical outcomes of patients treated by resident physicians supervised by a hospital-based attending physician (HBA). These outcomes were compared with those resulting from supervision by a clinic-based attending physician (CBA). The HBAs were recruited from the board-certified internists comprising the salaried faculty in the Department of Medicine. The HBAs were relieved of most of their outpatient responsibilities to provide full-time supervision for 4 of the 8 resident ward teams for 10 months during the study year. The traditional team structure was maintained. No productivity incentives were offered. A 7% salary supplement was given in anticipation of the increase in work hours required by the HBA schedule.
The CBAs were selected from the same pool of full-time faculty. The contrasting expectations of the 2 groups' duties are presented in Table 1. The essential difference between the 2 groups was the job expectation and their involvement in ambulatory care clinics. The HBAs attended for 10 months and were to be in the hospital and actively involved in the patient's treatment on the day of admission. The CBAs were to supervise a resident ward team, identical to the HBA teams, for 1 to 2 months of the year. While on inpatient attending duty, the CBAs made rounds in the morning with their team and then returned to their ambulatory care clinical duties for the afternoon. The majority of admissions for this group were presented to the attending physician on the morning after admission.
Each HBA team was linked to a CBA team and alternated admissions during call. The teams admitted patients to geographically separate medical wards. An exception to this separation involved patients admitted to the shared transitional care unit. Medical ward units were similar in layout, staffing, and organizational structure. The CBA group did not admit their clinic patients to their teams unless this assignment was made by the alternating admission scheme. The HBA staff had minimal interaction with the CBAs. For both groups, outpatient follow-up was provided by primary care attending physicians assigned by ambulatory care staff. A discharge clinic was staffed by the HBAs to facilitate the transition of care of HBA patients to their primary care physician. The HBAs emphasized the need for a team member to communicate details of the discharge plan to the outpatient caregiver. No other differences existed between groups in the transitioning of care from the inpatient to outpatient venue. Only 4 HBAs were used. During the 2 vacation months taken by each HBA, a CBA substituted for the HBA. Patients admitted to the linked teams during these 8 vacation periods were excluded from analysis (Figure 1). This was done to avoid dilution of the HBA effect during their absence.
Control for the assignment of residents was not included in the study design. The chief resident and program director based assignments on the resident's academic needs, as had been the custom in previous years. The study coordinators gave explicit instructions to the chief residents not to allow the attending physician's assignment to influence those of the residents. Resident satisfaction was measured by an anonymous 15-question survey. The areas evaluated included the learning experience, emphasis on evidence-based medicine, quality of health care delivered, the level of autonomy, and availability of attending physicians. Each area was ranked on a 5-point scale; 5 was excellent and 1, poor. The instrument was developed internally and validated during the 3-month prestudy period.
The study's primary intervention was the creation of differing expectations and time commitments for the attending physicians. An increase in the hospital presence of the HBAs augmented the attending physician's early involvement in the assessment and treatment of patients.
All patients admitted to the hospital on the medical service were eligible for enrollment in the study. These included patients admitted from the emergency department and specialty clinics and those transferred from other specialty services or community hospitals. Patients excluded from the study were those admitted to the critical care units. Patients who were admitted to a medicine team that was not part of a paired HBA-CBA team were also excluded from analysis. This occurred only during the vacation periods for the HBAs when a CBA substituted as the attending physician for the month. Data from these periods were compared separately and did not differ from those generated during the CBA-HBA months.
The first patient enrolled in the study was randomly assigned to the care of 1 of the 2 groups at the time of admission. Subsequently, patients were admitted alternately to the CBA and HBA teams admitting on the same day. Two teams from each group admitted every day. A blinded admitting clerk assigned patients to alternating teams according to a prescribed protocol. Any attempted deviation from the assignment scheme was reported. The institutional review board approved the protocol and determined that the study did not require informed consent.
Data collection and data entry were performed daily. Demographic data, attending physician assignment, LOS, hospital charges, and readmissions were derived from the Shared Medical Systems (Malvern, Pa) hospital database. The treating physicians corroborated this information once during every 4-day call cycle. After they reviewed and corrected their census sheets, the Shared Medical Systems computer was updated. All mortality data were confirmed by the treating physician, the hospital morgue, and vital statistics. Thirty-day mortality was obtained by review of the California Department of Public Health's list of deaths within the state. The random assignment of patients and the data collection procedures were implemented and refined for a 3-month period preceding the initiation of the study.
The clinical outcome measures prospectively evaluated were the hospital mortality rates and the readmission rates.7 The readmission rate was determined at 7 and 30 days after discharge.
All deaths occurring in the hospital were reviewed. The review determined the prognosis of each patient at the time of admission. The assessment included review of the admission history, physical examination, initial laboratory studies, nursing notes, and admission orders. Patients were categorized as having an excellent, good, fair, poor, or grim prognosis. Ten percent of all charts were reviewed and categorized according to this same scheme. This distribution by prognostic category was extrapolated to determine the frequency of prognostic category for the entire study population. This estimation was used to calculate the mortality rate by prognostic category. Charts of all patients who died and were in the 3 best prognostic categories were abstracted and summarized. This summary was reviewed by an independent, blinded clinician (W.A.J.) who classified the death as preventable, possibly preventable, or unpreventable.8,9 The numbers of combined preventable and possibly preventable deaths were compared with the number of unpreventable deaths for each group.
The outcome measures for comparing resource utilization included the LOS and hospital costs. Costs were determined by multiplying the ancillary ratio of cost to charges by the charges. The ancillary ratio of cost to charges was based on the Medicare cost report. Cost-charge ratios were generated for each specific cost center. A cost ratio analysis compared costs generated by teams sharing the same call schedule. The overall cost ratio was calculated by multiplying the cost-charge ratio by the total charges generated for patients admitted to each group. The individual resource centers analyzed included radiology, the clinical laboratory, and pharmacy. Physicians' costs to the institution were included.
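The cost calculation described above is a per-cost-center multiplication of charges by the cost-to-charge ratio. The following sketch illustrates the arithmetic; the cost centers, ratios, and charges shown are hypothetical, not the study's Medicare cost-report figures:

```python
# Estimate costs from billed charges using cost-to-charge ratios,
# one ratio per cost center (all figures below are illustrative).
cost_to_charge = {"radiology": 0.45, "clinical_lab": 0.30, "pharmacy": 0.25}

def estimate_costs(charges_by_center):
    """Multiply each cost center's charges by its cost-to-charge ratio."""
    return {center: charges * cost_to_charge[center]
            for center, charges in charges_by_center.items()}

# One hypothetical patient's charges by cost center:
patient_charges = {"radiology": 1000.0, "clinical_lab": 1800.0, "pharmacy": 3200.0}
costs = estimate_costs(patient_charges)
total_cost = sum(costs.values())  # 450 + 540 + 800 = 1790.0
```

The overall cost ratio reported in the Results is then simply the ratio of the two groups' totals computed in this fashion.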
A final outcome comparison was the number of patients transferred to the critical care units after initial admission to a study team. This was used as a measure not only of quality but also of utilization of the most costly hospital resource.10-12
An independent analysis of the LOS was conducted for the year immediately preceding this study. Data were retrospectively retrieved from Shared Medical Systems and included all patients admitted to attending physicians in the Department of Medicine. These data were censored at 100 days of hospitalization and compared with the data for all patients admitted during the study year. For purposes of this comparison only, the LOS for the study year was censored at 100 days as well. Most organizational features of the medical service were maintained during the study year. The exception to this involved the addition of 4 HBAs assigned to the medicine service as the study's main intervention. The 4 HBAs had been traditional attending physicians during the prestudy period. During the study, the HBAs met weekly with social service, nursing, and case managers to discuss care delivery issues and to promote coordination of the staffs' efforts.
Analysis included each patient in the group of initial assignment. If the patient's condition required a change of attending physician to an intensivist or surgeon, the entirety of the hospital stay was attributed to the initial group.
During the study organization, a power analysis predicted that 4000 patients would be required to show a clinically significant difference in the LOS. The anticipated change was a reduction of 0.5 day over an average LOS of 5 days. These figures were derived from published studies and the recent historical LOS at Santa Clara Valley Medical Center.3-6 This predicted an 80% chance of showing a 10% reduction in the LOS with 2000 patients enrolled in each group (power = .80, α = .05).13 The admission rate to the medical services was anticipated to be 5000 patients annually. The study was designed to enroll patients for 1 year.
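The per-group figure can be checked against the standard two-sample normal-approximation formula. The within-group SD used below is an assumed value for illustration only; the paper does not report the SD underlying its power calculation:

```python
import math

def n_per_group(delta, sd, z_alpha=1.96, z_power=0.8416):
    """Patients per group needed to detect a mean difference `delta`,
    two-sided alpha = .05 (z = 1.96), power = .80 (z = 0.8416):
    n = 2 * (z_alpha + z_power)**2 * (sd / delta)**2."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 * (sd / delta) ** 2)

# Detecting a 0.5-day reduction from a 5-day mean LOS with an assumed SD
# of 5 days (LOS distributions are strongly right-skewed) requires roughly
# 1600 patients per group; an SD nearer 6 days pushes the requirement
# past the study's target of 2000 per group.
n = n_per_group(delta=0.5, sd=5.0)  # 1570
```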
Statistical analysis performed on StatView (Brain Power, Inc, Calabasas, Calif) used an analysis of variance for multiple comparisons of hospital costs and LOS across International Classification of Diseases, Ninth Revision (ICD-9)14 codes. Comparisons of baseline characteristics, LOS, and factors contributing to significant differences in the analysis of variance were performed by means of an unpaired t test for continuous variables or a χ2 test for dichotomous variables. The coefficient of variation was calculated to evaluate the variability in physician practice within each group.15 We used multiple stepwise regression analysis to examine the influence of confounding variables and adjust the LOS for significant factors.16 The explanatory variables identified included sex, age, ward or transitional care unit, principal discharge ICD-9 code, and insurance status. To account for outliers in the LOS, analysis was completed with the data censored at a value equal to the 99th percentile (45 days) and at 100 days for comparison with the baseline year. Further analysis of LOS data transformed values to log 10 to limit the effect of skew. Death rate was adjusted by means of the Mantel-Haenszel χ2 calculation.16 Values are given as the mean ± SEM or mean with the 95% confidence interval (CI).
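The coefficient of variation used to compare practice variability is simply the SD expressed as a percentage of the mean. A brief sketch follows, using made-up per-physician mean LOS values rather than study data:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample SD / mean, expressed as a percentage."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Illustrative per-physician mean LOS (days) for two hypothetical groups:
# one with tightly clustered practice, one with widely varying practice.
low_variation = [4.2, 4.4, 4.3, 4.5]
high_variation = [2.0, 7.5, 3.0, 6.0]
cv_low = coefficient_of_variation(low_variation)    # about 3%
cv_high = coefficient_of_variation(high_variation)  # about 55%
```

A lower CV, as reported for the HBAs in the Results, indicates a more uniform practice style across physicians in the group.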
Four HBAs supervised 4 medical teams for 40 months. Twenty-seven internal medicine faculty attended for an average of 1.5 months (range, 1-2 months) and comprised the linked CBA group.
Attending physicians in both groups were board certified in internal medicine. Both HBA and CBA internists had a median of 1.5 additional certifications or special qualifications. The HBAs had an average of 10 years (range, 2-17 years) and the CBAs had 8.5 years (range, 2-21 years) of experience as ward attending physicians.
The resident assignments were comparable for the 2 groups. The distribution of junior and senior residents as well as the primary program affiliation of residents were not statistically different during the course of the study. There were 35 and 29 second-year residents and 17 and 23 third-year residents assigned to the HBA and CBA teams, respectively (P = .31). On the resident satisfaction questions, the HBAs scored significantly higher than the CBAs over the entire survey (4.5 ± 0.3 vs 3.7 ± 0.4; P<.05). The differences were greatest in the categories of attending physician availability and emphasis on evidence-based medicine.
Data were collected for all patients admitted to a medical service from April 1, 1997, through March 31, 1998. A total of 5940 patients were hospitalized on the medical service during this period. The teams that did not consist of a linked HBA-CBA group admitted 1415 patients (24%). These patients were excluded from the primary analysis. The LOS, readmissions, costs, and deaths for this group did not differ from those of patients in the 2 study groups. The HBAs discharged 2238 patients and the CBAs discharged 2217 patients during the same period. The demographic data for patients admitted to the study and control groups were comparable (Table 2). There were no differences in the distribution of patients admitted to the medical wards or transitional care unit (68% ± 2% and 32% ± 2%, respectively, for both groups).
The mortality rate during the hospitalization phase was 3.2% ± 0.9% for the HBA and 3.9% ± 0.9% for the CBA group (P = .28). Table 3 shows that, during the 30 days after discharge, the additional deaths resulted in an overall mortality rate of 6.0% ± 0.8% for patients of HBAs compared with 6.7% ± 0.8% for patients of CBAs (P = .41). The adjusted odds ratio for deaths in the 2 groups was 1.15 (95% CI, 0.86-1.54). The readmission rate was comparable for the 2 groups at 7 and 30 days. The rates were 4.1% and 12.9% for HBAs vs 4.8% and 13.5% for CBAs (P = .22 and .64, respectively).
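The unadjusted group comparisons behind these P values are 2 × 2 chi-square tests (the adjusted figures in the text come from the Mantel-Haenszel stratified analysis, not this crude test). A sketch, using approximate death counts back-calculated from the reported percentages rather than the paper's raw data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Roughly 3.2% of 2238 HBA patients (~72 deaths) vs 3.9% of 2217 CBA
# patients (~86 deaths); these counts are illustrative approximations.
stat = chi2_2x2(72, 2238 - 72, 86, 2217 - 86)
# The statistic falls well below 3.84, the 1-df critical value at
# alpha = .05, consistent with the nonsignificant P values reported.
```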
A total of 158 patients died during the study. The number of deaths sorted by ICD-9 category did not differ significantly (P = .46). These patients' charts were reviewed, and there was a similar distribution of patients among the 5 prognostic categories (Table 3). Twenty-two (31% ± 8%) of the patients of HBAs who died had a prognosis of fair or better on admission. This compares with 36 of the patients of CBAs (42% ± 10%; P = .15). The charts of these 58 patients were abstracted and reviewed to determine whether the death could have been prevented. No significant difference was noted between the groups for preventable and unpreventable deaths (P = .97). The HBAs had 11% of total deaths characterized as preventable or possibly preventable; CBAs had 8% of the total deaths in these categories (Table 3). The estimate of the mortality rate by prognostic category showed no significant differences between the 2 groups (Table 3).
The overall LOS was comparable for the 2 groups (Table 4). When the LOS was compared according to the principal discharge diagnosis, no statistical differences were noted for the 10 most common ICD-9 code groups (Table 4). Comparisons of the LOS censored at 3 SDs, 100 days, and uncensored were statistically insignificant. The 2 groups did differ, however, in the variability of several measurements. The coefficient of variation differed for LOS, average charge per patient, and average charge by each diagnostic group. The HBAs had less practice variation, with an aggregated coefficient of variation of 32% (95% CI, 23%-40%) vs 75% (95% CI, 65%-85%) for the CBAs.
The total costs generated by the hospital care for the groups were comparable. The HBAs generated $12 510 357 in overall costs compared with $12 360 837 by CBAs. These costs generated an HBA-CBA cost ratio of 1.00 (95% CI, 0.98-1.03). The average hospital cost per patient was $5989 (95% CI, $5519-$6460) and $5977 (95% CI, $5442-$6511), respectively. No differences were seen in the costs by resource category (Table 5). The HBA costs averaged $430 (95% CI, $392-$463), $569 (95% CI, $528-$602), and $809 (95% CI, $732-$881) for radiology, clinical laboratories, and pharmacy, respectively. The CBA average patient care costs were $407 (95% CI, $377-$435), $560 (95% CI, $521-$599), and $813 (95% CI, $734-$892) in the corresponding categories. All cost ratio comparisons equaled 1.
During the prestudy year, 38 faculty internists were assigned attending duties, with an average of 2.5 months of attending each. The average LOS during this prestudy year, censored at 100 days, was significantly longer than for all patients admitted during the study period when similarly censored at 100 days for this comparison (5.1 ± 0.1 days vs 4.7 ± 0.1 days; P<.05). Since the prestudy data were available only censored at 100 days, the LOS for the study year differs from the adjusted LOS for all study patients censored at 45 days (4.4 ± 0.1 days).
This study demonstrates equivalence in mortality, resource utilization, and costs for hospital care delivered by residents supervised by an HBA compared with a CBA. This comparison of outcomes is based on differences in the time commitment that full-time clinical faculty devoted to supervision of hospital care. The HBAs did not decrease LOS or costs or improve clinical outcomes despite the significant increase in the number of hours and months per year devoted to the inpatient supervision of residents. The power analysis, sample size, and adjustment for confounding variables lend credibility to the interpretation that the attending physician time commitment does not have a direct impact on resource utilization or clinical outcomes.
The UCSF group saw a significant reduction in LOS and costs, but they too were unable to demonstrate a significant correlation between these differences and the number of months that faculty were assigned to ward attending.6 Thus, 2 prospective studies, the UCSF and our experience, failed to demonstrate a relationship between the attending physician time factor and improved efficiencies. Several studies show a similar temporal relationship between "hospitalist" programs and a shortened LOS.3-6 Three of these studies showed a reduction in LOS after the initiation of a hospitalist program.3-5 Their methodologic designs did not allow identification of a responsible factor. One study showed no benefit when initiating a hospitalist program.17 Our study showed a reduction in LOS only with a retrospective comparison with the prestudy year. In discussing the UCSF failure to demonstrate a correlation between the number of attending physician months and the decreased LOS, Wachter et al6,18 enumerated several alternate factors that may account for lower LOS and costs. They believed that a search for the possible key factors should include "increased faculty experience (e.g., multiple months of work as an attending physician per year), earlier and more intensive faculty involvement and commitment to inpatient care, greater use of clinical guidelines, or a mandate for change." We would add that this mandate occurs in hospitals with and without hospitalist programs and may be independent of the hospitalist movement. A final addition to this list is the Hawthorne effect.19,20 This effect would anticipate improved performance when physician-participants become aware that LOS and costs are the observed outcomes. Even though there was no difference between groups, both groups reduced the LOS compared with the prestudy year.
Although our findings do not identify a causative factor, they do support the conclusion that the attending physician time factor is not pivotal in determination of LOS or hospital costs.
The differences in design of the 2 prospective studies suggest a causal factor. Many of the attending physicians involved in the UCSF trial had "spent the bulk of their time engaged in research activities."6 The volunteer character of the MCS, along with the advertised MCS mandate to decrease hospital costs while maintaining quality, selected faculty with a focus on quality, cost-effective care. The traditional-service attending physicians were likely to be less focused on their ward attending obligations. With this difference in the comparative groups, the UCSF MCS reduced the LOS and hospital costs. Our study assigned full-time clinical faculty from the same pool, equally committed to cost-effective care, to both groups. Given their similarity, the greater than 5-fold increase (8 months vs 1.5 months of attending) in faculty time devoted to hospital care failed to improve quality or efficiency. Both groups achieved a LOS (4.37 ± 0.1 and 4.39 ± 0.1 days for HBAs and CBAs, respectively) similar to that of the UCSF MCS (4.3 ± 0.1 days; SEM calculated from published data). The difference in attending physician attitude to ward supervision between groups may be the critical element distinguishing the design of the UCSF hospitalist system and accounting for the improved efficiency in hospital care. If this factor is important, it follows that our study, which assigned equally committed faculty to both groups, should produce no difference in LOS or cost.
In our study, a system existed in which case managers and social workers were unavailable during nights, holidays, and weekends. No attempts were made to increase the availability of diagnostic or therapeutic procedures performed by cardiology, gastroenterology, radiology, or surgery during weekends and holidays. The administration's policy of providing only emergency coverage in these areas during off hours meant that routine procedures were not available up to 30% of the time. Since these resources were common to both groups during the study, the lack of an effect may result from these limitations.21 However, given the adjusted LOS of 4.38 days, comparable with the UCSF MCS experience, we believe it is unlikely that this effect was responsible for eliminating an observable effect. An analogous rationale applies to concern that an overworked hospital staff may have nullified an HBA impact. If hospital staff are working at peak efficiency, the expedited HBA workups and order writing would fail to achieve a shortened hospital stay. The physician's commitment to hospital care may be essential but not sufficient to improve efficiency. However, as with the previous point, since quality and LOS in this study are comparable with or superior to those of previous studies, one cannot attribute the lack of an HBA effect to an overstressed system. In addition, since both groups improved in comparison with the previous year's LOS, the hospital culture was responsive to the increased emphasis on cost and resource utilization.
Although increasing attending ward commitment was not sufficient to effect a change in LOS or the quality of care, this study design did not allow comparison of many important factors that may be influenced by the hospitalist model. This study evaluated the effectiveness of a hospitalist compared with traditional academic attending physicians. No conclusions can be drawn about the advantage of hospitalists over private attending physicians caring for their own patients without house staff. This is a crucial void in the hospitalist literature, since no prospective study has examined this setting. The significantly lower coefficient of variation for LOS and costs for HBAs suggests a decreased variability in practice style. This predictability in hospital management is viewed as valuable to health care planners and third-party payers. This variability should be an area for future analysis, especially in private practice and nonacademic settings. Other factors such as improvement in resident curriculum and educational programs,22,23 communication among hospital staff and physicians, an overall decrease in resource utilization,24,25 and increased physician accountability for hospital systems are additional areas of focus for future studies. It should be stressed that studies must include the evaluation of the potential negative effects of these programs as well. Relevant areas that should be included are the impact on malpractice suits, physician burnout, and the financial consequences of the system on physicians.17,25,26 Finally, the groups who have presented their experience should publish their follow-up data to provide chairs of medicine with information regarding long-term effects of these programs.22,26 For the present, chairs of medicine should retain maximum flexibility in the design and implementation of a hospitalist program, realizing that its benefits may be indirect and subjective.
Accepted for publication July 20, 2000.
We acknowledge Jen Eng, MD, for his contributions in preparing the manuscript for submission and Marcia Vierra for her organizational and clerical efforts.
Corresponding author and reprints: P. J. Kearns, MD, 751 Bascom Ave, SCVMC, San Jose, CA 95128 (e-mail: email@example.com).