Context
Issues of cost and quality are gaining importance in the delivery of medical care, and whether quality of care is better in teaching vs nonteaching hospitals is an essential question in the current national debate.
Objective
To examine the association of hospital teaching status with quality of care and mortality for fee-for-service Medicare patients with acute myocardial infarction (AMI).
Design, Setting, and Patients
Analysis of Cooperative Cardiovascular Project data for 114,411 Medicare patients from 4361 hospitals (22,354 patients from 439 major teaching hospitals, 22,493 patients from 455 minor teaching hospitals, and 69,564 patients from 3467 nonteaching hospitals) who had AMI between February 1994 and July 1995.
Main Outcome Measures
Administration of reperfusion therapy on admission, aspirin during hospitalization, and β-blockers and angiotensin-converting enzyme inhibitors at discharge for patients meeting strict inclusion criteria; mortality at 30, 60, and 90 days and 2 years after admission.
Results
Among major teaching, minor teaching, and nonteaching hospitals, respectively, administration rates for aspirin were 91.2%, 86.4%, and 81.4% (P<.001); for angiotensin-converting enzyme inhibitors, 63.7%, 60.0%, and 58.0% (P<.001); for β-blockers, 48.8%, 40.3%, and 36.4% (P<.001); and for reperfusion therapy, 55.5%, 58.9%, and 55.2% (P = .29). Differences in unadjusted 30-day, 60-day, 90-day, and 2-year mortality among hospitals were significant at P<.001 for all time periods, with a gradient of increasing mortality from major teaching to minor teaching to nonteaching hospitals. Mortality differences were attenuated by adjustment for patient characteristics and were almost eliminated by additional adjustment for receipt of therapy.
Conclusions
In this study of elderly patients with AMI, admission to a teaching hospital was associated with better quality of care based on 3 of 4 quality indicators and lower mortality.
Issues of cost and quality have taken on increased importance in the delivery of medical care,1 and all hospitals, especially teaching hospitals, must reassess their mission and strategy for survival.2 Teaching hospitals may provide care that is of higher quality but more costly than that of nonteaching hospitals.3-10
Rosenthal et al11 suggest that much of the
increased cost attributed to academic medical centers stems from such societal
functions as providing medical education, conducting research, and caring
for indigent patients. Some authors speculate that the academic medical institution
as it is known today cannot survive,12-15
and academic medical centers are responding to this challenge by developing
new strategies for maintaining their "core mission."16
Comparisons of teaching and nonteaching hospitals have produced differing conclusions regarding cost and quality of care for several conditions,17-21 and a few studies have specifically examined differences in the treatment of acute myocardial infarction (AMI). Iezzoni et al7
reported higher AMI mortality rates after admission to 15 major teaching hospitals
in Boston, Mass. Rosenthal et al11 compared
severity-adjusted mortality for 6 conditions including AMI at 5 major teaching
hospitals, 6 minor teaching hospitals, and 19 nonteaching hospitals. In general,
adjusted mortality was lower at major teaching hospitals than at minor or
nonteaching hospitals for all 6 conditions. Using data from the 1994-1995
Cooperative Cardiovascular Project (CCP), Chen et al22
found that admission to top-ranked hospitals, as defined by US News & World Report, was associated with greater use of aspirin
and β-adrenergic receptor antagonists. Chen et al also reported that
patients treated at the top-ranked hospitals had better 30-day survival. An
accompanying editorial suggested that nonteaching hospitals may have lower
"standards of clinical practice" than teaching hospitals.23
We used the CCP data set, which is rich in clinical detail and breadth
of patient representation, to examine quality of care and mortality for Medicare
patients with AMI admitted to teaching vs nonteaching hospitals.
Methods
The Cooperative Cardiovascular Project
The CCP was a national quality improvement project sponsored by the
Health Care Financing Administration (HCFA) to improve the quality of care
for Medicare patients hospitalized with AMI.24,25
In the CCP, data were obtained by medical record review from a large random
sample of Medicare patients with AMI, quality measures were computed, and
the results were reported back to the admitting hospitals.26
Data for our analyses were obtained from the original CCP analysis data
set containing 234,754 randomly selected Medicare fee-for-service beneficiaries
from all 50 states who were hospitalized with AMI from February 1994 through
July 1995. Cases were identified using the hospital bills (UB-92 Claims Form
Data) in the Medicare National Claims History File. Patients with an International Classification of Diseases, 9th Revision, Clinical
Modification27 principal discharge diagnosis
code of 410 (AMI) were sampled from 6684 hospitals.
To derive the data set used in this study, 120,343 patients were excluded for the following reasons (patients could meet more than 1 criterion): AMI not confirmed by clinical criteria (n = 29,885), second hospital admission for AMI (n = 22,773), age younger than 65 years (n = 17,591), race/ethnicity other than African American or white (n = 9007), transfer to the index hospital from another acute care facility (n = 39,025), transfer from the index hospital within 24 hours of admission (n = 42,176), or unclear hospital teaching status (n = 11,855). We excluded patients who were not African American or white for 2 main reasons: race/ethnicity is an important confounding variable, and races/ethnicities other than African American or white cannot be classified with reasonable reliability from Medicare administrative data sets.28,29
Patients were confirmed as having AMI according to the clinical criteria listed
by Marciniak et al.26
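In outline, this cohort derivation can be expressed as a simple filtering step. The sketch below assumes the CCP extract is loaded as a pandas DataFrame; every column name is a hypothetical stand-in, since the actual CCP variable names are not given here.

```python
import pandas as pd

def derive_study_cohort(ccp: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion criteria above to the full CCP analysis data set.

    All column names are hypothetical stand-ins for the CCP variables.
    """
    keep = (
        ccp["ami_confirmed"]                                # AMI confirmed by clinical criteria
        & ~ccp["second_ami_admission"]                      # first AMI admission only
        & (ccp["age"] >= 65)                                # age 65 years or older
        & ccp["race"].isin(["African American", "white"])   # classifiable race/ethnicity
        & ~ccp["transferred_in"]                            # not transferred to the index hospital
        & ~ccp["transferred_out_24h"]                       # not transferred out within 24 hours
        & ccp["teaching_status"].notna()                    # hospital teaching status known
    )
    return ccp[keep]
```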
As an integral part of the CCP, quality indicators for the management
of AMI were developed from the guidelines issued by the American Heart Association
and the American College of Cardiology.30 For
our analyses, we chose 4 quality indicators: (1) provision of acute reperfusion
therapy (including thrombolysis or primary angioplasty) on admission, (2)
administration of aspirin during hospitalization, (3) administration of angiotensin-converting
enzyme (ACE) inhibitors at discharge, and (4) administration of β-blockers
at discharge. These process of care indicators have been validated previously31 and are linked to favorable outcomes by strong clinical
evidence.32,33
For reperfusion therapy, aspirin, and ACE inhibitors, we identified
"ideal candidates" as patients who met inclusion criteria and lacked relative
and absolute contraindications to therapy. For β-blockers, we defined
"eligible candidates" as patients who met inclusion criteria and lacked absolute
contraindications. Candidate definitions were derived from the CCP quality
indicators, as defined by Marciniak et al.26
For the β-blocker at discharge and ACE inhibitor at discharge quality
indicators, the CCP algorithms excluded all patients not discharged alive.
As part of the CCP, all variables needed to calculate the quality of
care indicator rates and measures of risk, comorbidity, and severity of illness
were abstracted by trained personnel at 2 clinical data abstraction centers.
Data quality was monitored and maintained through the use of randomly selected
records for reabstraction, and the results are reported elsewhere.26
In addition, we ascertained patient mortality at 30, 60, and 90 days
and 2 years after hospital admission for AMI.
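A minimal sketch of how these mortality end points could be derived, assuming each record carries an admission date and a date of death that is missing for survivors (hypothetical column names; complete follow-up through 2 years is also assumed):

```python
import pandas as pd

def add_mortality_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Flag death within 30, 60, and 90 days and 2 years of admission.

    Assumes datetime columns admission_date and death_date (NaT if the
    patient survived the full follow-up period).
    """
    days_to_death = (df["death_date"] - df["admission_date"]).dt.days
    for label, horizon in [("mort_30d", 30), ("mort_60d", 60),
                           ("mort_90d", 90), ("mort_2y", 730)]:
        # Comparisons against NaT/NaN evaluate to False, so survivors are coded 0.
        df[label] = (days_to_death <= horizon).astype(int)
    return df
```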
Definition of Hospital Teaching Status
We determined hospital teaching status by 2 methods and analyzed the
data separately for each method. First, we merged the CCP data set with HCFA
data on the number of interns per bed (I/B ratio) at each hospital. Although
different demarcation points have been used in other studies,17,19
we placed hospitals in 1 of 3 mutually exclusive categories: hospitals with
an I/B ratio greater than 0.10 (the median I/B ratio of all teaching hospitals
in our study data set) were classified as major teaching hospitals; those
with an I/B ratio less than or equal to 0.10 but greater than 0 were classified
as minor teaching hospitals; and those with an I/B ratio of 0 were considered
nonteaching hospitals.
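A minimal sketch of this first classification method, with 0.10 being the median I/B ratio among teaching hospitals in our data set:

```python
def classify_teaching_status(interns_per_bed: float) -> str:
    """Classify a hospital by its intern-to-bed (I/B) ratio."""
    if interns_per_bed > 0.10:      # above the median I/B ratio of teaching hospitals
        return "major teaching"
    elif interns_per_bed > 0:       # at or below the median, but nonzero
        return "minor teaching"
    else:                           # no interns
        return "nonteaching"
```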
Second, we used the classification approach by Rosenthal et al and merged
the CCP data set with the 1996 American Hospital Association Annual Hospital
Survey.11,34 Hospitals that were
members of the Council of Teaching Hospitals and had approval to participate
in residency training by the Accreditation Council for Graduate Medical Education
were classified as major teaching hospitals. Those that were not members but
had accredited residency programs were classified as minor teaching hospitals,
and those that met neither criterion were classified as nonteaching hospitals.
Statistical Analysis
We used the I/B ratio method to determine hospital teaching status for all primary analyses. We compared patient baseline demographics, severity of illness, and comorbidity status across the 3 types of hospitals with the χ2 trend statistic for dichotomous variables and analysis of variance for continuous variables.35 Next, we examined the bivariate association of each process of care indicator with hospital teaching status. We measured quality of the process of care for each therapy as the proportion of CCP-defined ideal or eligible candidates who actually received treatment.
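The sketch below illustrates both computations: the treatment rate among CCP-defined candidates by hospital type, and a χ2 test for trend across the 3 ordered hospital categories. The Cochran-Armitage statistic implemented here is one standard form of the χ2 trend test; whether it matches the exact variant of Rosner35 used in the analysis is an assumption.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def treatment_rate(candidates: pd.DataFrame, therapy_col: str,
                   group_col: str = "teaching_status") -> pd.Series:
    """Proportion of ideal/eligible candidates who received the therapy."""
    return candidates.groupby(group_col)[therapy_col].mean()

def chi2_trend(treated, candidates, scores=(0, 1, 2)):
    """Cochran-Armitage chi-squared test for trend in proportions.

    treated: number treated in each ordered group; candidates: group sizes;
    scores: ordinal scores for nonteaching, minor, and major teaching.
    """
    r, n, s = map(np.asarray, (treated, candidates, scores))
    p = r.sum() / n.sum()                                   # pooled treatment rate
    t = np.sum(s * (r - n * p))                             # trend statistic numerator
    var_t = p * (1 - p) * (np.sum(n * s**2) - np.sum(n * s)**2 / n.sum())
    x2 = t**2 / var_t                                       # 1-df chi-squared statistic
    return x2, chi2.sf(x2, df=1)                            # statistic and P value
```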
We based much of our multivariable modeling process on the work of Krumholz
et al,36 who compared several mortality prediction
models derived from CCP data. We considered the patient as the unit of analysis
and performed logistic regression with mortality as the dependent variable.37 Model 1 included terms for risk adjustment (patient
demographics, comorbidity, and severity of illness). Model 2 included all
variables from model 1 with the addition of hospital teaching status. We represented
hospital teaching status as 2 main independent indicator variables, one for
major teaching vs nonteaching and one for minor teaching vs nonteaching.
Model 3 included all terms from model 2 with the addition of covariates
representing administration of therapy. Receipt of therapy was based on treatment
categories derived from all possible combinations of the 4 unique treatments.
For model 3, we first added individual indicator variables for receipt of each therapy, fitting 1 model per therapy. However, combining all therapies in a single model produced the best fit by the −2 log-likelihood statistic.
We examined standardized coefficients to determine the relative contribution
of each independent variable to the variation in mortality captured by the
model.38 The c statistic
was used to assess model discrimination39 and
the Nagelkerke R2 was used as an approximation
of explanatory power.40
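The modeling sequence can be sketched as follows with statsmodels, using hypothetical variable names for the risk-adjustment terms, teaching status indicators, and treatment-combination categories; the Nagelkerke R2 is obtained by rescaling the Cox-Snell R2, since statsmodels does not report it directly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

RISK = "age + female + black + charlson + apache2"   # assumed risk-adjustment terms
TEACH = "major_teaching + minor_teaching"            # indicators vs nonteaching

def fit_mortality_models(df: pd.DataFrame) -> None:
    """Fit models 1-3; report -2 log-likelihood, c statistic, Nagelkerke R2."""
    formulas = {
        "model 1": f"mort_30d ~ {RISK}",
        "model 2": f"mort_30d ~ {RISK} + {TEACH}",
        # therapy_combo: one category per combination of the 4 treatments
        "model 3": f"mort_30d ~ {RISK} + {TEACH} + C(therapy_combo)",
    }
    n = len(df)
    for name, formula in formulas.items():
        m = smf.logit(formula, data=df).fit(disp=False)
        minus_2ll = -2 * m.llf                               # model fit
        c_stat = roc_auc_score(df["mort_30d"], m.predict())  # discrimination
        r2_cs = 1 - np.exp((2 / n) * (m.llnull - m.llf))     # Cox-Snell R2
        r2_nag = r2_cs / (1 - np.exp((2 / n) * m.llnull))    # Nagelkerke rescaling
        print(f"{name}: -2LL={minus_2ll:.1f}, c={c_stat:.3f}, R2={r2_nag:.3f}")
```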
All primary analyses described above were performed using the study
data set, which was derived from the full CCP analysis data set by applying
the exclusion criteria. To examine these primary analyses for bias introduced
by our assumptions, we conducted a set of secondary analyses. We conducted
separate, parallel analyses for the 2 different methods of determining hospital
teaching status. To detect potential bias associated with our inclusion criteria,
we compared the results of the primary analyses based on the study data set
with identical analyses performed using the full CCP analysis data set. To
detect bias introduced by the CCP quality indicators for ideal candidacy,
we used the study data set and repeated all process of care analyses with
all patients in the denominator instead of only those classified as ideal
or eligible candidates. We repeated the univariate analyses using the study
data set without excluding those patients of race/ethnicity other than African
American or white, and we also repeated the univariate analyses after stratifying
by race. We used generalized estimating equations to detect any inflation of statistical significance due to the clustering of patients within hospitals.41 The results of these secondary analyses are not reported herein but did not differ importantly from the results presented in this article.
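For the clustering check, a sketch using generalized estimating equations in statsmodels follows; the variable names are hypothetical, and the exchangeable working correlation is a reasonable default rather than necessarily the specification used in the original analysis.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(df: pd.DataFrame):
    """Logistic model for 30-day mortality with patients clustered in hospitals."""
    model = smf.gee(
        "mort_30d ~ age + female + black + charlson"
        " + major_teaching + minor_teaching",
        groups="hospital_id",                     # cluster: admitting hospital
        data=df,
        family=sm.families.Binomial(),            # logistic link
        cov_struct=sm.cov_struct.Exchangeable(),  # within-hospital correlation
    )
    return model.fit()
```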
Results
Hospital and Patient Characteristics
The study sample comprised 114,411 of the 234,754 patients and 4361 of the 6668 hospitals in the original CCP data set. Of the included hospitals, 79.5% were classified as nonteaching, 10.1% as major teaching, and 10.4% as minor teaching (Table 1). Teaching hospitals tended to be larger: approximately half had at least 500 beds. As expected, invasive cardiac procedures were offered most often by major teaching hospitals, followed by minor teaching hospitals and then nonteaching hospitals.
Major teaching hospitals had a higher proportion of African American patients than minor teaching and nonteaching hospitals (Table 2). While most differences in other patient characteristics were relatively small, and perhaps not clinically important, some were statistically significant because of the very large sample size. Patients treated at nonteaching hospitals had slightly higher mean Charlson comorbidity and APACHE II (Acute Physiology and Chronic Health Evaluation) scores, but patients treated at teaching hospitals had a slightly higher prevalence of diabetes, hypertension, and chronic renal insufficiency and were slightly more likely to have had prior percutaneous transluminal coronary angioplasty (PTCA) or coronary artery bypass graft surgery.
The numbers of ideal candidates for each therapy were 57,476 for aspirin;
13,025 for ACE inhibitors; 28,636 for β-blockers; and 14,071 for reperfusion.
For aspirin, ACE inhibitors, and β-blockers, there was a "gradient" effect, with the performance of minor teaching hospitals below that of major teaching hospitals but above that of nonteaching hospitals; the mean performance of each hospital type for each therapy is illustrated in Figure 1.
Of all patients who received reperfusion therapy, 16.3% received primary
angioplasty alone, 71.0% received thrombolysis alone, and 12.7% received both.
Acute reperfusion therapy was more likely to include primary PTCA at major
teaching hospitals (44.8%), followed by minor teaching hospitals (40.1%),
and then by nonteaching hospitals (24.5%) (P<.001).
Mortality for patients treated at minor teaching hospitals was greater than for those treated at major teaching hospitals and less than for those treated at nonteaching hospitals (Figure 2). This gradient persisted from 30 days through 2 years after admission.
Mortality differences were attenuated by adjustment for patient characteristics
and receipt of therapy (Table 3).
The standardized coefficients for major and minor teaching hospitals were
in general several orders of magnitude smaller than the coefficients for patient
characteristics and process of care variables.
Comment
We found that teaching hospitals provided more aspirin, ACE inhibitors, and β-blockers for Medicare beneficiaries admitted with AMI. The gradient effect of increasing quality of care from nonteaching to minor teaching to major teaching hospitals was accompanied by a corresponding survival gradient, suggesting a dose-response effect. Our multivariable analyses indicated that the process of care measures we used were strongly related to the observed mortality advantage of teaching hospitals.
Our study demonstrated no significant difference in the receipt of acute reperfusion therapy by hospital type. As acknowledged by Chen et al, CCP reperfusion analyses are limited by the relatively small number of ideal candidates.22 However, the teaching hospital is apt to have a more complex organizational structure,42-44 which may delay administration of a time-sensitive treatment such as reperfusion beyond the period of patient eligibility. Additional potential explanations for the absence of better teaching hospital performance on the reperfusion measure include competing technologies and research protocols.
In addition to differences in process of care indicators, we also found significant mortality differences. The bivariate analysis revealed that patients admitted to teaching hospitals had a significant mortality advantage at 30 days following admission. This unadjusted survival advantage of approximately 4% to 5% for teaching compared with nonteaching hospitals remained remarkably constant for each time interval (30, 60, and 90 days, and 2 years). This finding is consistent with the concept that treatment administered early during hospitalization or at the time of discharge confers an immediate and lasting benefit.
Adjustment for patient characteristics and receipt of therapy greatly
attenuated the apparent mortality difference between teaching and nonteaching
hospitals. Examination of the standardized coefficients from the multivariable models suggests that mortality was most strongly affected by receipt of therapy and less so by patient and hospital characteristics.38
Therefore, the posthospital survival advantage of patients admitted to teaching
hospitals may be due to better processes of care.
Although Chen et al22,45
reported similar findings using the CCP data set, they did not directly compare
the performance of teaching hospitals with nonteaching hospitals. Their main
comparison groups were top-ranked hospitals, as determined by US News & World Report, vs peer hospitals. The top-ranked hospitals, which provided better quality of care and had lower postdischarge mortality compared with both similarly equipped and nonsimilarly equipped hospitals not in the top 60 list, included only a small fraction of the major teaching hospitals
in the CCP data set. In addition, all comparison groups included teaching
hospitals. More specifically, Chen et al placed 60 teaching hospitals in the
top-ranked category, 426 teaching hospitals in the similarly equipped category,
and 357 teaching hospitals in the nonsimilarly equipped category. They did
subdivide teaching hospitals into major and minor categories but did not report
any subgroup analyses. Also, they only reported mortality at 30 days.
There are several important distinctions between our article and that
of Chen et al.22 We categorized hospitals exclusively
by teaching status, included an analysis of the administration of ACE inhibitors,
and extended mortality analyses to 2 years. Chen et al used 3 comparison groups
with teaching hospitals distributed among all groups, which limits direct
inferences about teaching status. Unlike Chen et al, we reported a gradient
according to hospital teaching status for both process and outcome of care.
Using multivariable analyses, we also investigated the relative contribution of teaching status and process of care to mortality.
Our study has several limitations. First, many important questions remain
to be answered, especially regarding the relationship of hospital characteristics
other than teaching status to quality of care and mortality. Second, data
were collected from retrospective chart review and administrative files, and
both sources have recognized limitations.46-48
Third, adjustment for patient socioeconomic factors was not performed; however,
because Medicare patients with lower socioeconomic status are more often treated
at teaching hospitals but receive poorer quality of care,49
lack of adjustment for socioeconomic status is likely to underestimate the
benefit of teaching hospital status. Fourth, we considered overall mortality
alone and did not examine cardiovascular deaths separately. Fifth, we did
not consider factors that could influence the probability of death after discharge.
Sixth, part of the apparent residual mortality difference may be a consequence
of inadequate risk adjustment. For example, simple adjustment for case mix
may be improved by the addition of measures of patient functional level,50 and different risk adjustment methods may give different
results.51
Three specific sources of bias in our study must be considered. First,
excluding patients transferred from another acute care facility may introduce
referral bias. The relationship of case mix and interhospital transfer is
complex, with several studies pointing to differences in severity of illness,
comorbidity, and adverse outcomes.52-58
However, it is not logical to attribute processes of care delivered at the initial hospital to a subsequent receiving hospital. Therefore, our main analysis focused on the set of patients not received in transfer.
Second, indicator bias may occur because the CCP indicators identified
a subset of patients with strong indications for therapy and did not include
many patients with weaker indications. As a result, many patients for whom
a specific therapy might be reasonable were not selected as ideal or eligible
candidates. Thus, the CCP quality indicators may select a subset of patients
different in critical aspects other than candidacy for therapy, leading to
biased estimates. However, when we performed all analyses using simple receipt
of therapy for all patients instead of receipt of therapy for only ideal or
eligible candidates, we obtained similar results. Also, the 4 indicators we
chose for this study do not completely capture quality of care for patients
with AMI, and comparisons based on other quality indicators might have shown
different results.59
Third, we excluded patients younger than 65 years, a Medicare population with a high proportion of dialysis patients.60
However, we found no evidence of "exclusion bias" when we performed all analyses
without the exclusion criteria and obtained similar results. Many patients,
such as those enrolled in Medicare managed care organizations, were excluded
by necessity because they were not in the original data set.
Our study also has several strengths. For example, we used a national data set that provided adequate power to detect overall differences in care. We placed our analysis in the classic quality improvement context of structure, process, and outcome.61 We used a database rich in clinical variables, which allowed determination of ideal status for receipt of therapy and adjustment for important factors such as severity of illness, comorbidity, and refusal of therapy.
In this study, we found that teaching hospitals provided more aspirin, β-blockers,
and ACE inhibitors to Medicare patients hospitalized with AMI and that there
was a gradient of increasing performance from nonteaching to minor teaching
to major teaching hospitals. We found no difference in the use of acute reperfusion
therapy. We also observed a corresponding teaching hospital survival advantage
with a gradient of increasing mortality from major teaching to minor teaching
to nonteaching hospitals. Mortality differences were attenuated by adjustment
for patient characteristics and process of care. In the multivariable analysis,
process of care had the strongest association with survival, suggesting that
unadjusted mortality is lower in teaching hospitals because they offer better
care.
It is important to emphasize that there is substantial room for improvement
by all hospitals. However, the emerging trend in the quality improvement community
of implementing change in an organizational context rather than in the context
of an individual health care professional offers new hope for improvement.62,63 Hospital characteristics such as
teaching status warrant serious consideration in the formulation of national
policies and programs to improve health care quality.
References
1. McArthur JH, Moore FD. The two cultures and the health care revolution: commerce and professionalism in medical care. JAMA. 1997;277:985-989.
2. Kralewski JE, Hart G, Perlmutter C, Chou SN. Can academic medical centers compete in a managed care system? Acad Med. 1995;70:867-872.
3. Mechanic R, Coleman K, Dobson A. Teaching hospital costs: implications for academic missions in a competitive market. JAMA. 1998;280:1015-1019.
4. Rosborough TK. Doctors in training: wasteful and inefficient? BMJ. 1998;316:1107-1108.
5. Zimmerman JE, Shortell SM, Knaus WA, et al. Value and cost of teaching hospitals: a prospective, multicenter, inception cohort study. Crit Care Med. 1993;21:1432-1442.
6. Cox JL, Chen E, Naylor CD. Revascularization after acute myocardial infarction: impact of hospital teaching status and on-site invasive facilities. J Gen Intern Med. 1994;9:674-678.
7. Iezzoni LI, Shwartz M, Moskowitz MA, Ash AS, Sawitz E, Burnside S. Illness severity and costs of admissions at teaching and nonteaching hospitals. JAMA. 1990;264:1426-1431.
8. Kuhn EM, Hartz AJ, Gottlieb MS, Rimm AA. The relationship of hospital characteristics and the results of peer review in six large states. Med Care. 1991;29:1028-1038.
9. Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709-1714.
10. Udvarhelyi IS, Rosborough T, Lofgren RP, Lurie N, Epstein AM. Teaching status and resource use for patients with acute myocardial infarction: a new look at the indirect costs of graduate medical education. Am J Public Health. 1990;80:1095-1100.
11. Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals: results of a regional study. JAMA. 1997;278:485-490.
12. Iglehart JK. Rapid changes for academic medical centers, 1. N Engl J Med. 1994;331:1391-1395.
13. Iglehart JK. Rapid changes for academic medical centers, 2. N Engl J Med. 1995;332:407-411.
14. Epstein AM. US teaching hospitals in the evolving health care system. JAMA. 1995;273:1203-1207.
15. Kassirer JP. Academic medical centers under siege. N Engl J Med. 1994;331:1370-1371.
16. Fogelman AM, Goode LD, Behrens BL, et al. Preserving medical schools' academic mission in a competitive marketplace. Acad Med. 1996;71:1168-1199.
17. Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of care for two common illnesses in teaching and nonteaching hospitals. Health Aff (Millwood). 1998;17:194-205.
18. Espehaug B, Havelin LI, Engesaeter LB, Vollset SE. The effect of hospital-type and operating volume on the survival of hip replacements: a review of 39,505 primary total hip replacements reported to the Norwegian Arthroplasty Register, 1988-1996. Acta Orthop Scand. 1999;70:12-18.
19. Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293-299.
20. Wells RD, Dahl B, Nilson B. Comparison of the levels of quality of inpatient care delivered by pediatrics residents and by private, community pediatricians at one hospital. Acad Med. 1998;73:192-197.
21. Scholer SJ, Pituch K, Orr DP, Clark D, Dittus RS. Effect of health care system factors on test ordering. Arch Pediatr Adolesc Med. 1996;150:1154-1159.
22. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Do "America's Best Hospitals" perform better for acute myocardial infarction? N Engl J Med. 1999;340:286-292.
23. Kassirer JP. Hospitals, heal yourselves. N Engl J Med. 1999;340:309-310.
24. Jencks SF, Wilensky GR. The health care quality improvement initiative: a new approach to quality assurance in Medicare. JAMA. 1992;268:900-903.
25. Ellerbeck EF, Jencks SF, Radford MJ, et al. Quality of care for Medicare patients with acute myocardial infarction: a four-state pilot study from the Cooperative Cardiovascular Project. JAMA. 1995;273:1509-1514.
26. Marciniak TA, Ellerbeck EF, Radford MJ, et al. Improving the quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA. 1998;279:1351-1357.
27. International Classification of Diseases, Ninth Revision, Clinical Modification. 6th ed. Washington, DC: Public Health Service, US Dept of Health and Human Services; 1997.
28. Pan CX, Glynn RJ, Mogun H, Choodnovskiy I, Avorn J. Definition of race and ethnicity in older people in Medicare and Medicaid. J Am Geriatr Soc. 1999;47:730-733.
29. Lauderdale DS, Goldberg J. The expanded racial and ethnic codes in the Medicare data files: their completeness of coverage and accuracy. Am J Public Health. 1996;86:712-716.
30. Ryan TJ, Anderson JL, Antman EM, et al. ACC/AHA guidelines for the management of patients with acute myocardial infarction: executive summary: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Acute Myocardial Infarction). Circulation. 1996;94:2341-2350.
31. Lambert-Huber D, Ellerbeck E, Wallace R, et al. Validating quality of care indicators for patients with acute myocardial infarction. Clin Perform Qual Health Care. 1994;2:219-222.
32. Collins R, Peto R, Baigent C, Sleight P. Aspirin, heparin, and fibrinolytic therapy in suspected acute myocardial infarction. N Engl J Med. 1997;336:847-860.
33. Hennekens CH, Albert CM, Godfried SL, Gaziano JM, Buring JE. Adjunctive drug therapy of acute myocardial infarction: evidence from clinical trials. N Engl J Med. 1996;335:1660-1667.
34. American Hospital Association Guide to the Health Care Fields. Chicago, Ill: Healthcare InfoSource; 1996.
35. Rosner B. Fundamentals of Biostatistics. 4th ed. Belmont, Calif: Wadsworth Publishing Co; 1995.
36. Krumholz HM, Chen J, Wang Y, Radford MJ, Chen YT, Marciniak TA. Comparing AMI mortality among hospitals in patients 65 years of age and older: evaluating methods of risk adjustment. Circulation. 1999;99:2986-2992.
37. Harrell FE Jr, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996;15:361-387.
38. Hamilton LC. Regression With Graphics: A Second Course in Applied Statistics. Belmont, Calif: Brooks/Cole; 1992.
39. Centor RM. Signal detectability: the use of ROC curves and their analyses. Med Decis Making. 1991;11:102-106.
40. Nagelkerke NJD. A note on a general definition of the coefficient of determination. Biometrika. 1991;78:691-692.
41. Liang K, Zeger S. Longitudinal data analysis for discrete and continuous outcomes. Biometrics. 1986;42:121-130.
42. Magnusen K. Organizational Design, Development, and Behavior: A Situational View. Glenview, Ill: Scott Foresman & Co; 1977.
43. Shortell SM, O'Brien JL, Carman JM, et al. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation. Health Serv Res. 1995;30:377-401.
44. al-Haider AS, Wan TT. Modeling organizational determinants of hospital mortality. Health Serv Res. 1991;26:303-323.
45. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Performance of the "100 top hospitals": what does the report card report? Health Aff (Millwood). 1999;18:53-68.
46. Quam L, Ellis L, Venus P, Clouse J, Taylor C, Leatherman S. Using claims data for epidemiologic research: the concordance of claims-based criteria with the medical record and patient survey for identifying a hypertensive population. Med Care. 1993;31:498-507.
47. Jollis J, Ancukiewicz M, DeLong E, Pryor D, Muhlbaier L, Mark D. Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research. Ann Intern Med. 1993;119:844-850.
48. Allison JJ, Wall TC, Spettell CM, et al. The art and science of chart review. Jt Comm J Qual Improv. 2000;26:115-136.
49. Kahn KL, Pearson ML, Harrison ER, et al. Health care for black and poor hospitalized Medicare patients. JAMA. 1994;271:1169-1174.
50. Covinsky KE, Justice AC, Rosenthal GE, Palmer RM, Landefeld CS. Measuring prognosis and case mix in hospitalized elders: the importance of functional status. J Gen Intern Med. 1997;12:203-208.
51. Iezzoni LI, Ash AS, Shwartz M, Daley J, Hughes JS, Mackiernan YD. Judging hospitals by severity-adjusted mortality rates: the influence of the severity-adjustment method. Am J Public Health. 1996;86:1379-1387.
52. Bernard AM, Hayward RA, Rosevear J, Chun H, McMahon LF. Comparing the hospitalizations of transfer and non-transfer patients in an academic medical center. Acad Med. 1996;71:262-266.
53. Clough JD, Kay R, Gombeski WR Jr, Nickelson DE, Loop FD. Mortality of patients transferred to a tertiary care hospital. Cleve Clin J Med. 1993;60:449-454.
54. Gordon HS, Rosenthal GE. Impact of interhospital transfers on outcomes in an academic medical center: implications for profiling hospital quality. Med Care. 1996;34:295-309.
55. Wyatt SM, Moy E, Levin RJ, et al. Patients transferred to academic medical centers and other hospitals: characteristics, resource use, and outcomes. Acad Med. 1997;72:921-930.
56. Schlesinger M, Dorwart R, Hoover C, Epstein S. The determinants of dumping: a national study of economically motivated transfers involving mental health care. Health Serv Res. 1997;32:561-590.
57. Newhouse JP. Do unprofitable patients face access problems? Health Care Financ Rev. 1989;11(2):33-42.
58. Ballard DJ, Bryant SC, O'Brien PC, Smith DW, Pine MB, Cortese DA. Referral selection bias in the Medicare hospital mortality prediction model: are centers of referral for Medicare beneficiaries necessarily centers of excellence? Health Serv Res. 1994;28:771-784.
59. Eddy DM. Performance measurement: problems and solutions. Health Aff (Millwood). 1998;17:7-25.
60. Greer J. End stage renal disease. Health Care Financ Rev. 1992;(suppl):199-205.
61. Donabedian A. The Definition of Quality and Approaches to Its Assessment: Explorations in Quality Assessment and Monitoring. Ann Arbor, Mich: Health Administration Press; 1980.
62. Kolb D, Rubin IM. Organizational Behavior: An Experimental Approach. Englewood Cliffs, NJ: Prentice-Hall; 1991.
63. Shortell SM. Remaking Health Care in America. San Francisco, Calif: Jossey-Bass; 1996.