Figure. Risk-Standardized 30-Day All-Cause Hospital Mortality Rate Based on Performance on Process Measures by Quintile. Ranges of the composite score for each quintile are as follows: first, 12.3-16.3; second, 16.3-17.6; third, 17.6-19.0; fourth, 19.0-20.5; fifth, 20.5-27.4. Error bars indicate the 95% confidence interval around the mortality rate in each quintile. [Image not available.]

Table 1. Hospital Characteristics (N = 962) [Image not available.]

Table 2. Hospital Performance in 2002-2003 on Process Measures and Risk-Standardized Mortality Rates [Image not available.]

Table 3. Correlation Coefficients for Process Measures and Hospital Risk-Standardized Mortality Rate (30-Day and In-Hospital Mortality Rates) [Image not available.]

Table 4. Item-Scale Correlations for Core Process Measures [Image not available.]

Table 5. Percent Variance in 30-Day Risk-Standardized Mortality Rates Explained by Each Process Measure and Composite Measure [Image not available.]
Original Contribution
July 5, 2006

Hospital Quality for Acute Myocardial Infarction: Correlation Among Process Measures and Relationship With Short-term Mortality


Author Affiliations: Department of Epidemiology and Public Health (Drs Bradley and Krumholz and Mr Elbel), Section of Cardiovascular Medicine, Department of Medicine (Drs Herrin, McNamara, and Krumholz and Mr Wang), and Robert Wood Johnson Clinical Scholars Program (Drs Bradley and Krumholz), Yale University School of Medicine, and Yale-New Haven Hospital Center for Outcomes Research and Evaluation (Dr Krumholz), New Haven, Conn; Kaiser Permanente Clinical Research Unit, Aurora, Colo, and Department of Preventive Medicine and Biometrics and the Division of Emergency Medicine, University of Colorado Health Sciences Center, Denver (Dr Magid); Health Services Research and Development Center of Excellence, Ann Arbor Veterans Affairs Medical Center, and the Department of Internal Medicine, Division of Cardiovascular Disease, University of Michigan Medical School, Ann Arbor (Dr Nallamothu); Department of Health Care Policy, Harvard Medical School, and Department of Biostatistics, Harvard School of Public Health, Boston, Mass (Dr Normand); Mid America Heart Institute and the University of Missouri–Kansas City (Dr Spertus).

JAMA. 2006;296(1):72-78. doi:10.1001/jama.296.1.72
Abstract

Context The Centers for Medicare & Medicaid Services (CMS) and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) measure and report quality process measures for acute myocardial infarction (AMI), but little is known about how these measures are correlated with each other and the degree to which inferences about a hospital's outcomes can be made from its performance on publicly reported processes.

Objective To determine correlations among AMI core process measures and the degree to which they explain the variation in hospital-specific, risk-standardized, 30-day mortality rates.

Design, Setting, and Participants We assessed hospital performance in the CMS/JCAHO AMI core process measures using 2002-2003 data from 962 hospitals participating in the National Registry of Myocardial Infarction (NRMI) and correlated these measures with each other and with hospital-level, risk-standardized, 30-day mortality rates derived from Medicare claims data.

Main Outcome Measures Hospital performance on AMI core measures; hospital-specific, risk-standardized, 30-day mortality rates for AMI patients aged 66 years or older.

Results We found moderately strong correlations (correlation coefficients ≥0.40; P values <.001) for all pairwise comparisons between β-blocker use at admission and discharge, aspirin use at admission and discharge, and angiotensin-converting enzyme inhibitor use, and weaker, but statistically significant, correlations between these medication measures and smoking cessation counseling and time to reperfusion therapy measures (correlation coefficients <0.40; P values <.001). Some process measures were significantly correlated with risk-standardized, 30-day mortality rates (P values <.001) but together explained only 6.0% of hospital-level variation in risk-standardized, 30-day mortality rates for patients with AMI.

Conclusions The publicly reported AMI process measures capture a small proportion of the variation in hospitals' risk-standardized short-term mortality rates. Multiple measures that reflect a variety of processes and also outcomes, such as risk-standardized mortality rates, are needed to more fully characterize hospital performance.

As part of the national effort to improve hospital quality, the Centers for Medicare & Medicaid Services (CMS) and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) monitor and publicly report hospital performance on acute myocardial infarction (AMI) “core” process measures approved by the Hospital Quality Alliance.1 Although the CMS/JCAHO process measures are considered indicators of quality of AMI care,2 little is known about how these measures track with each other. Five of the 7 CMS/JCAHO process measures assess medication prescription practices. Because these processes are likely to be amenable to similar quality improvement interventions, one might expect them to be strongly correlated at the hospital level. In contrast, timely reperfusion therapy, which involves coordination among various hospital services and personnel,3 may require other types of interventions and thus be less strongly correlated with the other process measures. Understanding how process measures are themselves correlated can suggest how sensitive hospital performance rankings may be to the process measures that are included.

Furthermore, whether inferences about a hospital's overall short-term risk-standardized mortality rate can be made from its performance on process measures is not known. Previous studies4,5 have emphasized the association between medication prescription rates and in-hospital mortality, but have been limited to ascertainment of in-hospital events, which can be substantially biased by length of stay.6 Thus, the degree to which process measure performance conveys meaningful information about short-term mortality rates remains unclear. Accordingly, we used data from the National Registry of Myocardial Infarction (NRMI) and CMS to determine the correlations among AMI process measures and the association between hospital performance on process measures and hospital-specific, risk-standardized, 30-day mortality rates. We calculated these mortality rates from CMS Medicare claims data using a risk-adjustment model7 endorsed by the National Quality Forum, which was previously validated against a model based on medical record data.

Methods
Study Design and Sample

We performed a cross-sectional analysis using hospitals that reported AMI discharges to the NRMI from January 2002 through March 2003 and CMS claims data on 30-day mortality for the same hospitals and time period. This was the most recent time period for which we could obtain NRMI data and CMS data with hospital identifiers included. As of April 2003, the NRMI deidentified hospitals in compliance with the Health Insurance Portability and Accountability Act. The NRMI includes patients who meet any of the following criteria for an AMI: total creatine kinase or creatine kinase MB values that were 2 or more times the upper limit of the normal range; electrocardiographic evidence of AMI; enzymatic, scintigraphic, or autopsy evidence of AMI; or a diagnosis of AMI according to the International Classification of Diseases, Ninth Revision, Clinical Modification (code 410.X1). Each process measure was calculated for each NRMI hospital that reported at least 10 eligible patients for the selected measure during the study period. Hospitals that did not report at least 10 eligible patients for any of the process measures during the study period were excluded. Consistent with our validated risk-standardized mortality model,7 we excluded hospitals with fewer than 12 patients with AMI in the CMS claims database for the study period.

Data and Measures

We examined hospital performance on the process measures known as the CMS/JCAHO “core measures” for AMI: β-blocker prescription at admission and discharge, aspirin prescription at admission and discharge, angiotensin-converting enzyme (ACE) inhibitor prescription at discharge, smoking cessation counseling for smokers during the admission, and time to reperfusion therapy. For these processes, calculated to be consistent with the CMS/JCAHO specifications,8 we used patient-level NRMI data to classify patients according to whether they were eligible for and whether they received each process, and from these classifications calculated each hospital's performance on each measure. For the admission medication prescription and timely reperfusion measures, we excluded patients who were transferred in, who were either transferred out or discharged, or who died on the day of arrival; for the discharge medication prescription and smoking cessation counseling measures, we excluded patients who were transferred out or died before discharge. Furthermore, for the smoking cessation counseling measure, we excluded nonsmokers, and for the time to reperfusion therapy measure, we excluded patients with ST-segment elevation myocardial infarction who did not receive fibrinolytic therapy or primary percutaneous coronary intervention within 6 hours of admission. Fulfillment of the time to reperfusion therapy process was defined as receiving fibrinolytic therapy within 30 minutes of hospital arrival or receiving percutaneous coronary intervention within 120 minutes of hospital arrival.
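To make the measure logic concrete, the sketch below encodes the timely reperfusion criterion just described. It is an illustration only; the record type and its field names are hypothetical, not NRMI variables.

```python
from dataclasses import dataclass

@dataclass
class ReperfusionRecord:
    """Hypothetical patient record; field names are illustrative, not NRMI's."""
    therapy: str               # "fibrinolysis" or "pci"
    minutes_to_therapy: float  # minutes from hospital arrival to therapy

def fulfills_timely_reperfusion(rec: ReperfusionRecord) -> bool:
    """Fibrinolytic therapy within 30 minutes of arrival, or primary PCI
    within 120 minutes, per the measure definition described above."""
    if rec.therapy == "fibrinolysis":
        return rec.minutes_to_therapy <= 30
    if rec.therapy == "pci":
        return rec.minutes_to_therapy <= 120
    return False
```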

Statistical Analysis

We estimated performance (ie, fulfillment rate) for each hospital using a separate hierarchical generalized linear model (HGLM) for each process measure. The HGLM technique9,10 allowed us to estimate hospital rates that account for the clustering of patients within hospitals and that reflect the precision afforded by the number of patients contributing at each hospital. In secondary analyses, we also calculated crude process rates for each hospital, defined as the number of times the selected process (eg, β-blocker at admission, timely reperfusion) was accomplished for eligible patients at a given hospital divided by the total number of eligible patients for that measure treated at that hospital. Crude rates were nearly identical to the HGLM-estimated rates.
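For intuition about why hierarchical estimates resemble crude rates yet differ most at small hospitals, the sketch below contrasts crude rates with a simple shrinkage estimator. The beta-binomial shrinkage is our illustrative stand-in, not the HGLM the study actually used, and the counts are invented.

```python
import numpy as np

def crude_rates(successes: np.ndarray, eligible: np.ndarray) -> np.ndarray:
    """Crude fulfillment rate: times the process was accomplished divided by
    the number of eligible patients at the hospital."""
    return successes / eligible

def shrunken_rates(successes, eligible, prior_strength: float = 20.0):
    """Illustrative stand-in for HGLM estimates: pull each hospital's crude
    rate toward the pooled rate, more strongly when few patients contribute."""
    pooled = successes.sum() / eligible.sum()
    return (successes + prior_strength * pooled) / (eligible + prior_strength)

# Invented counts for three hospitals (eg, beta-blocker at admission).
succ = np.array([48, 160, 6])
elig = np.array([50, 200, 10])
print(crude_rates(succ, elig))     # crude rates: 0.96, 0.80, 0.60
print(shrunken_rates(succ, elig))  # smallest hospital pulled hardest toward the pooled rate
```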

We calculated the risk-standardized, 30-day, all-cause mortality rate for each hospital using patient-level data from CMS Medicare claims data for patients aged 66 years or older with AMI discharged in the study period. We used CMS data because NRMI data do not extend beyond discharge. We included patients who were at least 66 years old to have a full year of prior claims history to establish comorbidities. For each hospital, 30-day, all-cause mortality rates for patients with AMI were risk standardized with a model7 that was developed using an HGLM. The model adjusts for clinical features of patients that are linked to differences in hospital-specific 30-day mortality rates, allowing one to identify the independent effects of process measures on mortality rates, adjusted for differences in case mix. This model,7 endorsed by the National Quality Forum, has good agreement with a model based on medical chart review data. Agreement between the hospital risk-standardized mortality rates estimated by our claims database model and the medical chart database model was high (correlation coefficient = 0.90; P<.001), and the mean difference in estimated hospital-specific, risk-standardized, 30-day mortality rates between the claims database model for each hospital and the medical chart database model was 0.00 (range, −0.03 to 0.03).7 The c statistic for the claims model was 0.77.
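The standardization step of such models can be summarized schematically: the ratio of deaths predicted using the hospital's own effect to deaths expected if the same patients were treated at an average hospital, multiplied by the overall rate. The sketch below shows that arithmetic, assuming patient-level predicted probabilities from a fitted hierarchical model are already in hand; it mirrors the general predicted/expected approach of hierarchical profiling, not the cited model itself.

```python
import numpy as np

def risk_standardized_rate(p_with_hospital_effect: np.ndarray,
                           p_average_hospital: np.ndarray,
                           national_rate: float) -> float:
    """Schematic predicted/expected standardization: 'predicted' deaths use
    the hospital's own random effect; 'expected' deaths assume the same
    patients (same case mix) were treated at an average hospital."""
    predicted = p_with_hospital_effect.sum()
    expected = p_average_hospital.sum()
    return (predicted / expected) * national_rate
```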

For comparison with a recent study,5 we also calculated the risk-standardized in-hospital mortality rate using NRMI data. This in-hospital mortality rate was estimated using an HGLM in which in-hospital mortality was adjusted for age, sex, body mass index, diabetes, chronic renal insufficiency, coronary artery disease, chronic obstructive pulmonary disease, use of prehospital electrocardiogram, heart rate, blood pressure, presence of ST-segment elevation myocardial infarction, heart failure type, and hours since symptom onset. We calculated the risk-standardized in-hospital mortality rate both with all patients included and then excluding patients who were transferred out.

Using the hospital performance estimates for each of the 7 process measures, we calculated the set of pairwise correlations. Because different numbers of patients were eligible for the process measures at different hospitals, all analyses in which the hospital was the unit of analysis were weighted by the total number of patients from that hospital who were included in the calculation of process measures. For each correlation, ρ, we tested the hypothesis that ρ = 0, adjusting the P values for multiple comparisons.11 We also calculated the Cronbach α coefficient12 for the 7 process measures and assessed the item-scale correlation13 for each measure, which reflects how well each measure correlates with the average of the remaining 6 measures.
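The quantities in this paragraph are standard and straightforward to compute. A minimal sketch follows, under our own simplifications (the item-scale correlations here are unweighted, whereas the study weighted hospital-level analyses by patient counts): a weighted Pearson correlation, the Sidak adjustment, the Cronbach α coefficient, and item-scale (item-rest) correlations.

```python
import numpy as np

def weighted_corr(x, y, w):
    """Pearson correlation with hospitals weighted by patient counts."""
    w = np.asarray(w, float)
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    vx = np.average((x - mx) ** 2, weights=w)
    vy = np.average((y - my) ** 2, weights=w)
    return cov / np.sqrt(vx * vy)

def sidak_adjust(p_values):
    """Sidak correction for k comparisons: p_adj = 1 - (1 - p)^k."""
    p = np.asarray(p_values, float)
    return 1.0 - (1.0 - p) ** len(p)

def cronbach_alpha(scores):
    """Cronbach alpha for an (n_hospitals x k_measures) score matrix."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def item_scale_corrs(scores):
    """Correlation of each measure with the mean of the remaining measures."""
    scores = np.asarray(scores, float)
    out = []
    for j in range(scores.shape[1]):
        rest = np.delete(scores, j, axis=1).mean(axis=1)
        out.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return np.array(out)
```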

We created a composite process measure using the measures that were most internally consistent based on the Cronbach α coefficients and item-scale correlations. The composite measure reflected the percentage of recommended processes fulfilled for each eligible patient. We used an HGLM with binomial response to estimate the average percentage of recommended processes that were fulfilled for patients in each hospital, accounting appropriately for patients clustered within hospitals.
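As a small worked example of what the composite represents, the sketch below computes an unshrunken per-hospital average of the fraction of recommended processes fulfilled per patient. The study estimated this quantity with a binomial HGLM rather than this crude average, and the counts here are invented.

```python
import numpy as np

def composite_score(fulfilled: np.ndarray, recommended: np.ndarray,
                    hospital_id: np.ndarray) -> dict:
    """Crude per-hospital composite: for each patient, the fraction of
    recommended processes fulfilled, averaged within hospital."""
    frac = fulfilled / recommended
    return {h: frac[hospital_id == h].mean() for h in np.unique(hospital_id)}

# Hypothetical example: 4 patients in 2 hospitals.
fulfilled = np.array([5, 4, 3, 2])     # processes received by each patient
recommended = np.array([5, 5, 4, 4])   # processes each patient was eligible for
hospital = np.array(["A", "A", "B", "B"])
print(composite_score(fulfilled, recommended, hospital))  # {'A': 0.9, 'B': 0.625}
```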

For the primary analysis, we used correlation analysis to determine the association of hospital risk-standardized 30-day mortality rates with hospital performance estimates on the process measures. We report both the relevant correlation coefficients and the percentage of the hospital-specific variation in risk-standardized mortality rates explained, ie, the square of the correlation coefficient, as indicators of the strength of the associations. To facilitate interpretation, we also examined the percentage of variation in risk-standardized 30-day mortality rates explained by hospital attributes, including teaching status, AMI volume, and geographical region.
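The “percentage of variation explained” is the R² of a hospital-level regression; for a single process measure it is simply the squared correlation coefficient. A minimal unweighted sketch:

```python
import numpy as np

def variance_explained(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an ordinary least-squares fit of risk-standardized mortality
    (y) on one or more hospital-level process scores (columns of X); for a
    single predictor this equals the squared correlation coefficient. The
    study's analyses were weighted by patient counts; this sketch is not."""
    X1 = np.column_stack([np.ones(len(y)), np.asarray(X, float)])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()
```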

We performed several secondary analyses to evaluate the robustness of our results. First, we restricted the NRMI patient sample to those who were aged 66 years or older, matching the age distribution of patients with CMS data. Second, we repeated all analyses using crude process measure rates, rather than rates estimated from the hierarchical models. Third, we used an alternative composite measure of process performance that incorporated all 7 process scores, rather than the 5 most strongly correlated. Fourth, we conducted the same analysis using NRMI and CMS data from 2001 to assess consistency in our findings over time. Fifth, we repeated the 30-day mortality analysis for hospitals with less than 5% transferred-out patients. Last, to replicate previous research,5 we performed the analysis using risk-standardized in-hospital mortality instead of 30-day mortality, using all patients and then again excluding patients who were transferred out.

Analyses were conducted using Stata version 9 (StataCorp, College Station, Tex), SAS version 8.0 (SAS Institute, Cary, NC), and HLM6 (Scientific Software International, Lincolnwood, Ill). All reported P values are 2-sided and were considered significant at P<.05.

Results
Samples

The sample for the analysis of hospital performance on process measures included 208 238 patients treated in 962 NRMI hospitals, representing a broad range of teaching and nonteaching hospitals, geographic regions, rural/urban location, number of beds, and annual AMI volumes (Table 1). The sample for the analysis of risk-standardized, 30-day mortality rates included 83 330 patients aged 66 years or older treated in the 899 NRMI hospitals that could be matched to CMS data. Hospital rates for each of the process measures and the composite process measure as well as 30-day and in-hospital mortality are shown in Table 2.

Correlation of Process Measures

We found moderately strong correlations (correlation coefficients >0.40; P values <.001) among all the medication prescription–related process measures. Even stronger correlations (correlation coefficients >0.60; P values <.001) were apparent between β-blocker prescription at admission and at discharge, β-blocker and aspirin prescription at admission, β-blocker and aspirin prescription at discharge, β-blocker and ACE inhibitor prescription at discharge, and aspirin and ACE inhibitor prescription at discharge (Table 3). The smoking cessation counseling and timely reperfusion therapy measures were less strongly correlated (most correlation coefficients ≤0.30) with each of the other 5 processes, although most correlation coefficients were statistically significant (P values <.001).

Item-scale correlations (Table 4) also indicated that the estimated performance measures pertaining to β-blocker, aspirin, and ACE inhibitor prescription had good internal consistency, with the correlations between any single medication measure and the average score across the remaining process measures ranging from 0.54 (for aspirin at admission) to 0.67 (for β-blocker prescription at discharge). In contrast, the item-scale correlations between the estimated proportion receiving smoking cessation counseling or receiving timely reperfusion therapy and the remaining process measures were both only 0.27 (Table 4). The Cronbach α coefficient for the measures pertaining to β-blocker, aspirin, and ACE inhibitor use was 0.86, and the Cronbach α coefficient for all 7 process measures was 0.81.

Process Measures and Hospital-Level Risk-Standardized Mortality Rates

The process measures each had a statistically significant but modest correlation with the risk-standardized, 30-day mortality rates (Table 3), individually explaining between 0.1% and 3.3% (Table 5) of hospital-level variation in risk-standardized, 30-day mortality rates. Hospitals across the 5 quintiles of the composite process measure had very similar risk-standardized, 30-day mortality rates (Figure). Furthermore, of the 180 hospitals in the top quintile of risk-standardized mortality rates, only 56 (31.1%) were in the top quintile of the composite process score. A model that included this composite measure and the smoking cessation counseling and timely reperfusion measures as independent variables explained only 6.0% of the hospital-level variation in risk-standardized, 30-day mortality rates.
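The quintile comparison in this paragraph can be reproduced mechanically once hospital-level composite scores and risk-standardized mortality rates (RSMRs) are in hand. A sketch follows; the orientation of the “top” quintiles (highest composite, lowest mortality) is our assumption.

```python
import numpy as np

def quintiles(values: np.ndarray) -> np.ndarray:
    """Label each hospital 1 through 5 by rank (1 = lowest fifth of values)."""
    ranks = values.argsort().argsort()
    return ranks * 5 // len(values) + 1

def best_fifth_overlap(composite: np.ndarray, rsmr: np.ndarray) -> float:
    """Fraction of hospitals in the lowest-mortality fifth that also fall in
    the highest-composite fifth (the 31.1% in the text is the analogous
    published result)."""
    best_mortality = quintiles(rsmr) == 1      # lowest RSMR fifth
    best_process = quintiles(composite) == 5   # highest composite fifth
    return (best_mortality & best_process).sum() / best_mortality.sum()
```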

To facilitate interpretation of this 6.0% of the variation explained, we compared it with the percentage of hospital-level variation in risk-standardized, 30-day mortality rates explained by hospital attributes and found that teaching status explained 6.5% of the variation, AMI volume explained 6.8% of the variation, and geographical location explained 4.5% of the variation in risk-standardized mortality rates.

Sensitivity Analysis

Our results did not differ substantially in secondary analyses using process measures for only patients who were 66 years or older or in analyses using crude rates rather than rates estimated from hierarchical models. The results were also largely unchanged when we used a full composite process score based on mean performance on all of the CMS/JCAHO measures, rather than on those most strongly correlated. In addition, our findings were consistent over time, showing similar results using earlier NRMI and CMS data from 2001. Furthermore, the results were similar in the analysis including only hospitals with transferred-out rates of less than 5%. However, we found that excluding patients who were transferred out from the calculation of in-hospital mortality rates led to a substantially stronger association between process measures and in-hospital mortality rates. In a model with the composite process measure, smoking cessation counseling measure, and time to reperfusion measure as independent variables and in-hospital mortality as the outcome, performance on process measures explained 13.0% of the variation in in-hospital mortality rates when we excluded patients who were transferred out and only 0.6% of the variation when we included all patients. We also found that hospitals with higher transfer rates had significantly lower AMI volume (correlation coefficient = −0.43; P<.001) and significantly worse performance (correlation coefficient = −0.78; P<.001) on the composite process measure.

Comment

We found that hospital performance on the CMS/JCAHO process measures for AMI explained only 6% of the hospital-level variation in short-term, risk-standardized mortality rates for patients with AMI. This finding suggests that a hospital's short-term mortality rates after AMI cannot be reliably inferred from performance on the publicly reported process measures. Our results highlight that the current process measures provide information that is complementary to, but not redundant with, a measure of 30-day mortality.

Our findings stand in contrast to a recent study by Peterson and colleagues,5 which concluded that there is a strong correlation between process and outcome. However, their finding may be largely attributable to the exclusion of patients who were transferred out, a group known to be healthier on average than patients who are not transferred.14 This exclusion biases upward the mortality rate at smaller hospitals, which transfer out more patients and, in our data, had significantly worse performance on the process measures. The exclusion of transferred-out patients therefore overestimates the association between process and in-hospital mortality. When Peterson and colleagues restricted their analysis to hospitals with lower transfer rates, the magnitude of the association between process and outcome was markedly attenuated, with an absolute difference of only 0.7% between the top and bottom quartiles of process measures.

There are several plausible reasons for the modest correlation between processes and risk-standardized mortality, none of which, in our view, undermine the importance of continued measurement of these process measures. The CMS/JCAHO core measures were not designed to be a surrogate for overall short-term hospital mortality and are, in fact, weighted toward strategies that improve long-term outcomes. In addition, there is relatively little variation across hospitals in some processes, such as aspirin use at admission, limiting the ability to discriminate hospitals based on their performance on this measure. Last, hospital mortality rates, even risk-standardized, are likely influenced by many factors that are independent of the core measures, including processes that involve patient safety, staffing, response to emergencies, and clinical strategies that may contribute to a hospital's outcome performance.

In light of the multiple determinants of hospital performance and patient survival that are not currently captured by the core measures, new research is needed to define these facets of care and to construct performance measures for key processes the current core measures do not address. As others have suggested,15-17 ideal measures will demonstrate a strong link to outcomes, provide actionable information, target populations at high risk for poor-quality care, allow for patient exceptions that do not reflect differences in quality, include adequate risk adjustment, and be feasible to implement. Identifying the processes of care that distinguish hospitals with exceptional performance in risk-standardized mortality rates, and improving their implementation nationally, offers a substantial opportunity to further improve hospital performance as measured by patient survival.

The analysis also revealed strong correlations among many of the national performance measures for AMI care, indicating that hospitals that perform well on one of these processes are likely to perform well on the others. However, other process measures were less strongly correlated with the admission and discharge medication–related measures. This finding indicates that different process measures reflect distinct components of quality in AMI care. Recent research18 has shown that hospital rankings can vary substantially with subtle changes in the weighting and aggregation rules of composite performance measures. Our work similarly demonstrates that hospital performance rankings are likely to differ substantially depending on which performance measures are evaluated.

Our results should be interpreted in light of several considerations. The process measures are derived from hospitals that were participating in the NRMI. Although NRMI hospitals reflect a spectrum of hospital types and were diverse in process and outcome measures of performance, they generally have greater AMI volume and more advanced cardiac facilities than hospitals not participating in the NRMI. It is possible that we inadequately adjusted for the risk profile of patients; however, our risk-adjustment model7 has good agreement with a medical record–based risk-adjustment model. Finally, our outcome was mortality, and other outcomes are also important to patients and should be evaluated in future studies.

In conclusion, although the core measures are important in pursuing improved AMI outcomes, in aggregate they capture only a small proportion of the hospital-level variation in risk-standardized 30-day mortality rates. Until additional process measures are developed that explain more of this variation, reporting not only the current core measures but also short-term risk-standardized mortality rates is a reasonable approach to characterizing hospitals' overall quality of care.

Article Information

Corresponding Author: Harlan M. Krumholz, MD, SM, Yale University School of Medicine, 333 Cedar St, PO Box 208088, New Haven, CT 06520-8088 (harlan.krumholz@yale.edu).

Author Contributions: Drs Bradley, Herrin, and Krumholz had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Bradley, Herrin, Elbel, McNamara, Magid, Spertus, Krumholz.

Acquisition of data: McNamara, Wang, Krumholz.

Analysis and interpretation of data: Bradley, Herrin, Elbel, McNamara, Magid, Nallamothu, Wang, Normand, Spertus, Krumholz.

Drafting of the manuscript: Bradley.

Critical revision of the manuscript for important intellectual content: Bradley, Herrin, Elbel, McNamara, Magid, Nallamothu, Wang, Normand, Spertus, Krumholz.

Statistical analysis: Bradley, Herrin, Normand.

Obtained funding: Krumholz.

Administrative, technical, or material support: Elbel.

Financial Disclosures: None reported.

Funding/Support: This research was supported by National Heart, Lung, and Blood Institute grant R01HL072575. Dr Bradley is supported by the Patrick and Catherine Weldon Donaghue Medical Research Foundation (grant 02-102) and a grant from the Claude D. Pepper Older Americans Independence Center at Yale (grant P30AG21342). Genentech Inc provided access to the NRMI data.

Role of the Sponsor: None of the above sponsors had any role in the design and conduct of the study, the management, analysis and interpretation of the data, or the preparation and revision of the manuscript.

Acknowledgment: The article has benefited from comments from John Brush, MD, Cardiology Consultants Ltd, and Eastern Virginia Medical School, Norfolk, Va; Jeptha Curtis, MD, Yale University School of Medicine, New Haven, Conn; and Martha Blaney, PharmD, Genentech Inc, South San Francisco, Calif.

References
1. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353:265-274.
2. Antman EM, Anbe DT, Armstrong PW, et al. ACC/AHA guidelines for the management of patients with ST-elevation myocardial infarction: executive summary: a report of the ACC/AHA Task Force on Practice Guidelines (Committee to Revise the 1999 Guidelines on the Management of Patients with Acute Myocardial Infarction). Circulation. 2004;110:588-636.
3. Bradley EH, Curry LA, Webster TR, et al. Achieving rapid door-to-balloon times: how top hospitals improve complex clinical systems. Circulation. 2006;113:1079-1085.
4. Granger CB, Steg PG, Peterson E, et al. Medication performance measures and mortality following acute coronary syndromes. Am J Med. 2005;118:858-865.
5. Peterson ED, Rose MT, Mulgund J, et al. Association between hospital process performance and outcomes among patients with acute coronary syndrome. JAMA. 2006;295:1912-1920.
6. Jencks SF, Williams DK, Kay TL. Assessing hospital-associated deaths from discharge data: the role of length of stay and comorbidities. JAMA. 1988;260:2240-2246.
7. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based upon 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113:1683-1692.
8. Performance Measurement Initiatives: Current Specification Manual for National Hospital Quality Measures. Joint Commission on Accreditation of Healthcare Organizations Web site. http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/Current+NHQM+Manual.htm. Accessibility verified June 9, 2006.
9. Normand SL, Glickman ME, Gatsonis CA. Statistical methods for profiling providers of medical care: issues and applications. J Am Stat Assoc. 1997;92:803-814.
10. Landrum MB, Normand SL. Analytic methods for constructing cross-sectional profiles of health providers. Health Serv Outcomes Res Method. 2000;1:23-47.
11. Sidak Z. Rectangular confidence regions for the means of multivariate normal distributions. J Am Stat Assoc. 1967;62:626-633.
12. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297-334.
13. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York, NY: McGraw-Hill; 1994.
14. Mehta RH, Stalhandske EJ, McCarger PA, Ruane TJ, Eagle KA. Elderly patients at highest risk with acute myocardial infarction are more frequently transferred from community hospitals to tertiary centers: reality or myth? Am Heart J. 1999;138:688-695.
15. Kerr EA, Krein SL, Vijan S, Hofer TP, Hayward RA. Avoiding pitfalls in chronic disease quality measurement: a case for the next generation of technical quality measures. Am J Manag Care. 2001;7:1033-1043.
16. McGlynn EA. Selecting common measures of quality and system performance. Med Care. 2003;41:I39-I47.
17. Spertus JA, Radford MJ, Every NR, Ellerbeck EF, Peterson ED, Krumholz HM. Challenges and opportunities in quantifying the quality of care for acute myocardial infarction. Circulation. 2003;107:1681-1691.
18. Jacobs R, Goddard M, Smith PC. How robust are hospital ranks based on composite performance measures? Med Care. 2005;43:1177-1186.