Table 1. Description of Quality Measures Collected by the JCAHO and the CMS
Table 2. Distribution of Characteristics for Hospitals Participating and Not Participating in Performance Reporting to Either the JCAHO or the CMS*
Table 3. Performance on Quality Indicators for AMI, CHF, and Pneumonia Using Data From the JCAHO or the CMS
Table 4. Multivariate Predictors of Performance for Disease-Specific and Functional Composites
Table 5. Differences in Available Technology Between Hospitals in the Bottom Quartile and Top Quartile of the Technology Index*
Original Investigation
December 11/25, 2006

Quality of Care for the Treatment of Acute Medical Conditions in US Hospitals

Author Affiliations: Department of Health Care Policy, Harvard Medical School (Drs Landon, Normand, O’Malley, and McNeil and Mr Lessler), Division of General Medicine and Primary Care, Beth Israel Deaconess Medical Center (Dr Landon), Department of Biostatistics, Harvard School of Public Health (Dr Normand), and Department of Radiology, Brigham and Women's Hospital (Dr McNeil), Boston, Mass; and the Joint Commission on Accreditation of Healthcare Organizations, Chicago, Ill (Drs Schmaltz and Loeb).

Arch Intern Med. 2006;166(22):2511-2517. doi:10.1001/archinte.166.22.2511
Abstract

Background  The Joint Commission on Accreditation of Healthcare Organizations and the Centers for Medicare and Medicaid Services recently began reporting on quality of care for acute myocardial infarction, congestive heart failure, and pneumonia.

Methods  We linked performance data submitted for the first half of 2004 to American Hospital Association data on hospital characteristics. We created composite scales for each disease and used factor analysis to identify 2 additional composites based on underlying domains of quality. We estimated logistic regression models to examine the relationship between hospital characteristics and quality.

Results  Overall, 75.9% of patients hospitalized with these conditions received recommended care. The mean composite scores and their associated interquartile ranges were 0.85 (0.81-0.95), 0.64 (0.52-0.78), and 0.88 (0.80-0.97) for acute myocardial infarction, congestive heart failure, and pneumonia, respectively. After adjustment, for-profit hospitals consistently underperformed not-for-profit hospitals for each condition, with odds ratios (ORs) ranging from 0.79 (95% confidence interval [CI], 0.78-0.80) for the congestive heart failure composite measure to 0.90 (95% CI, 0.89-0.91) for the pneumonia composite. Major teaching hospitals had better performance on the treatment and diagnosis composite (OR, 1.37; 95% CI, 1.34-1.39) but worse performance on the counseling and prevention composite (OR, 0.83; 95% CI, 0.82-0.84). Hospitals with more technology available, higher registered nurse staffing, and federal/military designation had higher performance.

Conclusions  Patients are more likely to receive high-quality care in not-for-profit hospitals and in hospitals with high registered nurse staffing ratios and more investment in technology. Because payments and sources of payments affect some of these factors (eg, investments in technology and staffing ratios), policy makers should evaluate the effect of alternative payment approaches on quality.

Public reporting of standardized measures of quality has become an important component of quality improvement activities at national and local levels.1-3 Within the past several years, national reporting activities on hospitalized patients have begun. These began with a pilot program by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) in 2001, which was expanded in January 2004 to require JCAHO-accredited hospitals to submit monthly data on performance measures for 3 of 5 selected disease conditions.4,5 In parallel, the Centers for Medicare and Medicaid Services (CMS) has disseminated performance data from hospitals that participate in the Hospital Quality Alliance.6 This program was significantly enhanced when the Medicare Modernization Act of 2003 required that, beginning in 2004, hospitals report their performance on 10 measures in the areas of congestive heart failure (CHF), acute myocardial infarction (AMI), and pneumonia to receive their full Medicare payment update. These CMS measures were selected to overlap with the larger set of JCAHO measures in these areas. Although most hospitals participate in both reporting initiatives, some selectively submit data to only 1 of the 2 organizations.

Recently, an analysis of hospitals reporting to the Hospital Quality Alliance demonstrated significant variability in hospital quality by hospital referral region and selected hospital characteristics.7 That study, however, included data only from the 10-measure CMS “starter set” and did not include hospitals that reported data only to the JCAHO. Thus, although results on reports from the JCAHO and CMS are available on the Web on a hospital-by-hospital basis, to our knowledge, these have not been analyzed jointly so as to obtain a complete national picture of quality from data available through both organizations. Nor have there been any analyses with these data on the characteristics of hospitals associated with high quality of care.

In this study, we linked performance data reported to either the CMS or the JCAHO for the first half of 2004 from more than 4000 hospitals to data on hospital characteristics obtained from the American Hospital Association (AHA) National Survey of Hospitals to address 2 important questions. First, what is the quality of care in US hospitals for these 3 common medical conditions, as measured by the expanded set of indicators available through the JCAHO? Second, what hospital characteristics are associated with high-quality performance? As an extension of the first question, we also asked whether hospitals that provided high quality for 1 condition were likely to do so for the other 2, and we examined the extent to which indicators within and across conditions offered a consistent picture of quality.

Methods
Sources of data
Hospital Quality Data

For each hospital that submitted clinical data to either organization, we obtained data for all relevant discharges during January 1 through June 30, 2004.4,6 Because some of the CMS measures were submitted only for the second quarter of 2004, we preferentially used JCAHO data that were available for the entire 6-month reporting period. Of note, both data sets included all eligible patients and not just those covered by Medicare. The JCAHO measures that we examined were selected from a larger set of candidate measures under the direction of expert panels of nationally recognized physician leaders, and the Hospital Quality Alliance measures were a subset of these. The criteria for measure selection specified that the measure target improvement in the health of populations and be precisely defined and specified, reliable, valid, and interpretable. The measures then underwent extensive pilot testing and validation by the JCAHO, with most measures demonstrating agreement rates of more than 90% on reabstractions done at a sample of hospitals.8 Samples of the CMS data are audited to ensure that the data being reported are accurate and, through a quality improvement organization, the data are validated by reabstracting a sample of medical records.6,9 We focused on processes of care for pneumonia, AMI, and CHF (Table 1). We excluded from our analyses 3 measures in the JCAHO data that were reported at the hospital level rather than at the individual patient level. When available, we substituted results on similar measures in the CMS data that were provided at the patient level.

AHA National Survey

We used the 2003 Annual Survey of Hospitals from the AHA to define the population of hospitals operating in the United States.10 We restricted our analyses to general medical/surgical hospitals and specialty “heart hospitals” (a small number) that focus on cardiovascular care. The survey data contain a core set of variables that are available for all hospitals in the data set and an expanded set of variables for hospitals that responded to the 2003 survey. The response rate for general medical/surgical hospitals was approximately 90%.

Linking procedures

We linked the CMS and JCAHO databases to the central AHA database by using information contained within the files, as well as logical algorithms that were based on hospital name and location supplemented by Internet searches and telephone calls. We successfully linked all but 181 of the hospitals. Of these, 111 were military or specialty hospitals (eg, psychiatric or orthopedic), 12 opened in 2003 or later, and 58 were present in either or both of the JCAHO and CMS databases but no entry could be identified in the AHA database.

For overlapping measures in the 2 databases, we identified discrepancies in quality measures for fewer than 1% of the entries, suggesting that hospitals overwhelmingly submitted the same data to both data sets. Differences generally occurred if multiple hospitals in a system submitted aggregated data across the system to either the CMS or the JCAHO. We attempted to maintain the smallest reporting entity (eg, a single hospital instead of a system) from either source for our analyses.

Definition of composite measures

For analyses predicting quality of care, we used 2 distinct types of composite measures. The first is a set of disease-specific composite measures created using an "opportunity" score approach: for each hospital, the composite for a disease is the number of opportunities met across all measures within that disease divided by the total number of opportunities.11
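In concrete terms, the opportunity score reduces to pooling met and eligible counts across a disease's measures before dividing. The following is a minimal sketch; the data layout and names are illustrative, not taken from the JCAHO or CMS files:

```python
def opportunity_score(measure_counts):
    """Disease-specific composite for one hospital.

    measure_counts: list of (met, eligible) tuples, one per measure
    within a disease (hypothetical layout). Returns the fraction of
    all opportunities that were met, or None if no patients qualified.
    """
    met = sum(m for m, _ in measure_counts)
    eligible = sum(e for _, e in measure_counts)
    return met / eligible if eligible else None

# Example: 3 AMI measures -> (40 + 18 + 45) / (50 + 20 + 50) ≈ 0.86
score = opportunity_score([(40, 50), (18, 20), (45, 50)])
```

Note that pooling counts (rather than averaging per-measure rates) gives measures with more eligible patients proportionally more weight in the composite.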

The second type of composite measure was based on the results of a factor analysis that included all of the measures from each of the 3 conditions considered together. The factor analysis was used to identify those measures that were related to underlying domains of quality that were common to all 3 conditions; that is, these “functional” composites consisted of core types of processes that crosscut multiple conditions.

Hospital characteristics

Analyses linking quality to hospital characteristics included several items for all hospitals in the data set: number of beds, ownership (for-profit, not-for-profit, government, or military), region, metropolitan statistical area type (rural, small, medium, or large), and teaching status (major teaching [member of the Council of Teaching Hospitals], minor teaching [any other medical school affiliation or residency program], or nonteaching). In addition, for hospitals that responded to the 2003 AHA survey, we included measures that assessed the availability of advanced technologies (eg, magnetic resonance imaging and positron-emission tomography), nurse staffing patterns, and the number of Medicaid and Medicare discharges. Nurse staffing levels were calculated as the number of hours of care by a registered nurse or licensed practical nurse per adjusted inpatient day, based on a standard work year of 2080 hours per full-time-equivalent nurse (40 h/wk for 52 weeks). Nurse staffing levels and the proportion of admissions covered by Medicare and Medicaid were divided into quartiles. Because of the collinearity among the individual technologies, we used data from the expanded set of variables to create a technology index.12,13 The index weights the presence of a technology according to the percentage of hospitals that do not possess it and then sums these weights for each hospital. The key attribute of this index is that it increases with the addition of technologies that are rare.
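The rarity weighting can be sketched as follows, assuming a simple hospitals-by-technologies indicator matrix; the variable names are our own, not drawn from the AHA files:

```python
import numpy as np

def technology_index(has_tech):
    """Compute a rarity-weighted technology index.

    has_tech: 2-D boolean array, rows = hospitals, columns = technologies
    (a hypothetical layout). Each technology is weighted by the fraction
    of hospitals that lack it, so a rare technology (eg, PET) contributes
    more to the index than a ubiquitous one, which contributes nearly zero.
    """
    weights = 1.0 - has_tech.mean(axis=0)   # fraction of hospitals without each technology
    return has_tech.astype(float) @ weights  # one index value per hospital

# Example: 4 hospitals, 3 technologies of decreasing prevalence.
has_tech = np.array([[1, 1, 1],
                     [1, 1, 0],
                     [1, 0, 0],
                     [1, 0, 0]], dtype=bool)
index = technology_index(has_tech)  # hospital 0, holding the rarest technology, scores highest
```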

Statistical analyses

We first compared hospitals participating in public reporting to those not participating. We defined a hospital as participating if performance data on any indicator were submitted to the JCAHO, the CMS, or both. We tested bivariate associations using 2-tailed t tests for continuous variables and χ2 tests for categorical variables.
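These comparisons can be reproduced with standard routines; in this sketch the variable names are placeholders for columns in the linked AHA file:

```python
from scipy import stats

# Continuous characteristic (eg, bed count), split by reporting status (hypothetical arrays).
t_stat, p_value = stats.ttest_ind(beds_reporting, beds_nonreporting)

# Categorical characteristic: hypothetical contingency table of ownership x participation.
chi2, p_value, dof, expected = stats.chi2_contingency(ownership_table)
```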

Creating Functional Composite Measures

To perform the factor analysis, we first estimated a hospital-level covariance matrix using a multilevel model that allowed us to treat each hospital as if it had reported data on each measure. We used the principal factor method with oblique (Promax) rotation. The number of factors was chosen according to an adapted version of Guttman's criteria for factor selection.14 An item was initially assigned to a factor on which it had a factor loading of greater than 0.3, or to the factor with the highest loading. The resulting composites were then reviewed from a clinical perspective to ensure that the consequent assignments were sensible, and labels were assigned to reflect the underlying functions represented by the composites.
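A rough analogue of this procedure is sketched below. It is not the authors' SAS code: the factor_analyzer package, the input matrix X of hospital-level measure scores, and the skipping of the multilevel covariance-estimation step are all our assumptions.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party package, assumed available

# X: hospitals x measures array of hospital-level performance scores (hypothetical).
fa = FactorAnalyzer(n_factors=2, rotation="promax", method="principal")
fa.fit(X)
loadings = fa.loadings_  # measures x factors

# Assign each measure to a factor: a single loading > 0.3 wins;
# otherwise fall back to the factor with the highest loading.
assignments = []
for row in loadings:
    above = np.flatnonzero(row > 0.3)
    assignments.append(int(above[0]) if above.size == 1 else int(np.argmax(row)))
```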

Measuring and Predicting Quality

For each individual and composite measure, we calculated the mean performance and the 25th and 75th percentiles. We then identified the top-performing quintile of hospitals for each of the 3 diseases and created cross-tabulations and correlations comparing top performers across the 3 disease-specific composites.
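With the disease composites in a pandas DataFrame (hypothetical column names), this comparison reduces to a few lines:

```python
import pandas as pd

# scores: one row per hospital, columns 'ami', 'chf', 'pneumonia' (hypothetical).
top_quintile = scores.apply(lambda s: s >= s.quantile(0.8))  # top 20% per disease
overlap = pd.crosstab(top_quintile["ami"], top_quintile["chf"])  # joint top-performer counts
correlations = scores.corr()  # pairwise correlations among disease composites
```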

For analyses predicting quality performance, we analyzed the disease-specific and functional composite measures for the 3627 hospitals (89.4% of reporting hospitals) that responded to the AHA survey. To account for the varying number of opportunities across the sample of hospitals (largely because of sample size differences), we fit a binary logistic model using SAS statistical software (SAS Institute Inc, Cary, NC) to the grouped hospital data that modeled the number of opportunities met in each hospital per total number of opportunities at the hospital. This model is a random-effects logistic regression model that permitted the probability of a met opportunity to vary across hospitals. Logistic regression models were estimated separately for each composite.
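The authors fit this model in SAS. As an illustration only, a simplified fixed-effects analogue for grouped binomial data can be written with statsmodels; the column names are hypothetical, and the hospital-level random intercept is omitted here (statsmodels' BinomialBayesMixedGLM, or the authors' SAS approach, would restore it):

```python
import numpy as np
import statsmodels.api as sm

# df: one row per hospital; 'met' and 'total' are opportunity counts, and the
# predictor columns are precomputed 0/1 indicators (all names are hypothetical).
endog = np.column_stack([df["met"], df["total"] - df["met"]])  # (successes, failures)
exog = sm.add_constant(df[["for_profit", "major_teaching", "tech_top_quartile"]])
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
odds_ratios = np.exp(fit.params)  # eg, for-profit vs the reference ownership category
```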

Results
Hospital characteristics

We identified 4856 general medical/surgical or specialty heart hospitals to include in our analyses. Of these, 3066 submitted data to both the JCAHO and the CMS, 771 submitted to the CMS only, and 222 submitted to the JCAHO only, resulting in a total of 4059 hospitals for which we had performance data. Military and Veterans Affairs hospitals submitted data to the JCAHO only. The reporting hospitals were generally representative of hospitals in the United States, although hospitals that were small or located in rural areas or were nonteaching were less likely to report to either data source (Table 2). In aggregate, nonreporting hospitals accounted for fewer than 1.5% of hospital admissions nationally.

Quality of care

Overall, hospitalized patients with these conditions received 75.9% of recommended processes of care. Performance on the individual measures varied considerably by measure, ranging from a mean of 0.36 on thrombolytic therapy administered within 30 minutes of arrival (interquartile range, 0.00-0.67) to a mean of 0.98 (interquartile range, 0.98-1.00) for assessment of oxygenation for patients with pneumonia (Table 3). The mean composite scores and their associated interquartile ranges for AMI, CHF, and pneumonia were 0.85 (0.81-0.95), 0.64 (0.52-0.78), and 0.88 (0.80-0.97), respectively (higher values corresponded to better quality of care). When hospitals were grouped into quintiles of performance, 10.5% of them were in the top quintile for 2 of the 3 diseases, and only 3.8% were in the top quintile for all 3 diseases. Correlations of performance among the 3 disease-specific composite measures were generally low, ranging from 0.12 (for pneumonia and CHF) to 0.42 (for AMI and CHF) (data not shown).

The factor analyses suggested that 2 underlying domains of quality spanned across the 3 conditions. The first factor, treatment and diagnosis (Cronbach α = 0.92), includes items such as aspirin at arrival for AMI and assessment of left ventricular function for CHF. The second factor, counseling and prevention (Cronbach α = 0.83), contains items such as smoking cessation advice and discharge instructions for CHF. The items included in each factor are indicated in the footnotes to Table 3.

Multivariate predictors of quality

After multivariate adjustment (Table 4), for-profit hospitals consistently performed worse than not-for-profit hospitals for each condition, with odds ratios (ORs) ranging from 0.79 (95% confidence interval [CI], 0.78-0.80) for the CHF composite measure to 0.90 (95% CI, 0.89-0.91) for the pneumonia composite. In contrast, federal and military hospitals consistently had the highest performance, as did hospitals accredited by the JCAHO. The performance for rural hospitals was lower for AMI and CHF but higher for pneumonia.

Quality according to teaching status and number of beds was variable. Compared with nonteaching hospitals, major teaching hospitals provided higher quality for patients with AMI but not for CHF or pneumonia. They also had higher quality for the treatment and diagnosis composite (OR, 1.37; 95% CI, 1.34-1.39), but lower performance on the counseling and prevention composite (OR, 0.83; 95% CI, 0.82-0.84). As the share of Medicaid patients increased, performance decreased. Hospitals with more (vs less) technology available had higher performance, with the strongest relationship being with the treatment and diagnosis composite score (OR, 1.29; 95% CI, 1.26-1.32 for the highest quartile). Typical differences in specific technologies between hospitals in the highest quartile of the technology index and those in the lowest quartile are presented in Table 5. Finally, higher registered nurse staffing patterns were associated with higher-quality care on all of the measures examined, whereas increased licensed practical nurse staffing was associated with lower performance (Table 4).

Comment

We evaluated the quality of hospital care in 2004 for 3 diseases in more than 4000 hospitals, including data from both the JCAHO and the CMS as well as 7 additional measures that, to our knowledge, have not been previously reported. Overall, hospitalized patients in the United States with AMI, CHF, and pneumonia received 76% of recommended processes of care, somewhat higher than the rate observed in outpatient settings.15 Our data also indicate that quality performance across the 3 conditions was not highly correlated, although approximately 15% of hospitals were in the top quintile of quality performance for at least 2 of the 3 diseases. This fact alone indicates the difficulty of making a generic rating of a hospital's quality: a hospital that is best in one sphere may not be best in another.

Because these data indicate the need for substantial improvement, we identified correlates of good care. Our data demonstrate that quality of care is best for hospitals that invest in technology, for federal and military hospitals, and for hospitals with high levels of registered nurse staffing. Conversely, for-profit hospitals and hospitals that served greater proportions of Medicaid patients had low quality across all of the conditions studied. Finally, the results of our factor analysis suggest that quality performance may vary more by functional roles in the hospital, such as treatment and diagnosis vs counseling and prevention, than by the particular disease being treated. Consequently, efforts to improve quality in hospitals should focus on core competencies that can improve care across multiple diagnoses.

Our study supports the importance of adequate nursing care to quality.16-18 Previous studies of nurse staffing have focused on outcomes or complication rates derived from administrative claims. For instance, Needleman and colleagues17 demonstrated an association between nurse staffing patterns and mortality and complications. Our data on processes of care support these associations and suggest potential processes through which they operate. Nurses, as the primary caregivers for hospitalized patients, provide a crucial link between physicians and patients, and high levels of nurse staffing also allow for more counseling and other duties to be performed by nurses. Also consistent with the published literature, hospital ownership and teaching status were significantly related to performance across each of the 3 conditions we examined.19-23

Our findings have implications for both policy and patient choice. From a policy perspective, several features of hospitals that were associated with quality performance are not remediable except through changes in policy. For instance, some regions of the country and rural locations were generally associated with low performance.24 Patients living in rural areas have little choice of hospitals without traveling long distances, and patients in low-performing regions of the country are unlikely to travel to other regions for their medical care. Additional resources aimed at bolstering performance in these parts of the country could help to mitigate these disparities. Conversely, other characteristics of hospitals, including ownership, teaching status, JCAHO accreditation, and investments in technology and nursing, were also strongly related to performance; these characteristics are often remediable and can be used to inform patient choice. Because a large percentage of the federal and military hospitals are part of the Veterans Health Administration, lessons learned from their decade-long experience in quality improvement likely deserve further study.25

Our study is subject to several limitations. First, although we studied quality of care for 3 medical conditions that account for a sizable number of medical discharges, treatments such as surgery are not represented in these data. Second, hospitals in our study were scored on the basis of a number of measures for which they qualified, without adjustment for disease or case mix at the individual hospitals. Consequently, there might have been differences based on case mix or severity that were not captured in our data. However, these measures were designed with specific exclusion and inclusion criteria so that all suitable candidates were eligible for the measure. Third, a substantial number of hospitals did not report data to either data set. We note, however, that these hospitals provide care for fewer than 1.5% of hospital admissions nationally. Fourth, our data are cross-sectional in nature. Thus, the associations we report are not proof of causality. Finally, the measures of quality that we examined have been the focus of national attention, and improvement in quality using these measures has already been demonstrated.26,27 However, the extent to which these data are indicative of quality for other conditions is unknown.28,29

Our study results indicate that hospitalized patients with pneumonia, CHF, and AMI receive about 76% of the recommended processes of care we studied. This rate is higher than that previously observed for outpatient care; however, substantial gaps in performance still exist. Our results also suggest that characteristics of hospitals, including ownership, teaching status, location, and accreditation, are significant predictors of performance. Efforts to improve hospital quality that focus on domains of treatment that apply across multiple types of conditions are likely to have more impact than efforts aimed at improving quality for a single condition.

Article Information

Correspondence: Bruce E. Landon, MD, MBA, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115 (landon@hcp.med.harvard.edu).

Accepted for Publication: September 15, 2006.

Author Contributions: Study concept and design: Landon, Normand, Schmaltz, Loeb, and McNeil. Acquisition of data: Landon, Lessler, Schmaltz, Loeb, and McNeil. Analysis and interpretation of data: Landon, Normand, O'Malley, Schmaltz, and McNeil. Drafting of the manuscript: Landon, Normand, Lessler, and McNeil. Critical revision of the manuscript for important intellectual content: Normand, Lessler, O'Malley, Schmaltz, Loeb, and McNeil. Statistical analysis: Normand, O'Malley, and Schmaltz. Obtained funding: McNeil. Administrative, technical, and material support: Lessler. Study supervision: Normand and McNeil.

Financial Disclosure: None reported.

Acknowledgment: We are indebted to Lin Ding, PhD, and Amy Cohen for assistance with expert statistical programming and to Aimee Wickman for assistance with manuscript preparation.

References
1. View California healthcare quality ratings. HealthScope Web site. http://www.healthscope.org/. Accessed May 31, 2006.
2. NCQA home page. National Committee for Quality Assurance Web site. http://hprc.ncqa.org/index.asp. Accessed April 1, 2005.
3. Medicare personal plan finder. Medicare Web site. http://www.medicare.gov/MPPF/Include/DataSection/Questions/Welcome.asp?version=default&browser=. Accessed April 27, 2005.
4. Performance measurement initiatives. Joint Commission on Accreditation of Healthcare Organizations Web site. http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/. Accessed April 15, 2005.
5. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353:255-264.
6. US Department of Health and Human Services. Hospital Compare: a quality tool for adults, including people with Medicare. http://www.hospitalcompare.hhs.gov/Hospital/Static/Data-Professionals.asp?dest=NAV|Home|DataDetails|ProfessionalInfo#TabTop. Accessed April 27, 2005.
7. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353:265-274.
8. Williams SC, Watt A, Schmaltz SP, Koss RG, Loeb JM. Assessing the reliability of standardized performance indicators. Int J Qual Health Care. 2006;18:246-255.
9. Watt A, Williams S, Lee K, Robertson J, Koss RG, Loeb JM. Keen eye on core measures: Joint Commission data quality study offers insights into data collection, abstracting processes. J AHIMA. 2003;74:20-25.
10. American Hospital Association. American Hospital Association Guide to the Health Care Field. Chicago, Ill: Healthcare InfoSource; 1996.
11. Hospital quality incentive demonstration project: summary of composite quality scoring methodology. Premier, Inc Web site. http://www.premierinc.com/all/quality/hqi/resources/september-scoring-overview-september.pdf. Accessed April 27, 2005.
12. Mark BA, Harless DW, McCue M, Xu Y. A longitudinal examination of hospital registered nurse staffing and quality of care [published correction appears in Health Serv Res. 2004;39:1629]. Health Serv Res. 2004;39:279-300.
13. Spetz J, Baker L. Has Managed Care Affected the Availability of Medical Technology? San Francisco: Public Policy Institute of California; 1999.
14. Guttman L. Some necessary conditions for common-factor analysis. Psychometrika. 1954;19:149-161.
15. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635-2645.
16. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987-1993.
17. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715-1722.
18. Person SD, Allison JJ, Kiefe CI, et al. Nurse staffing and mortality for Medicare patients with acute myocardial infarction. Med Care. 2004;42:4-12.
19. Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293-299.
20. Sloan FA, Trogdon JG, Curtis LH, Schulman KA. Does the ownership of the admitting hospital make a difference? Med Care. 2003;41:1193-1205.
21. Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256-1262.
22. Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of care for two common illnesses in teaching and nonteaching hospitals. Health Aff (Millwood). 1998;17:194-205.
23. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80:569-593, v.
24. Baldwin LM, MacLehose RF, Hart LG, Beaver SK, Every N, Chan L. Quality of care for acute myocardial infarction in rural and urban US hospitals. J Rural Health. 2004;20:99-108.
25. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. N Engl J Med. 2003;348:2218-2227.
26. Medicare demonstration shows hospital quality of care improves with payments tied to quality [press release]. Centers for Medicare & Medicaid Services Web site. November 15, 2005. http://www.cms.hhs.gov/apps/media/press/release.asp?Counter=1729. Accessed October 3, 2006.
27. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries. JAMA. 2000;284:1670-1676.
28. Boyd CM, Darer J, Boult C, Fried LP, Boult L, Wu AW. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA. 2005;294:716-724.
29. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293:1239-1244.