Krumholz HM, Rathore SS, Chen J, Wang Y, Radford MJ. Evaluation of a Consumer-Oriented Internet Health Care Report Card: The Risk of Quality Ratings Based on Mortality Data. JAMA. 2002;287(10):1277–1287. doi:10.1001/jama.287.10.1277
Author Affiliations: Section of Cardiovascular Medicine, Department of Medicine (Drs Krumholz, Chen, and Radford, and Messrs Rathore and Wang), and Section of Health Policy and Administration, Department of Epidemiology and Public Health (Dr Krumholz), Yale University School of Medicine, New Haven, Conn; Yale-New Haven Hospital Center for Outcomes Research and Evaluation, New Haven, Conn (Drs Krumholz and Radford); and Qualidigm, Middletown, Conn (Drs Krumholz and Radford). Dr Chen is currently affiliated with the Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia.
Context Health care "report cards" have attracted significant consumer interest,
particularly publicly available Internet health care quality rating systems.
However, the ability of these ratings to discriminate between hospitals is unknown.
Objective To determine whether hospital ratings for acute myocardial infarction
(AMI) mortality from a prominent Internet hospital rating system accurately
discriminate between hospitals' performance based on process of care and outcomes.
Design, Setting, and Patients Data from the Cooperative Cardiovascular Project, a retrospective systematic
medical record review of 141 914 Medicare fee-for-service beneficiaries
65 years or older hospitalized with AMI at 3363 US acute care hospitals during
a 4- to 8-month period between January 1994 and February 1996 were compared
with ratings obtained from HealthGrades.com (1-star: worse outcomes than predicted,
5-star: better outcomes than predicted) based on 1994-1997 Medicare data.
Main Outcome Measures Quality indicators of AMI care, including use of acute reperfusion therapy,
aspirin, β-blockers, and angiotensin-converting enzyme inhibitors; 30-day mortality rates.
Results Patients treated at higher-rated hospitals were significantly more likely
to receive aspirin (admission: 75.4% 5-star vs 66.4% 1-star, P for trend = .001; discharge: 79.7% 5-star vs 68.0% 1-star, P = .001) and β-blockers (admission: 54.8% 5-star
vs 35.7% 1-star, P = .001; discharge: 63.3% 5-star
vs 52.1% 1-star, P = .001), but not angiotensin-converting
enzyme inhibitors (59.6% 5-star vs 57.4% 1-star, P
= .40). Acute reperfusion therapy rates were highest for patients treated
at 2-star hospitals (60.6%) and lowest for 5-star hospitals (53.6% 5-star, P = .008). Risk-standardized 30-day mortality rates were
lower for patients treated at higher-rated than lower-rated hospitals (21.9%
1-star vs 15.9% 5-star, P = .001). However, there
was marked heterogeneity within rating groups and substantial overlap of individual
hospitals across rating strata for mortality and process of care; only 3.1%
of comparisons between 1-star and 5-star hospitals had statistically lower
risk-standardized 30-day mortality rates in 5-star hospitals. Similar findings
were observed in comparisons of 30-day mortality rates between individual
hospitals in all other rating groups and when comparisons were restricted
to hospitals with a minimum of 30 cases during the study period.
Conclusion Hospital ratings published by a prominent Internet health care quality
rating system identified groups of hospitals that, in the aggregate, differed
in their quality of care and outcomes. However, the ratings poorly discriminated
between any 2 individual hospitals' process of care or mortality rates during
the study period. Limitations in discrimination may undermine the value of
health care quality ratings for patients or payers and may lead to misperceptions
of hospitals' performance.
Increasing interest in the quality of health care has led to the development
of "report cards" to grade and compare the quality of care and outcomes of
hospitals, physicians, and managed care plans.3 The organizations
that produce these evaluations span the spectrum of popular periodicals, federal
and state agencies, nonprofit accreditation organizations, consulting companies,
and for-profit health care information companies.4
In addition, the Centers for Medicare and Medicaid Services (formerly called
the Health Care Financing Administration) has recently expressed interest
in developing a public performance report for hospitals.5
One of the most prominent organizations involved in providing health
care quality ratings is HealthGrades.com, Inc. This company has developed
"Hospital Report Cards" as part of an effort to provide comparative information
about quality of health care providers via the Internet.6-8
The company's Web site indicates that as "the healthcare quality experts,"
it is "creating the standard of healthcare quality."9
Using primarily publicly available Medicare administrative data to calculate
risk-adjusted mortality rates for a variety of conditions, HealthGrades.com
claims to provide "accurate and objective ratings" for hospitals to enable
patients to make "well-informed decisions about where to receive their care."
The service is free, and public interest in the Web site is substantial, with over
1 million visitors in 2001 and coverage of the company's rating system in
publications such as Yahoo! Internet Life10 and in print stories in USA Today and the Los Angeles Times.11,12
HealthGrades.com is publicly traded on NASDAQ and reported over $7 million
in revenue in 2000, with a 640% increase in ratings revenue over the fourth
quarter of 1999.13 With ratings soon appearing
for nursing homes, hospices, home health agencies, fertility clinics, linkages
to data concerning individual health plans and providers, and a recently announced
partnership with The Leapfrog Group,14 this
is one of the most ambitious health ratings resources available online today.
While hospital ratings are widely disseminated to the public, little
information is available about their validity. The HealthGrades.com rating
system uses publicly available Medicare Part A billing data for many of its
ratings, but its statistical methods have not been published in the peer-reviewed
literature, nor has any published study, to our knowledge, evaluated its performance.
By providing ready access to ratings for all US hospitals via a free, public-access
Web site, this rating system offers consumers, who may be unfamiliar with
the limitations of rating systems, an option that no other rating system today
provides—the opportunity to directly compare 2 individual hospitals'
"performance" for a variety of conditions. Use of such ratings may have substantial
benefit if it encourages hospitals to compete on quality, but may have significant,
unintended, and potentially deleterious consequences if the ratings do not
accurately discriminate between individual hospitals' performance. Accordingly,
we sought to determine if these ratings could discriminate between hospitals
based on their quality of care and outcomes.
For this evaluation we used data from the Cooperative Cardiovascular
Project (CCP), a national initiative to improve quality of care for Medicare
beneficiaries hospitalized with acute myocardial infarction (AMI). The CCP
involved the systematic abstraction of clinically relevant information from
more than 200 000 hospitalizations for AMI nationwide. As a highly prevalent
condition with significant morbidity and mortality and established quality
of care and outcomes measures, AMI is well suited to an assessment of hospital
performance. We compared hospital ratings with process-based measures of
the quality of AMI care and risk-standardized 30-day mortality based on medical
record review. Since the public is expected to be particularly interested
in comparisons between individual hospitals, we determined how often individual
higher-rated hospitals performed better than lower-rated hospitals in head-to-head
comparisons.
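The intuition behind such head-to-head comparisons can be sketched with a two-proportion z-test. This is a simplified stand-in, not the study's actual risk-standardization method, and all hospital counts below are illustrative: even when two hospitals' observed 30-day mortality rates differ by several percentage points, modest case volumes leave the difference statistically indistinguishable.

```python
# Hedged sketch: a two-proportion z-test as one plausible way to compare two
# hospitals' 30-day mortality rates head to head. The study's risk-standardized
# rates involve case-mix adjustment not reproduced here; numbers are invented.
from math import sqrt, erf

def two_proportion_z(deaths_a, n_a, deaths_b, n_b):
    """Return (z, two-sided P value) for H0: mortality rate A == rate B."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided P from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative: a hospital with 22% observed mortality (33/150 cases) vs one
# with 16% (24/150) -- a gap about as large as the study's aggregate 1-star
# vs 5-star difference, yet not significant at these sample sizes.
z, p = two_proportion_z(deaths_a=33, n_a=150, deaths_b=24, n_b=150)
print(f"z = {z:.2f}, P = {p:.2f}")
```

This kind of calculation makes the study's central point concrete: group-level differences between rating strata can be real while the overwhelming majority of individual hospital-pair comparisons remain statistically inconclusive.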