Public reporting of the quality of care delivered by physicians, hospitals, and other health care organizations has been around for a while. Some of the earliest efforts began in the 1990s, when the New York State Department of Health began reporting risk-adjusted mortality rates for surgeons performing cardiac surgery in that state (http://bit.ly/241M0FX). The early reports could be obtained by mailing a request to the Department of Health, which would send along a paper copy of the latest data.
Over time, as technology improved, so did the breadth and depth of public reporting. By 2004, the Centers for Medicare and Medicaid Services (CMS) was reporting performance data for nearly every hospital in the country, dozens of states were reporting their own data, and many private entities were publicly grading hospitals (http://go.cms.gov/1Gn8edX). Despite the proliferation of public reporting websites, CMS’ Hospital Compare uses the most validated set of metrics available and has remained the most comprehensive resource (http://1.usa.gov/1Rt7DGW).
There’s been just one problem: it’s unclear whether anyone actually uses it. Physicians and hospitals seem to use it to see how they compare with their competitors, but there’s no evidence that consumers use it (http://bit.ly/1XJhs8h). And that’s not surprising, because Hospital Compare is difficult to navigate, presenting performance data based on dozens of metrics in ways that are technically correct but incomprehensible for most consumers (http://1.usa.gov/1NlJoP6).
Making Reporting Accessible
So in an effort to make Hospital Compare data more accessible, CMS launched the stars program. The notion was simple: grade all the hospitals using a 1- to 5-star rating, in the way we grade restaurants or hotels. Although a restaurant’s most important qualities can be boiled down to 1 or 2 things (food and service), how do we best capture the multifaceted nature of hospital quality? CMS began by focusing on 1 domain (patient experience), with the expectation that it would build from there. In 2015, the agency released its first ratings, assigning each hospital 1 to 5 stars based on patient experience scores.
The approach has been controversial, with some arguing (http://bit.ly/1XJhE7x) that patient experience is a poor measure of quality (http://theatln.tc/1YHB50B) and others suggesting that stars oversimplify the complexity of hospital care. Thus, a crucial question remains: is this rating useful for consumers? Or might it actually do more harm than good?
To address this issue, my colleagues and I recently published a study in JAMA Internal Medicine (http://bit.ly/1WpE27e) that answered a simple question: if a patient used the star ratings, would he or she end up at a lower-mortality hospital? Although there are many features of hospital quality that matter, it's hard to imagine one that is more important than risk-adjusted mortality. Patients want to avoid complications and to be treated with dignity and respect, but nothing matters as much as avoiding premature death. And some hospitals are better at that than others (http://bit.ly/1U6PFzE).
We examined whether, holding all other factors constant, picking a 5-star hospital would lead patients to a hospital with lower mortality than a 1-star hospital. It turns out that it would, by a lot. The effect size is substantial: a 1.4 percentage point difference in mortality between a 5-star hospital (mortality rate 9.8%) and a 1-star hospital (mortality rate 11.2%), with a monotonic relationship (more stars, lower mortality). For every 70 patients shifted from a 1-star hospital to a 5-star hospital, we would save 1 life (1 divided by the absolute difference of 0.014 is roughly 70). That's an important effect.
When Are Stars Useful? And When Are They Not?
A few more findings are worth reflecting on. Our model held a lot of factors constant. We adjusted for hospital size, teaching status, location (urban vs rural), and even local health care markets (as measured by hospital referral region). This is likely the most patient-centered view: patients don't use stars in a vacuum; they typically choose among a small subset of similar hospitals.
But what if patients ignored everything else and just focused on the stars? Would they still be useful? When we reran the analysis without accounting for size, teaching status, and other factors, mortality rates adjusted only for patient risk were virtually identical across star levels (10.8% to 10.9%). We saw no relationship between the number of stars and mortality rates.
This means that stars aren’t a substitute for other information. For example, we know that for certain major conditions, large, teaching hospitals may have better outcomes (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2690120/). Patients shouldn’t ignore that. But when choosing among large teaching hospitals in their region, for instance, stars can be helpful. Holding those other factors constant, stars help patients identify lower-mortality hospitals.
Why are stars, which are based on patient experience, so helpful? Stars likely measure the effectiveness of an organization's underlying management and culture (http://bit.ly/1Swj07p). Hospitals' performance on patient experience is influenced by a variety of factors, such as patients' socioeconomic status or severity of illness (http://bit.ly/1rlMtVX). That's why it's not useful to compare the patient experience scores (and thus, the star ratings) of a small rural hospital with those of an urban teaching hospital; they care for very different populations. But among urban teaching hospitals, for instance, those with better patient experience are likely better managed and appear to have lower mortality.
New Star Ratings Coming
CMS recently announced that the agency plans to release a new star rating that will be far more comprehensive, combining approximately 60 different measures into a single star rating (http://bit.ly/1XJhs8h).
Will it be as useful to consumers? It depends on how well it holds up to examination. The new ratings combine useful measures, such as mortality and patient experience, with flawed ones, such as patient safety indicators calculated from claims data. CMS might have done better to stop with the patient experience stars; a program with contradictory star ratings could create more confusion and dilute the progress made so far.
In the complex world of measuring hospital quality, CMS’ system of hospital star ratings based on patient experience scores has been a good step forward. If used correctly, it can help steer patients to the right hospitals. But it has to be understood and used in context—namely, when comparing similar hospitals. The new ratings CMS is about to launch are far more comprehensive, including many more metrics. This may seem like a good idea, but it’s worth remembering that when it comes to quality measures, as in so many things in life, more isn’t better. Better is better. We need to focus on what we can measure well and, most important, focus on what matters most to patients.
Corresponding Author: Ashish K. Jha, MD, MPH (email@example.com).
Published online: April 20, 2016, at http://newsatjama.jama.com/category/the-jama-forum/.
Disclaimer: Each entry in The JAMA Forum expresses the opinions of the author but does not necessarily reflect the views or opinions of JAMA, the editorial staff, or the American Medical Association.
Additional Information: Information about The JAMA Forum is available at http://newsatjama.jama.com/about/. Information about disclosures of potential conflicts of interest may be found at http://newsatjama.jama.com/jama-forum-disclosures/.
Jha AK. The Stars of Hospital Care: Useful or a Distraction? JAMA. 2016;315(21):2265–2266. doi:10.1001/jama.2016.5638