July 27, 2020

National Hospital Quality Rankings: Improving the Value of Information in Hospital Rating Systems

Author Affiliations
  • 1Department of Medicine, Duke University School of Medicine, Durham, North Carolina
  • 2Duke University Health System, Durham, North Carolina
  • 3Department of Population Health Sciences, Duke University, Durham, North Carolina
  • 4Duke Clinical Research Institute, Durham, North Carolina
JAMA. Published online July 27, 2020. doi:10.1001/jama.2020.11165

Every year, hospitals are ranked or rated by public and private organizations that aim to identify centers that provide high-quality health care. Although the reports are intended to help guide consumers in determining where to seek care, these ranking systems often yield conflicting information or, worse, misinformation for patients and their clinicians.1 As an example, the US News & World Report Best Hospitals rankings correlate poorly with the Leapfrog Hospital Safety Grades (Spearman correlation, 0.28) and the Centers for Medicare & Medicaid Services (CMS) Star Ratings (Spearman correlation, 0.33).1 This conflicting information may lead hospitals and health systems to misdirect resources toward improving rankings on a particular measure and potentially miss opportunities to improve health and health care delivery.

Misinformation in Public Rankings

Despite the potential merits of public ranking systems, methodological limitations can result in misinformation for patients. For example, US News & World Report generates hospital rankings each year as a tool for patients to “…find sources of especially skilled inpatient care.”2 US News & World Report acknowledges the limitations of its methodology,3 but gives less attention to the shortcomings of its risk adjustment. Although the US News & World Report approach accounts for differences in patient age, sex, and coded prevalence of comorbid conditions, it does not capture differences in the underlying health status of the populations served. The result is an incomplete, and potentially misguided, view of hospital quality.

Socioeconomic factors have a major effect on patient health, and people of lower socioeconomic status experience comparatively worse health outcomes.4 Similarly, race and ethnicity are associated with health outcomes, and some groups, particularly African Americans, experience comparatively poorer health.5 Patterns of socioeconomic deprivation, race, and ethnicity vary markedly by region, and individuals in some regions are more likely than those in other regions to experience serious chronic illnesses (eg, obesity, diabetes, cardiovascular disease, and cancer), higher rates of associated complications, and lower life expectancy.6 The coronavirus disease 2019 (COVID-19) pandemic demonstrates all too clearly that pervasive disparities lead to poor health outcomes and excess mortality.7

The broad effect of regional variation in health outcomes is exemplified by widely varying life expectancy across the US.8 Areas with the lowest life expectancy have lower socioeconomic status9 and a higher proportion of African Americans.10 When the US News & World Report Best Hospital Honor Roll of 21 hospitals and life expectancy are combined (eFigure in the Supplement), a pattern emerges: only regions with higher life expectancy have hospitals on the Honor Roll, whereas regions with lower life expectancy (such as the entire Southeast region, including Alabama, Florida, Georgia, North Carolina, South Carolina, and Tennessee) have none. Many hospitals in these states provide excellent care, yet are not recognized on this listing.

A likely explanation for these regional differences is that hospital rankings reflect the underlying health status of the population served, as influenced by its socioeconomic environment, racial and ethnic composition, and life expectancy. The majority of the 21 hospitals on the Honor Roll have in their communities at least one separate city- or county-supported public hospital that serves a disproportionate number of disadvantaged patients in the community. As a result, hospitals listed on the Honor Roll are advantaged in that most do not care for the most disadvantaged patients, who tend to have worse health outcomes.

How, then, are consumers to understand and apply these rankings? Ordinal rankings convey a false sense of precision and do not truly differentiate the 50th best hospital from the first, let alone the first from the second. In addition, the rankings do not facilitate decision-making if there are no top-rated hospitals in the region and patients do not want to or cannot travel to a different state. Before COVID-19, traveling by plane to a “best hospital” was a luxury afforded only to those patients with socioeconomic advantage. The effects of COVID-19 on travel and personal finances will further limit which patients can afford to travel for health care.

How Rankings Create Perverse Incentives

The visibility of hospital rankings and the desire to achieve and promote a high ranking in consumer-facing advertisements may prompt hospitals to invest in strategies to improve their rankings whether or not those strategies improve care delivery and outcomes. Even more insidiously, hospitals may avoid caring for the highest-risk, most vulnerable patients, further exacerbating disparities in care delivery. Expenditures on licensing fees to use US News & World Report Best Hospital rankings in hospital advertisements, or on efforts targeted toward improving the rankings, are unlikely to benefit patients directly.

Improving the Value of Information Reported

Several potential solutions could contribute to hospital ranking systems that improve both patient decision-making and population health.

Account for Population Differences Between Regions

Groups or organizations that develop hospital ranking or rating systems should account for regional differences in the overall health status of the population, particularly socioeconomic status, and in the care delivery environment. Incorporating regional variation in life expectancy would, for example, begin to normalize population differences.

Shift From Rankings to Ratings

Because current ranking systems are unable to adequately account for the overall health of a population or differences in socioeconomic status, the rankings lack the precision to differentiate where care is of higher quality. In the absence of such precision, ratings, not rankings, should be reported. While imperfect, rating systems such as the Leapfrog Hospital Safety Grades (A through F) or the CMS Star Program (1 through 5 stars) eliminate much of the concern for false precision. If rankings are to persist, they should at a minimum be presented in tiers to avoid conveying a false sense of precision to consumers.

Bring Attention to Measures That Matter to Patients

Rankings and rating systems could offer value by bringing attention to patient-centric measures of health. These include measures of quality of life, preventive care, longitudinal outcomes, and health care value. Adopting these measures could potentially decrease some of the disparities in health care delivery and outcomes observed in different regions of the country.

Increase Transparency and Reproducibility of Results

Groups that create and publish hospital quality rankings and ratings must ensure greater transparency regarding the data and methods they use. They must also ensure open access to what is developed and published. If hospitals cannot reproduce or validate their own performance, then they will also be unable to identify specific opportunities to improve the care they deliver.


Hospital ratings can potentially provide useful information for patients if the ratings are clear and accurate and the methods used to generate the ratings are rigorous and transparent. By embracing thoughtful engagement and partnership with patients and clinicians, organizations that develop and report hospital ratings have an opportunity to reduce misinformation and provide information that could improve public understanding of health care delivery.

Article Information

Corresponding Author: Adrian F. Hernandez, MD, MHS, 200 Morris St, Durham, NC 27701 (Adrian.hernandez@duke.edu).

Published Online: July 27, 2020. doi:10.1001/jama.2020.11165

Conflict of Interest Disclosures: Dr Curtis reported receiving grants from GlaxoSmithKline, Novartis, US Food and Drug Administration, Patient-Centered Outcomes Research Institute, the National Institutes of Health, Verily, and Medical Device Innovation Consortium outside the submitted work. No other disclosures were reported.

Additional Contributions: We thank A. Eugene Washington, MD (Duke University Health System), and Bradley Hammill, PhD (Duke Clinical Research Institute), for their valuable contributions to conceptualizing and developing the manuscript. We also thank Jonathan McCall, MS (Duke Forge, Duke University), for editorial assistance with the manuscript. These persons received no compensation for their contributions.

References

1. Hota B, Webb T, Chatrathi A, McAninch E, Lateef O. Disagreement between hospital rating systems: measuring the correlation of multiple benchmarks and developing a quality composite rank. Am J Med Qual. 2020;35(3):222-230. doi:10.1177/1062860619860250
2. How and why we rank and rate hospitals. US News & World Report. July 29, 2019. Accessed March 2, 2020. https://health.usnews.com/health-care/best-hospitals/articles/faq-how-and-why-we-rank-and-rate-hospitals
3. Harder B, Comarow A. Hospital quality reporting by US News & World Report: why, how, and what’s ahead. JAMA. 2015;313(19):1903-1904. doi:10.1001/jama.2015.4566
4. Stringhini S, Carmeli C, Jokela M, et al; LIFEPATH Consortium. Socioeconomic status and the 25×25 risk factors as determinants of premature mortality: a multicohort study and meta-analysis of 1·7 million men and women. Lancet. 2017;389(10075):1229-1237. doi:10.1016/S0140-6736(16)32380-7
5. Cunningham TJ, Croft JB, Liu Y, Lu H, Eke PI, Giles WH. Vital Signs: racial disparities in age-specific mortality among blacks or African Americans: United States, 1999-2015. MMWR Morb Mortal Wkly Rep. 2017;66(17):444-456. doi:10.15585/mmwr.mm6617e1
6. Rosenberg BL, Kellar JA, Labno A, et al. Quantifying geographic variation in health care outcomes in the United States before and after risk-adjustment. PLoS One. 2016;11(12):e0166762. doi:10.1371/journal.pone.0166762
7. Wadhera RK, Wadhera P, Gaba P, et al. Variation in COVID-19 hospitalizations and deaths across New York City boroughs. JAMA. Published online April 29, 2020. doi:10.1001/jama.2020.7197
8. National Center for Health Statistics. U.S. Small-Area Life Expectancy Estimates Project (USALEEP). 2018. Updated June 9, 2020. https://www.cdc.gov/nchs/nvss/usaleep/usaleep.html
9. Chetty R, Stepner M, Abraham S, et al. The association between income and life expectancy in the United States, 2001-2014. JAMA. 2016;315(16):1750-1766. doi:10.1001/jama.2016.4226
10. Arias E. United States life tables, 2017. Natl Vital Stat Rep. 2019;68(7):1-66.
    2 Comments for this article
    Ranking Versus Rating Hospital Quality
    Michael McAleer, PhD (Econometrics), Queen's | Asia University, Taiwan
    Any academic who has been associated with ranking or interpreting ranking systems of individual research output, academic standards of research journals, and associated departments, faculties, and universities, is well aware that measuring quality, efficacy, impact, importance, and influence is based on numerous arbitrary factors that are not necessarily widely accepted.

    The same argument applies to annual rankings of departments, faculties, colleges, business schools, and universities in an endeavour to attract prospective students.

    Whichever rankings or ratings methods are used, it is critical that any factors that are presented are simple, identifiable, convincing, transparent, balanced, informative, useful, easily interpretable, rigorous, measurable, accountable, reproducible and, most of all, accurate.

    As analyzed in the excellent Viewpoint, hospital rankings and ratings can reduce misinformation in the process of expanding and explaining information about public health care delivery outcomes.  

    It is recognized that there can be significant differences in geography, financial affordability, socioeconomic environment, and racial and ethnic composition, among other pervasive disparities, so any informative ranking or rating system would be more helpful if the health care boundaries were localized.

    Measuring health care quality and outcomes provided by hospitals is subject to numerous arguable and arbitrary factors, such that creating qualitative and quantitative tiers and group ratings might be preferable to numerical rankings that are statistically problematic in the absence of meaningful confidence intervals.
    Hospitals -- Unlike Major Universities -- Serve Local Populations
    Brad Davis
    @Michael McAleer you state that "The same argument applies to [hospitals as to] the annual rankings of departments, faculties, colleges, business schools, and universities . . . ."

    No. The entire point of this article is that local populations disproportionately affect the rankings of major hospitals, whereas major universities can be selective about their faculty and student populations. Hospitals primarily serve local populations, and so any "rankings" measure that does not account for local health demographics is misleading.