Figure. Survey Responses
Table 1. Characteristics of Respondent and Nonrespondent Hospitals
Table 2. Association of Response and Respondent Job
Table 3. Association Between Attitudes Toward Quality Measures and Hospital Performance
Table 4. Use of Quality Measures by Hospitals and Association With Hospital Performance
Original Investigation
December 2014

Attitudes of Hospital Leaders Toward Publicly Reported Measures of Health Care Quality

Author Affiliations
  • 1. Center for Quality of Care Research, Baystate Medical Center, Springfield, Massachusetts
  • 2. Division of General Medicine, Baystate Medical Center, Springfield, Massachusetts
  • 3. Tufts University School of Medicine, Boston, Massachusetts
  • 4. Section of General Internal Medicine and the Robert Wood Johnson Clinical Scholars Program, Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut
  • 5. Center for Outcomes Research and Evaluation, Yale–New Haven Hospital, New Haven, Connecticut
  • 6. School of Public Health and Health Sciences, University of Massachusetts, Amherst
  • 7. Department of Medicine, Medicine Institute, Cleveland Clinic, Cleveland, Ohio
JAMA Intern Med. 2014;174(12):1904-1911. doi:10.1001/jamainternmed.2014.5161
Abstract

Importance  Public reporting of quality is considered a key strategy for stimulating improvement efforts at US hospitals; however, little is known about the attitudes of hospital leaders toward existing quality measures.

Objectives  To describe US hospital leaders’ attitudes toward hospital quality measures found on the Centers for Medicare & Medicaid Services’ Hospital Compare website, assess use of these measures for quality improvement, and examine the association between leaders’ attitudes and hospital quality performance.

Design, Setting, and Participants  We mailed a 21-item questionnaire from January 1 through September 30, 2012, to senior hospital leaders from a stratified random sample of 630 US hospitals, including equal numbers with better-than-expected, as-expected, and worse-than-expected performance on mortality and readmission measures.

Main Outcomes and Measures  We assessed levels of agreement with statements concerning quality measures, examined use of measures for improvement activities, and analyzed the association between leaders’ attitudes and hospital performance.

Results  Of 630 hospitals surveyed, 380 (60.3%) responded. For each of the mortality, readmission, process, and patient experience measures, more than 70% of hospitals agreed with the statement that “public reporting stimulates quality improvement activity at my institution”; agreement for measures of cost and volume was 65.2% and 53.3%, respectively. A similar pattern was observed for the statement that “our hospital is able to influence performance on this measure”; agreement for processes of care and patient experience measures was 96.4% and 94.2%, respectively. A total of 89.7% of hospitals agreed that the hospital’s reputation was influenced by patient experience measures; agreement was 77.4% for mortality, 69.9% for readmission, 76.3% for process measures, 66.1% for cost measures, and 54.0% for volume measures. A total of 87.1% of hospitals reported incorporating performance on publicly reported measures into their hospital’s annual goals, whereas 90.2% reported regularly reviewing the results with the hospital’s board of trustees and 94.3% with senior clinical and administrative leaders. When compared with chief executive officers and chief medical officers, respondents who identified themselves as chief quality officers or vice presidents of quality were less likely to agree that public reporting stimulates quality improvement and that measured differences are large enough to differentiate among hospitals.

Conclusions and Relevance  Hospital leaders indicated that the measures reported on the Hospital Compare website exert strong influence over local planning and improvement efforts. However, they expressed concerns about the clinical meaningfulness, unintended consequences, and methods of public reporting.

Introduction

During the past decade, one of the principal strategies of the Centers for Medicare & Medicaid Services (CMS) for improving the outcomes of hospitalized patients has been to make information about health care quality more transparent through public reporting programs.1 Performance measures currently published on the CMS’s Hospital Compare website include those focused on processes of care (eg, percentage of patients hospitalized for acute myocardial infarction treated with β-blockers); care outcomes, such as condition-specific mortality and readmission rates; patients’ experience and satisfaction with care; and measures of hospitalization costs and case volumes.2 Since 2003, the CMS has steadily expanded the number of measures included in public reporting efforts, and many of these measures now serve as the basis for the value-based purchasing program legislated in the Patient Protection and Affordable Care Act.3,4

In addition to helping consumers make more informed choices about where to obtain care, one of the primary goals of public reporting is to stimulate improvement efforts by health care professionals.5-7 The extent to which hospital leaders view these data as valid and meaningful may influence the effectiveness of this strategy. We therefore sought to describe the attitudes of hospital leaders toward the measures of hospital quality reported on the CMS’s Hospital Compare website and to assess how these measures are being used for performance improvement. Because we hypothesized that more favorable attitudes toward publicly reported measures might reflect greater institutional commitment toward improvement, we also examined the association between the views of hospital senior leaders and hospital quality performance rankings.

Methods
Sample Identification

The study protocol was approved by the institutional review board at Baystate Medical Center. Written informed consent was obtained during the process of inviting participation in the survey. Using information from the Hospital Compare website, we categorized hospitals into 1 of 3 groups based on their 30-day risk-standardized mortality and readmission rates for pneumonia, heart failure, and acute myocardial infarction. The CMS uses hierarchical modeling to calculate rates for each hospital based on the ratio of predicted to expected outcomes multiplied by the national observed outcome rate. This approach conceptually allows for a comparison of a particular hospital’s performance given its case mix to an average hospital’s performance with the same case mix. Each hospital’s performance is then compared with that of other institutions across the nation. For the purposes of the study, hospitals were considered to be better than expected if they were identified by the CMS as “better than the US national rate” on at least one outcome measure and had no measures in which they were “worse than the US national rate.” Hospitals were correspondingly categorized as worse than expected if they were identified as having worse performance than the US national rate on at least one measure and no measures in which they were better than the US national rate. Hospitals that were neither better nor worse than the US national performance on any outcome measure were considered to be performing as expected. We excluded a small group of hospitals with mixed performance (ie, those better than the national rate for some measures and worse for others). We matched sampled hospitals to the 2009 American Hospital Association Survey data to obtain hospital characteristics, including size, teaching status, population served, and region.
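
To make the rate scaling and the categorization rule above concrete, here is a minimal sketch in Python; the function names and input coding are hypothetical, and the CMS's actual hierarchical models and data files are more involved.

```python
def risk_standardized_rate(predicted, expected, national_rate):
    """Risk-standardized rate as described above: the ratio of predicted to
    expected outcomes for the hospital, scaled by the national observed rate.
    (The hierarchical model producing 'predicted' and 'expected' is omitted.)"""
    return predicted / expected * national_rate

def categorize(comparisons):
    """Assign a performance stratum from a hospital's per-measure comparisons,
    each coded 'better', 'worse', or 'no_different' relative to the US
    national rate (a hypothetical coding, not the CMS file format)."""
    better = "better" in comparisons
    worse = "worse" in comparisons
    if better and worse:
        return "mixed"  # excluded from the study sample
    if better:
        return "better_than_expected"
    if worse:
        return "worse_than_expected"
    return "as_expected"
```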

Of 4459 hospitals in the Hospital Compare database, we excluded 624 (14.0%) because of missing data for one or both performance measures (resulting from low case volumes that did not meet the CMS threshold for reporting) and 136 (3.1%) that had mixed performance. Of the remaining 3699 hospitals, 471 (12.7%) were better than expected, 2644 (71.5%) were as expected, and 584 (15.8%) were worse than expected. We randomly selected 210 hospitals from each of the performance strata to reach 80% power to detect a 20% difference in the proportion responding strongly agree or agree among top and bottom performers with 95% confidence, allowing for multiple comparisons and a projected 60% response rate.
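
The power claim can be checked approximately with a two-proportion power calculation. This sketch assumes a two-sided test and a hypothetical 50% baseline agreement rate; only the stratum size of 210, the projected 60% response rate, and the 20-percentage-point difference come from the text, and the paper's own calculation additionally allowed for multiple comparisons.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

mailed_per_stratum = 210
projected_response_rate = 0.60
n_per_stratum = mailed_per_stratum * projected_response_rate  # ~126 respondents

# Cohen's h for a 20-percentage-point difference around an assumed 50% baseline.
effect = proportion_effectsize(0.50, 0.70)

power = NormalIndPower().power(effect_size=effect, nobs1=n_per_stratum,
                               alpha=0.05, alternative="two-sided")
print(f"approximate power: {power:.2f}")  # roughly 0.9 under these assumptions
```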

Survey Administration

We identified the names, addresses, and telephone numbers of the chief executive officer and the senior executive responsible for quality at the hospital (eg, chief quality officer, director of quality, or vice president of medical affairs) through telephone inquiries and web searches. Two weeks before mailing the survey, we sent a postcard alerting potential participants to the goals and timing of the study. After an initial mailing of the survey, we sent up to 3 reminders to hospitals that did not respond and made up to 3 attempts to contact the remaining nonrespondents by telephone. A $2 bill was included in the initial mailing as an incentive to participate. Survey administration was conducted from January 1 through September 30, 2012.

Survey Content

The survey consisted of 10 Likert-style questions that assessed level of agreement, on a 4-point scale (strongly disagree to strongly agree), with statements about the role, strengths, and limitations of 6 types of performance measures reported on the Hospital Compare website: processes of care, mortality, readmission, patient experience, cost, and volume. Questions addressed the following concepts: whether public reporting of the measures stimulates quality improvement, whether the hospital is able to influence performance on the measures, whether the hospital’s reputation is influenced by performance on the measures, whether the measures accurately reflect quality of care for the conditions being measured, and whether performance on the measures can be used to draw inferences about quality of care more generally at the hospital. In addition, we assessed levels of agreement with a number of common concerns raised about quality measures, including whether measured differences are clinically meaningful, whether efforts to maximize performance on the measures can result in neglect of other more important matters (ie, teaching to the test), whether hospitals may attempt to maximize their performance primarily by making changes to documentation and coding rather than improving clinical care (ie, gaming), whether the risk adjustment methods are adequate to account for differences in patient case mix, and whether random variation has a substantial likelihood of affecting the hospital’s ranking (eTable in the Supplement).8-12

Finally, we included 6 questions that focused on how quality measures were used at the respondent’s institution, including whether performance levels were incorporated into annual hospital goals and whether performance was regularly reviewed with a hospital’s board of trustees, senior administrative and clinical leaders, and frontline clinical staff. We also asked whether quality performance was used in the variable compensation or bonus program for senior hospital leaders and for hospital-based physicians.

Statistical Analysis

All analyses were performed using SAS statistical software, version 9.3 (SAS Institute Inc). We compared the characteristics of respondent and nonrespondent hospitals using the χ2 test to assess potential nonresponse bias. When a hospital returned more than 1 questionnaire, we retained the first response received. For survey responses, we constructed summary statistics weighted to account for sampling in each of the 3 performance strata, using PROC SURVEYFREQ in SAS statistical software.
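
As an illustration of the stratum weighting (a rough analogue of a PROC SURVEYFREQ one-way table, without its variance estimation), here is a small Python sketch; the column names and toy data are hypothetical, while the stratum counts come from the sampling description above.

```python
import pandas as pd

# Hospitals in the sampling frame per stratum; 210 were sampled from each.
stratum_sizes = {"better": 471, "as_expected": 2644, "worse": 584}
sampled_per_stratum = 210

def weighted_agree_pct(df):
    """Stratum-weighted percentage agreeing. df needs columns 'stratum' and
    'agree' (1 = strongly agree or agree); each response is weighted by the
    number of frame hospitals it represents."""
    weights = df["stratum"].map(stratum_sizes) / sampled_per_stratum
    return 100 * (df["agree"] * weights).sum() / weights.sum()

# Toy demonstration with three hypothetical responses.
demo = pd.DataFrame({"stratum": ["better", "as_expected", "worse"],
                     "agree": [1, 1, 0]})
print(f"{weighted_agree_pct(demo):.1f}% agree (weighted)")
```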

We investigated the potential association between survey responses and respondent job title (eg, chief executive officer) using logistic regression (PROC SURVEYLOGISTIC in SAS statistical software), grouping responses as strongly agree or agree vs disagree or strongly disagree. For this analysis, we selected 4 items that we thought captured overall attitudes: whether the measures stimulated quality improvement, whether the hospital could influence performance, and the items that address clinical meaningfulness and gaming. We included the following hospital characteristics in the model: number of beds, teaching status, urban or rural location, and geographic region. To investigate the potential association between hospital performance (as measured by risk-standardized mortality and readmission rates) and the views of hospital leaders about those measures, we modeled responses across the 3 performance groups using logistic regression. We performed a similar analysis for questions focused on the use of the performance measures at the respondent’s institution. These analyses were adjusted for hospital characteristics and respondent job title. Bonferroni adjustment was made for all pairwise tests among the performance strata. P < .05 was considered significant.
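
A sketch of the job-title model in Python rather than SAS follows: an ordinary logistic regression of the dichotomized response on job title, adjusted for the hospital characteristics named above. PROC SURVEYLOGISTIC additionally accounts for the sampling design, and the column names and synthetic data here are hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the respondent-level data (hypothetical columns).
rng = np.random.default_rng(0)
n = 380
survey_df = pd.DataFrame({
    "agree": rng.integers(0, 2, n),  # 1 = strongly agree or agree
    "job_title": rng.choice(["CEO", "CMO", "CQO"], n),
    "beds": rng.integers(50, 800, n),
    "teaching": rng.choice(["yes", "no"], n),
    "urban": rng.choice(["urban", "rural"], n),
    "region": rng.choice(["NE", "MW", "S", "W"], n),
})

model = smf.logit(
    "agree ~ C(job_title) + beds + C(teaching) + C(urban) + C(region)",
    data=survey_df,
).fit(disp=False)
print(model.summary())

# For the pairwise comparisons among the 3 performance strata, a Bonferroni
# threshold divides alpha by the number of tests (3 pairs): 0.05 / 3.
```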

Results

Of the 630 hospitals surveyed, 380 (60.3%) responded (Table 1). Respondent hospitals were similar to nonrespondent hospitals with regard to size, teaching status, urban or rural setting, and quality performance. Hospitals in the Northeast were slightly more likely to respond. The individual completing the questionnaire was most often the chief medical officer or equivalent (eg, vice president of medical affairs or chief of staff; 40.5%), the chief executive officer (30.3%), or the chief quality officer or equivalent (eg, vice president of quality or director of quality; 20.3%).

Attitudes Toward Existing Quality Measures

Responses to the attitude questions suggest that public reporting has captured the attention of hospital leaders. For each of the mortality, readmission, process, and patient experience measures, more than 70% of hospitals agreed with the statement that “public reporting stimulates quality improvement activity at my institution”; agreement for measures of cost and volume was 65.2% and 53.3%, respectively (Figure and eTable in the Supplement). A similar pattern was observed for the statement that “our hospital is able to influence performance on this measure”; agreement for processes of care and patient experience measures was 96.4% and 94.2%, respectively. A total of 89.7% of hospitals agreed that the hospital’s reputation was influenced by patient experience measures; agreement was 77.4% for mortality, 69.9% for readmission, 76.3% for process measures, 66.1% for cost measures, and 54.0% for volume measures.

Respondents expressed concerns about the clinical meaningfulness, unintended consequences, and methods of quality measures (Figure and eTable in the Supplement). Although 73.8% of respondents agreed with the statement that process and patient experience measures provided an accurate reflection of quality of care for the conditions measured, this number decreased to 48.5% for measures of mortality and 49.9% for readmission and was lower still for measures of cost and volume. A similar pattern was observed when we asked whether measured performance could be used to draw inferences about quality of care in general, with higher agreement for measures of process and patient experience. In addition, less than 50% of respondents agreed with the statement that measured differences among hospitals were clinically meaningful for mortality, readmission, cost, and volume measures. Depending on the measure type, 45.7% to 58.6% of hospital leaders expressed concern that focus on the publicly reported quality measures might lead to neglect of other more important topics, and there were similar levels of concern (ranging from 32.0% to 57.6%) that hospitals might try to game the system by focusing their efforts primarily on changing documentation and coding rather than by making actual improvements in clinical care. Concern about the potential role of random variation affecting measured performance ranged from 45.5% for measures of cost to 67.4% for readmission measures.

Association Between Respondent Role and Attitudes

When compared with chief executive officers and chief medical officers, respondents who identified themselves as chief quality officers or vice presidents of quality were less likely to agree that public reporting stimulates quality improvement and that measured differences are large enough to differentiate among hospitals. Chief quality officers were also the group most concerned about the possibility that public reporting might lead to gaming through changes in documentation (Table 2).

Association Between Hospital Attitudes and Performance

We observed few differences in attitudes toward mortality and readmission measures associated with hospital performance on these measures (Table 3). Hospitals categorized as having better-than-expected performance were more likely to agree that differences in mortality rates were large enough to meaningfully differentiate among hospitals but had similar views about whether the mortality measures stimulate improvement activity, the hospital’s ability to influence performance, and concerns about gaming. A similar pattern was seen with regard to views about the readmission measures, although hospitals with better-than-expected performance were also somewhat less likely to express concern about gaming.

Use of Quality Measures by Hospitals

A total of 87.1% of hospitals reported incorporating performance on publicly reported measures into their hospital’s annual goals, whereas 90.2% reported regularly reviewing the results with the hospital’s board of trustees and 94.3% with senior clinical and administrative leaders (Table 4). Approximately 3 of 4 hospitals (78.1%) stated that they regularly review results with frontline clinical staff. Half (51.3%) of hospitals reported that performance on measures was used in the variable compensation programs of senior hospital leaders, whereas roughly one-third (30.1%) used these measures in the variable compensation plan for hospital-based physicians.

Association Between Use of Measures and Hospital Performance

With 2 exceptions, we observed no differences in the use of quality measures by hospitals across the 3 levels of performance (Table 4). Hospitals with better-than-expected performance and those with worse-than-expected performance were somewhat more likely to report incorporating performance on publicly reported quality measures into the hospital’s annual goals compared with hospitals whose performance was as expected (94.0%, 92.1%, and 84.9%, respectively; P = .004). In addition, hospitals with better-than-expected performance were more likely to incorporate performance on quality measures into the variable compensation plan of hospital-based physicians than hospitals with as-expected or worse-than-expected performance (44.8%, 27.7%, and 26.2%, respectively; P = .002).

Discussion

In this study of senior leaders from a diverse sample of 380 US hospitals, we found high levels of engagement with the quality measures currently made available to the public on the CMS’s Hospital Compare website. There was a strong belief that measures of care processes, patient experience, mortality, and readmission stimulate quality improvement efforts, a sense of empowerment that hospitals are capable of bettering their performance, and an understanding that the public is paying attention. We also found that these measures are nearly universally reviewed with a hospital’s board and senior administrative and clinical leaders and are commonly shared with frontline staff. Nevertheless, there were important concerns about the adequacy of risk adjustment and unintended consequences of public reporting, including neglect of other clinically important areas (teaching to the test) and improving performance primarily through changes in documentation and coding (gaming). Equally troubling, roughly one-half of the leaders did not believe that measures accurately portrayed the quality of care for the conditions they addressed or could be used to draw inferences about quality at the hospital more generally, and more than one-half reported that the measures were not meaningful for differentiating among hospitals. Respondents from hospitals categorized as having better-than-expected performance on mortality and readmission measures were somewhat more likely to believe that the differences observed in mortality and readmission rates across institutions were clinically meaningful.

Our results are largely consistent with several other studies that examined attitudes of hospital leaders toward quality measures, which also demonstrate high engagement,13,14 skepticism about methods,13 and some association between attitudes and quality performance.14,15 It is also interesting to compare the results of our study with one conducted almost a quarter of a century ago, when public reporting was in its infancy. In 1987, the Health Care Financing Administration first disclosed risk-adjusted mortality rates to the public after a Freedom of Information Act request by journalists.16-18 Shortly thereafter, Berwick and Wald19 surveyed hospital leaders from a sample of 195 institutions, including those with high, low, and average mortality rates, to assess their attitudes toward the mortality measure, their use of the data, and problems incurred by release to the public. They found limited support for transparency about overall hospital mortality rates. Few respondents believed the data to be valuable to the public, and only 31% believed that they were useful in guiding efforts to study or improve quality. In contrast, more than 70% of hospitals in the present study agreed that mortality measures are effective at stimulating improvement efforts.

It is perhaps not surprising that engagement with publicly reported quality measures has increased in the last quarter century. In the wake of multiple Institute of Medicine reports on the quality and safety of health care, the emergence of organizations such as the Institute for Healthcare Improvement and the National Patient Safety Foundation, and the growth of national initiatives such as the 100 000 Lives and Surviving Sepsis Campaigns, the environment in which quality measurement is being performed today would hardly be recognizable to the senior hospital leaders surveyed in the late 1980s.20-23 The fields of quality improvement and patient safety now routinely warrant their own vice presidents, positions that were probably unimaginable to hospital leaders then. In addition, a number of advances in the science of quality measurement have been made since the late 1980s, including the emergence of process and patient experience measures and improvements in methods for risk adjustment.24 Furthermore, along with many other organizations, the CMS now relies on the National Quality Forum to vet proposed quality measures. This process evaluates concerns raised by stakeholders about issues such as clinical meaningfulness and risk adjustment and includes input from professional societies, payers, and hospital organizations. In addition, pay-for-performance programs now provide powerful incentives to pay attention to publicly reported measures. Our study confirms that hospitals are indeed paying attention; it also documents persistent concerns about the methods used to measure performance and about the unintended consequences of these programs. Indeed, concerns about the measures’ accuracy in representing quality of care and the adequacy of risk adjustment largely echo those reported by Berwick and Wald19 almost a quarter of a century ago.

Such concerns notwithstanding, public reporting programs show no sign of going away, and the number of measures continues to expand. Most of the recent growth has centered on the development of outcome measures. In this context, our study findings are notable in that responses were generally much more favorable toward process and patient experience measures than toward measures of outcomes, cost, and volume. We suspect that this variation in part reflects the reality that processes are more directly and readily controlled than outcomes.

Our study has several strengths. We included a large and diverse set of US hospitals and elicited a detailed view of attitudes toward and uses of quality measures. Although prior work has examined the role of hospital boards or reported on the views of frontline staff and middle management, we focused on the senior leaders responsible for overseeing quality improvement work. We also illuminated important differences across measure types.

Our findings should be interpreted in light of several limitations. First, we achieved a less-than-ideal response rate of 60.3%. However, our analysis of nonresponders suggests that our respondent sample was not biased because observable hospital characteristics, including quality performance, were similar in both groups. Second, 14% of identified hospitals were excluded because case volumes did not meet thresholds for CMS reporting; therefore, our findings do not reflect these smaller hospitals. In addition, the opinions expressed by respondents to the survey may not represent the views of other clinical or administrative leaders, let alone the views of frontline clinical staff. The analysis of potential associations between responses and performance level also has limitations. First and most important, because this was a cross-sectional study, we cannot be sure whether the more sanguine attitudes expressed toward quality measures by the senior leaders at better-performing institutions were the cause or result of their performance designation. We suspect that both explanations may be partially true; hospitals that are more invested in quality measurement and improvement are also more apt to be successful at it. At the same time, recognition for superior performance may more generally have positive effects on one’s attitude toward quality measures. Second, we categorized hospitals on the basis of their performance on mortality and readmission measures, and it is possible that the associations we observed between attitudes toward quality measures and hospital performance might have been different had we used other measures for this purpose.

Conclusions

Quality measurement and reporting has taken center stage in US health care policy and in the evaluation and reimbursement of hospitals. Our study indicates that quality measures reported on the CMS’s Hospital Compare website play a major role in hospital planning and improvement efforts. However, important concerns about the clinical meaningfulness, unintended consequences, and methods of measurement programs are common.

Article Information

Accepted for Publication: June 1, 2014.

Corresponding Author: Peter K. Lindenauer, MD, MSc, Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St, Third Floor, Springfield, MA 01199 (peter.lindenauer@baystatehealth.org).

Published Online: October 6, 2014. doi:10.1001/jamainternmed.2014.5161.

Author Contributions: Dr Lindenauer had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Lindenauer, Lagu, Ross, Hannon, Rothberg, Benjamin.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Lindenauer.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Pekow.

Administrative, technical, or material support: Lindenauer, Lagu, Shatz, Hannon, Benjamin.

Study supervision: Lindenauer, Lagu, Pekow.

Conflict of Interest Disclosures: Dr Lagu reported receiving an honorarium from the Institute for Healthcare Improvement for her input on a project to help health systems achieve disability competence. Drs Lindenauer and Ross reported receiving support from the Centers for Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting. Dr Ross is a member of a scientific advisory board for FAIR Health Inc. No other disclosures were reported.

Funding/Support: Dr Lagu reported receiving support from award K01HL114745 from the National Heart, Lung, and Blood Institute of the National Institutes of Health. Dr Ross reported receiving support from grant K08 AG032886 from the National Institute on Aging and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program.

Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: Maureen Bisognano, MS, president and chief executive officer of Institute for Healthcare Improvement, cosigned the survey invitation.

References
1. Goodrich K, Garcia E, Conway PH. A history of and a vision for CMS quality measurement programs. Jt Comm J Qual Patient Saf. 2012;38(10):465-470.
2. Centers for Medicare & Medicaid Services. Measures displayed on Hospital Compare. http://www.medicare.gov/hospitalcompare/Data/Measures-Displayed.html?AspxAutoDetectCookieSupport=1. Accessed March 21, 2014.
3. Conway PH, Mostashari F, Clancy C. The future of quality measurement for improvement and accountability. JAMA. 2013;309(21):2215-2216.
4. Blumenthal D, Jena AB. Hospital value-based purchasing. J Hosp Med. 2013;8(5):271-277.
5. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41(1)(suppl):I30-I38.
6. Fung CH, Lim Y-W, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111-123.
7. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? a review of the evidence. JAMA. 2000;283(14):1866-1874.
8. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239-1244.
9. Bhalla R, Kalkut G. Could Medicare readmission policy exacerbate health care system inequity? Ann Intern Med. 2010;152(2):114-117.
10. Farmer SA, Black B, Bonow RO. Tension between quality measurement, public quality reporting, and pay for performance. JAMA. 2013;309(4):349-350.
11. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004;292(7):847-851.
12. Thomas JW, Hofer TP. Accuracy of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care. 1999;37(1):83-92.
13. Hafner JM, Williams SC, Koss RG, Tschurtz BA, Schmaltz SP, Loeb JM. The perceived impact of public reporting hospital performance data. Int J Qual Health Care. 2011;23(6):697-704.
14. Jha A, Epstein A. Hospital governance and the quality of care. Health Aff (Millwood). 2010;29(1):182-187.
15. Vaughn T, Koepke M, Kroch E, et al. Engagement of leadership in quality improvement initiatives. J Patient Saf. 2006;2(1):2-9.
16. Coles J. Public disclosure of health care performance reports: a response from the UK. Int J Qual Health Care. 1999;11(2):104-105.
17. Dubois RW, Rogers WH, Moxley JH III, Draper D, Brook RH. Hospital inpatient mortality. N Engl J Med. 1987;317(26):1674-1680.
18. Krakauer H, Bailey RC, Skellan KJ, et al. Evaluation of the HCFA model for the analysis of mortality following hospitalization. Health Serv Res. 1992;27(3):317-335.
19. Berwick DM, Wald DL. Hospital leaders’ opinions of the HCFA mortality data. JAMA. 1990;263(2):247-249.
20. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: The National Academies Press; 2000.
21. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: The National Academies Press; 2001.
22. Wachter RM, Pronovost PJ. The 100,000 Lives Campaign: a scientific and policy review. Jt Comm J Qual Patient Saf. 2006;32(11):621-627.
23. Dellinger RP, Levy MM, Carlet JM, et al; International Surviving Sepsis Campaign Guidelines Committee; American Association of Critical-Care Nurses; American College of Chest Physicians; American College of Emergency Physicians; Canadian Critical Care Society; European Society of Clinical Microbiology and Infectious Diseases; European Society of Intensive Care Medicine; European Respiratory Society; International Sepsis Forum; Japanese Association for Acute Medicine; Japanese Society of Intensive Care Medicine; Society of Critical Care Medicine; Society of Hospital Medicine; Surgical Infection Society; World Federation of Societies of Intensive and Critical Care Medicine. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med. 2008;36(1):296-327.
24. Krumholz HM, Brindis RG, Brush JE, et al; American Heart Association; Quality of Care and Outcomes Research Interdisciplinary Writing Group; Council on Epidemiology and Prevention; Stroke Council; American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association scientific statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Circulation. 2006;113(3):456-462.