eTable. Agreement of Hospital Senior Leaders With Statements Concerning Publicly Reported Quality Measures
Lindenauer PK, Lagu T, Ross JS, Pekow PS, Shatz A, Hannon N, Rothberg MB, Benjamin EM. Attitudes of Hospital Leaders Toward Publicly Reported Measures of Health Care Quality. JAMA Intern Med. 2014;174(12):1904-1911. doi:10.1001/jamainternmed.2014.5161
Importance
Public reporting of quality is considered a key strategy for stimulating improvement efforts at US hospitals; however, little is known about the attitudes of hospital leaders toward existing quality measures.
Objective
To describe US hospital leaders’ attitudes toward hospital quality measures found on the Centers for Medicare & Medicaid Services’ Hospital Compare website, assess use of these measures for quality improvement, and examine the association between leaders’ attitudes and hospital quality performance.
Design, Setting, and Participants
We mailed a 21-item questionnaire from January 1 through September 30, 2012, to senior hospital leaders from a stratified random sample of 630 US hospitals, including equal numbers with better-than-expected, as-expected, and worse-than-expected performance on mortality and readmission measures.
Main Outcomes and Measures
We assessed levels of agreement with statements concerning quality measures, examined use of measures for improvement activities, and analyzed the association between leaders’ attitudes and hospital performance.
Results
Of 630 hospitals surveyed, 380 (60.3%) responded. For each of the mortality, readmission, process, and patient experience measures, more than 70% of hospitals agreed with the statement that “public reporting stimulates quality improvement activity at my institution”; agreement for measures of cost and volume was 65.2% and 53.3%, respectively. A similar pattern was observed for the statement that “our hospital is able to influence performance on this measure”; agreement for processes of care and patient experience measures was 96.4% and 94.2%, respectively. A total of 89.7% of hospitals agreed that the hospital’s reputation was influenced by patient experience measures; agreement was 77.4% for mortality, 69.9% for readmission, 76.3% for process measures, 66.1% for cost measures, and 54.0% for volume measures. A total of 87.1% of hospitals reported incorporating performance on publicly reported measures into their hospital’s annual goals, whereas 90.2% reported regularly reviewing the results with the hospital’s board of trustees and 94.3% with senior clinical and administrative leaders. When compared with chief executive officers and chief medical officers, respondents who identified themselves as chief quality officers or vice presidents of quality were less likely to agree that public reporting stimulates quality improvement and that measured differences are large enough to differentiate among hospitals.
Conclusions and Relevance
Hospital leaders indicated that the measures reported on the Hospital Compare website exert strong influence over local planning and improvement efforts. However, they expressed concerns about the clinical meaningfulness, unintended consequences, and methods of public reporting.
During the past decade, one of the principal strategies of the Centers for Medicare & Medicaid Services (CMS) for improving the outcomes of hospitalized patients has been to make information about health care quality more transparent through public reporting programs.1 Performance measures currently published on the CMS’s Hospital Compare website include those focused on processes of care (eg, percentage of patients hospitalized for acute myocardial infarction treated with β-blockers); care outcomes, such as condition-specific mortality and readmission rates; patients’ experience and satisfaction with care; and measures of hospitalization costs and case volumes.2 Since 2003, the CMS has steadily expanded the number of measures included in public reporting efforts, and many of these measures now serve as the basis for the value-based purchasing program legislated in the Patient Protection and Affordable Care Act.3,4
In addition to helping consumers make more informed choices about where to obtain care, one of the primary goals of public reporting is to stimulate improvement efforts by health care professionals.5-7 The extent to which hospital leaders view these data as valid and meaningful may influence the effectiveness of this strategy. We therefore sought to describe the attitudes of hospital leaders toward the measures of hospital quality reported on the CMS’s Hospital Compare website and to assess how these measures are being used for performance improvement. Because we hypothesized that more favorable attitudes toward publicly reported measures might reflect greater institutional commitment toward improvement, we also examined the association between the views of hospital senior leaders and hospital quality performance rankings.
The study protocol was approved by the institutional review board at Baystate Medical Center. Written informed consent was obtained during the process of inviting participation in the survey. Using information from the Hospital Compare website, we categorized hospitals into 1 of 3 groups based on their 30-day risk-standardized mortality and readmission rates for pneumonia, heart failure, and acute myocardial infarction. The CMS uses hierarchical modeling to calculate rates for each hospital based on the ratio of predicted to expected outcomes multiplied by the national observed outcome rate. This approach conceptually allows for a comparison of a particular hospital’s performance given its case mix to an average hospital’s performance with the same case mix. Hospital performance is then compared relative to other institutions across the nation. For the purposes of the study, hospitals were considered to be better than expected if they were identified by the CMS as “better than the US national rate” on at least one outcome measure and had no measures in which they were “worse than the US national rate.” Hospitals were correspondingly categorized as worse than expected if they were identified as having worse performance than the US national rate on at least one measure and no measures in which they were better than the US national rate. Hospitals that were neither better nor worse than the US national performance on any outcome measure were considered to be performing as expected. We excluded a small group of hospitals with mixed performance (ie, those better than the national rate for some measures and worse for others). We matched sampled hospitals to the 2009 American Hospital Association Survey data to obtain hospital characteristics, including size, teaching status, population served, and region.
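As a rough illustration (not the CMS production algorithm), the risk-standardization formula and the study's grouping rule described above can be sketched as follows; the function names, variable names, and example numbers are ours, chosen only to make the arithmetic concrete:

```python
# Illustrative sketch of the risk-standardization and performance-grouping
# rules described in the text; not CMS code.

def risk_standardized_rate(predicted, expected, national_rate):
    """Risk-standardized rate = (predicted / expected) * national observed rate.

    `predicted` and `expected` are event counts from the CMS hierarchical
    model: predicted events for this hospital given its case mix, and
    expected events for an average hospital with the same case mix.
    """
    return (predicted / expected) * national_rate

def classify_performance(flags):
    """Apply the study's grouping rule to a hospital's per-measure CMS flags
    ('better', 'worse', or 'no different' vs the US national rate)."""
    has_better = "better" in flags
    has_worse = "worse" in flags
    if has_better and has_worse:
        return "mixed (excluded)"   # mixed performers were dropped
    if has_better:
        return "better than expected"
    if has_worse:
        return "worse than expected"
    return "as expected"

# Hypothetical example: 40 predicted vs 50 expected events against a 12%
# national observed rate yields a risk-standardized rate of about 9.6%.
rsr = risk_standardized_rate(predicted=40, expected=50, national_rate=0.12)
group = classify_performance(["better", "no different", "no different"])
```

Note that the grouping depends only on the CMS better/worse flags, not on the magnitude of the rates themselves, which is why hospitals with mixed flags had to be excluded rather than averaged.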
Of 4459 hospitals in the Hospital Compare database, we excluded 624 (14.0%) because of missing data for one or both performance measures (resulting from low case volumes that did not meet the CMS threshold for reporting) and 136 (3.1%) that had mixed performance. Of the remaining 3699 hospitals, 471 (12.7%) were better than expected, 2644 (71.5%) were as expected, and 584 (15.8%) were worse than expected. We randomly selected 210 hospitals from each of the performance strata to reach 80% power to detect a 20% difference in the proportion responding strongly agree or agree among top and bottom performers with 95% confidence, allowing for multiple comparisons and a projected 60% response rate.
We identified the names, addresses, and telephone numbers of the chief executive officer and the senior executive responsible for quality at the hospital (eg, chief quality officer, director of quality, or vice president of medical affairs) through telephone inquiries and web searches. Two weeks before mailing the survey, we sent a postcard alerting potential participants of the goals and timing of the study. After an initial mailing of the survey, we sent up to 3 reminders to hospitals that did not respond and made up to 3 attempts to contact the remaining nonrespondents by telephone. A $2 bill was included in the initial mailing as an incentive to participate. Survey administration was conducted from January 1 through September 30, 2012.
The survey consisted of 10 Likert-style questions that assessed level of agreement on a 4-point scale (strongly disagree to strongly agree), with statements about the role, strengths, and limitations of 6 types of performance measures reported on the Hospital Compare website: processes of care, mortality, readmission, patient experience, cost, and volume. Questions addressed the following concepts: whether public reporting of the measures stimulates quality improvement, whether the hospital is able to influence performance on the measures, whether the hospital’s reputation is influenced by performance on the measures, whether the measures accurately reflect quality of care for the conditions being measured, and whether performance on the measures can be used to draw inferences about quality of care more generally at the hospital. In addition, we assessed levels of agreement with a number of common concerns raised about quality measures, including whether measured differences are clinically meaningful, whether efforts to maximize performance on the measures can result in neglect of other more important matters (ie, teaching to the test), whether hospitals may attempt to maximize their performance primarily by making changes to documentation and coding rather than improving clinical care (ie, gaming), whether the risk adjustment methods are adequate to account for differences in patient case mix, and whether random variation has a substantial likelihood of affecting the hospital’s ranking (eTable in the Supplement).8-12
Finally, we included 6 questions that focused on how quality measures were used at the respondent’s institution, including whether performance levels were incorporated into annual hospital goals and whether performance was regularly reviewed with a hospital’s board of trustees, senior administrative and clinical leaders, and frontline clinical staff. We also asked whether quality performance was used in the variable compensation or bonus program for senior hospital leaders and for hospital-based physicians.
All analyses were performed using SAS statistical software, version 9.3 (SAS Institute Inc). We compared the characteristics of respondent and nonrespondent hospitals to ascertain potential nonresponse bias via the χ2 test. In those instances in which a hospital returned more than 1 questionnaire, we selected the first response received. For survey responses, we constructed summary statistics weighted to account for sampling in each of the 3 performance strata, using PROC SURVEYFREQ in SAS statistical software.
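Because the design oversampled the smaller better- and worse-than-expected strata, unweighted proportions would overstate those groups. A minimal sketch of the stratum weighting that PROC SURVEYFREQ performs, assuming simple weights of population count over sampled count and using the frame sizes reported above (the response data here are invented for illustration):

```python
# Stratum sampling weights: population count / sampled count per stratum.
# Frame sizes come from the sampling description (471 better than expected,
# 2644 as expected, 584 worse than expected; 210 sampled from each).
strata = {
    "better": (471, 210),
    "as_expected": (2644, 210),
    "worse": (584, 210),
}
weights = {name: pop / sampled for name, (pop, sampled) in strata.items()}

def weighted_agreement(responses):
    """Weighted proportion of respondents agreeing.

    `responses` is a list of (stratum, agreed) pairs, one per responding
    hospital; each response counts in proportion to its stratum weight.
    """
    total = sum(weights[stratum] for stratum, _ in responses)
    agreed = sum(weights[stratum] for stratum, agree in responses if agree)
    return agreed / total

# Hypothetical usage: one respondent from each stratum, two agreeing.
share = weighted_agreement([("better", True), ("as_expected", False), ("worse", True)])
```

The effect of the weights is that an as-expected respondent, standing in for roughly 12.6 hospitals, counts for far more than a better-than-expected respondent standing in for about 2.2.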
We investigated the potential association between survey responses and respondent job title (eg, chief executive officer) using logistic regression (PROC SURVEYLOGISTIC in SAS statistical software), grouping responses as strongly agree or agree vs disagree or strongly disagree. For this analysis, we selected 4 items that we thought captured overall attitudes: whether the measures stimulated quality improvement, whether the hospital could influence performance, and the items that address clinical meaningfulness and gaming. We included the following hospital characteristics in the model: number of beds, teaching status, urban or rural location, and geographic region. To investigate the potential association between hospital performance (as measured by risk-standardized mortality and readmission rates) and the views of hospital leaders about those measures, we modeled responses across the 3 performance groups using logistic regression. We performed a similar analysis for questions focused on the use of the performance measures at the respondent’s institution. These analyses were adjusted for hospital characteristics and respondent job title. Bonferroni adjustment was made for all pairwise tests among the performance strata. P < .05 was considered significant.
Of the 630 hospitals surveyed, 380 (60.3%) responded (Table 1). Respondent hospitals were similar to nonrespondent hospitals with regard to size, teaching status, urban or rural setting, and quality performance. Hospitals in the Northeast were slightly more likely to respond. The individual completing the questionnaire was most often the chief medical officer or equivalent (eg, vice president of medical affairs or chief of staff; 40.5%), chief executive officer (30.3%), or the chief quality officer or equivalent (eg, vice president of quality or director of quality; 20.3%).
Responses to the attitude questions suggest that public reporting has captured the attention of hospital leaders. For each of the mortality, readmission, process, and patient experience measures, more than 70% of hospitals agreed with the statement that “public reporting stimulates quality improvement activity at my institution”; agreement for measures of cost and volume was 65.2% and 53.3%, respectively (Figure and eTable in the Supplement). A similar pattern was observed for the statement that “our hospital is able to influence performance on this measure”; agreement for processes of care and patient experience measures was 96.4% and 94.2%, respectively. A total of 89.7% of hospitals agreed that the hospital’s reputation was influenced by patient experience measures; agreement was 77.4% for mortality, 69.9% for readmission, 76.3% for process measures, 66.1% for cost measures, and 54.0% for volume measures.
Respondents expressed concerns about the clinical meaningfulness, unintended consequences, and methods of quality measures (Figure and eTable in the Supplement). Although 73.8% of respondents agreed with the statement that process and patient experience measures provided an accurate reflection of quality of care for the conditions measured, this number decreased to 48.5% for measures of mortality, 49.9% for readmission, and lower still for measures of cost and volume. A similar pattern was observed when we asked whether measured performance could be used to draw inferences about quality of care in general, with higher agreement for measures of process and patient experience. In addition, less than 50% of respondents agreed with the statement that measured differences among hospitals were clinically meaningful for mortality, readmission, cost, and volume measures. Depending on the measure, 45.7% to 58.6% of hospital leaders expressed concern that focus on the publicly reported quality measures might lead to neglect of other more important topics, and there were similar levels of concern (ranging from 32.0% to 57.6%) that hospitals might try to game the system by focusing their efforts primarily on changing documentation and coding rather than by making actual improvements in clinical care. Concern about the potential role of random variation affecting measured performance ranged from 45.5% for measures of cost to 67.4% for readmission measures.
When compared with chief executive officers and chief medical officers, respondents who identified themselves as chief quality officers or vice presidents of quality were less likely to agree that public reporting stimulates quality improvement and that measured differences are large enough to differentiate among hospitals. Chief quality officers were also the group most concerned about the possibility that public reporting might lead to gaming through changes in documentation (Table 2).
We observed few differences in attitudes toward mortality and readmission measures associated with hospital performance on these measures (Table 3). Hospitals categorized as having better-than-expected performance were more likely to agree that differences in mortality rates were large enough to meaningfully differentiate among hospitals but had similar views about whether the mortality measures stimulate improvement activity, the hospital’s ability to influence performance, and concerns about gaming. A similar pattern was seen with regard to views about the readmission measures, although hospitals with better-than-expected performance were also somewhat less likely to express concern about gaming.
A total of 87.1% of hospitals reported incorporating performance on publicly reported measures into their hospital’s annual goals, whereas 90.2% reported regularly reviewing the results with the hospital’s board of trustees and 94.3% with senior clinical and administrative leaders (Table 4). Approximately 3 of 4 hospitals (78.1%) stated that they regularly review results with frontline clinical staff. Half (51.3%) of hospitals reported that performance on measures was used in the variable compensation programs of senior hospital leaders, whereas roughly one-third (30.1%) used these measures in the variable compensation plan for hospital-based physicians.
With 2 exceptions, we observed no differences in the use of quality measures by hospitals across the 3 levels of performance (Table 4). Hospitals with better-than-expected performance and those with worse-than-expected performance were somewhat more likely to report incorporating performance on publicly reported quality measures into the hospital’s annual goals compared with hospitals whose performance was as expected (94.0%, 92.1%, and 84.9%, respectively; P = .004). In addition, hospitals with better-than-expected performance were more likely to incorporate performance on quality measures into the variable compensation plan of hospital-based physicians than hospitals with as-expected or worse-than-expected performance (44.8%, 27.7%, and 26.2%, respectively; P = .002).
In this study of senior leaders from a diverse sample of 380 US hospitals, we found high levels of engagement with the quality measures currently made available to the public on the CMS’s Hospital Compare website. There was a strong belief that measures of care processes, patient experience, mortality, and readmission stimulate quality improvement efforts, a sense of empowerment that hospitals are capable of bettering their performance, and an understanding that the public is paying attention. We also found that these measures are nearly universally reviewed with a hospital’s board and senior administrative and clinical leaders and are commonly shared with frontline staff. Nevertheless, there were important concerns about the adequacy of risk adjustment and unintended consequences of public reporting, including neglect of other clinically important areas (teaching to the test) and improving performance primarily through changes in documentation and coding (gaming). Equally troubling, roughly one-half of the leaders did not believe that measures accurately portrayed the quality of care for the conditions they addressed or could be used to draw inferences about quality at the hospital more generally, and more than one-half reported that the measures were not meaningful for differentiating among hospitals. Respondents from hospitals categorized as having better-than-expected performance on mortality and readmission measures were somewhat more likely to believe that the differences observed in mortality and readmission rates across institutions were clinically meaningful.
Our results are largely consistent with several other studies that examined attitudes of hospital leaders toward quality measures, which also demonstrate high engagement,13,14 skepticism about methods,13 and some association between attitudes and quality performance.14,15 It is also interesting to compare the results of our study with one conducted almost a quarter of a century ago, when public reporting was in its infancy. In 1987, the Health Care Financing Administration first disclosed risk-adjusted mortality rates to the public after a Freedom of Information Act request by journalists.16-18 Shortly thereafter, Berwick and Wald19 surveyed hospital leaders from a sample of 195 institutions, including those with high, low, and average mortality rates, to assess their attitudes toward the mortality measure, their use of the data, and problems incurred by release to the public. They found limited support for transparency about overall hospital mortality rates. Few respondents believed the data to be valuable to the public, and only 31% believed that they were useful in guiding efforts to study or improve quality. In contrast, more than 70% of hospitals in the present study agreed that mortality measures are effective at stimulating improvement efforts.
It is perhaps not surprising that engagement with publicly reported quality measures has increased in the last quarter century. In the wake of multiple Institute of Medicine reports on the quality and safety of health care, the emergence of organizations such as the Institute for Healthcare Improvement and the National Patient Safety Foundation, and the growth of national initiatives such as the 100 000 Lives and Surviving Sepsis Campaigns, the environment in which quality measurement is being performed today would be hardly recognizable to the senior hospital leaders surveyed in the late 1980s.20-23 The fields of quality improvement and patient safety now routinely warrant their own vice presidents, positions that were probably unimaginable to hospital leaders then. In addition, a number of advances in the science of quality measurement have been made since the late 1980s, including the emergence of process and patient experience measures and improvements in methods for risk adjustment.24 Furthermore, along with many other organizations, the CMS now relies on the National Quality Forum to vet proposed quality measures. This process evaluates concerns raised by stakeholders about issues such as clinical meaningfulness and risk adjustment and includes input from professional societies, payers, and hospital organizations. In addition, pay-for-performance programs now provide powerful incentives to pay attention to publicly reported measures. Our study confirms that hospitals are indeed paying attention, but it also documents persistent concerns about the methods used to measure performance and about the unintended consequences of these programs. Indeed, concerns about the measures’ accuracy in representing quality of care and the adequacy of risk adjustment largely echo those reported by Berwick and Wald19 almost a quarter of a century ago.
Such concerns notwithstanding, public reporting programs show no sign of going away, and the number of measures continues to expand. Most of the recent growth has been centered on the development of outcome measures. In this context, our study findings are notable in that responses were generally much more favorable toward process and patient experience measures than toward measures of outcomes, cost, and volume. We suspect that this variation in part reflects the reality that processes are more directly and readily controlled than outcomes.
Our study has several strengths. We included a large and diverse set of US hospitals and elicited a detailed view of attitudes toward and uses of quality measures. Although prior work has examined the role of hospital boards or reported on the views of frontline staff and middle management, we focused on the senior leaders responsible for overseeing quality improvement work. We also illuminated important differences across measure types.
Our findings should be interpreted in light of several limitations. First, we achieved a less-than-ideal response rate of 60.3%. However, our analysis of nonresponders suggests that our respondent sample was not biased because observable hospital characteristics, including quality performance, were similar in both groups. Fourteen percent of identified hospitals were excluded because case volumes did not meet thresholds for CMS reporting; therefore, our findings do not reflect these smaller hospitals. In addition, the opinions expressed by respondents to the survey may not represent the views of other clinical or administrative leaders, let alone the views of frontline clinical staff. The analysis of potential associations between responses and performance level also has limitations. First and most important, because this was a cross-sectional study, we cannot be sure whether the more sanguine attitudes expressed toward quality measures by the senior leaders at better performing institutions were the cause or result of their performance designation. We suspect that both explanations may be partially true; hospitals that are more invested in quality measurement and improvement are also more apt to be successful at it. At the same time, recognition for superior performance may more generally have positive effects on one’s attitude toward quality measures. Second, we categorized hospitals on the basis of their performance on mortality and readmission measures, and it is possible that the associations we observed between attitudes toward quality measures and hospital performance might have been different had we used other measures for this purpose.
Quality measurement and reporting has taken center stage in US health care policy and in the evaluation and reimbursement of hospitals. Our study indicates that quality measures reported on the CMS’s Hospital Compare website play a major role in hospital planning and improvement efforts. However, important concerns about the clinical meaningfulness, unintended consequences, and methods of measurement programs are common.
Accepted for Publication: June 1, 2014.
Corresponding Author: Peter K. Lindenauer, MD, MSc, Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St, Third Floor, Springfield, MA 01199 (firstname.lastname@example.org).
Published Online: October 6, 2014. doi:10.1001/jamainternmed.2014.5161.
Author Contributions: Dr Lindenauer had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Lindenauer, Lagu, Ross, Hannon, Rothberg, Benjamin.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Lindenauer.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Pekow.
Administrative, technical, or material support: Lindenauer, Lagu, Shatz, Hannon, Benjamin.
Study supervision: Lindenauer, Lagu, Pekow.
Conflict of Interest Disclosures: Dr Lagu reported receiving an honorarium from the Institute for Healthcare Improvement for her input on a project to help health systems achieve disability competence. Drs Lindenauer and Ross reported receiving support from the Centers for Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting. Dr Ross is a member of a scientific advisory board for FAIR Health Inc. No other disclosures were reported.
Funding/Support: Dr Lagu reported receiving support from award K01HL114745 from the National Heart, Lung, and Blood Institute of the National Institutes of Health. Dr Ross reported receiving support from grant K08 AG032886 from the National Institute on Aging and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program.
Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: Maureen Bisognano, MS, president and chief executive officer of Institute for Healthcare Improvement, cosigned the survey invitation.