Figure legend: AMI indicates acute myocardial infarction; CHF, congestive heart failure; EFFECT, Enhanced Feedback for Effective Cardiac Treatment. (a) Hospital corporation indicates all hospital sites in a given hospital corporation (some hospital corporations include multiple hospital sites; the number of hospital sites ranged from 1 to 5). (b) The follow-up period was from April 1, 2004, to March 31, 2005.
Tu JV, Donovan LR, Lee DS, Wang JT, Austin PC, Alter DA, Ko DT. Effectiveness of Public Report Cards for Improving the Quality of Cardiac Care: The EFFECT Study: A Randomized Trial. JAMA. 2009;302(21):2330-2337. doi:10.1001/jama.2009.1731
Author Affiliations: Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada (Drs Tu, Lee, Austin, Alter, and Ko and Mss Donovan and Wang); Division of Cardiology, Schulich Heart Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario (Drs Tu and Ko); Departments of Medicine (Drs Tu, Lee, Alter, and Ko) and Health Policy Management and Evaluation (Drs Tu and Austin), Dalla Lana School of Public Health (Drs Tu and Austin), University of Toronto, Toronto, Ontario; Division of Cardiology, University Health Network, Toronto, Ontario (Dr Lee); Division of Cardiology, Li Ka Shing Knowledge Institute of St Michael's Hospital, Toronto, Ontario (Dr Alter); and Toronto Rehabilitation Institute, Toronto, Ontario (Dr Alter).
Context Publicly released report cards on hospital performance are increasingly common, but whether they are an effective method for improving quality of care remains uncertain.
Objective To evaluate whether the public release of data on cardiac quality indicators effectively stimulates hospitals to undertake quality improvement activities that improve health care processes and patient outcomes.
Design, Setting, and Patients Population-based cluster randomized trial (Enhanced Feedback for Effective Cardiac Treatment [EFFECT]) of 86 hospital corporations in Ontario, Canada, with patients admitted for acute myocardial infarction (AMI) or congestive heart failure (CHF).
Intervention Participating hospital corporations were randomized to early (January 2004) or delayed (September 2005) feedback of a public report card on their baseline performance (between April 1999 and March 2001) on a set of 12 process-of-care indicators for AMI and 6 for CHF. Follow-up performance data (between April 2004 and March 2005) also were collected.
Main Outcome Measures The coprimary outcomes were composite AMI and CHF indicators based on 12 AMI and 6 CHF process-of-care indicators. Secondary outcomes were the individual process-of-care indicators, a hospital report card impact survey, and all-cause AMI and CHF mortality.
Results The publication of the early feedback hospital report card did not result in a significant systemwide improvement in the early feedback group in either the composite AMI process-of-care indicator (absolute change, 1.5%; 95% confidence interval [CI], −2.2% to 5.1%; P = .43) or the composite CHF process-of-care indicator (absolute change, 0.6%; 95% CI, −4.5% to 5.7%; P = .81). During the follow-up period, the mean 30-day AMI mortality rates were 2.5% lower (95% CI, 0.1% to 4.9%; P = .045) in the early feedback group compared with the delayed feedback group. The hospital mortality rates for CHF were not significantly different.
Conclusion Public release of hospital-specific quality indicators did not significantly improve composite process-of-care indicators for AMI or CHF.
Trial Registration http://clinicaltrials.gov Identifier: NCT00187460
Published online November 18, 2009 (doi:10.1001/jama.2009.1731).
Public release of hospital performance data is increasingly being mandated by policy makers with the goal of improving the quality of care.1,2 Advocates of report cards believe that publicly releasing performance data on hospitals will stimulate hospitals and clinicians to engage in quality improvement activities and increase the accountability and transparency of the health care system.3,4 Critics argue that publicly released report cards may contain data that are misleading or inaccurate and may unfairly harm the reputations of hospitals and clinicians.5-7 They also are concerned that report card initiatives may divert resources away from other important needs. Although there has been considerable debate, few empirical data exist to determine whether publicly released report cards on hospital performance improve the overall quality of care provided.
While several uncontrolled studies have suggested that certain report card initiatives have had a beneficial effect, no large randomized trials, to our knowledge, have been conducted to evaluate the effectiveness of public report cards as a method for improving quality of care.8,9 To address this gap, we conducted a population-based, cluster randomized trial to determine whether publicly released report card data could improve the quality of cardiac care delivered in Ontario, Canada.
Rigorous evaluations of quality improvement methods such as publicly released report cards are difficult to conduct due to resistance and lack of interest from participating stakeholders and lack of support from funding agencies.10 We chose to focus our study on hospitals that treat patients with acute myocardial infarction (AMI) and congestive heart failure (CHF) because of considerable evidence of a large gap between actual and ideal practice patterns in patients with these 2 common conditions.11-14 Ontario, Canada's most populated province, represented an ideal setting in which to conduct this study because there were no other similar public reporting initiatives under way at the time this study was launched.
The Canadian Cardiovascular Outcomes Research Team's Enhanced Feedback for Effective Cardiac Treatment (EFFECT) study began in 2002. The Canadian Cardiovascular Outcomes Research Team is a national team of cardiovascular outcomes researchers from across Canada. Using the Canadian Institute for Health Information hospital discharge administrative database for 1999-2001, 130 acute care hospital corporations (herein referred to as hospitals) were identified in Ontario, Canada (Figure). Forty-two hospitals were excluded because they had treated fewer than 15 patients with AMI per year. In addition, 2 hospitals were excluded because they were no longer involved in acute patient care. The CEOs of the remaining 86 hospitals were approached and asked to participate in the study.
A description of the study was provided and each hospital was requested to identify a clinical contact and an administrative contact. The clinical contact was defined as the individual most responsible for cardiac care at each institution and was the chief of cardiology or chief of staff at most institutions. The CEOs and clinical contacts were provided copies of the hospital report card data for review and dissemination within their hospitals.
All eligible hospitals agreed to participate in the study; however, 1 hospital withdrew from the baseline phase and 4 withdrew from the follow-up phase due to resource constraints. Participating hospitals were classified as teaching hospitals, large community hospitals, or small hospitals according to the classification system of the Ontario Hospital Association. Some hospitals had multiple sites (n = 12), but all study data were publicly reported at the level of the hospital corporation. The study protocol was submitted to and approved by the research ethics boards at the participating institutions. A waiver of informed consent for collecting the study data was approved by the research ethics boards due to the minimal risk nature of the study.15,16
The participating hospitals were randomized to receive either early (January 2004) or delayed (September 2005) feedback of a publicly released report card on their baseline performance for a set of national process-of-care quality indicators for AMI and CHF care, which were developed and endorsed by the Canadian Cardiovascular Outcomes Research Team and the Canadian Cardiovascular Society.17,18 The indicator definitions used in the study were developed through a modified Delphi expert consensus panel process, which was described in detail elsewhere.19,20 Ideal candidates who had recommended clinical indications and did not have a contraindication to an intervention were identified for each quality indicator. The indicator definitions were consistent between the baseline and follow-up phases of the study with the exception that angiotensin II receptor blockers were considered an equivalent substitute for angiotensin-converting enzyme inhibitors in the follow-up data collection. Data on hospital-specific outcome indicators were not publicly released. The 30-day and 1-year mortality status of patients was determined by linking the study data to the Ontario Registered Persons vital statistics database.
At each participating hospital, for the baseline assessment, a target sample of 125 charts (or all patients if <125 patients were treated) for patients receiving care for AMI and/or CHF between April 1, 1999, and March 31, 2001, was abstracted by an experienced cardiology research nurse. The nurse abstractors involved in the study were employed by the central study team and traveled to the participating hospitals. All study data were transmitted electronically to a secure database at the Institute for Clinical Evaluative Sciences in Toronto, Ontario.
The randomization of participating hospitals was stratified by type of hospital and was performed by a study statistician (P.C.A.). All study data were analyzed by 1 of 2 study statisticians (J.T.W. and P.C.A.). Data were collected from the hospitals in the early feedback group of the study first; however, it was not possible to blind the hospitals to their status.
The early feedback hospitals received their baseline performance data in October 2003 to permit internal validation checks; following this, the results were publicly released at a press conference and on the Web (http://www.ccort.ca/effect.asp) in January 2004.17 The baseline EFFECT study data received extensive media coverage through multiple television (n = 28), radio (n = 34), and newspaper (n = 41) stories in Canada, with an estimated audience of more than 12 million Canadians being exposed to the study results. Based on the baseline performance data, hospitals were encouraged to develop standardized admitting orders and discharge plans for cardiac patients, although the exact nature of quality improvement activities was left to the discretion of the hospitals.
Participating hospitals were told that follow-up data were going to be collected and the results would be shared with them. Quality indicator data for the delayed feedback hospitals also were collected, sent to the hospitals for internal validation, and released to all participating hospitals and on the Web in September 2005.18 However, there was no associated press release or media coverage.
To determine the effect of the publicly released early feedback report card, similar methods were used to collect clinical information through chart reviews of 15 997 patients treated at the study hospitals during the fiscal 2004 period (April 1, 2004, to March 31, 2005). The coprimary outcome measures of the study were the mean performance of the hospitals on each of 2 composite process-of-care indicators: (1) a composite EFFECT AMI quality indicator, defined as the percentage of opportunities for applying each of 12 AMI indicators that were actually fulfilled; and (2) a composite EFFECT CHF quality indicator defined in a similar manner based on 6 CHF process-of-care indicators. Secondary outcome measures included the individual process-of-care indicators included in the coprimary composite quality indicators, the results of a hospital report card impact survey, and hospital mortality rates.
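The opportunity-based composite described above can be sketched in a few lines of code. This is a hypothetical illustration only: the function name, indicator names, and data are illustrative, not taken from the study.

```python
def composite_score(patients):
    """Opportunity-based composite: fulfilled opportunities / total opportunities, as %.

    `patients` is a list of dicts mapping an indicator name to True (eligible and
    received), False (eligible but not received); indicators for which the patient
    is not an "ideal candidate" are simply absent from the dict.
    """
    fulfilled = total = 0
    for patient in patients:
        for received in patient.values():
            total += 1
            fulfilled += received  # True counts as 1, False as 0
    return 100.0 * fulfilled / total if total else float("nan")

# Hypothetical cohort of 3 patients with varying eligibility across indicators.
cohort = [
    {"asa_admission": True, "beta_blocker_discharge": True},
    {"asa_admission": True, "beta_blocker_discharge": False, "statin_discharge": True},
    {"asa_admission": False},
]
print(composite_score(cohort))  # 4 of 6 opportunities fulfilled
```

Note that eligibility varies by patient, so the denominator counts opportunities rather than patients; this is why the composite is described as the "percentage of opportunities ... actually fulfilled."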
The study had 84% power to detect a 5% absolute difference on the composite quality indicators between the study groups. The power calculation assumed a baseline performance rate on each composite indicator of 70% (standard deviation [SD], 10%) in each study group, and that there would be a secular improvement to 75% (SD, 7.5%) in the composite indicator, independent of the study intervention.
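As a rough check on the stated power, a simple normal-approximation two-sample formula can be sketched as follows. This simplification ignores the clustering and baseline-adjustment features of the actual design, so it approximates rather than reproduces the published 84% figure.

```python
from math import erf, sqrt

def two_sample_power(delta, sd1, sd2, n_per_group, z_alpha=1.96):
    """Approximate power for a two-sided two-sample comparison of means."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    se = sqrt(sd1**2 / n_per_group + sd2**2 / n_per_group)
    return phi(abs(delta) / se - z_alpha)

# 86 hospitals randomized -> ~43 per group; detect a 5% absolute difference
# assuming a follow-up SD of 7.5% in each group (study assumptions above).
print(round(two_sample_power(5.0, 7.5, 7.5, 43), 2))
```

Under these simplified assumptions the approximation lands in the mid-80s-percent range, consistent with the reported 84% power.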
In addition to measuring actual performance, we also conducted a survey of participating hospitals regarding quality improvement initiatives launched in response to the early feedback report cards. The survey was sent by mail to the CEO and clinical contact at each hospital in both groups of the study beginning in June 2004. Nonrespondents were contacted by telephone and provided with additional copies of the survey.
All of the study data were analyzed using SAS version 9.1 (SAS Institute Inc, Cary, North Carolina). Pair-wise comparisons of categorical variables were conducted using χ2 tests. To measure the clinical severity of patients in the study, we calculated the Global Registry for Acute Coronary Syndromes in-hospital mortality risk score for each patient with AMI and the EFFECT 30-day mortality risk score for each patient with CHF.21,22
The primary analyses of the study data took into account the clustered nature of the data and examined the difference in the mean hospital-specific performance between the 2 study groups, using methods appropriate for the analysis of repeated cross-sectional cluster randomized trials.23 For each hospital, the hospital-specific performance for each quality indicator was determined in the baseline and follow-up data. The hospital-specific performance for the follow-up data on a given indicator was then regressed for the following variables: study group (early vs delayed feedback group), stratification factors (teaching vs large vs small hospital), and the hospital-specific performance on the given indicator in the baseline data.
The regression coefficient for the study group indicator denotes the mean difference in the performance on a given quality indicator between the 2 groups of the study, adjusted for baseline performance. Model-based significance levels and 95% confidence intervals (CIs) were obtained. A P value of less than .05 was considered statistically significant. However, all P values should be interpreted with the consideration that the study had multiple secondary outcome measures.
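The baseline-adjusted comparison described above is an ANCOVA-style regression at the hospital level. A minimal sketch with simulated data (numpy only; all variable names and the simulated effect size are illustrative, not the study data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 86  # hospitals

# Simulated hospital-level data (illustrative only).
group = rng.integers(0, 2, n)              # 1 = early feedback, 0 = delayed
large = rng.integers(0, 2, n)              # stratification dummies (teaching omitted)
small = (1 - large) * rng.integers(0, 2, n)
baseline = rng.normal(70, 10, n)           # baseline composite performance, %
followup = 0.5 * baseline + 40 + 2.0 * group + rng.normal(0, 5, n)

# Regress follow-up performance on study group, strata, and baseline performance;
# the group coefficient is the baseline-adjusted mean difference between groups.
X = np.column_stack([np.ones(n), group, large, small, baseline])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(f"adjusted group difference: {coef[1]:.2f} percentage points")
```

Because each hospital contributes one observation per indicator, this hospital-level regression respects the cluster randomization: inference is at the level of the randomized unit rather than the individual patient.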
The characteristics of the participating hospitals and patients are shown in Table 1. The hospitals were well-balanced across the 2 feedback groups (early vs delayed) in terms of the clinical characteristics of the patients with AMI or CHF in both the baseline and follow-up cohorts. In particular, the Global Registry for Acute Coronary Syndromes risk scores and the EFFECT CHF risk scores were similar across the 2 groups of the study.
Table 2 shows the change over time in performance on the various AMI quality indicators after the public release of the results for the early feedback group. The coprimary composite AMI indicator did not significantly improve in the early feedback group compared with the delayed feedback group (absolute change, 1.5%; 95% CI, −2.2% to 5.1%; P = .43). Only the percentage of patients receiving fibrinolytic therapy prior to transfer to a coronary care or intensive care unit improved significantly more in the early feedback group. Primary percutaneous coronary intervention was used in only 1% of patients with ST-segment elevation MI who were receiving reperfusion therapy in the baseline cohort, and 11% of those in the follow-up cohort, in part because of geographical and other resource constraints related to this treatment option in Ontario during the study period.24
Table 3 shows that there was no significant improvement in the coprimary CHF composite indicator (absolute change, 0.6%; 95% CI, −4.5% to 5.7%; P = .81) in the early feedback group after the public release of the report card. The absolute rate of angiotensin-converting enzyme inhibitor and angiotensin II receptor blocker use in patients with left ventricular dysfunction increased by 5.9% (95% CI, 1.0% to 10.7%; P = .02), but this was the only indicator that improved significantly more in the early feedback group.
The main results from the survey of hospital responses to the early feedback report card data are summarized herein (eTable). The survey responses showed that hospitals in the early feedback group were significantly more likely to report starting 1 or more quality improvement initiatives (73.2% of early feedback group vs 46.7% of delayed feedback group for AMI care [P = .003] and 61.0% of early feedback group vs 50.0% of delayed feedback group for CHF care [P = .04]) in response to the publicly released early feedback report card. While the exact nature and focus of the activities varied considerably by hospital, approximately half of the early feedback hospitals reported that they introduced new or modified standard order sets and/or clinical pathways or care maps for the management of patients with AMI (53.7%) or CHF (43.9%). Approximately two-fifths of the early feedback group (39.0%) reported conducting initiatives to improve door-to-needle times for patients receiving fibrinolytic therapy.
After adjusting for the baseline mortality rates, the mean 30-day AMI mortality rates were 2.5% lower (95% CI, 0.1% to 4.9%; P = .045) in the early feedback group compared with the delayed feedback group. In an exploratory subgroup analysis, the relative improvement was greatest among patients with ST-segment elevation MI (Table 4).
After adjusting for baseline mortality rates, the mean 1-year CHF mortality rates were 2.8% lower (95% CI, −0.5% to 6.0%; P = .10) in the early feedback group compared with the delayed feedback group. In an exploratory subgroup analysis, the rate of improvement was significantly better among the patients with CHF who had documented left ventricular dysfunction (Table 4).
Publicly releasing information on hospital performance is an increasingly common, albeit controversial, method for attempting to improve quality of care. In this controlled experiment, we observed that the public release of hospital-specific, clinical data on a set of well-established cardiac quality indicators did not significantly improve mean hospital performance on either a composite AMI or CHF process-of-care indicator in the early feedback group compared with the delayed feedback group. Only 1 of 12 individual AMI process-of-care indicators and 1 of 6 individual CHF process-of-care indicators improved significantly more in the early feedback group. No process or outcome indicators were significantly improved in the delayed feedback group. The process-of-care findings suggest that public release of hospital-specific performance data may not be a particularly effective systemwide intervention for measurably improving processes of care for either AMI or CHF.
Our hospital survey suggested that a majority of hospitals in the early feedback group undertook 1 or more quality improvement initiatives in response to the publicly released report card. Yet, we did not detect a significant systemwide difference in either coprimary composite process-of-care indicator. One reason for the lack of apparent benefit may relate to considerable heterogeneity in the nature, timing, intensity, and focus of the quality improvement initiatives that occurred. The hospital survey and anecdotal feedback from clinicians suggested that many hospitals tailored their quality improvement initiatives to a few key indicators (eg, reperfusion and/or medication indicators), depending in part on their local baseline results. This heterogeneity meant that only a minority of hospitals in the early feedback group directed their efforts at most of the individual indicators, which reduced the likelihood of our detecting significant differences when comparing the average performance of 2 large groups of hospitals across a range of indicators, even though certain institutions may have made major process changes in response to the publicly released report cards.
One unanticipated observation was that several hospitals in the delayed feedback group reported that they also initiated some quality improvement activities after becoming aware of the publicly released early feedback report cards, before receiving their own hospital-specific results. We could not blind the delayed feedback group to the media coverage and associated publicity surrounding the study results, but this Hawthorne effect also may have decreased the opportunity to detect significant differences in the study indicators.
An interesting finding was the lower mean 30-day AMI mortality rates (especially in patients with ST-segment elevation MI) in the hospitals randomized to the early feedback group. A hypothesis-generating subgroup analysis also suggested there may have been improved 1-year outcomes in patients with CHF with left ventricular dysfunction. While it is possible that these findings were entirely due to chance, our 2 study groups were well-matched on patient characteristics in both the baseline and follow-up cohorts, and the report card intervention was the only major difference between the 2 groups.
Almost two-fifths of the early feedback group reported conducting initiatives to improve timely reperfusion, including 10 hospitals that reported changing their policies to allow emergency department physicians to administer fibrinolytics without specialist consultation. Five hospitals in the early feedback group reported opening CHF clinics, and there was also significantly greater use of angiotensin-converting enzyme inhibitors and angiotensin II receptor blockers in the early feedback group. We hypothesize that the mortality results might reflect the cumulative, synergistic impact of multiple diverse local quality improvement initiatives that collectively may have improved outcomes even if most individual process-of-care indicators were not significantly improved.
Our study has important limitations. We elected to hire independent cardiology research nurses to collect the study data to avoid the problems associated with potential gaming of the data by hospitals, which was observed in other report card initiatives,5 and to enhance the credibility of the information among participating stakeholders in Ontario. While this type of report card has many strengths, one important limitation is that there was a considerable time delay involved in collecting both the baseline and follow-up data that might have been avoided had we relied on the hospitals to collect and submit their own data. Our study also involved a 1-time intervention. It is possible that more frequent and timely feedback of publicly released report cards on a regular basis might have been more effective. The effectiveness of the intervention may also have been enhanced by recruiting local opinion leaders and multidisciplinary quality improvement teams at each hospital to implement a consistent systemwide approach to improving quality of care across hospitals based on the report card results.25,26
In summary, this study demonstrated that a carefully designed, publicly released report card based on high-quality clinical information did not result in a measurably greater systemwide improvement in 2 composite AMI or CHF process-of-care indicators at the early feedback hospitals in Ontario. However, the EFFECT study data likely stimulated some important local, hospital-specific changes in delivery of care that may have contributed to the better outcomes observed at the early feedback hospitals. Policy makers and clinicians may wish to consider the findings from the EFFECT study in the design and evaluation of future public reporting initiatives. Greater attention to developing common strategies across hospitals for addressing report card results might enhance the systemwide effectiveness of future report cards.
Corresponding Author: Jack V. Tu, MD, PhD, Institute for Clinical Evaluative Sciences, G106-2075 Bayview Ave, Toronto, ON M4N 3M5, Canada (firstname.lastname@example.org).
Published Online: November 18, 2009 (doi:10.1001/jama.2009.1731).
Author Contributions: Dr Tu had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Tu, Lee, Austin, Alter.
Acquisition of data: Tu, Donovan, Lee.
Analysis and interpretation of data: Tu, Lee, Wang, Austin, Alter, Ko.
Drafting of the manuscript: Tu, Lee, Ko.
Critical revision of the manuscript for important intellectual content: Donovan, Lee, Wang, Austin, Alter, Ko.
Statistical analysis: Wang, Austin.
Obtained funding: Tu, Lee, Alter.
Administrative, technical, or material support: Donovan, Lee.
Study supervision: Tu.
Financial Disclosures: Dr Alter reported being a scientific advisor for INTERxVENT Canada, a lifestyle and disease management company. None of the other authors reported financial disclosures.
Funding/Support: The EFFECT study was supported by a Canadian Institutes of Health Research team grant in cardiovascular outcomes research to the Canadian Cardiovascular Outcomes Research Team; it was initially funded by a Canadian Institutes of Health Research Interdisciplinary Health Research Team grant and a grant from the Heart and Stroke Foundation of Canada. The Institute for Clinical Evaluative Sciences is supported by an operating grant from the Ontario Ministry of Health and Long-Term Care. Dr Tu is supported by a Tier 1 Canada Research Chair in Health Services Research and a career investigator award from the Heart and Stroke Foundation of Ontario. Dr Lee is supported by a Clinician Scientist Award from the Canadian Institutes of Health Research. Drs Austin and Alter are supported by Career Investigator Awards from the Heart and Stroke Foundation of Ontario. Dr Ko is supported by a Clinician Scientist Award from the Heart and Stroke Foundation of Ontario.
Role of the Sponsors: The design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, and approval of the manuscript were conducted completely independently of the sponsors.
Disclaimer: The results and conclusions are those of the authors, and should not be attributed to any of the funding agencies.
Additional Contributions: We thank all of the clinicians, administrators, and health records departments at all of the hospitals from across Ontario who participated in the EFFECT study. We also thank the EFFECT study research nurses who collected the data, Tara O’Neill, BES, and Francine Duquette, BA, for administrative support, and Virginia Flintoft, MSc, BN, RN, for her role in helping coordinate the launch of the EFFECT study; each was employed by the study investigators. We also thank David Henry, MBCHB, MRCP, FRCP (Edin), and Merrick Zwarenstein, MBBCh, MSc, of Sunnybrook Health Science Centre, who were not financially compensated for helpful comments on early drafts of the manuscript.