Beck CA, Richard H, Tu JV, Pilote L. Administrative Data Feedback for Effective Cardiac Treatment (AFFECT): A Cluster Randomized Trial. JAMA. 2005;294(3):309-317. doi:10.1001/jama.294.3.309
Author Affiliations: Division of Clinical Epidemiology,
McGill University Health Centre, Montreal, Quebec (Ms Beck, Mr Richard, and
Dr Pilote); Institute for Clinical Evaluative Sciences and Division of General
Internal Medicine, Sunnybrook and Women’s College Health Sciences Centre,
University of Toronto, Toronto, Ontario (Dr Tu).
Context Hospital report cards are increasingly being implemented for quality
improvement despite lack of strong evidence to support their use.
Objective To determine whether hospital report cards constructed using linked
hospital and prescription administrative databases are effective for improving
quality of care for acute myocardial infarction (AMI).
Design The Administrative Data Feedback for Effective Cardiac Treatment (AFFECT)
study, a cluster randomized trial.
Setting and Patients Patients with AMI who were admitted to 76 acute care hospitals in Quebec
that treated at least 30 AMI patients per year between April 1, 1999, and
March 31, 2003.
Intervention Hospitals were randomly assigned to receive rapid (immediate; n = 38
hospitals and 2533 patients) or delayed (14 months; n = 38 hospitals and 3142
patients) confidential feedback on quality indicators constructed using administrative data.
Main Outcome Measures Quality indicators pertaining to processes of care and outcomes of patients
admitted between 4 and 10 months after randomization. The primary indicator
was the proportion of elderly survivors of AMI at each study hospital who
filled a prescription for a β-blocker within 30 days after discharge.
Results At follow-up, adjusted prescription rates within 30 days after discharge
were similar in the early vs late groups (for β-blockers, odds ratio
[OR], 1.06; 95% confidence interval [CI], 0.82-1.37; for angiotensin-converting
enzyme inhibitors, OR, 1.17; 95% CI, 0.90-1.52; for lipid-lowering drugs,
OR, 1.14; 95% CI, 0.86-1.50; and for aspirin, OR, 1.05; 95% CI, 0.84-1.33).
In addition, adjusted mortality was similar in both groups, as were length
of in-hospital stay, physician visits after discharge, waiting times for invasive
cardiac procedures, and readmissions for cardiac complications.
Conclusions Feedback based on one-time, confidential report cards constructed using
administrative data is not an effective strategy for quality improvement regarding
care of patients with AMI. A need exists for further studies to rigorously
evaluate the effectiveness of more intensive report card interventions.
Despite widespread dissemination of evidence-based guidelines for management
of acute myocardial infarction (AMI), many patients are not receiving recommended
treatments.1,2 For example, from
1997 to 2000, rates of prescription for β-blockers within 30 days of
discharge for elderly patients with AMI were as low as 43% in certain Canadian
regions.3 In the United States in 1998, less
than 70% of Medicare patients were prescribed β-blockers at discharge
from certain Michigan hospitals.4 There is
therefore increasing interest in implementing quality improvement strategies
for AMI care.5
One quality improvement strategy that has been suggested is provision
of feedback on “quality indicators” to hospitals and clinicians
treating AMI patients.6 Quality indicators
are defined as summaries of clinical performance over a specified time.7 It is suggested that “report cards” presenting
a summary of quality indicators relevant to care provided by individual hospitals
can catalyze quality improvement at these hospitals.5 Ideally,
hospital report cards provide clinicians with an accurate picture of the care
they deliver and provide benchmarks for comparison, such as the care delivered
at other hospitals or recommended target rates. Although public reporting
strategies have been used, some have argued that confidential data feedback
is sufficient to stimulate quality improvement.8
Hospital report cards are increasingly being implemented in the United
States and some parts of Canada as a strategy for quality improvement in many
areas of health care.9-11 However,
there has been limited implementation of hospital report cards specific to
AMI care. Some critics are skeptical of risk-adjustment methods and the accuracy
of data coding.12-14 Others are concerned that report cards will yield only small gains in return for large expenditures of health care resources.15 In Canada, the unique, population-based
administrative databases available to describe the care and outcomes of AMI
present an opportunity to address these concerns. Hospital discharge databases
can be linked to physician claims and outpatient prescription databases, providing
a comprehensive summary of care and outcomes. The construction of report cards
using these linked databases requires considerably fewer resources in comparison
with those constructed using abstracted data from hospital charts. In addition,
the accuracy of these data and validity of risk-adjustment methods using these
databases have been extensively evaluated and confirmed.16-18 Of
note, a Medicare drug benefit may eventually permit construction of similar
linked databases in the United States.19
We used a controlled experiment to determine whether hospital report
cards constructed using linked administrative databases are effective for
improving AMI care. We conducted this study in the Canadian province of Quebec,
a report card–naive region, where most health care practitioners had
no prior experience with either publicized or confidential report cards. Quebec
acute care hospitals were randomized to receive rapid (immediately after randomization)
or delayed (14 months after randomization) confidential feedback on quality
indicators constructed using administrative data. We used this cluster randomization
approach to minimize the potential for contamination among individual physicians
and because our observations were aimed at the hospital level. Confidential
data reporting minimized the potential for contamination between study groups.
Confidential reporting also permitted an initial evaluation of effectiveness
in a report card–naive region without marked potential for antagonizing
the medical community, as sometimes occurs with public feedback. To the best
of our knowledge, this is the first randomized trial to evaluate the effectiveness
of administrative data report cards, including those specific to AMI care
as well as other areas of health care.
We used encrypted Medicare numbers to link the Quebec hospital discharge
summary database (Maintenance et Exploitation des Données pour l’Étude
de la Clientèle Hospitalière [Med-Echo]) with provincial physician
and drug claims databases (la Régie de l’Assurance Maladie du
Québec [RAMQ]). The Med-Echo database was used to identify AMI patients
for inclusion in the study cohort as well as to obtain patient demographic
and comorbid disease characteristics. The RAMQ physician claims database was
used to obtain data on inpatient and outpatient cardiac procedures and physician
visits. The RAMQ drug claims database was used to obtain data on outpatient
prescriptions filled for all patients aged 65 years or older who were enrolled
in the provincial drug plan (approximately 96%). A previous study demonstrated
the validity and accuracy of these data.16 Survival
data were obtained for close to 100% of the AMI cohort from the RAMQ database.20
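The linkage step described above can be sketched as follows. This is an illustrative example only: the record structures and field names (`enc_id`, `fill_date`, and so on) are invented, not the actual Med-Echo or RAMQ layouts, and the join simply matches records on the encrypted identifier to flag a β-blocker prescription filled within 30 days of discharge, as in the study's primary indicator.

```python
from datetime import date, timedelta

# Hypothetical discharge records (Med-Echo-like) and outpatient drug
# claims (RAMQ-like), linked on an encrypted health card number.
# All identifiers, dates, and field names are invented for illustration.
discharges = [
    {"enc_id": "A1", "discharge_date": date(1999, 6, 1)},
    {"enc_id": "B2", "discharge_date": date(1999, 6, 10)},
]
drug_claims = [
    {"enc_id": "A1", "drug_class": "beta_blocker", "fill_date": date(1999, 6, 20)},
    {"enc_id": "B2", "drug_class": "beta_blocker", "fill_date": date(1999, 8, 1)},
]

def filled_within_30_days(discharge, claims, drug_class="beta_blocker"):
    """True if this patient filled a prescription of the given class
    within 30 days after the discharge date."""
    window_end = discharge["discharge_date"] + timedelta(days=30)
    return any(
        c["enc_id"] == discharge["enc_id"]
        and c["drug_class"] == drug_class
        and discharge["discharge_date"] <= c["fill_date"] <= window_end
        for c in claims
    )

flags = [filled_within_30_days(d, drug_claims) for d in discharges]
print(flags)  # A1 fills within the 30-day window; B2 does not
```

In the study, the hospital-level indicator would then be the proportion of such flags that are true among elderly AMI survivors at each hospital.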
For the creation of hospital report cards, we obtained data on all AMI
patients admitted during the 1999-2000 fiscal year (April 1, 1999, to March
31, 2000). Complete follow-up data were available from the date of admission
to March 31, 2000. For the analyses of report card effectiveness, we obtained
data on all AMI patients admitted between October 1, 2002, and March 31, 2003.
Complete Med-Echo and RAMQ follow-up data were available from the date of
admission to March 31, 2003, and March 31, 2004, respectively.
The inclusion and exclusion criteria were established by a Canadian
consensus panel.21 Briefly, the inclusion criterion
was a most responsible diagnosis of AMI (International Classification
of Diseases, Ninth Revision code 410.x). Exclusion criteria were (1)
not admitted to an acute care hospital; (2) admission to noncardiac surgical
service; (3) transfer from another acute care facility; (4) AMI coded as an
in-hospital complication; (5) discharge alive with total length of stay of
2 days or less; (6) previous AMI within the past year; (7) age younger than
20 years or older than 105 years; and (8) invalid health card number.
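As an illustration, the eight exclusion criteria above could be applied to a discharge record roughly as follows; the record structure and field names are hypothetical, not the actual database coding.

```python
# Illustrative application of the cohort inclusion/exclusion criteria.
# Field names and values are invented for this sketch.
def eligible(rec):
    return (
        rec["diagnosis"].startswith("410")           # ICD-9 410.x as most responsible diagnosis
        and rec["acute_care"]                        # (1) admitted to an acute care hospital
        and not rec["noncardiac_surgical"]           # (2) not a noncardiac surgical service
        and not rec["transfer_in"]                   # (3) not transferred from another acute care facility
        and not rec["ami_in_hospital_complication"]  # (4) AMI not coded as in-hospital complication
        and not (rec["discharged_alive"] and rec["los_days"] <= 2)  # (5) not alive with stay <= 2 days
        and not rec["ami_past_year"]                 # (6) no AMI within the past year
        and 20 <= rec["age"] <= 105                  # (7) age 20-105 years
        and rec["valid_health_card"]                 # (8) valid health card number
    )

rec = {"diagnosis": "410.1", "acute_care": True, "noncardiac_surgical": False,
       "transfer_in": False, "ami_in_hospital_complication": False,
       "discharged_alive": True, "los_days": 5, "ami_past_year": False,
       "age": 72, "valid_health_card": True}
print(eligible(rec))  # True
```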
All acute care hospitals in Quebec admitting at least 30 AMI patients
per year were eligible to participate in this study (n = 77). The
cutoff of 30 patients was used to attempt to ensure an adequate sample size
for statistically stable estimates.
The trial intervention included provision of confidential feedback to
coronary care unit directors, chief executive officers, and directors of professional
services of the study hospitals. Feedback was provided in the form of a hospital
report card presenting information on 12 quality indicators for AMI care that
were developed by a Canadian consensus panel21 (Figure 1). Most indicators summarized processes
of care because previous surveys of physicians have indicated that such “actionable”
indicators have greater utility than indicators related to patient outcomes.22 The content and format of the report cards were developed
partly based on recommendations from previous studies that the data be benchmarked
against a reasonable comparison group,4 as
well as limited and graphically displayed.23 Quality
indicators reflecting patient outcomes were risk adjusted according to validated
methods,18 as previous studies have reported
that health care practitioners are skeptical of the comparability of these
outcomes among different hospitals.12 The suggestions
of 2 cardiologists working in McGill University hospitals were also taken
into account when designing report cards.
Attempts were made to encourage dissemination of report card data. Each
contact person received a package containing (1) a cover letter; (2) an information
sheet; (3) 10 paper copies of the report card; (4) an electronic copy of the
report card and a PowerPoint presentation (Microsoft Inc, Redmond, Wash) summarizing
the report card data; (5) acetate copies of the PowerPoint presentation; and
(6) a stamped, self-addressed postcard to indicate that the report card had
been received. After 2 months, a reminder was sent to each contact person
to encourage report card dissemination.
The report card package was delivered to all hospitals in the rapid
feedback group in May 2002 and to the delayed feedback group in July 2003.
The primary outcome variable was the proportion of elderly survivors
of AMI at each study hospital who filled a prescription for a β-blocker
within 30 days after discharge. The secondary outcomes were 12 additional quality indicators.
We chose the primary outcome variable for a number of reasons. First,
previous studies have suggested that the provision of data feedback is most
likely to improve prescribing practice rather than improve other processes
and outcomes of care.9,23 Second,
this indicator received one of the highest ratings from a Canadian consensus
panel in terms of potential for improvement, meaningfulness, usefulness, and
impact.21 Third, it was feasible to create
this indicator since we had extensive experience creating, validating, and
using this variable.20 Fourth, unlike several
of the other Canadian indicators, this indicator had been used to describe
quality of care and effectiveness of data feedback in other study populations.5 Therefore, using this indicator permitted comparison
with other studies. Finally, β-blocker use is almost universally recommended
after AMI, while the recommended target rates for other processes and outcomes
of AMI care are less certain.
Our power calculations were based on the formula outlined by Donner
and Klar24 for cluster randomized trials with
a binary study outcome. We judged a difference in prescription rates of 5%
between intervention and control hospitals to be the minimum clinically important
difference. This estimate was based on our clinical judgment and on a nonrandomized
study that found an increase in β-blocker prescription rates of 5.6%
(95% confidence interval, 2.7%-8.6%) after 6 months at hospitals that received
data feedback.25 Based on data for patients
admitted in 1999, the analysis of variance estimator of the intracluster correlation
coefficient was calculated as 0.015. With 38 hospitals in each group, we had
80% power to detect the 5% difference in β-blocker prescription rates
at the 5% level of significance, assuming that the number of patients per
hospital (m) equaled 79. The estimate of m was derived using the data for patients admitted in 1999 and the
formula for the case of varying cluster sizes.24
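The sample-size reasoning above can be approximated with the standard design-effect inflation for cluster randomized trials, 1 + (m − 1) × ICC. In the sketch below, the ICC of 0.015 and mean cluster size m = 79 are taken from the text, but the baseline β-blocker prescription rate of 70% is an assumption for illustration, and this simple formula is a stand-in for the varying-cluster-size formula the authors actually used.

```python
import math

def clusters_per_arm(p1, p2, icc, m):
    """Approximate number of clusters per arm for a two-proportion
    comparison at two-sided alpha = 0.05 and 80% power, inflating
    the individual-level sample size by the design effect."""
    z_a = 1.959964  # z for two-sided alpha = 0.05
    z_b = 0.841621  # z for power = 0.80
    # individual-level sample size per arm for comparing two proportions
    n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc            # variance inflation from clustering
    return math.ceil(n_ind * deff / m)  # clusters needed per arm

# Assumed baseline rate 0.70, minimum important difference 5 percentage points
k = clusters_per_arm(p1=0.70, p2=0.75, icc=0.015, m=79)
print(k)  # roughly 35 clusters per arm, consistent with the 38 enrolled
```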
The study hospital was the unit of randomization. Intervention allocation
was based on a stratified randomization procedure with a blocking size of
4. Hospitals were stratified by volume of AMI admissions during 1999-2000
(high or low volume, defined by the 50th percentile values across all study
hospitals) and by availability of on-site cardiac catheterization facilities.
These variables have been shown to be associated with differences in aspects
of AMI care, such as prescription rates for β-blockers,26,27 use
and waiting times for cardiac procedures,28 specialty
of the treating physician, and hospital teaching status.28 It
has also been reported that health care practitioners do not believe data
from high-volume hospitals to be relevant to care at smaller-volume hospitals.14
A research assistant used computer-generated randomization procedures.
To minimize the potential for selection bias, the research assistant was blinded
to the name of the study hospitals until randomization was complete.
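A minimal sketch of the stratified, permuted-block allocation described above: within each stratum (defined by AMI volume and on-site catheterization status), hospitals are assigned in blocks of 4, with 2 rapid and 2 delayed assignments per block. Hospital identifiers and the fixed seed are illustrative.

```python
import random

def permuted_block_allocation(hospitals, block_size=4, seed=0):
    """Assign hospitals within one stratum to 'rapid' or 'delayed'
    feedback using permuted blocks (half of each block per arm)."""
    rng = random.Random(seed)
    allocation = {}
    for i in range(0, len(hospitals), block_size):
        block = hospitals[i:i + block_size]
        arms = ["rapid"] * (block_size // 2) + ["delayed"] * (block_size // 2)
        rng.shuffle(arms)  # random order of assignments within the block
        for hospital, arm in zip(block, arms):
            allocation[hospital] = arm
    return allocation

# One hypothetical stratum, e.g. high-volume hospitals with on-site
# catheterization; names are invented.
stratum = [f"H{i:02d}" for i in range(8)]
alloc = permuted_block_allocation(stratum)
print(sorted(alloc.values()).count("rapid"))  # 4: balanced within the stratum
```

Blocking guarantees near-equal group sizes within each stratum even if enrollment of a stratum stops mid-block.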
Additional efforts were made to minimize information bias. The study
investigators and data handlers were blinded to the intervention status of
the study hospitals. One study investigator (L.P.), however, was available
to answer any questions from the contacts at the study hospitals.
Inference in this trial was directed at the hospital, or
cluster, level. Cluster-level analyses were appropriate in this case because
the primary research questions focused more on the randomized unit as a whole
than on individual patients.24 An intention-to-treat
analysis strategy was applied, comparing outcomes at all hospitals that were
randomly allocated to receive rapid vs delayed administrative data feedback.
Adjusted odds ratios were calculated using generalized estimating equation
extensions of logistic regression procedures for cluster randomized trials.24 The variables used in these adjustments were age,
sex, comorbidities, hospital volume of AMI admissions, hospital teaching status,
and presence of on-site catheterization facilities. In a further set of generalized
estimating equation models, we also adjusted for the baseline measures of
the quality indicator corresponding to the outcome variable of interest. We
explored time trends in quality indicators following rapid vs delayed administrative
data feedback through subgroup analyses according to month of admission for
AMI. Our final set of analyses was at the hospital level. We measured the
mean change in quality indicator values between baseline and follow-up among
individual hospitals in the rapid and delayed feedback groups. We then compared
the unadjusted difference in mean rates of change between the 2 hospital groups.
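The hospital-level comparison in this final set of analyses can be sketched as follows: compute each hospital's change in β-blocker prescription rate between baseline and follow-up, then compare mean change between the two feedback groups. The rates below are invented for illustration and do not reproduce the study data.

```python
# Hypothetical hospital-level rates: (group, baseline rate, follow-up rate).
hospitals = [
    ("rapid",   0.65, 0.76),
    ("rapid",   0.60, 0.69),
    ("delayed", 0.66, 0.71),
    ("delayed", 0.62, 0.67),
]

def mean_change(group):
    """Mean change (follow-up minus baseline) across hospitals in a group."""
    changes = [f - b for g, b, f in hospitals if g == group]
    return sum(changes) / len(changes)

# Unadjusted difference in mean rates of change, rapid minus delayed.
diff = mean_change("rapid") - mean_change("delayed")
print(round(diff, 3))
```

In the trial itself, this unadjusted difference was 4.1% with a 95% confidence interval spanning zero.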
Statistical analyses were performed using Stata, version 6.0 (Stata
Corp, College Station, Tex) and SAS, version 8.1 (SAS Institute Inc, Cary,
NC) statistical software.
The McGill University Health Centre Ethics Board provided approval for
the design and conduct of this study.
A large proportion of hospitals in the rapid feedback group (82%) returned their completed postcards acknowledging receipt of the study intervention materials. Several contact persons also expressed interest in the study through e-mail and/or telephone contact, requesting further information and/or additional copies of study materials.
A total of 76 eligible hospitals were randomized (Figure 2). Baseline patient characteristics were generally similar
in each group (Table 1). However, despite
randomization techniques that used stratification by hospital volume and on-site
catheterization status, a smaller proportion of patients in the rapid feedback
group were admitted to hospitals with a high volume of AMI admissions and/or
on-site catheterization facilities. There was room for improvement in most
quality indicators at baseline, but the indicators were similar in both groups.
At follow-up, patient characteristics were also similar in each group
(Table 3). In general, quality of care
improved from baseline in each group (Table 4). For example, rates of prescription for β-blockers increased
by approximately 10 percentage points between 1999-2000 and 2002-2003. However,
the overall quality of care remained similar in each group. The percentages
of patients prescribed β-blockers in the rapid and delayed feedback groups
were 74% and 76%, respectively (adjusted odds ratio, 1.1; 95% confidence interval,
0.8-1.4; P = .67). Adjusted mortality was
similar in both groups, as were length of in-hospital stay, physician visits
after discharge, waiting times for invasive cardiac procedures, and readmissions
for cardiac complications (Table 4).
In a further set of multivariable models, which adjusted for baseline values
of the corresponding quality indicators, all quality indicator values remained
similar in each group. There were no obvious time trend differences in rates
of prescription for β-blockers in any 1-month period between baseline
and the end of the follow-up period (Figure 3).
The average difference between follow-up and baseline in rates of prescription
of β-blockers at the hospital level was an increase of 9.6% in the rapid
feedback group and an increase of 5.4% in the delayed feedback group (Table 5). The difference between these 2 groups
was not statistically significant but showed a trend toward a modest, potentially clinically important benefit for the rapid feedback group (4.1%; 95% confidence interval,
−2.9% to 11.2%). Among the 38 hospitals randomized to rapid feedback,
24 hospitals improved rates of β-blocker prescription by at least 5%
and 20 hospitals improved by at least 10%. Among the hospitals randomized
to delayed feedback, 20 hospitals improved by at least 5% and 14 hospitals
by at least 10%. A small number of hospitals decreased their overall rates
of prescription for β-blockers. There were no significant differences
between the 2 groups for all additional quality indicators.
In this cluster randomized controlled trial, confidential feedback provided
to hospitals in the form of report cards constructed using linked administrative
data was not effective in improving quality of AMI care. Our results suggest
that even if the United States eventually acquires these types of administrative
data through the Medicare program, confidential feedback based on these data is unlikely to be a sufficient strategy for health care quality improvement.
The lack of previous studies of effectiveness of hospital report cards
constructed using administrative data limits interpretation of the generalizability
of our findings to other regions. However, our findings are consistent with
recent observational evidence. In Canada, prior to 2003 the release of hospital-level
administrative data on quality indicators for AMI care had been limited to
the publication of an atlas of cardiovascular care in Ontario in 1999.29 Follow-up observational studies comparing quality
indicators between Ontario and other provinces did not detect a systematic
impact of this atlas on AMI care or outcomes compared with other provinces.3,30,31
Although we received anecdotal evidence that our report card intervention
was well received at the study hospitals, a detailed exploration of the AMI
care providers’ perceptions and use of the intervention was beyond the
scope of this study. Nonetheless, when interpreted in the context of results
from previous studies, our results point toward several potential reasons
for the lack of effectiveness of the study intervention. One potential reason
is that the administrative data were perceived as invalid or irrelevant to
practice.32 It is possible that report cards
constructed using chart review data may be more effective than those constructed
using administrative data because physicians are less skeptical of their data
quality. The presence of chart abstractors in hospitals could also increase
physicians’ awareness of performance monitoring and affect their practice.
Evidence from an observational study in the United States supports this hypothesis.25 However, in the Canadian context, quality indicator rates derived from administrative data have been found to be similar to those obtained from chart review. A large, randomized controlled trial of effectiveness of
report cards constructed using chart review data currently under way in Ontario
will provide further evidence.33
Related to the perception of data validity is the fact that it takes time for performance data to gain credibility within a hospital.32 For the practical reason of lag time between AMI
admissions and data availability, our intervention represented the first and
only introduction of performance measures to the study hospitals. It is possible
that the physicians at the study hospitals were not supportive of the concept
of hospital report cards because the concept was new to them. It is also possible that they would have been more supportive had the report card intervention been introduced repeatedly.
Another potential reason is that our intervention was not intensive
enough to have an impact on quality of AMI care. For example, several cluster
randomized trials have provided evidence of the effectiveness of more intensive
or multimodal quality improvement interventions in non-AMI patient populations.34,35 One effective intervention consisted
of a combination of chart reviews and physician-specific and benchmark feedback.36 In cardiac populations, it has been suggested that
an intensive intervention involving quarterly, interactive multidisciplinary
team workshops among health care practitioners, as well as a Web-based performance
feedback tool, is effective for quality improvement.37 Another
study suggests that the use of practice guideline–based tools, such
as standard orders, is more effective for quality improvement than are interventions
involving feedback on quality indicators.38 Unfortunately,
the amount of resources necessary to provide chart review–based report
cards and more intensive interventions on a continuing basis is likely prohibitive
in many regions.
One remaining possibility is that administrative data feedback would
have been effective had it been publicized. Perhaps public awareness of deficiencies
in quality of care is a major and necessary incentive for quality improvement.
Some argue that the coronary artery bypass graft surgery report cards based
on administrative data have had a positive impact on quality of care in the
United States.13 The fact that public reporting
may be required even in the context of Canada’s universal health care
system is an interesting finding. Canadian hospitals are funded by global
budgets and, thus, there are no major market or government incentives stimulating
quality, only professional pride. This finding warrants further exploration
of motivators for health care quality improvement in public vs market economies.
In summary, our results suggest that one-time provision of confidential
hospital report cards constructed using administrative data does not appear
to be sufficient for quality improvement in AMI care. More intensive interventions,
which could include chart review and continuous and/or public data feedback
accompanied by other multimodal interventions, such as team workshops and
standard orders, may be effective, but a need remains to study these interventions
and their cost-benefit ratios in well-controlled randomized trials.
Corresponding Author: Louise Pilote, MD,
MPH, PhD, Division of Clinical Epidemiology, McGill University Health Centre,
1650 Cedar Ave, Suite L10-421, Montreal, Quebec, Canada H3G 1A4 (firstname.lastname@example.org).
Author Contributions: Dr Pilote had full access
to all of the data in the study and takes responsibility for the integrity
of the data and the accuracy of the data analysis.
Study concept and design: Beck, Tu, Pilote.
Acquisition of data: Richard, Pilote.
Analysis and interpretation of data: Beck,
Drafting of the manuscript: Beck, Pilote.
Critical revision of the manuscript for important
intellectual content: Beck, Richard, Tu, Pilote.
Statistical analysis: Beck, Richard, Pilote.
Obtained funding: Tu, Pilote.
Administrative, technical, or material support:
Study supervision: Pilote.
Financial Disclosures: None reported.
Funding/Support: Ms Beck was supported by a
PhD fellowship from the Canadian Cardiovascular Outcomes Research Team, jointly
funded by the Canadian Institutes for Health Research and the Heart and Stroke
Foundation of Canada. Dr Tu is supported by a Canada Research Chair in Health
Services Research. Dr Pilote is a research scholar of the Canadian Institutes
for Health Research and a William Dawson professor of Medicine at McGill University.
This project was jointly supported by operating grants to the Canadian Cardiovascular
Outcomes Research Team from the Canadian Institutes for Health Research and
the Heart and Stroke Foundation of Canada.
Role of the Sponsors: The funding organizations
had no role in the design and conduct of the study; collection, management,
analysis, and interpretation of the data; or preparation, review, or approval
of the manuscript.
Acknowledgment: From the Division of Clinical
Epidemiology, McGill University Health Center, we acknowledge Lawrence Joseph,
PhD, for his advice concerning the statistical analyses used in this study,
as well as Hassan Behouli, PhD, for his assistance with data analyses. We
also acknowledge the physicians and directors at Quebec hospitals.