Context Pay for performance has been promoted as a tool for improving quality of care. In 2003, the Centers for Medicare & Medicaid Services (CMS) launched the largest pay-for-performance pilot project to date in the United States, including indicators for acute myocardial infarction.
Objective To determine if pay for performance was associated with either improved processes of care and outcomes or unintended consequences for acute myocardial infarction at hospitals participating in the CMS pilot project.
Design, Setting, and Participants An observational, patient-level analysis of 105 383 patients with acute non–ST-segment elevation myocardial infarction enrolled in the Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association (ACC/AHA) Guidelines (CRUSADE) national quality-improvement initiative. Patients were treated between July 1, 2003, and June 30, 2006, at 54 hospitals in the CMS program and 446 control hospitals.
Main Outcome Measures The differences in the use of ACC/AHA class I guideline recommended therapies and in-hospital mortality between pay for performance and control hospitals.
Results Among treatments subject to financial incentives, there was a slightly higher rate of improvement for 2 of 6 targeted therapies at pay-for-performance vs control hospitals (odds ratio [OR] comparing adherence scores from 2003 through 2006 at half-year intervals for aspirin at discharge, 1.31; 95% confidence interval [CI], 1.18-1.46 vs OR, 1.17; 95% CI, 1.12-1.21; P = .04) and for smoking cessation counseling (OR, 1.50; 95% CI, 1.29-1.73 vs OR, 1.28; 95% CI, 1.22-1.35; P = .05). There was no significant difference in a composite measure of the 6 CMS rewarded therapies between the 2 hospital groups (change in odds per half-year period of receiving CMS therapies: OR, 1.23; 95% CI, 1.15-1.30 vs OR, 1.17; 95% CI, 1.14-1.20; P = .16). For composite measures of acute myocardial infarction treatments not subject to incentives, rates of improvement were not significantly different (OR, 1.09; 95% CI, 1.05-1.14 vs OR, 1.08; 95% CI, 1.06-1.09; P = .49). Overall, there was no evidence that improvements in in-hospital mortality were incrementally greater at pay-for-performance sites (change in odds of in-hospital death per half-year period, 0.91; 95% CI, 0.84-0.99 vs 0.97; 95% CI, 0.94-0.99; P = .21).
Conclusions Among hospitals participating in a voluntary quality-improvement initiative, the pay-for-performance program was not associated with a significant incremental improvement in quality of care or outcomes for acute myocardial infarction. Conversely, we did not find evidence that pay for performance had an adverse association with improvement in processes of care that were not subject to financial incentives. Additional studies of pay for performance are needed to determine its optimal role in quality-improvement initiatives.
The concept of providing financial incentives to health care providers to improve quality of care, known as pay for performance, has received national attention as a potential means of narrowing well-documented gaps between health care guidelines and clinical practice.1-5 In collaboration with Premier Inc, a nationwide organization of nonprofit hospitals, the Centers for Medicare & Medicaid Services (CMS) launched the Hospital Quality Incentive Demonstration in 2003. As part of this pay-for-performance pilot project, 260 Premier Inc hospitals agreed to provide CMS with performance measurement information for 5 clinical conditions, including acute myocardial infarction. Hospitals in the 2 highest deciles of performance for a condition received a reimbursement bonus, while those with the poorest performance risked future financial penalty.6 Bonuses for the first 2 years of the incentive program totaled $17.55 million across the 5 clinical conditions, payable to a total of 123 hospitals in the first year and 115 hospitals in the second year.
An internal study of the incentive program found that composite performance scores increased significantly for all 5 conditions during the first 2 years of the program and were associated with improvements in mortality at participating hospitals.7,8 A recently published analysis of the impact of the incentive program suggested a modest positive incremental effect on hospital performance.9 However, these analyses left unanswered questions about the effectiveness and potential consequences of pay-for-performance programs: (1) What is the incremental benefit of pay for performance on processes of care compared with more traditional quality-improvement efforts (such as voluntary quality-improvement registries)? (2) Will hospitals become unduly concerned about specific “graded care processes” to the detriment of unrewarded aspects of care? and (3) Will overall patient outcomes be measurably improved by pay-for-performance incentives that target only a few selected process measures?
Using data from a quality-improvement initiative for patients with non–ST-segment elevation acute myocardial infarction enrolled in Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association (ACC/AHA) Guidelines (CRUSADE), we compared trends in cardiac care and outcomes among hospitals that were or were not participants in the program. Specifically, we examined whether hospitals participating in the pay-for-performance program showed improvement in process measures and outcomes for acute myocardial infarction beyond that found in hospitals not participating in the program. We also examined the effect of pay for performance on other important processes of care recommended by the ACC/AHA national guidelines but not subject to financial incentives.
CRUSADE is a voluntary, observational quality-improvement initiative begun on January 1, 2001,10-14 that was designed to improve the quality of evidence-based care for patients with non–ST-segment elevation acute myocardial infarction. Participating hospitals collect and submit clinical information about in-hospital care and outcomes for patients with non–ST-segment elevation acute myocardial infarction with high-risk clinical features. Patients must present at a CRUSADE hospital within 24 hours of ischemic symptoms lasting at least 10 minutes in combination with either positive cardiac biomarkers (troponin or creatine kinase-MB) or ischemic ST-segment electrocardiographic changes (ST-segment depression or transient ST-segment elevation).
Trained data collectors at each hospital use standardized definitions to abstract the data. Variables include demographic characteristics, clinical presentation, medical history, treatments administered, associated major contraindications to evidence-based therapies, in-hospital outcomes, and discharge recommendations and interventions. Multiple procedures are used to monitor and ensure the quality of the CRUSADE database.10 A variety of performance feedback and quality-improvement interventions are provided to sites, including site reports that benchmark against national standards and other CRUSADE hospitals, as well as educational interventions, such as treatment algorithms and risk stratification tools.
Participating institutions were required to comply with their local regulatory and privacy guidelines and to submit the CRUSADE protocol for review and approval by their institutional review board (or its equivalent). Because data were used primarily at the local site for quality improvement, all sites were granted a waiver of informed consent under the Common Rule.
Between July 1, 2003, and June 30, 2006, 105 383 patients with non–ST-segment elevation acute myocardial infarction were hospitalized at 500 CRUSADE hospitals. From this cohort, we selected patients at all 54 CRUSADE hospitals that participated in the Hospital Quality Incentive Demonstration starting on October 1, 2003 (hereafter termed pay-for-performance hospitals). The patients at the remaining 446 hospitals served as the control group. Race was determined by chart review during data collection.
We evaluated 6 process measures in the care of patients with non–ST-segment elevation acute myocardial infarction. The same measures are used by CMS in the Hospital Quality Incentive Demonstration composite performance scoring system for acute myocardial infarction. These CMS metrics include aspirin at arrival and discharge, β-blocker at arrival and discharge, angiotensin-converting enzyme inhibitor or angiotensin receptor blocker for left ventricular systolic dysfunction, and smoking cessation counseling (Box). We also evaluated an additional 8 process measures that are designated as class I (useful and effective) in the ACC/AHA guidelines15,16 but that are not currently tracked by CMS in the incentive program's composite performance scoring system. These non-CMS process measures include performing an electrocardiogram within 10 minutes of emergency department presentation, use of unfractionated or low-molecular-weight heparin, use of glycoprotein IIb/IIIa inhibitors, cardiac catheterization within 48 hours, clopidogrel at discharge, lipid-lowering medication at discharge, dietary modification counseling, and referral for cardiac rehabilitation.
Box. Process-of-Care Measures for Acute Myocardial Infarction Included in the Analysis
CMS Measures*
Aspirin at arrival
Aspirin at discharge
β-Blocker at arrival
β-Blocker at discharge
Angiotensin-converting enzyme inhibitor or angiotensin receptor blocker for left ventricular systolic dysfunction
Smoking cessation counseling
Non-CMS Measures†
Glycoprotein IIb/IIIa inhibitor use
Clopidogrel at discharge
Any heparin use
Lipid-lowering medication
Dietary modification counseling
Referral for cardiac rehabilitation
Electrocardiogram within 10 minutes
Cardiac catheterization within 48 hours
*Process measures included by the Centers for Medicare & Medicaid Services (CMS) in the composite process score in the Hospital Quality Incentive Demonstration.
†Process measures not included by CMS in the composite process score in the Hospital Quality Incentive Demonstration.
Patient eligibility for relevant measures was determined according to defined ACC/AHA guideline indications and reported contraindications. Patients who died anytime during their hospital stay or who were transferred to another hospital were excluded from discharge care assessment. In-hospital mortality was defined as death from any cause during a patient's hospital course.
We compared baseline characteristics between pay-for-performance and control hospitals, both for hospital-level variables and for patient-specific risk factors. Median values with interquartile ranges were used to describe continuous variables, and numbers (percentages) were reported for categorical variables.
From July 2003 to June 2006, we evaluated temporal trends in individual and composite processes of care for both CMS and non-CMS measures at pay-for-performance and control hospitals by using nonlinear mixed effects models (NLMIXED, SAS Institute Inc, Cary, NC). For the individual measures, the outcome for a given therapy for which a patient was eligible was defined as yes or no (whether a patient received the therapy or not). For the composite, each therapy for which the patient was eligible contributed an observation and the outcome was an indicator of whether the therapy was given. For example, if a patient received 4 out of 6 CMS processes, this patient would have 6 observations in the analysis data set, 4 of which would be positive. These models incorporated a random slope and intercept for each hospital. Fixed effects included an indicator for pay-for-performance hospital vs control hospital, time period (a continuous variable from 1 to 6 representing half years), and an interaction term between the pay-for-performance indicator variable and the time variable. The odds ratios for process measures reflect the change in odds, per time period, of receiving a given therapy within a hospital group (pay-for-performance or control). The interaction term measures the difference in the rate of change between the 2 hospital groups.
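The observation-level data layout described above can be sketched as follows. This is a minimal illustration in Python, not the actual CRUSADE analysis code; the function and field names (`expand_composite`, `eligible`, `received`) are hypothetical.

```python
# Sketch of the composite-measure data layout: each therapy a patient is
# eligible for contributes one binary observation to the analysis data set.

def expand_composite(patients):
    """patients: list of dicts with hypothetical keys 'hospital', 'period'
    (half-year index, 1..6), 'eligible' (list of therapy names), and
    'received' (set of therapies actually given)."""
    rows = []
    for p in patients:
        for therapy in p["eligible"]:
            rows.append({
                "hospital": p["hospital"],   # random-effect grouping variable
                "period": p["period"],       # continuous time variable, 1..6
                "therapy": therapy,
                "given": int(therapy in p["received"]),
            })
    return rows

# A patient eligible for all 6 CMS therapies who received 4 of them yields
# 6 observations, 4 positive (matching the example in the text).
cms = ["asa_arrival", "asa_discharge", "bb_arrival",
       "bb_discharge", "acei_arb", "smoking_counsel"]
patient = {"hospital": "A", "period": 1, "eligible": cms,
           "received": {"asa_arrival", "asa_discharge",
                        "bb_arrival", "bb_discharge"}}
rows = expand_composite([patient])
```

In the actual model, each row would additionally carry the pay-for-performance indicator and the indicator-by-period interaction term as fixed effects, with hospital as the grouping variable for the random slope and intercept.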
To evaluate the potential impact of pay for performance on outcomes, we used the same nonlinear mixed-effects model as described previously, specifying in-hospital mortality as the outcome. In calculating odds ratios for mortality, the model was adjusted for a patient risk score that was calculated by a logistic model including demographic and clinical characteristics previously identified to predict risk.17
We also performed a set of stratified analyses to evaluate the effect of baseline performance on overall improvement. We divided baseline composite CMS and non-CMS scores into tertiles by performance (low, medium, and high), and used the same model, with the same fixed and random effects, as described above.
A 2-sided P value <.05 was established as the level of statistical significance for all tests. All analyses were performed using SAS version 9.2 (SAS Institute Inc, Cary, NC).
Between July 1, 2003, and June 30, 2006, a total of 105 383 patients with non–ST-segment elevation myocardial infarction were treated at 54 pay-for-performance hospitals and 446 control hospitals. Table 1 shows baseline patient and clinical characteristics of the 2 groups. Pay-for-performance hospitals generally were larger than control hospitals, but there were no obvious differences in academic affiliation or geographic distribution. Moreover, baseline patient demographic characteristics, comorbidities, risk factors for coronary artery disease, and presenting signs and symptoms were generally similar between pay-for-performance and control hospitals.
At the patient level, scores for CMS and non-CMS composite process measures were calculated during each of the 6 periods. These composite scores incorporate each of the individual process measures for each respective category. The Figure shows the temporal trends in CMS composite scores over time at pay-for-performance hospitals and control hospitals. Composite measure scores for CMS processes showed significant improvement from July 2003 to June 2006 at both pay-for-performance and control hospitals. There was no significant difference in the rate of improvement in the composite score between the 2 hospital groups. Pay-for-performance hospitals had an absolute improvement in composite score of 7.2% (from a baseline of 87.0 to 94.2 in June 2006) vs a 5.6% improvement (from 88.0 to 93.6) at control hospitals (1.6% difference in absolute improvement).
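As a rough consistency check on these figures, the reported per-half-year odds ratio can be compounded against the baseline composite score. This is only a back-of-envelope sketch: it ignores the hospital-level random effects, shifts in patient eligibility, and the confidence intervals around the estimates.

```python
# Compound a per-half-year odds ratio over the 5 period-to-period steps
# (periods 1..6) and convert back to an adherence proportion.

def project_adherence(baseline_pct, or_per_period, steps):
    odds = baseline_pct / (100.0 - baseline_pct)  # proportion -> odds
    odds *= or_per_period ** steps                # apply OR once per step
    return 100.0 * odds / (1.0 + odds)            # odds -> proportion

# Pay-for-performance hospitals: baseline composite 87.0%, OR 1.23 per
# half year, 5 steps from period 1 to period 6.
projected = project_adherence(87.0, 1.23, 5)  # roughly 95%
```

The projection lands near the observed final composite score of 94.2%, which is what one would expect if the fitted per-period odds ratio summarizes the trend reasonably well.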
The Figure also displays the temporal trends in the non-CMS composite scores over time at pay-for-performance hospitals and control hospitals. These are care processes that are indicated by ACC/AHA guidelines but are not rewarded financially by the incentive project. Similar to the CMS composite score, both pay-for-performance and control hospitals demonstrated significant improvement across the time trend. No significant difference was noted in the rate of improvement of non-CMS composite measures between the 2 hospital groups.
Table 2 shows the results for individual process measures and outcomes. Most process measures showed significant improvement from July 2003 to June 2006. Two of the 6 CMS measures, aspirin prescription at discharge and smoking cessation counseling, had slightly higher rates of improvement at pay-for-performance hospitals than control hospitals. Of the non-CMS measures, only 1 of 8 measures, lipid-lowering medication at discharge, showed a differential rate of improvement at pay-for-performance vs control hospitals. There was a slight reduction in mortality over time at both pay-for-performance and control hospitals. The difference in the rate of mortality reduction between the 2 hospital groups was not statistically significant.
Stratified analyses on baseline score (Table 3) showed an inverse relationship between baseline composite score and improvement for both CMS and non-CMS process measures. Hospitals with lower baseline scores had faster rates of improvement. However, improvement was seen in all groups, including hospitals with the highest baseline performance. There were no differences in the rate of improvement between pay-for-performance and control hospitals across any of these strata.
Among hospitals participating in a voluntary quality-improvement program (CRUSADE), we examined whether hospitals simultaneously participating in a CMS pay-for-performance program showed greater improvement in process-of-care measures. Two of the 6 individual CMS process measures, aspirin at discharge and smoking cessation counseling, showed higher rates of improvement among patients treated at pay-for-performance hospitals, although the magnitude of these differences was small. We found that composite scores of CMS-rewarded care processes demonstrated significant improvement over a relatively short period among both pay-for-performance and control hospitals, but rates of improvement were not significantly different between the 2 hospital groups.
Furthermore, while in-hospital mortality improved in both hospital groups, there was no evidence that improvements in mortality were incrementally greater at pay-for-performance hospitals. In addition, we observed significant, similar improvements at both hospital types for processes of care that were not subject to financial incentives, but again there was no significant difference between the 2 groups. Taken together, these findings indicate that in the first 3 years of the CMS program, financial incentives had limited incremental effect on acute myocardial infarction care processes or outcomes among hospitals participating in a voluntary quality-improvement registry.
There has been substantial debate about the effectiveness and potential consequences of pay-for-performance programs on efforts to improve health care quality.18-20 Our study is among the first to provide an independent evaluation of the CMS-sponsored Hospital Quality Incentive Demonstration project, the largest federally sponsored pay-for-performance program to date. The Centers for Medicare & Medicaid Services and Premier Inc recently published results for years 1 and 2 of the incentive program and reported significant improvement for all 5 clinical conditions tracked, including reduced mortality for acute myocardial infarction.8 However, this study lacked a contemporary control group, which limited their ability to infer direct associations between pay-for-performance and hospital quality improvement.
In a recent CMS-supported analysis of the incentive project by Lindenauer et al,9 the authors studied the effect of financial incentives with public reporting vs public reporting alone on processes of care for acute myocardial infarction. They concluded that pay-for-performance hospitals achieved modestly greater improvements in quality. Our study design and findings extend those of Lindenauer et al in several ways. First, we studied the effect of financial incentives at CRUSADE hospitals, which participate in a voluntary quality-improvement registry in addition to public reporting. Among CRUSADE hospitals, we found that pay for performance had a limited incremental impact on overall trends in processes of care for acute myocardial infarction. Using all available patient-level data, we found that the absolute improvement in quality attributable to pay for performance was smaller (1.6% over almost 3 years) than that reported by Lindenauer et al (4.3% unadjusted over 2 years at the hospital level). Given the current lack of an evidence base linking pay-for-performance programs to better quality of care, coupled with the challenge of conducting randomized controlled trials of large-scale policy interventions such as the Hospital Quality Incentive Demonstration, observational studies of different hospital groups will be important for developing a robust evidence base.
Second, our study examined outcomes of acute myocardial infarction care, and we did not find an association between pay-for-performance and mortality among hospitals participating in a voluntary quality-improvement registry. Third, our study allowed the evaluation of the potential for unintended consequences of financial incentives by examining processes of care for acute myocardial infarction that were not rewarded by CMS but are recommended by ACC/AHA guidelines.
Concerns have been raised regarding the potential adverse consequences of financial incentives, including undue provider attention to aspects of care subject to incentives to the detriment of processes that are not subject to incentives.21-26 In our study, adherence to these processes improved significantly over time at both pay-for-performance and control hospitals, and the rate of improvement in non-CMS measures was not significantly different between the 2 groups. This similarity in rates of improvement suggests that financial incentives did not have deleterious effects on other aspects of clinical care, at least among hospitals simultaneously participating in a quality-improvement registry not involving financial incentives.
The results of this study raise questions about the magnitude of effect pay-for-performance programs must achieve to justify the administrative burden and potential unintended consequences of financial incentives. The concept of minimal important differences in clinical research evolved to aid interpretation of studies with large sample sizes.27-29 By setting a predetermined effect size that would be considered an important result, this approach prevents overinterpretation of overpowered statistical tests in large samples. The field of quality-improvement research in general, and pay for performance in particular, would benefit from the establishment of performance improvement goals to allow better adjudication of incremental benefit. The results of this study suggest only a small incremental benefit of pay for performance for some processes of acute myocardial infarction care, without an associated incremental improvement in patient outcomes, even before explicit consideration of the costs of such a program.
There are several issues that should be considered in the interpretation of the results of this study. First, the study was limited to the care of patients with non–ST-segment elevation myocardial infarction and therefore did not include 2 CMS measures—time to thrombolytic administration and percutaneous coronary intervention for ST-segment elevation myocardial infarction. However, our set of processes for acute myocardial infarction is the largest to date among studies involving controlled analysis of the incentive program. Second, we had access to data only from hospitals participating in CRUSADE. However, a comparison of our sample of pay-for-performance CRUSADE hospitals and the Hospital Quality Incentive Demonstration from a previous study showed similar baseline scores for acute myocardial infarction (87.0 in our analysis vs 88.7 in the pay-for-performance cohort), similar rates of performance improvement during the first 2 years of the study (absolute improvement of 5.2% in our analysis vs 6.1% in the pay-for-performance cohort), and generally similar hospital characteristics (teaching status, geographic region, and ownership).7 Baseline hospital characteristics were also similar between the CRUSADE control group and other hospitals nationally with public reporting.
Third, we recognize that although we observed a limited effect of pay for performance over our 3-year study period, a larger effect of the program could be realized after a longer period of observation. However, our 3-year observation period is long compared with most studies of quality-improvement interventions,30 and the novelty of pay for performance among voluntary vanguard centers might have been anticipated to have its greatest impact during this initial period. As additional data from the Hospital Quality Incentive Demonstration become available, further analysis will be needed to follow these trends. Fourth, the Hospital Quality Incentive Demonstration applied only 1 financial incentive structure (rewarding the top 20% of hospitals). The results of this study may therefore not be generalizable to other pay-for-performance models. Additional pay-for-performance models that reward improvement instead of overall performance should be considered.
Fifth, in interpreting this negative epidemiological study, our ability to determine precise power estimates was limited by a combination of factors, including temporal correlation, clustering of patients within hospitals, and variability in process measures between hospitals. However, we based our conclusions on the magnitude of association found and not on the statistical significance alone, and the 95% confidence intervals do not suggest that a substantially larger magnitude of association was likely.
In conclusion, this study is one of the first to evaluate the CMS pay-for-performance pilot project. Among hospitals participating in a voluntary quality-improvement registry, pay-for-performance had limited incremental impact on processes of care and outcomes for acute myocardial infarction. Conversely, we did not find evidence that pay for performance had an adverse impact on improvement in processes of care that were not subject to financial incentives. Additional studies of pay for performance are needed to determine its optimal role in quality-improvement initiatives.
Corresponding Author: Eric D. Peterson, MD, MPH, Outcomes Research and Assessment Group, Duke Clinical Research Institute, PO Box 17969, Durham, NC 27715 (peter016@mc.duke.edu).
Author Contributions: Dr Peterson had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Glickman, Roe, Gibler, Ohman, Peterson.
Acquisition of data: Glickman, Roe, Lytle, Peterson.
Analysis and interpretation of data: Glickman, Ou, DeLong, Roe, Mulgund, Rumsfeld, Schulman, Peterson.
Drafting of the manuscript: Glickman, Peterson.
Critical revision of the manuscript for important intellectual content: Glickman, Ou, DeLong, Roe, Lytle, Mulgund, Rumsfeld, Gibler, Ohman, Schulman, Peterson.
Statistical analysis: Glickman, Ou, DeLong, Mulgund, Peterson.
Obtained funding: Roe, Gibler, Ohman, Peterson.
Administrative, technical, or material support: Roe, Lytle, Gibler, Ohman, Schulman, Peterson.
Study supervision: Schulman, Peterson.
Financial Disclosures: Dr Roe reports receiving speakers' bureau honoraria from Millennium Pharmaceuticals Inc, Bristol-Myers Squibb/Sanofi Pharmaceuticals Partnership, and Schering Corp and research grants from Millennium Pharmaceuticals Inc, Bristol-Myers Squibb/Sanofi-Aventis Pharmaceuticals Partnership, and Schering Corp. Dr Rumsfeld reports that he serves on the scientific advisory board for United Healthcare and is chief science officer for the American College of Cardiology National Cardiovascular Data Registries. Dr Gibler reports receiving research grants from Millennium Pharmaceuticals Inc, Schering Corp, and Bristol-Myers Squibb/Sanofi Pharmaceuticals Partnership. Dr Ohman reports receiving research grants from Millennium Pharmaceuticals Inc, Bristol-Myers Squibb/Sanofi Pharmaceuticals Partnership, Schering Corp, and Berlex. Drs Schulman and Peterson have made available detailed listings of disclosure information at: http://www.dcri.duke.edu/research/coi.jsp. No other authors reported financial disclosures.
Funding/Support: Bristol-Myers Squibb/Sanofi Pharmaceuticals Partnership provided additional funding support. Millennium Pharmaceuticals Inc also funded this work. Dr Peterson is also the recipient of grant R01 AG025312 01A1 from the National Institute on Aging. CRUSADE is funded by the Schering-Plough Corp.
Role of the Sponsor: Although none of the sponsors were directly involved in design and conduct of the study, in the collection, management, analysis, and interpretation of the data, or preparation of the manuscript, the sponsors all reviewed the submitted manuscript. Dr Glickman’s role in this research was made possible by a gift from the Douglas and Stefanie Kahn Charitable Gift Fund.
Acknowledgment: CRUSADE is a National Quality Improvement Initiative of the Duke Clinical Research Institute.
1. Birkmeyer NJ, Birkmeyer JD. Strategies for improving surgical quality—should payers reward excellence or effort? N Engl J Med. 2006;354:864-870.
2. Berwick DM, DeParle NA, Eddy DM, et al. Paying for performance: Medicare should lead. Health Aff (Millwood). 2003;22:8-10.
3. Chassin MR, Galvin RW. The urgent need to improve quality of care: Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280:1000-1005.
4. Grumbach K, Osmond D, Vranizan K, Jaffe D, Bindman AB. Primary care physicians' experience of financial incentives in managed-care systems. N Engl J Med. 1998;339:1516-1521.
5. Robinson JC. Theory and practice in the design of physician payment incentives. Milbank Q. 2001;79:149-177.
9. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486-496.
10. Peterson ED, Roe MT, Mulgund J, et al. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006;295:1912-1920.
11. Staman KL, Roe MT, Fraulo ES, Lytle BL, Gibler WB, Ohman EM. Quality improvement tools designed to improve adherence to the ACC/AHA guidelines for the care of patients with non-ST-segment acute coronary syndromes: the CRUSADE Quality Improvement Initiative. Crit Pathways Cardio. 2003;2:34-40.
12. Bhatt DL, Roe MT, Peterson ED, et al. Utilization of early invasive management strategies for high-risk patients with non–ST-segment elevation acute coronary syndromes: results from the CRUSADE Quality Improvement Initiative. JAMA. 2004;292:2096-2104.
13. Hoekstra JW, Pollack CV Jr, Roe MT, et al. Improving the care of patients with non-ST-elevation acute coronary syndromes in the emergency department: the CRUSADE initiative. Acad Emerg Med. 2002;9:1146-1155.
14. Roe MT, Ohman EM, Pollack CV Jr, et al. Changing the model of care for patients with acute coronary syndromes. Am Heart J. 2003;146:605-612.
15. Braunwald E, Antman EM, Beasley JW, et al. ACC/AHA guidelines for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on the Management of Patients With Unstable Angina). J Am Coll Cardiol. 2000;36:970-1062.
16. Braunwald E, Antman EM, Beasley JW, et al. ACC/AHA 2002 guideline update for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction—summary article: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on the Management of Patients With Unstable Angina). J Am Coll Cardiol. 2002;40:1366-1374.
17. Boersma E, Pieper KS, Steyerberg EW, et al. Predictors of outcome in patients with acute coronary syndromes without persistent ST-segment elevation: results from an international trial of 9461 patients. Circulation. 2000;101:2557-2567.
18. Rosenthal MB, Frank RG. What is the empirical basis for paying for quality in health care? Med Care Res Rev. 2006;63:135-157.
19. Rowe JW. Pay-for-performance and accountability: related themes in improving health care. Ann Intern Med. 2006;145:695-699.
20. Rosenthal MB, Fernandopulle R, Song HR. Paying for quality: providers' incentives for quality improvement. Health Aff (Millwood). 2004;23:127-141.
21. Doran T, Fullwood C, Gravelle H, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355:375-384.
22. Marshall M, Smith P. Rewarding results: using financial incentives to improve quality. Qual Saf Health Care. 2003;12:397-398.
23. Darr K. The Centers for Medicare and Medicaid Services proposal to pay for performance. Hosp Top. 2003;81:30-32.
24. Conrad DA, Christianson JB. Penetrating the “black box”: financial incentives for enhancing the quality of physician services. Med Care Res Rev. 2004;61(suppl):37S-68S.
25. Kohn A. Why incentive plans cannot work. Harv Bus Rev. 1993;71:54-63.
26. Porter M, Teisberg E. Redefining Health Care: Creating Value-Based Competition on Results. Boston, Mass: Harvard Business School Press; 2006.
27. Barrett B, Brown D, Mundt M, Brown R. Sufficiently important differences: expanding the framework of clinical significance. Med Decis Making. 2005;25:250-261.
28. Jaeschke R, Singer J, Guyatt GH. Measurement of health status: ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10:407-415.
29. Coeytaux RR, Kaufman JS, Chao R, Mann JD, DeVellis RF. Four methods of estimating the minimal important difference score were compared to establish a clinically significant change in Headache Impact Test. J Clin Epidemiol. 2006;59:374-380.
30. Petersen LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Ann Intern Med. 2006;145:265-272.