The bars display the number of hospitals with a given composite adherence score (N = 350 hospitals).
NSTE ACS indicates non–ST-segment elevation acute coronary syndrome; NSTEMI, non–ST-segment elevation myocardial infarction. Three hundred fifty hospitals are grouped into quartiles by composite guideline adherence. The left plot shows risk-adjusted mortality rates for all patients with NSTE ACS in each quartile, and the right plot shows risk-adjusted mortality rates for the NSTEMI subgroup. Standard error bars are included for each group. All results were adjusted for age, sex, race, body mass index, patient insurance status, admission electrocardiogram findings (ST depression, transient ST elevation), admission cardiac marker status, presenting signs of heart failure, initial heart rate and systolic blood pressure, history of hypertension, diabetes mellitus, hypercholesterolemia, renal insufficiency, prior myocardial infarction, prior percutaneous coronary intervention, prior coronary artery bypass graft surgery, prior congestive heart failure, prior stroke, current/recent smoking, and family history of coronary disease.
Peterson ED, Roe MT, Mulgund J, et al. Association Between Hospital Process Performance and Outcomes Among Patients With Acute Coronary Syndromes. JAMA. 2006;295(16):1912–1920. doi:10.1001/jama.295.16.1912
Author Affiliations: Duke Clinical Research Institute, Duke University Medical Center, Durham, NC (Drs Peterson, Roe, Delong, Newby, Harrington, and Ohman, and Mss Mulgund and Lytle); Kaiser-Permanente Health System, San Francisco, Calif (Dr Brindis); University of North Carolina, School of Medicine, Chapel Hill (Dr Smith); Pennsylvania Hospital, University of Pennsylvania, Philadelphia (Dr Pollack); and University of Cincinnati School of Medicine, Cincinnati, Ohio (Dr Gibler).
Context Selected care processes are increasingly being used to measure hospital quality; however, data regarding the association between hospital process performance and outcomes are limited.
Objectives To evaluate contemporary care practices consistent with the American College of Cardiology/American Heart Association (ACC/AHA) guideline recommendations, to examine how hospital performance varied among centers, to identify characteristics predictive of higher guideline adherence, and to assess whether hospitals' overall composite guideline adherence was associated with observed and risk-adjusted in-hospital mortality rates.
Design, Setting, and Participants An observational analysis of hospital care in 350 academic and nonacademic US centers of 64 775 patients enrolled in the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines) National Quality Improvement Initiative between January 1, 2001, and September 30, 2003, presenting with chest pain and positive electrocardiographic changes or cardiac biomarkers consistent with non–ST-segment elevation acute coronary syndrome (ACS).
Main Outcome Measures Use of 9 ACC/AHA class I guideline-recommended treatments and the correlation among hospitals' use of individual care processes as well as overall composite adherence rates.
Results Overall, the 9 ACC/AHA guideline-recommended treatments were adhered to in 74% of eligible instances. There was modest correlation in hospital performance among the individual ACS process metrics. However, composite adherence performance varied widely (median [interquartile range] composite adherence scores from lowest to highest hospital quartiles, 63% [59%-66%] vs 82% [80%-84%]). Composite guideline adherence rate was significantly associated with in-hospital mortality, with observed mortality rates decreasing from 6.31% for the lowest adherence quartile to 4.15% for the highest adherence quartile (P<.001). After risk adjustment, every 10% increase in composite adherence at a hospital was associated with an analogous 10% decrease in its patients' likelihood of in-hospital mortality (adjusted odds ratio, 0.90; 95% confidence interval, 0.84-0.97; P<.001).
Conclusion A significant association between care process and outcomes was found, supporting the use of broad, guideline-based performance metrics as a means of assessing and helping improve hospital quality.
Assessment of quality in health care delivery plays an increasingly prominent role in contemporary medical practice. Government agencies, professional societies, accreditation organizations, and major insurers all have published sets of performance indicators, including process of care measures proposed to be reflective of institutional quality of care.1-4 These performance indicators are now used for determining hospital referral patterns5 in public reports,6 and even for determining hospital reimbursement.7,8 Although widely used, these process-based performance systems are based on the concept that more consistent use of selected therapies by hospitals will result in better patient outcomes. However, to date, there has been limited published evidence demonstrating that hospital process performance is an accurate marker of centers with better patient outcomes.9,10
Non–ST-segment elevation (NSTE) acute coronary syndrome (ACS), including NSTE myocardial infarction (MI), accounts for more than 1.6 million annual admissions, representing up to 75% of all cases of MI in US hospitals.11 Appropriate care for patients with NSTE ACS is informed by a wealth of recent randomized controlled trials whose findings have been summarized into national clinical practice guidelines by the American College of Cardiology/American Heart Association (ACC/AHA).11 Despite this evidence, prior studies have demonstrated gaps in the use of evidence-based care of NSTE ACS that are wider than those observed in patients with ST-segment elevation MI.12 To date, however, no information has been available for defining the degree to which NSTE ACS care varies among individual hospitals or for demonstrating an association between hospitals' NSTE ACS process performance and patient outcomes.
Using data from a large quality improvement initiative, the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines) National Quality Improvement Initiative,13,14 we characterized the degree to which contemporary NSTE ACS care is consistent with guideline recommendations as well as the variation in specific care processes among 350 US hospitals. We evaluated the degree to which hospital performance varied among individual process metrics and identified hospital characteristics that were predictive of higher adherence to guidelines. Finally, we assessed whether hospitals' overall measure of composite adherence to these ACC/AHA guideline metrics was associated with observed and risk-adjusted in-hospital mortality rates.
CRUSADE is an ongoing voluntary, observational data collection and quality improvement initiative, which began January 1, 2001.13-16 CRUSADE centers collect and submit clinical information regarding the in-hospital care and outcomes of patients with NSTE ACS with high-risk clinical features. All patients must present at a CRUSADE hospital within 24 hours of ischemic symptoms lasting at least 10 minutes in combination with either positive cardiac markers (troponin or creatine kinase) or ischemic ST-segment electrocardiographic changes (ST depression or transient ST-segment elevation). Participating institutions are instructed to submit consecutive eligible patients to the CRUSADE database; however, global onsite validation was not feasible. All participating institutions were required to comply with their local regulatory and privacy guidelines and to submit the CRUSADE protocol for review and approval by their institutional review board (or the equivalent). Because data were used primarily at the local site for quality improvement, all sites were granted a waiver of informed consent under the common rule. The data coordinating center had a data use agreement with each site to analyze the aggregate deidentified data for research purposes.
Data are abstracted by a trained data collector at each hospital using standardized definitions. Variables include demographic and clinical information, including clinical presentation, medical history, treatments administered, as well as associated major contraindications to evidence-based therapies, and in-hospital outcomes. Once collected, deidentified data are entered via a Web-based data collection tool and aggregated into an analytical database.17
Various procedures were used to monitor and improve the data quality of the CRUSADE database. At the point of entry, values that exceed expected ranges or are inconsistent with other data prompt notification. Additionally, quarterly site reports summarize any data quality problems observed in submitted data. Sites reporting outlier mortality results (beyond those predicted) and variable case submission rates also receive routine notification and follow-up to ensure that enrollment is nonselective and all adverse events are reported. The resultant degree of missing data is quite low, averaging less than 5% across all collected data elements. Additionally, in 2002, a formal audit was conducted of 25% of randomly selected CRUSADE sites. Trained monitors reabstracted information and found an overall agreement rate of 94.8% between submitted data and reabstracted results.
Between January 1, 2001, and September 30, 2003, 427 CRUSADE hospitals enrolled 77 760 patients with NSTE ACS. We excluded 9155 patients who transferred from a participating hospital, because longitudinal end point assessment was not possible. We further restricted this analysis to CRUSADE hospitals that had submitted at least 40 records and had at least 1 reported death during the study period, to define a threshold for stable site-level performance assessment (77 hospitals and 3830 patients were excluded), resulting in 64 775 patients with NSTE ACS who were treated at 350 CRUSADE hospitals.
We evaluated 9 individual ACC/AHA class I (useful and effective) guideline-recommended therapies among patients eligible to receive these therapies, which included 4 acute process-of-care measures (aspirin, β-blocker, heparin, and intravenous glycoprotein IIb/IIIa inhibitors) used within the first 24 hours, as well as 5 discharge regimens (aspirin, β-blocker, clopidogrel, angiotensin-converting enzyme inhibitor, and lipid-lowering medication use). Patient eligibility for each measure was determined according to defined ACC/AHA guideline indications and reported contraindications.11 Patients who died during the first 24 hours were excluded from the denominator for assessment of acute care processes, and those dying anytime during their hospital stay were excluded from the discharge care assessment. Patient composite adherence scores were then calculated as the sum of correct care, provided from the patient's total number of eligible opportunities. Results were then summated at the hospital level. Although composite scores were analyzed as continuous variables, hospitals were also divided for descriptive purposes into quartiles based on these continuous variables.
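As a concrete illustration, the patient-level and hospital-level composite scoring described above can be sketched in a few lines of Python. The data layout and names here are hypothetical, not the CRUSADE schema:

```python
from collections import defaultdict

def patient_composite(measures):
    """measures: list of (eligible, received) booleans, one per guideline measure.
    Returns (correct_care, opportunities): care delivered over eligible chances."""
    correct = sum(1 for eligible, received in measures if eligible and received)
    opportunities = sum(1 for eligible, _ in measures if eligible)
    return correct, opportunities

def hospital_scores(records):
    """records: iterable of (hospital_id, measures) pairs, one per patient.
    Numerators and denominators are summed at the hospital level, as in the
    text, before the ratio is taken."""
    totals = defaultdict(lambda: [0, 0])
    for hospital_id, measures in records:
        correct, opportunities = patient_composite(measures)
        totals[hospital_id][0] += correct
        totals[hospital_id][1] += opportunities
    return {h: c / o for h, (c, o) in totals.items() if o > 0}
```

Note that summing opportunities before dividing, rather than averaging per-patient percentages, prevents patients with few eligible measures from being overweighted.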
Demographic and clinical characteristics, medication and procedure use, and in-hospital mortality were compared among these hospital adherence quartiles. Median values with interquartile ranges (IQRs) were used to describe continuous variables, and numbers (percentages) were reported for categorical variables. The associations between baseline characteristics and hospital features and composite hospital scores were assessed using linear regression. The correlation to the adherence rates of the hospitals for individual process measures and between adherence rates and in-hospital mortality were assessed using Pearson correlation coefficients. The association between hospital composite score quartiles and unadjusted outcomes was assessed using the Cochran-Armitage test for trend.
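For readers unfamiliar with the trend test named above, a minimal Python sketch of the Cochran-Armitage statistic for a mortality trend across ordered adherence quartiles follows. This is illustrative only; the published analysis used SAS, and the example counts are invented:

```python
import math

def cochran_armitage_z(events, totals, scores=None):
    """Cochran-Armitage test for trend: events[i] deaths among totals[i]
    patients in ordered group i (e.g., adherence quartiles). Group scores
    default to 0, 1, 2, ...; a large negative Z indicates mortality
    falling as adherence rises."""
    scores = scores or list(range(len(events)))
    n_total = sum(totals)
    p_bar = sum(events) / n_total  # pooled event rate
    t = sum(s * (e - n * p_bar) for s, e, n in zip(scores, events, totals))
    variance = p_bar * (1 - p_bar) * (
        sum(s * s * n for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / n_total
    )
    return t / math.sqrt(variance)
```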
Adjusted mortality rates were determined by using the generalized linear mixed model method.18 The generalized linear mixed model uses a hierarchical approach to allow adjustment not only for risk factors but also for within-site and between-site effects. We adjusted for risk factors, which included age, sex, body mass index (calculated as weight in kilograms divided by height in meters squared), race, insurance status, family history of coronary disease, hypertension, diabetes mellitus, current/recent smoking, hypercholesterolemia, prior MI, prior percutaneous coronary intervention, prior coronary artery bypass graft (CABG) surgery, prior congestive heart failure, prior stroke, renal insufficiency (defined as serum creatinine level >2.0 mg/dL [>176.8 μmol/L]), blood pressure, heart rate, ST segment (depression, transient elevation, or neither), presenting signs of congestive heart failure, positive cardiac markers, and a patient's propensity to be treated at a top-quartile center. The propensity score was constructed using multivariable logistic regression and contained those patient characteristics associated with being treated at a leading center. We then added a hospital characteristic, the composite adherence score of the hospital caring for the patient, to the mortality model. The C index for the overall mortality model was 0.82.
Additionally, we performed a series of sensitivity analyses. We repeated analyses after limiting our population to those patients with a documented MI based on an increased troponin or creatine kinase assay within 18 hours of admission and again after limiting our sample to high-risk elderly patients aged 65 years or older. We then repeated our analyses after excluding any patient who died within the first 24 hours of hospitalization and again after including use of in-hospital revascularization procedures as a covariate in the mortality modeling analysis. We also repeated analyses after expanding our patient sample to all 427 CRUSADE centers, including those with fewer than 40 cases, and repeated once again after limiting the sample to those centers performing CABG surgery (n = 238 hospitals and n = 53 989 patients), in which rates of censoring due to transfer were less than 5%.
Finally, we performed a matched-pair propensity analysis as an alternative means of adjusting our findings for potential patient selection bias among sites. Specifically, using the propensity score noted above, we matched pairs of patients treated at leading centers vs treated at nonleading centers. Matching was performed using the greedy algorithm with a maximum of a 5-digit match. We then assessed observed mortality rates among the similar matched patient pairs.
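A greedy digit-based match of this kind can be sketched as follows. This is a simplified Python illustration of the general technique, not the exact implementation used in the study, and the identifiers and data layout are hypothetical:

```python
def greedy_match(treated, controls, max_digits=5):
    """Greedy propensity matching: first pair patients whose propensity
    scores agree when rounded to `max_digits` decimal digits, then retry
    the unmatched at progressively coarser precision. Inputs are
    {patient_id: propensity_score} dicts; returns (treated_id, control_id)
    pairs. Each control is used at most once."""
    pairs = []
    remaining_t = dict(treated)
    remaining_c = dict(controls)
    for digits in range(max_digits, 0, -1):
        # Bucket the still-available controls by rounded score.
        buckets = {}
        for cid, p in remaining_c.items():
            buckets.setdefault(round(p, digits), []).append(cid)
        for tid, p in list(remaining_t.items()):
            candidates = buckets.get(round(p, digits))
            if candidates:
                cid = candidates.pop()
                pairs.append((tid, cid))
                del remaining_t[tid]
                del remaining_c[cid]
    return pairs
```

In practice, covariate balance between the matched groups would be rechecked after matching, as the study reports doing.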
A 2-sided P<.05 was established as the level of statistical significance for all tests. All analyses were performed by using SAS software version 8.2 (SAS Institute, Cary, NC).
Our overall analysis sample comprised 64 775 patients with NSTE ACS who presented to 350 CRUSADE hospitals. The median number of patients treated per center was 139, and the median number of opportunities to provide 1 of the 9 guideline-based treatments was 1042 per center. Overall, care decisions were consistent with guideline recommendations in 74% of total treatment opportunities. Composite guideline adherence scores, however, varied considerably among US hospitals (Figure 1). The hospitals in the highest quartile (quartile 4) had a median (IQR) composite adherence score of 82% (80%-84%) compared with 63% (59%-66%) for hospitals in the lowest adherence quartile (quartile 1).
Table 1 shows the variability in the use of individual guideline-recommended therapies among patients without contraindications. Hospitals with the highest composite adherence score (quartile 4) had higher average performance on all the acute, discharge, and secondary prevention metrics. Within the acute measures, aspirin exhibited the lowest degree of variance, although the difference between the first and fourth quartiles remained significant. In contrast, there were 2- to 3-fold differences in the use of newer ACS therapies, such as intravenous glycoprotein IIb/IIIa inhibitors and use of clopidogrel at discharge. Use of secondary prevention interventions also tended to be 20% to 30% higher among hospitals in quartile 4 vs hospitals in quartile 1.
Clinical characteristics for the overall patient sample and by hospital adherence quartile are shown in Table 2. Patients treated at hospitals with lower composite adherence tended to be slightly older, not white, and had slightly more comorbid illness.
Hospital features associated with guideline adherence quartiles are shown in Table 3. Although larger hospitals and those with teaching affiliations tended to have higher unadjusted composite adherence scores, the only multivariable predictors identified were presence of cardiac revascularization facilities and the proportion of patients treated primarily by a cardiologist.
The correlations between a hospital's use of each individual guideline-recommended therapy, as well as its composite adherence rate, and in-hospital mortality are shown in Table 4. Hospital performance on a single process measure had a modest correlation with that for a different measure (Pearson correlation coefficient range, 0.3-0.5). However, a significant inverse correlation was shown between hospitals' use of individual care processes and in-hospital mortality rates for nearly all therapies. Among individual recommended medications, the strongest associations between use and mortality were observed for acute intravenous glycoprotein IIb/IIIa inhibitors, discharge clopidogrel, and discharge lipid-lowering agents. The overall hospital composite adherence score based on all 9 ACC/AHA guideline recommendations also demonstrated a similar negative association with in-hospital mortality (r = −0.30, P<.001).
Unadjusted in-hospital patient outcomes as a function of hospital composite guideline adherence rates for the overall NSTE ACS population and when limited to those patients with a documented MI (NSTEMI, n = 57 260) are shown in Table 5. Overall in-hospital mortality rates observed in the 2 populations were 4.93% (3192/64 775) and 5.28% (3022/57 260), respectively. In-hospital death and combined death and MI rates decreased sequentially as a function of guideline adherence in both the overall and NSTEMI populations (both P<.001). The associations between hospital guideline adherence and rates of stroke and congestive heart failure were less strong.
Figure 2 displays risk-adjusted mortality rates as a function of composite ACC/AHA guideline adherence for the overall NSTE ACS and NSTEMI populations. After adjustment for patient demographic and clinical features, the mortality rate of patients with NSTE ACS decreased from 6.31% in quartile 1 to 4.15% in quartile 4 (P<.001); the adjusted odds ratio (OR) for in-hospital mortality in the highest vs the lowest hospital adherence quartile was 0.81 (95% confidence interval [CI], 0.68-0.97). When evaluated as a continuous function, every 10% increase in overall composite guideline adherence was associated with an analogous 10% decrease in a patient's likelihood of in-hospital death at that hospital (adjusted OR, 0.90; 95% CI, 0.84-0.97; P<.001).
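To see how the per-10% OR relates to the quartile contrast, note that an OR of 0.90 per 10-point adherence increment compounds multiplicatively: a roughly 20-point gap, as between the lowest and highest quartile medians (63% vs 82%), implies an OR near 0.90 × 0.90 ≈ 0.81, consistent with the reported quartile-4 vs quartile-1 OR. A one-line sketch:

```python
def compounded_or(or_per_10_points, adherence_gap_points):
    """Compound a per-10-point odds ratio over a wider adherence gap."""
    return or_per_10_points ** (adherence_gap_points / 10)

# A 20-point adherence gap at OR 0.90 per 10 points:
print(round(compounded_or(0.90, 20), 2))  # 0.81
```

This multiplicative reading holds only on the odds scale and assumes, as the continuous model does, that the log odds of death are linear in composite adherence.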
Sensitivity analyses demonstrate that the results and conclusions found were robust when tested in a variety of populations and situations. Among patients with NSTEMI, mortality rates decreased from 7.68% (760/9892) for quartile 1 to 4.32% (718/16 622) for quartile 4 (adjusted mortality OR, 0.77; 95% CI, 0.64-0.93; P<.001). Similarly, the hospital adherence-outcome association was unaffected by patient age (formal test of an age-adherence score interaction, P = .39). Among patients aged 65 years or older, mortality rates ranged from 8.88% (671/7557) for quartile 1 to 6.12% (631/10 309) for quartile 4 (adjusted mortality OR, 0.83; 95% CI, 0.69-0.93). Our results remained substantially similar if we adjusted our findings for in-hospital revascularization procedures or excluded those patients with early deaths within 24 hours. Our results were consistent if we expanded our analysis to include all 427 CRUSADE hospitals regardless of sample size (including those hospitals with fewer than 40 cases) or, alternatively, if we limited the sample to only hospitals with CABG surgery facilities in which transfers were minimized (mortality ranged from 4.36% [512/11 731] for quartile 1 to 3.67% [463/12 616] for quartile 4; P = .008).
In a matched-pair propensity analysis, we successfully matched 37 654 patients in quartile 4 to a similar number of patients in the remaining 3 adherence groups. After matching, there were no significant differences in baseline characteristics among the paired samples other than differences in heart rate. Similar to the overall findings, mortality rates among the matched pairs were lowest among those patients treated at the leading adherence quartile centers (4.17% [786/18 827]) and highest among those treated in the lagging quartile centers (5.47% [260/4750]). In adjusted analyses, the OR associated with leading centers vs lagging centers was 0.67 (95% CI, 0.52-0.86).
Quality of care has been defined as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”19 This definition is based on the premise that consistent use of evidence-based care processes will lead to improved patient outcomes. Studying patients with NSTE ACS who were treated at 350 US hospitals, we found that up to 25% of opportunities to provide guideline-recommended care were missed in current practice. However, this overall number belies the variability in care among centers that we found, both at the individual-measure and composite-measure levels.
To our knowledge, our study is among the first to link this variability in hospital process performance with patient outcomes. After adjustment, every 10% increase in a hospital's guideline adherence rate was associated with an analogous 10% decrease in the likelihood that patients treated at that center would die before discharge. These data therefore support the use of guideline-based process measures as an important means of assessing an institution's quality of care.
The era of accountability, defined as measured performance with consequences, is rapidly arriving for US medical practitioners and health care facilities. Soon, both reputations and incomes of medical practitioners and health care facilities may well be altered depending on how one scores on a limited set of performance metrics. However, debate remains regarding how performance should ideally be assessed. Some advocate that patient outcomes should be the criterion standard for assessing hospital quality. These metrics, however, are inherently unstable due to low clinical event rates and challenges in risk adjustment.20 Alternatively, performance assessment based on a limited set of care processes may be challenged if these metrics are not proven to be associated with overall patient outcomes. Our study results support the concept that composite guideline-based care metrics are closely associated with better patient outcomes, thereby connecting these 2 quality goals.
Previous studies have reported similar degrees of underuse of evidence-based care processes among patients hospitalized with ACS.21-24 More recently, investigators have reported poor association in hospital performance across several patient conditions.25 Our study is consistent with these earlier findings. Even within a single disease condition, we found that a hospital's performance on a given care process may not predict its results in another. For example, there was no association between the hospitals' use of acute β-blockers and their use of discharge clopidogrel. Thus, our study supports the concept that a broad range of process metrics may be needed to fully characterize hospital care practices.
We also found few hospital features that were significant predictors of better performance. In fact, in multivariable analysis, only centers with CABG surgery facilities and those with a higher percentage of patients treated by a cardiologist were significantly associated with higher adherence rates relative to their peers. Thus, payers or the public who wish to identify best-performing hospitals accurately will need to measure care processes directly rather than rely on structural features.
Our study extends former work in this field by demonstrating a strong, dose-dependent association between hospitals' adherence to care guidelines and their patients' acute outcomes. Our finding of an association between higher use of evidence-based therapies and better outcomes lends further support to the work of other studies.9,10,26,27 Although the association was robust, its explanation is most likely multifactorial. First and foremost, each of the individual guideline-based processes examined has a proven impact on patient outcomes within well-run randomized clinical trials.11 Although a direct treatment benefit is certainly a contributing factor to the observed association, it does not fully account for the effect. For example, we noted that hospitals' use of several discharge care processes was also indicative of centers' overall outcome results. Because the discharge metrics were assessed only among those patients surviving to hospital discharge, the observed association is likely explained by factors other than a direct therapeutic effect. These could include the benefits of initiating these therapies before discharge, the correlation of these discharge processes with other acute measures, or other factors.
The association could also be confounded by patient risk and other socioeconomic factors. Prior studies have demonstrated that those patients most sick and thus likely to benefit from intervention paradoxically tend to be less likely to receive treatment.28,29 Additionally, a rich literature documents disparities in evidence-based treatments by age, race, and socioeconomic factors, which can in turn be associated with patient outcomes.30-32 Indeed, our study found that elderly persons, minorities, and those patients with more comorbid disease tended to be treated at centers with lower measured composite adherence. Furthermore, patient transfer issues may further skew our findings because healthy patients are often first transferred, leaving higher-risk patients at centers without tertiary care capacity. However, the overall association of composite quality in our study strongly persisted even after adjustment for clinical and socioeconomic factors. Furthermore, our findings were robust even after we limited our analyses to tertiary revascularization centers in which transfer rates were low.
Alternatively, the association between guideline adherence and outcomes could reflect other care processes at an institution. For example, hospitals with higher guideline adherence were more likely to use an early invasive medical strategy in addition to using more evidence-based medications. However, clinical trials have previously found only marginal differences in early mortality rates among those patients randomized to invasive vs conservative strategies.33,34 The link between guideline adherence and outcomes persisted when we limited our analysis to only centers with revascularization or, alternatively, adjusted our analyses for use of revascularization procedures.
Adherence to evidence-based care processes may be a more general surrogate marker of the hospital's culture and overall quality of care. For example, Bradley et al35 found that evidence-based care processes were predicted by certain cultural features of a center, such as the degree of administrative support, physician champions, feedback, and teamwork. Similarly, the study by Eagle et al27 found that centers that routinely use standardized care processes, such as patient care algorithms, admission order sets, and discharge checklists, tend to have higher adherence to guidelines. Performance on our set of guideline adherence metric processes may be indirectly reflective of a center's culture, business practices, and clinical skills.
Additionally, our study suggests that another characteristic, namely innovation, may be at play. This hypothesis is suggested by the fact that adoption of newer guideline-recommended therapies, such as glycoprotein IIb/IIIa inhibitors, clopidogrel, and lipid-lowering therapy, was more closely associated with hospital outcomes than was use of many well-established treatments, such as β-blockers or angiotensin-converting enzyme inhibitors. Although all of these care measures have proven efficacy in ACS care, our results showed considerably more variation in the use of newer therapies. Therefore, the speed and completeness of adoption of novel effective therapies may identify centers that put the latest evidence into practice and systematically integrate these novel therapies into their standard care processes. Whether the association is a direct therapeutic one, an indirect reflection of the hospital system and culture, or a combination of these, however, is less important from a policy perspective, because its link to patient outcomes is key to supporting the role of these process performance metrics as indicators of overall hospital quality.
Our study has limitations. First, our study is observational and nonrandomized. The association between care processes and outcomes does not necessarily prove causality and may be confounded by the factors previously discussed. Additionally, our conclusions are limited to the care of patients with NSTE ACS, although these patients do represent a substantial majority of all US patients with MI. It will be important to assess whether these findings can be translated across other disease states. We also were limited to the inpatient setting; evaluating the association of care practices in both inpatient and outpatient settings with longitudinal outcomes will also be an important next step. CRUSADE hospitals are self-selected institutions interested in quality improvement and thus may not be representative of national care patterns. If so, however, the variability in care observed in our study most likely underestimates that expected in broader community care. Our study does not purport to have the ideal set of process-performance indicators. We selected those indicators defined as useful and effective by current national care guidelines and included both newer and established care indicators. However, other indicator sets may be more or less closely associated with patient outcomes. Finally, it remains unclear whether this process-outcome link for a given set of performance metrics may vary over time. Future studies will need to determine the stability of the process-outcome relationship as quality improvement efforts drive broader care adoption.
In conclusion, our study has several health policy implications. First, current NSTE ACS care is not perfect, with up to 25% of opportunities for guideline-based care being missed in contemporary community practice. Therefore, ongoing quality assessment and perhaps stronger incentive systems, such as public reporting and pay for quality, are needed if we are to overcome this quality chasm. Second, significant variability in hospitals' performance on individual care indicators was found. This argues that multiple metrics will be needed to characterize hospital performance fully. Third, a strong association between hospitals' composite care performance and patient outcomes was observed. Our work supports the central hypothesis of hospital quality improvement; namely, better adherence to evidence-based care practices will result in better outcomes for the patients treated. Finally, we found that the association between process and outcome was at least as strong for emerging therapies as for well-established therapies. Therefore, performance indicator sets will need to be kept current on an ongoing basis to accurately identify high-quality medical practitioners and health care facilities.
Corresponding Author: Eric D. Peterson, MD, MPH, Duke Clinical Research Institute, 2400 Pratt St, Room 7009, Durham, NC 27705 (email@example.com).
Author Contributions: Dr Peterson had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Peterson, Lytle, Pollack, Newby, Gibler, Ohman.
Acquisition of data: Peterson, Pollack, Harrington, Ohman.
Analysis and interpretation of data: Peterson, Roe, Mulgund, DeLong, Brindis, Smith, Pollack, Newby, Harrington.
Drafting of the manuscript: Peterson, DeLong, Lytle.
Critical revision of the manuscript for important intellectual content: Peterson, Roe, Mulgund, DeLong, Lytle, Brindis, Smith, Pollack, Newby, Harrington, Gibler, Ohman.
Statistical analysis: Peterson, Mulgund, DeLong.
Obtained funding: Peterson, Roe, Harrington, Gibler, Ohman.
Administrative, technical, or material support: Peterson, Roe, Harrington, Ohman.
Study supervision: Peterson, Roe, Ohman.
Financial Disclosures: Drs Peterson, Roe, Brindis, Smith, Pollack, Newby, Harrington, Gibler, and Ohman have reported receiving research support from Schering-Plough Corp, Bristol-Myers Squibb/Sanofi-Aventis Pharmaceuticals Partnership, and Millennium Pharmaceuticals. No other authors reported financial disclosures.
Funding/Support: This study was supported by CRUSADE, a National Quality Improvement Initiative of the Duke Clinical Research Institute, which was funded by the Schering-Plough Corporation. Bristol-Myers Squibb/Sanofi-Aventis Pharmaceuticals Partnership and Millennium Pharmaceuticals provided additional funding support. Dr Peterson is also the recipient of grant R01 AG025312-01A1 from the National Institute on Aging.
Role of the Sponsors: Although none of the sponsors were directly involved in design and conduct of the study, in the collection, management, analysis, and interpretation of the data, or in the preparation of the manuscript, the sponsors all reviewed the submitted manuscript.